RVPO: Risk-Sensitive Alignment via Variance Regularization

05-08 08:00

Existing critic-less RLHF methods aggregate multi-objective rewards via an arithmetic mean, which makes them prone to constraint neglect: a high score on a single objective can mask severe failures on other critical objectives (such as safety or formatting), hiding the low-performing bottleneck rewards that undermine reliable alignment. This work proposes Reward-Variance Policy Optimization (RVPO), a risk-sensitive framework that penalizes inter-reward variance during advantage aggregation, shifting the optimization objective from "maximize the sum" to "maximize consistency." The analysis shows that RVPO effectively identifies and raises the contribution of bottleneck rewards, yielding more balanced policy optimization on multi-objective alignment tasks such as safety and format adherence.
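As a rough illustration of the shift from "maximize the sum" to "maximize consistency", here is a minimal sketch (not the authors' implementation; the penalty weight `lam`, the function names, and the toy scores are our own assumptions) of aggregating per-objective rewards with an explicit inter-reward variance penalty:

```python
import numpy as np

def mean_aggregate(rewards: np.ndarray) -> np.ndarray:
    """Baseline: arithmetic mean over the K objectives (axis 1)."""
    return rewards.mean(axis=1)

def variance_penalized_aggregate(rewards: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Risk-sensitive aggregation: mean reward minus a penalty on the
    variance across objectives, so uneven reward profiles are down-weighted."""
    return rewards.mean(axis=1) - lam * rewards.var(axis=1)

# Toy example: two responses scored on two objectives (e.g. helpfulness, safety).
rewards = np.array([
    [1.0, 0.2],    # exploits one objective, neglects the other
    [0.55, 0.55],  # consistent across both objectives
])
print(mean_aggregate(rewards))                # [0.6  0.55] -> the exploiting response wins
print(variance_penalized_aggregate(rewards))  # [0.52 0.55] -> the consistent response wins
```

Under a plain mean, the response that sacrifices one objective is ranked first; with the variance penalty the consistent response wins, which is the behavior described above.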

Original content

RVPO: Risk-Sensitive Alignment via Variance Regularization

Research areas: Methods and Algorithms; Speech and Natural Language Processing

Paper published May 2026

Authors: Ivan Montero, Tomasz Jurczyk, Bhuwan Dhingra


Current critic-less RLHF methods aggregate multi-objective rewards via an arithmetic mean, leaving them vulnerable to constraint neglect: high-magnitude success in one objective can numerically offset critical failures in others (e.g., safety or formatting), masking low-performing “bottleneck” rewards vital for reliable multi-objective alignment. We propose Reward-Variance Policy Optimization (RVPO), a risk-sensitive framework that penalizes inter-reward variance during advantage aggregation, shifting the objective from “maximize sum” to “maximize consistency.” We show via Taylor expansion that a LogSumExp (SoftMin) operator effectively acts as a smooth variance penalty. We evaluate RVPO on rubric-based medical and scientific reasoning with up to 17 concurrent LLM-judged reward signals (Qwen2.5-3B/7B/14B) and on tool-calling with rule-based constraints (Qwen2.5-1.5B/3B). By preventing the model from neglecting difficult constraints to exploit easier objectives, RVPO improves overall scores on HealthBench (0.261 vs. 0.215 for GDPO at 14B, p < 0.001) and maintains competitive accuracy on GPQA-Diamond without the late-stage degradation observed in other multi-reward methods, demonstrating that variance regularization mitigates constraint neglect across model scales without sacrificing general capabilities.
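The Taylor-expansion claim can be sketched as follows (our own rendering under a small-temperature assumption, not an excerpt from the paper): a SoftMin over the K per-objective rewards,
\[
\operatorname{SoftMin}_\beta(r_1,\dots,r_K) \;=\; -\frac{1}{\beta}\log\!\left(\frac{1}{K}\sum_{k=1}^{K} e^{-\beta r_k}\right),
\]
with \(r_k = \bar r + \delta_k\), \(\bar r = \frac{1}{K}\sum_k r_k\) (so \(\sum_k \delta_k = 0\)), expanding \(e^{-\beta\delta_k}\) to second order gives
\[
\frac{1}{K}\sum_k e^{-\beta r_k} \;\approx\; e^{-\beta \bar r}\left(1 + \frac{\beta^2}{2}\cdot\frac{1}{K}\sum_k \delta_k^2\right),
\]
and using \(\log(1+x)\approx x\),
\[
\operatorname{SoftMin}_\beta(r) \;\approx\; \bar r - \frac{\beta}{2}\,\operatorname{Var}(r),
\]
i.e. for small \(\beta\) the operator recovers the arithmetic mean minus a penalty proportional to the inter-reward variance, matching the "smooth variance penalty" interpretation above.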

Related readings and updates.

On the Limited Generalization Capability of the Implicit Reward Model Induced by Direct Preference Optimization

October 9, 2024 | Research areas: Methods and Algorithms; Speech and Natural Language Processing | Conference: EMNLP

Reinforcement Learning from Human Feedback (RLHF) is an effective approach for aligning language models to human preferences. Central to RLHF is learning a reward function for scoring human preferences. Two main approaches for learning a reward model are 1) training an explicit reward model as in RLHF, and 2) using an implicit reward learned from preference data through methods such as Direct Preference Optimization (DPO). Prior work has shown…
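For reference (a standard result from the DPO literature, not quoted from this abstract), the implicit reward induced by DPO is, up to a prompt-dependent constant,
\[
r_\theta(x, y) \;=\; \beta \,\log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\]
where \(\pi_\theta\) is the trained policy, \(\pi_{\mathrm{ref}}\) the reference policy, and \(\beta\) the DPO temperature; per its title, the paper above examines how well this implicit reward generalizes relative to an explicitly trained reward model.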


Only Pay for What Is Uncertain: Variance-Adaptive Thompson Sampling

May 3, 2024 | Research areas: Data Science and Annotation; Methods and Algorithms | Conference: ICLR

Most bandit algorithms assume that the reward variances or their upper bounds are known, and that they are the same for all arms. This naturally leads to suboptimal performance and higher regret due to variance overestimation. On the other hand, underestimated reward variances may lead to linear regret due to committing early to a suboptimal arm. This motivated prior works on variance-adaptive frequentist algorithms, which have strong…
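As a hedged illustration of what "variance-adaptive" means here (a generic Gaussian Thompson sampling sketch that estimates each arm's variance online, not the algorithm from the paper; the arm means, noise scales, and variance floors are made-up values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-armed bandit: arm 0 has the higher mean but much noisier rewards.
true_means = np.array([0.6, 0.5])
true_stds = np.array([1.0, 0.1])

n_arms, horizon = 2, 2000
counts = np.zeros(n_arms)
sums = np.zeros(n_arms)
sums_sq = np.zeros(n_arms)

for t in range(horizon):
    means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    # Empirical per-arm reward variance, with a floor of 1.0 until an arm has
    # at least two pulls (rather than assuming one known variance for all arms).
    var_hat = np.where(
        counts > 1,
        (sums_sq - counts * means**2) / np.maximum(counts - 1, 1),
        1.0,
    )
    var_hat = np.maximum(var_hat, 1e-8)  # guard against negative values from round-off
    # Posterior uncertainty about each arm's mean shrinks at that arm's own
    # noise scale, so low-variance arms are resolved with fewer pulls.
    post_std = np.sqrt(var_hat / np.maximum(counts, 1))
    arm = int(np.argmax(rng.normal(means, post_std)))

    reward = rng.normal(true_means[arm], true_stds[arm])
    counts[arm] += 1
    sums[arm] += reward
    sums_sq[arm] += reward**2

print("pulls per arm:", counts)  # how exploration was allocated across the arms
```

A sampler that assumed one shared, known variance would either over-explore the quiet arm or commit too early on the noisy one, which is the failure mode the paragraph above describes.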




Source link: https://arxiv.org/abs/2605.05750


arXiv:2605.05750 (cs)

[Submitted on 7 May 2026]

Title: RVPO: Risk-Sensitive Alignment via Variance Regularization

Authors: Ivan Montero, Tomasz Jurczyk, Bhuwan Dhingra



Comments: 17 pages, 5 figures
Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL)
Cite as: arXiv:2605.05750 [cs.LG]
(or arXiv:2605.05750v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2605.05750 (arXiv-issued DOI via DataCite; registration pending)

Submission history

From: Ivan Montero
[v1] Thu, 7 May 2026 06:43:05 UTC (944 KB)
