arxiv:2602.00913

Do Schwartz Higher-Order Values Help Sentence-Level Human Value Detection? When Hard Gating Hurts

Published on Jan 31

Abstract

AI-generated summary

This paper examines sentence-level human value detection with transformer models under compute constraints, finding that enforcing hierarchical structure through hard masking is detrimental, while calibration and ensembling techniques yield more reliable gains.

Sentence-level human value detection is typically framed as multi-label classification over Schwartz values, but it remains unclear whether Schwartz higher-order (HO) categories provide usable structure. We study this under a strict compute-frugal budget (single 8 GB GPU) on ValueEval'24 / ValuesML (74K English sentences). We compare (i) direct supervised transformers, (ii) HO→values pipelines that enforce the hierarchy with hard masks, and (iii) Presence→HO→values cascades, alongside low-cost add-ons (lexica, short context, topics), label-wise threshold tuning, small instruction-tuned LLM baselines (≤10B), QLoRA, and simple ensembles. HO categories are learnable from single sentences (e.g., the easiest bipolar pair reaches Macro-F1 ≈ 0.58), but hard hierarchical gating is not a reliable win: it often reduces end-task Macro-F1 via error compounding and recall suppression. In contrast, label-wise threshold tuning is a high-leverage knob (up to +0.05 Macro-F1), and small transformer ensembles provide the most consistent additional gains (up to +0.02 Macro-F1). Small LLMs lag behind supervised encoders as stand-alone systems, yet can contribute complementary errors in cross-family ensembles. Overall, HO structure is useful descriptively, but enforcing it with hard gates hurts sentence-level value detection; robust improvements come from calibration and lightweight ensembling.
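The hard hierarchical gating studied here can be pictured as masking fine-grained value predictions by the HO decision above them. Below is a minimal sketch, assuming a dict-based HO→values map (the category and value names are illustrative, not necessarily the paper's exact label set); it makes the failure mode visible: one HO false negative zeroes out every child value, which is exactly the error-compounding and recall-suppression path the abstract describes.

```python
# Minimal sketch of hard HO -> values gating (label names are illustrative assumptions).
HO_TO_VALUES = {
    "openness_to_change": ["self-direction", "stimulation", "hedonism"],
    "self_enhancement":   ["achievement", "power"],
    "conservation":       ["security", "conformity", "tradition"],
    "self_transcendence": ["benevolence", "universalism"],
}

def hard_gate(ho_probs, value_probs, ho_threshold=0.5):
    """Zero out value probabilities whose parent HO category is not predicted.

    ho_probs:    dict HO name -> probability from the HO classifier
    value_probs: dict value name -> probability from the value classifier
    """
    gated = {}
    for ho, children in HO_TO_VALUES.items():
        keep = ho_probs[ho] >= ho_threshold  # the hard gate
        for v in children:
            # An HO false negative silently removes every child value:
            # this is how a single upstream error compounds downstream.
            gated[v] = value_probs[v] if keep else 0.0
    return gated
```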
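Label-wise threshold tuning, the highest-leverage knob reported here (up to +0.05 Macro-F1), replaces the default 0.5 cutoff with one threshold per label. A minimal sketch, assuming the common recipe of maximizing each label's F1 over a grid on a development split (the exact tuning objective isn't stated on this page):

```python
import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(y_true, y_prob, grid=np.linspace(0.05, 0.95, 19)):
    """Pick one decision threshold per label that maximizes that label's dev F1.

    y_true: (n_samples, n_labels) binary array of gold labels
    y_prob: (n_samples, n_labels) predicted probabilities
    Returns an array of per-label thresholds.
    """
    n_labels = y_true.shape[1]
    thresholds = np.full(n_labels, 0.5)
    for j in range(n_labels):
        best_f1 = -1.0
        for t in grid:
            f1 = f1_score(y_true[:, j], (y_prob[:, j] >= t).astype(int),
                          zero_division=0)
            if f1 > best_f1:
                best_f1, thresholds[j] = f1, t
    return thresholds
```

At test time the tuned thresholds replace the fixed cutoff, `y_pred = (y_prob >= thresholds).astype(int)`, and Macro-F1 is `f1_score(y_true, y_pred, average="macro")`.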
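The "simple ensembles" credited with the most consistent additional gains (up to +0.02 Macro-F1) can be as plain as averaging probability matrices across models before thresholding; the unweighted mean below is an assumption, since the page does not specify the combination rule.

```python
import numpy as np

def ensemble_probs(prob_list, weights=None):
    """Combine (n_samples, n_labels) probability matrices from several models.

    Unweighted mean by default; pass `weights` for a convex combination.
    """
    probs = np.stack(prob_list)  # (n_models, n_samples, n_labels)
    if weights is None:
        return probs.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), probs, axes=1)
```

Cross-family ensembles (e.g., supervised encoders combined with small LLM outputs) fit the same interface, which is one way to exploit the complementary errors noted in the abstract.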
