Shiyunee committed on
Commit 7184df2 · verified · 1 Parent(s): f9cf7e3

Force re-upload all files

Files changed (5)
  1. .DS_Store +0 -0
  2. .mdl +0 -0
  3. .msc +0 -0
  4. .mv +1 -0
  5. README.md +128 -5
.DS_Store ADDED
Binary file (6.15 kB)

.mdl ADDED
Binary file (60 Bytes)

.msc ADDED
Binary file (32.9 kB)

.mv ADDED
@@ -0,0 +1 @@
+ Revision:master,CreatedAt:1761027950
README.md CHANGED
@@ -1,5 +1,128 @@
- ---
- license: apache-2.0
- ---
- The data and models are already prepared, but due to connectivity issues with Hugging Face, we have not been able to upload them yet. We are actively working to resolve this. If you would like to reproduce the results from the paper, please refer to our GitHub repository.
- https://github.com/Trustworthy-Information-Access/Annotation-Efficient-Universal-Honesty-Alignment
+ # Introduction
+
+ This is the official repo of the paper [Annotation-Efficient Universal Honesty Alignment](https://arxiv.org/abs/2510.17509).
+
+ This repository provides modules that extend **Qwen2.5-7B-Instruct** with the ability to generate accurate confidence scores *before* response generation, indicating how likely the model is to answer a given question correctly across tasks. We offer two types of modules, **LoRA + Linear Head** and **Linear Head**, along with model parameters under three training settings:
+
+ 1. **Elicitation (greedy):** Trained on all questions (over 560k) using self-consistency-based confidence annotations (sketched below).
+ 2. **Calibration-Only (right):** Trained on questions with explicit correctness annotations.
+ 3. **EliCal (hybrid):** Initialized from the Elicitation model and further trained on correctness-labeled data.
+
+ For both the **Calibration-Only** and **EliCal** settings, we provide models trained with different amounts of annotated data (1k, 2k, 3k, 5k, 8k, 10k, 20k, 30k, 50k, 80k, 200k, 560k+). Since **LoRA + Linear Head** is the main configuration used in our paper, the following description is based on that setup.
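+
+ The exact self-consistency annotation procedure is in the GitHub repo; as a rough, hypothetical sketch, a confidence label for a question can be taken as the fraction of sampled answers that agree with the majority answer:
+
+ ```python
+ from collections import Counter
+
+ def self_consistency_confidence(sampled_answers):
+     """Fraction of sampled (normalized) answers agreeing with the majority answer."""
+     _, majority_count = Counter(sampled_answers).most_common(1)[0]
+     return majority_count / len(sampled_answers)
+
+ # 3 of 4 samples agree -> confidence label 0.75
+ print(self_consistency_confidence(["Paris", "Paris", "Lyon", "Paris"]))
+ ```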
+
+ In our model, **LoRA is applied to all linear layers** with **r = 8** and **α = 16**. The **Linear Head** is attached to the model's final layer and takes as input the hidden state of the **last token** at that layer. It predicts a **confidence score between 0 and 1**, representing the model's **estimated probability of answering the question correctly**.
+
+ # Model Architecture
+
+ ```python
+ import torch
+ import torch.nn as nn
+ from transformers import AutoModel
+ from transformers.modeling_outputs import CausalLMOutput
+ from peft import get_peft_model
+
+ class LMWithVectorHead(nn.Module):
+     def __init__(self, model_name, lora_config, output_dim=1):
+         super().__init__()
+         backbone = AutoModel.from_pretrained(model_name, device_map='cpu')
+         self.peft_model = get_peft_model(backbone, lora_config)
+         self.config = backbone.config
+         hidden_size = backbone.config.hidden_size
+         self.vector_head = nn.Linear(hidden_size, output_dim)  # output dimension is 1
+
+     def gradient_checkpointing_enable(self, gradient_checkpointing_kwargs=None):
+         """Enable gradient checkpointing, forwarding any extra arguments."""
+         self.peft_model.enable_input_require_grads()
+         if gradient_checkpointing_kwargs is not None:
+             self.peft_model.gradient_checkpointing_enable(**gradient_checkpointing_kwargs)
+         else:
+             self.peft_model.gradient_checkpointing_enable()
+
+     def forward(self, input_ids, attention_mask=None, labels=None):
+         outputs = self.peft_model(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             return_dict=True
+         )
+         # Hidden state of the last token from the final layer
+         last_hidden = outputs.last_hidden_state     # [B, T, H]
+         cls_hidden = last_hidden[:, -1, :]          # [B, H]
+         logits = self.vector_head(cls_hidden)       # [B, 1]
+         logits = torch.sigmoid(logits).squeeze(-1)  # apply sigmoid and squeeze to [B]
+
+         loss = None
+         if labels is not None:
+             loss_fct = nn.MSELoss()          # MSE loss
+             loss = loss_fct(logits, labels)  # MSE between predicted confidence and labels
+
+         return CausalLMOutput(
+             loss=loss,
+             logits=logits
+         )
+ ```
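+
+ A minimal smoke test of the shapes and the loss (our sketch; the dummy token IDs and example labels are assumptions, not the training data):
+
+ ```python
+ import torch
+ from peft import LoraConfig
+
+ lora_config = LoraConfig(
+     r=8, lora_alpha=16, lora_dropout=0.0, bias="none",
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
+                     "gate_proj", "up_proj", "down_proj"],
+ )
+ model = LMWithVectorHead("Qwen/Qwen2.5-7B-Instruct", lora_config)
+
+ input_ids = torch.randint(0, model.config.vocab_size, (2, 16))  # dummy batch [B=2, T=16]
+ labels = torch.tensor([0.75, 0.0])  # confidence annotations in [0, 1]
+ out = model(input_ids, labels=labels)
+ print(out.logits.shape, out.loss)   # torch.Size([2]) and a scalar MSE loss
+ ```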
+
+ # Inference
+
+ This shows how to load the trained modules. For more details, please refer to the [GitHub repo](https://github.com/Trustworthy-Information-Access/Annotation-Efficient-Universal-Honesty-Alignment/blob/master/honesty_alignment/eval_one_conf.py).
+
+ ```python
+ import torch
+ from transformers import AutoModel
+ from peft import PeftModel, LoraConfig
+
+ # `args` (paths and hyperparameters) and `device` are defined by the surrounding script.
+
+ # 1. Load the base model
+ base_model = AutoModel.from_pretrained(args.model_path)
+
+ # 2. Load the trained LoRA adapter onto the base model
+ peft_model = PeftModel.from_pretrained(
+     base_model,            # use the base model, not model.peft_model
+     args.lora_path,
+     adapter_name="default"
+ )
+
+ # 3. Build the full model structure
+ lora_config = LoraConfig(
+     r=args.r,
+     lora_alpha=args.alpha,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
+                     "gate_proj", "up_proj", "down_proj"],
+     lora_dropout=args.lora_dropout,
+     bias="none",
+ )
+ model = LMWithVectorHead(args.model_path, lora_config)
+
+ # 4. Swap in the model that already has the trained LoRA loaded
+ model.peft_model = peft_model
+
+ # 5. Load the linear head weights
+ state_dict = torch.load(args.vector_head_path, map_location=device)
+ model.vector_head.load_state_dict(state_dict)
+
+ # 6. Activate the adapter and move the model to the device
+ model.peft_model.set_adapter("default")
+ model = model.to(device)
+
+ # Evaluation mode
+ model.eval()
+ ```
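+
+ Once loaded, scoring a question is a single forward pass. A hedged sketch (the plain-string prompt here is an assumption; the exact template is in the eval script linked above):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained(args.model_path)
+ enc = tokenizer("What is the capital of Australia?", return_tensors="pt").to(device)
+ with torch.no_grad():
+     confidence = model(**enc).logits.item()
+ print(f"Estimated probability of answering correctly: {confidence:.3f}")
+ ```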
+
+ # Files
+
+ ```sh
+ /lora
+ ├── greedy_answer_conf
+ │   └── long_qa
+ │       └── batchsize16_accumulation8_epochs10_weightdecay0.1_r8_alpha16_loradropout0.0 (training configuration)
+ │           ├── best_checkpoints
+ │           │   ├── lora_epoch_best/             # Path to LoRA module
+ │           │   └── vector_head_epoch_best.pt    # Path to Linear Head weights
+ │           └── test_losses.json                 # Test loss for each epoch
+
+ ├── hybrid_answer_conf
+ │   └── long_qa
+ │       ├── batchsize16_accumulation8_epochs10_weightdecay0.1_r8_alpha16_loradropout0.0 (560k samples)
+ │       ├── batchsize16_accumulation8_epochs50_weightdecay0.1_r8_alpha16_loradropout0.0_1k_training_samples (1k samples)
+ │       └── batchsize16_accumulation8_epochs50_weightdecay0.1_r8_alpha16_loradropout0.0_2k_training_samples (2k samples)
+
+ └── right_answer_conf
+     └── long_qa
+         └── ...   # Same format as above
+
+ /mlp
+ ...
+ ```
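+
+ For example, to load the Elicitation (greedy) checkpoint with the loading snippet above, the paths (hypothetical, assuming the repository is downloaded to the working directory) would be:
+
+ ```python
+ args.lora_path = "lora/greedy_answer_conf/long_qa/batchsize16_accumulation8_epochs10_weightdecay0.1_r8_alpha16_loradropout0.0/best_checkpoints/lora_epoch_best"
+ args.vector_head_path = "lora/greedy_answer_conf/long_qa/batchsize16_accumulation8_epochs10_weightdecay0.1_r8_alpha16_loradropout0.0/best_checkpoints/vector_head_epoch_best.pt"
+ ```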