Cooolder committed on
Commit 23138d4 · verified · 1 Parent(s): 20ed250

Upload 2 files

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +393 -55
  3. assets/1.png +3 -0
.gitattributes CHANGED
@@ -37,3 +37,4 @@ tokenizer.json filter=lfs diff=lfs merge=lfs -text
  checkpoint-6000/tokenizer.json filter=lfs diff=lfs merge=lfs -text
  checkpoint-6500/tokenizer.json filter=lfs diff=lfs merge=lfs -text
  checkpoint-6695/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ assets/1.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,75 +1,413 @@
  ---
- library_name: transformers
- license: other
- base_model: Qwen/Qwen3-4B-Instruct-2507
  tags:
- - llama-factory
- - full
- - generated_from_trainer
- model-index:
- - name: sft_direct
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # sft_direct

- This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) on the scope_sft_direct dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.3041

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 1
- - eval_batch_size: 2
- - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 8
- - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 1.0

- ### Training results

- | Training Loss | Epoch  | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 0.3289        | 0.0747 | 500  | 0.3394          |
- | 0.3445        | 0.1494 | 1000 | 0.3315          |
- | 0.3383        | 0.2241 | 1500 | 0.3268          |
- | 0.3352        | 0.2987 | 2000 | 0.3224          |
- | 0.3027        | 0.3734 | 2500 | 0.3197          |
- | 0.3272        | 0.4481 | 3000 | 0.3166          |
- | 0.3216        | 0.5228 | 3500 | 0.3138          |
- | 0.3223        | 0.5975 | 4000 | 0.3108          |
- | 0.3346        | 0.6722 | 4500 | 0.3076          |
- | 0.3098        | 0.7468 | 5000 | 0.3065          |
- | 0.2938        | 0.8215 | 5500 | 0.3059          |
- | 0.315         | 0.8962 | 6000 | 0.3046          |
- | 0.301         | 0.9709 | 6500 | 0.3040          |

- ### Framework versions

- - Transformers 4.57.1
- - Pytorch 2.7.1+cu126
- - Datasets 3.6.0
- - Tokenizers 0.22.1
  ---
+ license: apache-2.0
+ language:
+ - multilingual
+ base_model:
+ - Qwen/Qwen3-4B-Instruct-2507
+ pipeline_tag: text-generation
  tags:
+ - Model Routing
+ - LLM reasoning
  ---

+ # SCOPE-Direct: Scalable and Controllable Outcome Performance Estimator (Direct Prediction)
+ [📄 Paper (arXiv:2601.22323)](https://www.arxiv.org/abs/2601.22323)

+ This repository accompanies the paper "**Models Under SCOPE: Scalable and Controllable Routing via Pre-hoc Reasoning**", which introduces SCOPE (Scalable and Controllable Outcome Performance Estimator), a framework for large language model (LLM) routing.
+ SCOPE reframes model routing as a pre-hoc estimation problem: instead of directly selecting a model from a fixed candidate set, it predicts each model's expected performance (correctness) and inference cost (token length) before execution, based on the model's historical behavior on similar queries. This enables training-free generalization to unseen models and lets users control the trade-off between accuracy and cost through a budget-aware utility function.
+ Overall, SCOPE provides a scalable, explainable, and controllable solution for allocating test-time compute across heterogeneous model portfolios.

+ <p align="center">
+ <img src="assets/1.png" width="500">
+ </p>
+ The figure above illustrates the core difference between traditional routers and SCOPE.
+ Conventional LLM routers treat routing as a closed-set classification problem, memorizing model names and selecting one model per query. In contrast, SCOPE reasons over models' past behavior, explicitly predicting outcome correctness and token cost, and then makes a budget-aware decision based on these estimates. This design allows SCOPE to generalize to unseen models and supports dynamic cost–accuracy control at inference time.
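+
+ The decision step downstream of these two estimates can be made concrete in a few lines. The following is a minimal illustrative sketch, not the paper's exact utility function: the hypothetical `route` helper and its linear cost penalty are assumptions layered on top of SCOPE-Direct's per-model predictions.
+
+ ```python
+ # Hypothetical routing helper: pick the model whose predicted outcome
+ # gives the best correctness-vs-cost trade-off.
+ # `predictions` maps model name -> (predicted_correct, predicted_length),
+ # e.g. obtained by running SCOPE-Direct once per candidate model.
+ def route(predictions: dict[str, tuple[str, int]], cost_weight: float = 0.001) -> str:
+     def utility(pred: tuple[str, int]) -> float:
+         correct, length = pred
+         # Reward predicted correctness, penalize predicted token cost.
+         return (1.0 if correct == "yes" else 0.0) - cost_weight * length
+     return max(predictions, key=lambda m: utility(predictions[m]))
+
+ # A larger cost_weight shifts the choice toward cheaper models.
+ preds = {"model-a": ("yes", 900), "model-b": ("yes", 120), "model-c": ("no", 60)}
+ print(route(preds))  # -> model-b
+ ```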

+ **SCOPE-Direct** is the direct prediction variant: it outputs predictions without an explicit chain-of-thought analysis, making it faster and more efficient for production use cases.

+ ## Model Description

+ - **Task**: Performance prediction for LLMs
+ - **Base Model**: Qwen/Qwen3-4B-Instruct-2507
+ - **Training**: Supervised Fine-Tuning (SFT)
+ - **Input**: Target question + k anchor questions with performance data
+ - **Output**: Predicted length (tokens) and correctness (yes/no)

+ ## Intended Use

+ SCOPE-Direct is designed to:
+ - Predict whether an LLM will answer a question correctly before running expensive inference
+ - Estimate the output token length for resource planning
+ - Enable efficient LLM routing and selection (see the routing sketch above)

+ ## Quick Start

+ ### Installation

+ ```bash
+ pip install "transformers>=4.51.0" torch datasets
+ # For vLLM inference (optional but recommended)
+ pip install vllm
+ ```

+ ### Input Format

+ SCOPE-Direct uses the following prompt format (without an Analysis section):

+ ```
+ ### Task
+ You are a performance prediction expert. Given a target question, 5 anchor questions with their performance results, and a target AI model, predict how the model will perform on the target question, specifically the output length and correctness.
+
+ ### Target Model
+ {model_name}
+
+ Example 1:
+ Question: {anchor_question_1}
+ Performance: {len: {length}, correct: {yes/no}}
+
+ Example 2:
+ Question: {anchor_question_2}
+ Performance: {len: {length}, correct: {yes/no}}
+
+ ...
+
+ ### Target Question
+ {your_target_question}
+
+ ### Output Format (STRICT)
+ Predicted Performance: {len: [integer], correct: [yes/no]}
+
+ ### Output:
+ ```
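+
+ Filling this template by hand is error-prone, so a small builder can help. The following is an illustrative helper, not part of the released code: the `build_prompt` name and the `(question, length, correct)` anchor-tuple layout are assumptions.
+
+ ```python
+ # Hypothetical prompt builder for SCOPE-Direct.
+ # anchors: list of (question, observed_length, "yes"/"no") tuples.
+ def build_prompt(model_name: str, anchors: list[tuple[str, int, str]], target_question: str) -> str:
+     header = (
+         "### Task\n"
+         "You are a performance prediction expert. Given a target question, "
+         f"{len(anchors)} anchor questions with their performance results, and a target AI model, "
+         "predict how the model will perform on the target question, "
+         "specifically the output length and correctness.\n\n"
+         f"### Target Model\n{model_name}\n"
+     )
+     examples = "".join(
+         f"\nExample {i}:\nQuestion: {q}\nPerformance: {{len: {n}, correct: {c}}}\n"
+         for i, (q, n, c) in enumerate(anchors, start=1)
+     )
+     footer = (
+         f"\n### Target Question\n{target_question}\n\n"
+         "### Output Format (STRICT)\n"
+         "Predicted Performance: {len: [integer], correct: [yes/no]}\n\n"
+         "### Output:"
+     )
+     return header + examples + footer
+
+ # Example usage:
+ # prompt = build_prompt("Qwen/Qwen3-8B-Instruct",
+ #                       [("What is 15 + 27?", 28, "yes")], "What is 2^10?")
+ ```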

+ ### Output Format

+ The model directly outputs:
+ ```
+ Predicted Performance: {len: 256, correct: yes}
+ ```

+ ---

+ ## Inference Methods

+ ### Method 1: Using Transformers (Recommended for Single Inference)

+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load model
+ model_name = "Cooolder/SCOPE-Direct"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+
+ # Prepare the prompt (see the "Anchor and Prompt Examples" section below)
+ prompt = """### Task
+ You are a performance prediction expert. Given a target question, 5 anchor questions with their performance results, and a target AI model, predict how the model will perform on the target question, specifically the output length and correctness.
+
+ ### Target Model
+ Qwen/Qwen3-8B-Instruct
+
+ Example 1:
+ Question: What is the capital of France?
+ Performance: {len: 45, correct: yes}
+
+ Example 2:
+ Question: Solve: 2 + 2 = ?
+ Performance: {len: 32, correct: yes}
+
+ Example 3:
+ Question: Explain quantum entanglement in simple terms.
+ Performance: {len: 512, correct: yes}
+
+ Example 4:
+ Question: What is the 50th prime number?
+ Performance: {len: 128, correct: no}
+
+ Example 5:
+ Question: Write a haiku about programming.
+ Performance: {len: 78, correct: yes}
+
+ ### Target Question
+ What is the derivative of x^3 + 2x^2 - 5x + 7?
+
+ ### Output Format (STRICT)
+ Predicted Performance: {len: [integer], correct: [yes/no]}
+
+ ### Output:"""
+
+ # Format as a chat message
+ messages = [{"role": "user", "content": prompt}]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+ )
+
+ # Generate
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=64,  # Direct prediction needs few tokens
+     do_sample=True,  # make the sampling settings below take effect
+     temperature=0.7,
+     top_p=0.8,
+     top_k=20,
+ )
+
+ # Decode only the newly generated tokens
+ output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
+ response = tokenizer.decode(output_ids, skip_special_tokens=True)
+ print(response)
+ ```

+ ### Method 2: Using vLLM (Recommended for Batch Inference)

+ ```python
+ from vllm import LLM, SamplingParams
+
+ # Load model with vLLM
+ model_name = "Cooolder/SCOPE-Direct"
+ llm = LLM(
+     model=model_name,
+     dtype="bfloat16",
+     gpu_memory_utilization=0.90,
+     max_model_len=8192,
+     trust_remote_code=True,
+ )
+
+ # Prepare prompts (batch processing)
+ prompts = []
+ raw_prompt = """### Task
+ You are a performance prediction expert. Given a target question, 5 anchor questions with their performance results, and a target AI model, predict how the model will perform on the target question, specifically the output length and correctness.
+
+ ### Target Model
+ Qwen/Qwen3-8B-Instruct
+
+ Example 1:
+ Question: What is the capital of France?
+ Performance: {len: 45, correct: yes}
+
+ Example 2:
+ Question: Solve: 2 + 2 = ?
+ Performance: {len: 32, correct: yes}
+
+ Example 3:
+ Question: Explain quantum entanglement in simple terms.
+ Performance: {len: 512, correct: yes}
+
+ Example 4:
+ Question: What is the 50th prime number?
+ Performance: {len: 128, correct: no}
+
+ Example 5:
+ Question: Write a haiku about programming.
+ Performance: {len: 78, correct: yes}
+
+ ### Target Question
+ What is the derivative of x^3 + 2x^2 - 5x + 7?
+
+ ### Output Format (STRICT)
+ Predicted Performance: {len: [integer], correct: [yes/no]}
+
+ ### Output:"""
+
+ # Wrap in the Qwen3 chat template
+ chat_prompt = f"<|im_start|>user\n{raw_prompt}<|im_end|>\n<|im_start|>assistant\n"
+ prompts.append(chat_prompt)
+
+ # Sampling parameters
+ sampling_params = SamplingParams(
+     temperature=0.6,
+     max_tokens=64,  # Direct prediction needs few tokens
+     top_p=0.95,
+     top_k=20,
+     n=8,  # Generate multiple samples for better confidence estimation
+     stop=["<|im_end|>", "<|endoftext|>"],
+     stop_token_ids=[151645, 151643],
+ )
+
+ # Run inference
+ outputs = llm.generate(prompts, sampling_params)
+
+ # Inspect results (see "Parsing the Output" below for structured parsing)
+ for output in outputs:
+     for single_output in output.outputs:
+         response = single_output.text.strip()
+         print(response)
+         print("-" * 50)
+ ```

+ ### Parsing the Output

+ ```python
+ import re
+
+ def parse_prediction(response: str):
+     """Parse SCOPE-Direct model output to extract predictions."""
+     # Normalize formatting variations
+     response = response.replace('**Predicted Performance:**', 'Predicted Performance:')
+     response = response.replace('**Predicted Performance**:', 'Predicted Performance:')
+
+     # Extract len and correct
+     len_match = re.search(r'len:\s*(\d+)', response)
+     correct_match = re.search(r'correct:\s*(yes|no)', response, re.IGNORECASE)
+
+     if not len_match or not correct_match:
+         return None
+
+     return {
+         'predicted_length': int(len_match.group(1)),
+         'predicted_correct': correct_match.group(1).lower()
+     }
+
+ # Example usage
+ result = parse_prediction(response)
+ if result is not None:
+     print(f"Predicted Length: {result['predicted_length']}")
+     print(f"Predicted Correct: {result['predicted_correct']}")
+ ```
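+
+ Because the vLLM example draws `n=8` samples per prompt, the parsed predictions can be aggregated into a single, more stable estimate. A minimal sketch follows (majority vote on correctness, median on length; this aggregation rule is an illustrative choice, not one prescribed by the paper):
+
+ ```python
+ import statistics
+
+ def aggregate_predictions(parsed: list) -> dict | None:
+     """Combine per-sample parse_prediction() results into one estimate."""
+     parsed = [p for p in parsed if p is not None]
+     if not parsed:
+         return None
+     yes_votes = sum(1 for p in parsed if p['predicted_correct'] == 'yes')
+     return {
+         # Majority vote on the yes/no correctness prediction (ties -> yes)
+         'predicted_correct': 'yes' if yes_votes * 2 >= len(parsed) else 'no',
+         # Median is robust to occasional outlier length predictions
+         'predicted_length': int(statistics.median(p['predicted_length'] for p in parsed)),
+         # Fraction of agreeing samples as a rough confidence signal
+         'confidence': max(yes_votes, len(parsed) - yes_votes) / len(parsed),
+     }
+
+ # Example with the vLLM outputs from above:
+ # parsed = [parse_prediction(o.text) for o in outputs[0].outputs]
+ # print(aggregate_predictions(parsed))
+ ```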

+ ---

+ ## Anchor and Prompt Examples

+ ### Example 1: Math Question Prediction

+ ```python
+ anchor_text = """Example 1:
+ Question: What is 15 + 27?
+ Performance: {len: 28, correct: yes}
+
+ Example 2:
+ Question: Calculate the area of a circle with radius 5.
+ Performance: {len: 156, correct: yes}
+
+ Example 3:
+ Question: Solve the quadratic equation x^2 - 5x + 6 = 0.
+ Performance: {len: 245, correct: yes}
+
+ Example 4:
+ Question: What is the integral of sin(x)?
+ Performance: {len: 89, correct: yes}
+
+ Example 5:
+ Question: Prove that the square root of 2 is irrational.
+ Performance: {len: 478, correct: no}
+ """
+
+ target_question = "Find the limit of (x^2 - 1)/(x - 1) as x approaches 1."
+ model_name = "Qwen/Qwen3-8B-Instruct"
+ ```

+ ### Example 2: Coding Question Prediction

+ ```python
+ anchor_text = """Example 1:
+ Question: Write a Python function to check if a number is even.
+ Performance: {len: 67, correct: yes}
+
+ Example 2:
+ Question: Implement binary search in Python.
+ Performance: {len: 234, correct: yes}
+
+ Example 3:
+ Question: Write a function to reverse a linked list.
+ Performance: {len: 312, correct: yes}
+
+ Example 4:
+ Question: Implement an LRU cache in Python.
+ Performance: {len: 456, correct: no}
+
+ Example 5:
+ Question: Write a recursive function to compute Fibonacci numbers.
+ Performance: {len: 178, correct: yes}
+ """
+
+ target_question = "Write a Python function to find the longest palindromic substring."
+ model_name = "deepseek-ai/DeepSeek-V2-Chat"
+ ```

+ ### Example 3: General Knowledge Prediction

+ ```python
+ anchor_text = """Example 1:
+ Question: Who wrote "Romeo and Juliet"?
+ Performance: {len: 34, correct: yes}
+
+ Example 2:
+ Question: What is the chemical formula for water?
+ Performance: {len: 42, correct: yes}
+
+ Example 3:
+ Question: Explain the theory of relativity.
+ Performance: {len: 687, correct: yes}
+
+ Example 4:
+ Question: What year did World War II end?
+ Performance: {len: 51, correct: yes}
+
+ Example 5:
+ Question: Who was the 23rd President of the United States?
+ Performance: {len: 89, correct: no}
+ """
+
+ target_question = "What is the speed of light in a vacuum?"
+ model_name = "meta-llama/Llama-3-70B-Instruct"
+ ```

+ ---

+ ## Using with the Cooolder/kshot_inference_direct Dataset

+ The model is designed to work with the [Cooolder/kshot_inference_direct](https://huggingface.co/datasets/Cooolder/kshot_inference_direct) dataset:

+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset
+ dataset = load_dataset("Cooolder/kshot_inference_direct", split="train")
+
+ # Each sample contains:
+ # - id: unique identifier
+ # - prompt: pre-formatted prompt with anchors and target question
+ # - gt_is_correct: ground truth correctness
+ # - gt_token_count: ground truth token count
+ # - source_model: the target model being predicted
+ # - retrieved_anchors: the anchor questions used
+
+ # Example: Run inference on the dataset
+ for sample in dataset:
+     prompt = sample['prompt']
+     # Wrap in chat template and run inference...
+ ```
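+
+ To complete that loop, the Method 1 setup can be reused and predictions scored against the ground truth. A minimal sketch, assuming `tokenizer`, `model`, and `parse_prediction` from the earlier snippets, and assuming `gt_is_correct` is stored as a boolean:
+
+ ```python
+ # Hypothetical end-to-end check over a few dataset samples.
+ correct_hits, n_eval = 0, 10
+ for sample in dataset.select(range(n_eval)):
+     messages = [{"role": "user", "content": sample['prompt']}]
+     text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+     inputs = tokenizer([text], return_tensors="pt").to(model.device)
+     out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
+     response = tokenizer.decode(out[0][len(inputs.input_ids[0]):], skip_special_tokens=True)
+     pred = parse_prediction(response)
+     # Count a hit when the predicted yes/no matches the ground-truth label
+     if pred is not None and (pred['predicted_correct'] == 'yes') == bool(sample['gt_is_correct']):
+         correct_hits += 1
+ print(f"Correctness-prediction accuracy on {n_eval} samples: {correct_hits / n_eval:.2f}")
+ ```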

+ ---

+ ## Performance Tips

+ 1. **Multiple Sampling**: Generate 8+ samples and aggregate predictions for better accuracy (see the aggregation sketch above)
+ 2. **Temperature**: Use 0.6-0.7 for balanced diversity
+ 3. **Batch Processing**: Use vLLM for high-throughput batch inference
+ 4. **Anchor Selection**: Choose anchors from the same domain as your target question

+ ## Citation

+ ```bibtex
+ @misc{cao2026modelsscopescalablecontrollable,
+   title={Models Under SCOPE: Scalable and Controllable Routing via Pre-hoc Reasoning},
+   author={Qi Cao and Shuhao Zhang and Ruizhe Zhou and Ruiyi Zhang and Peijia Qin and Pengtao Xie},
+   year={2026},
+   eprint={2601.22323},
+   archivePrefix={arXiv},
+   primaryClass={cs.LG},
+   url={https://arxiv.org/abs/2601.22323},
+ }
+ ```

+ ## License

+ Apache 2.0
assets/1.png ADDED

Git LFS Details

  • SHA256: 81c01b698ca8cc157d1886549c45f0a36c69774fd17cd3a55c69442dd6155d0b
  • Pointer size: 131 Bytes
  • Size of remote file: 440 kB