Nanbeige4-3B-Base: Blind Spots and Improvement Plan
In this work, I consider the following model: https://huggingface.co/Nanbeige/Nanbeige4-3B-Base
The goal is to find some of its blind spots and explain what failed, why it likely failed, and how the model could be improved with supervised fine-tuning and RLHF.
Load model + tokenizer
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "Nanbeige/Nanbeige4-3B-Base"

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
    use_fast=False,
)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32,
    device_map="auto" if torch.cuda.is_available() else None,
)
```
Generation helper
```python
def generate(prompt: str, max_new_tokens: int = 64) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy decoding for reproducible outputs
            pad_token_id=tokenizer.eos_token_id,
            eos_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens, not the prompt.
    new_tokens = output_ids[0, inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
```
Now that the model and the generation pipeline are working, we define a set of test cases and see how the model responds to them.
```python
CASES = [
    {"case_id": "arith_1", "category": "arithmetic", "prompt": "Answer with digits only. 17*19 =", "expected": "323"},
    {"case_id": "arith_2", "category": "arithmetic", "prompt": "Answer with digits only. 1000-487 =", "expected": "513"},
    {"case_id": "calendar_1", "category": "calendar", "prompt": "Return only the weekday name. 2024-02-29 was a", "expected": "Thursday"},
    {"case_id": "string_1", "category": "string_ops", "prompt": "Output only the reversed string: drawer", "expected": "reward"},
    {"case_id": "code_1", "category": "code_reasoning", "prompt": "Give directly only the result. What is [10,20,30][1] in Python?", "expected": "20"},
    {"case_id": "sort_1", "category": "sorting", "prompt": "Sort ascending and output only comma-separated numbers: 9,1,4,1,5", "expected": "1,1,4,5,9"},
]
```
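The cases can then be scored by strict exact match after light normalization. Here is a minimal sketch (the `normalize` and `score_case` helper names are my own, not part of any existing harness):

```python
import re

def normalize(text: str) -> str:
    """Trim whitespace and trailing periods so '323.' still counts
    as an exact match for '323', then lowercase."""
    return re.sub(r"[.\s]+$", "", text.strip()).lower()

def score_case(model_output: str, expected: str) -> bool:
    """Strict scoring: the output must be nothing but the answer."""
    return normalize(model_output) == normalize(expected)
```

With the `generate` helper from above, the evaluation loop is then just `score_case(generate(case["prompt"]), case["expected"])` for each case in `CASES`.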
From the evaluation we observe the following main failure modes:
- Arithmetic mistakes on simple deterministic expressions.
- Calendar/date reasoning mistakes.
- Format-following issues (extra text when only a number/token was requested).
- Symbolic precision issues (counting/sorting/string transforms).
- Occasional factual misses.
In short: the model can be fluent, but reliability drops on tasks requiring exactness.
Fine-Tuning Plan
We may use supervised fine-tuning (SFT) with strict output constraints:
- Deterministic tasks: arithmetic, counting, sorting, string transforms, date/calendar.
- Format-control tasks: number-only, one-token-only, JSON-only answers.
- Short factual QA with verified labels.
- Basic code reasoning tasks with executable ground truth.
This approach works because we are targeting the model’s real errors instead of training on random generic data. Since we observed failures in arithmetic, counting, and formatting, we train on those exact patterns so the model gets focused practice where it is weak. We also use exact, verifiable labels to reduce ambiguity, which helps the model learn cleaner decision boundaries. Finally, by adding format-constrained tasks (number-only, JSON-only, one-token answers), we teach the model to be correct and to answer in the exact structure we request.
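For the format-constrained tasks, each verified (prompt, answer) pair can be wrapped in a chat-style SFT record. A minimal sketch; the exact schema is an assumption and should be adapted to whatever trainer format you use:

```python
import json

def to_sft_record(prompt: str, expected: str) -> dict:
    """Wrap a verified (prompt, answer) pair in a chat-style SFT record.
    The 'messages' schema here is illustrative, not a fixed standard."""
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": expected},
        ]
    }

record = to_sft_record("Answer with digits only. 17*19 =", "323")
print(json.dumps(record))
```

Because the assistant turn contains only the bare answer, training on such records directly rewards the "no extra text" behavior we want.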
How to Build the Dataset
Practical dataset pipeline:
- We start by programmatically generating deterministic examples (math, date, string, sort, count). For instance, we can generate arithmetic problems ( “What is 47 × 23?” ), date calculations ( “What day of the week was 12 March 1998?” ) or small algorithmic tasks ( “Sort the list [12, 4, 19, 3]” ).
- We may also mine model mistakes and add corrected versions as hard examples. For example, if the model repeatedly makes mistakes on percentage calculations or edge cases in code, we keep those prompts and attach the correct solution.
- We can add a curated factual slice from trusted sources. This might include questions derived from textbooks, documentation or domain experts. For instance: “Summarize the key idea behind Bayes’ theorem.” ...
- Another idea is to validate labels with scripts (exact match / regex validators). For example, a validator could check that a numerical answer matches the computed value or that generated code passes unit tests.
- We may use LLMs to generate synthetic tasks and expand the dataset, as a strong language model can help generate more problem variations.
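The first step above, programmatic generation with computed labels, can be sketched as follows. The generator functions are illustrative assumptions; the key point is that every label is derived by computation, so it is correct by construction:

```python
import random
from datetime import date

def make_arithmetic(rng):
    a, b = rng.randint(10, 99), rng.randint(10, 99)
    return {"prompt": f"Answer with digits only. {a}*{b} =", "expected": str(a * b)}

def make_weekday(rng):
    # Days 1-28 avoid month-length edge cases in this simple sketch.
    d = date(rng.randint(1990, 2024), rng.randint(1, 12), rng.randint(1, 28))
    return {"prompt": f"Return only the weekday name. {d.isoformat()} was a",
            "expected": d.strftime("%A")}

def make_sort(rng):
    nums = [rng.randint(0, 99) for _ in range(5)]
    return {"prompt": "Sort ascending and output only comma-separated numbers: "
                      + ",".join(map(str, nums)),
            "expected": ",".join(map(str, sorted(nums)))}

rng = random.Random(0)  # fixed seed for a reproducible dataset
dataset = [make(rng) for make in (make_arithmetic, make_weekday, make_sort)
           for _ in range(3)]
```

Scaling the `range(3)` up and adding more generator functions gives an arbitrarily large deterministic slice at essentially zero labeling cost.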
We should expect the first clear improvements with about 10k-30k high-quality examples. To get more stable performance across different categories, we likely need around 50k-150k examples.
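The script-based label validation mentioned above can be as simple as a pair of `re.fullmatch` checks. A sketch, assuming number-only and comma-separated-list output formats like the ones in `CASES`:

```python
import re

def validate_number_only(answer: str, expected_value: int) -> bool:
    """Accept the answer only if it is exactly one integer
    matching the computed value."""
    m = re.fullmatch(r"-?\d+", answer.strip())
    return m is not None and int(m.group()) == expected_value

def validate_csv_numbers(answer: str, expected: list) -> bool:
    """Accept only a bare comma-separated list equal to the expected one."""
    if not re.fullmatch(r"\d+(,\d+)*", answer.strip()):
        return False
    return [int(x) for x in answer.strip().split(",")] == expected
```

Running every candidate training example through such validators before it enters the dataset keeps label noise close to zero on the deterministic slices.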
RLHF Helps
After SFT, RLHF can improve output behavior, especially format compliance and concise answers.
We would need to define an appropriate reward function:
- Reward correct final answer.
- Reward correct output format (no extra text).
- Penalize verbosity when prompt asks for short outputs.
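The three criteria above can be combined into a single scalar reward. A toy sketch; the weights and the length threshold are illustrative assumptions, not tuned values:

```python
def reward(answer: str, expected: str, max_len: int = 16) -> float:
    """Toy reward: +1 for containing the correct final answer,
    +0.5 when the output is nothing but the answer (strict format
    compliance), minus a small penalty per character of verbosity."""
    r = 0.0
    if expected in answer:
        r += 1.0  # correct final answer present
    if answer.strip() == expected:
        r += 0.5  # no extra text at all
    r -= 0.01 * max(0, len(answer) - max_len)  # verbosity penalty
    return r
```

Under this reward, a bare "323" scores strictly higher than "The answer is 323", which is exactly the preference we want the policy to internalize.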
We can thus expect:
- Better reliability on strict prompts.
- Cleaner outputs for evaluation-style tasks.
- Improved consistency without changing the task definition.