This repository contains a LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 for structured output generation tasks (JSON, YAML, XML, TOML, CSV), with improved template alignment and response-focused learning.
The chat template was applied with `add_generation_prompt=True` during training to match inference behavior.

| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3-4B-Instruct-2507 |
| Method | QLoRA (4-bit quantization) |
| Dataset | u-10bei/structured_data_with_cot_dataset_512_v2 |
| Max Sequence Length | 512 |
| Epochs | 1 |
| Learning Rate | 2e-6 |
| LoRA Rank (r) | 64 |
| LoRA Alpha | 128 |
| LoRA Dropout | 0.0 |
| LoRA Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Batch Size | 2 (per device) |
| Gradient Accumulation | 16 |
| Effective Batch Size | 32 |
| Warmup Ratio | 0.1 |
| Weight Decay | 0.05 |
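The table above can be expressed as a `peft`/`transformers` configuration. This is a sketch under stated assumptions: the LoRA values come directly from the table, but the NF4 quantization details are common QLoRA defaults, not confirmed by this card (which only states "4-bit quantization").

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization for QLoRA. NF4 with double quantization and fp16
# compute are typical defaults; assumed here, not stated in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# LoRA settings taken directly from the table above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```

Note that the effective batch size of 32 follows from 2 samples per device times 16 gradient-accumulation steps on a single device.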
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Model IDs
base_model_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "astom-M/qwen3-4b-struct-eval-v3-colab"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Load base model
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply LoRA adapter
model = PeftModel.from_pretrained(model, adapter_id)

# Prepare input
messages = [
    {"role": "system", "content": "You are a helpful assistant that generates structured outputs."},
    {"role": "user", "content": "Generate a JSON object with name and age fields for a person named Alice who is 25 years old."},
]

# Apply chat template
input_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # Important: matches training
)

# Tokenize
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

# Generate
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7,
    do_sample=True,
    top_p=0.9,
)

# Decode
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
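Because the adapter targets structured formats, it is worth validating the decoded text before using it downstream. A minimal sketch using only the standard library; the `response` string below is a hypothetical model output, not a captured generation:

```python
import json

def extract_json(text: str) -> dict:
    """Extract and parse the first JSON object found in a model response.

    Models sometimes wrap JSON in prose or code fences, so scan for the
    outermost brace pair instead of parsing the raw string directly.
    """
    start = text.find("{")
    end = text.rfind("}")
    if start == -1 or end == -1 or end < start:
        raise ValueError("no JSON object found in response")
    return json.loads(text[start:end + 1])

# Hypothetical model output (not a captured generation).
response = 'Here is the JSON:\n```json\n{"name": "Alice", "age": 25}\n```'
parsed = extract_json(response)
print(parsed["name"], parsed["age"])  # Alice 25
```

A stricter pipeline could also check the parsed object against a schema (e.g. required keys and types) and retry generation on failure.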
This model was trained in full compliance with competition rules, using the u-10bei/structured_data_with_cot_dataset_512_v2 dataset.

If you use this adapter, please cite the original Qwen3 model:
```bibtex
@article{qwen3-2507,
  title  = {Qwen3 Technical Report},
  author = {Qwen Team},
  year   = {2025},
  url    = {https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507}
}
```
This adapter inherits the license from the base model Qwen/Qwen3-4B-Instruct-2507 (Apache 2.0).