# Qwen3 Forward LoRA for Self-Alignment with Instruction Backtranslation
This repository contains a LoRA-finetuned forward instruction-following model based on Qwen/Qwen3-1.7B.
The model was trained on a curated synthetic instruction-response dataset produced by a backward model and a self-curation pipeline, following the approach of the paper *Self-Alignment with Instruction Backtranslation*.
The forward model learns:
p(y | x)
where:
- x = instruction
- y = response
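Concretely, each (x, y) pair is rendered into a single prompt string before training or inference. The helper below is a sketch inferred from the inference prompt shown later in this card; `format_example` is a hypothetical name, and the exact template used during training may differ.

```python
# Hypothetical helper: render an (instruction, response) pair in the
# "### Instruction / ### Response" layout assumed from the inference example.
def format_example(instruction: str, response: str = "") -> str:
    """Build a single prompt string for the forward model p(y | x)."""
    return (
        "You are a helpful assistant.\n"
        "### Instruction:\n"
        f"{instruction}\n"
        "### Response:\n"
        f"{response}"
    )

print(format_example("Explain LoRA in one sentence."))
```

At inference time the response field is left empty, and the model completes the text after the `### Response:` marker.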
## Model Details

### Model Description
This model is a LoRA adapter trained on top of Qwen/Qwen3-1.7B for instruction-following generation.
It was fine-tuned using high-quality synthetic instruction-response pairs created by:
- training a backward model on OpenAssistant-Guanaco
- generating instructions from LIMA responses
- filtering the generated pairs with prompt-based quality scoring
This repository corresponds to Step 4 of the assignment pipeline.
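The self-curation step above can be sketched as a simple filter over judged pairs. Everything below is illustrative: `parse_score`, `curate`, and the `CUTOFF` value are hypothetical names, and the actual scoring prompt, rating scale, and threshold used to build this dataset may differ.

```python
import re

# Keep only pairs rated at the top of an assumed 1-5 quality scale,
# as in the self-curation step of Instruction Backtranslation.
CUTOFF = 5  # hypothetical threshold; the real pipeline's cutoff may differ

def parse_score(judge_output: str):
    """Extract the first rating like 'Score: 4' from a judge model's reply."""
    match = re.search(r"[Ss]core:\s*(\d)", judge_output)
    return int(match.group(1)) if match else None

def curate(pairs_with_judgments):
    """Keep (instruction, response) pairs whose judged score meets the cutoff."""
    kept = []
    for instruction, response, judgment in pairs_with_judgments:
        score = parse_score(judgment)
        if score is not None and score >= CUTOFF:
            kept.append((instruction, response))
    return kept
```

Pairs whose judgment cannot be parsed are dropped rather than kept, which biases the filter toward precision over recall.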
- Developed by: Hengming
- Funded by: Course assignment / academic use
- Shared by: Hengming
- Model type: Causal language model with LoRA adapters
- Language(s): English
- License: Apache-2.0 for this repository; please also follow the terms of the base model and datasets
- Finetuned from model: Qwen/Qwen3-1.7B
### Model Sources
- Repository: https://huggingface.co/Hengming0805/qwen3-forward-lora-assignment3
- Base model: https://huggingface.co/Qwen/Qwen3-1.7B
- Curated dataset: https://huggingface.co/datasets/Hengming0805/self-alignment-curated-assignment3
- Paper: Self-Alignment with Instruction Backtranslation (arXiv:2308.06259)
## Uses

### Direct Use
This model is intended for:
- instruction-following generation
- assignment-scale experiments in self-alignment
- testing whether curated synthetic data can improve a forward model
### Downstream Use
This model can be used as a lightweight adapter for generating responses to user instructions in an instruction-tuning setup.
### Out-of-Scope Use
This model is not intended for:
- production deployment
- high-stakes use cases
- legal, medical, or financial advice
- factual QA requiring strong reliability
- broad safety-sensitive automation
## Bias, Risks, and Limitations
This model has important limitations:
- The uploaded version was trained on a very small curated synthetic dataset of only 18 high-quality examples.
- It may overfit to formatting patterns or instruction styles in the curated data.
- It may produce incomplete, repetitive, or generic responses.
- It inherits biases and limitations from the base model and the synthetic data pipeline.
### Recommendations
Users should:
- regard this as an assignment-scale demonstration rather than a production-ready system
- manually inspect outputs
- avoid high-risk applications
- expect limited generalization due to the small training dataset
## How to Get Started with the Model

Use the code below to load the base model, attach this LoRA adapter, and generate a response.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_id = "Qwen/Qwen3-1.7B"
adapter_id = "Hengming0805/qwen3-forward-lora-assignment3"

# Load the base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    trust_remote_code=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Prompt in the same "### Instruction / ### Response" layout used for training.
prompt = """You are a helpful assistant.
### Instruction:
Explain the difference between RAM and ROM in simple words.
### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=120,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```