DSAA6000Q Forward LoRA Adapter

This repository contains a LoRA adapter produced for DSAA6000Q Assignment 3 using the self-alignment pipeline from "Self-Alignment with Instruction Backtranslation".

Summary

A forward instruction-following model trained on the curated self-alignment dataset.

Base model

  • Qwen/Qwen3-1.7B

Training data

  • Dataset repo: zhenchonghu/dsaa6000q-a3-curated
  • Local curated file: /kaggle/working/dsaa6000q_outputs/dsaa6000q_a3/curated/curated_pairs.jsonl
  • Training rows: 1
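As a rough illustration of what a curated JSONL row might look like, the sketch below builds and parses a single instruction/response record. The field names ("instruction", "output") and the example text are assumptions for illustration only; this card does not document the actual schema of curated_pairs.jsonl.

```python
import json

# Hypothetical record layout for a curated pair; the real field names
# in curated_pairs.jsonl are an assumption, not confirmed by this card.
record = {
    "instruction": "Explain what a LoRA adapter is.",
    "output": "A LoRA adapter stores low-rank weight updates trained on top of a frozen base model.",
}

jsonl_text = json.dumps(record) + "\n"

# Parsing mirrors how a trainer would count rows in a JSONL file:
# one JSON object per non-empty line.
rows = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
print(len(rows))  # -> 1
```

Each line of a JSONL file is an independent JSON object, so the row count above is simply the number of non-empty lines.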

Notes

  • This is a PEFT LoRA adapter, not a fully merged checkpoint.
  • The model was trained with a standalone Kaggle-compatible notebook.