
T-Bench Qwen SFT Multi-Task NAT v1

Model Description

This model is fine-tuned from Qwen3-8B using Negative-Aware Training (NAT) on multiple Terminal-Bench tasks. The checkpoint is distributed as BF16 safetensors (8B parameters).

Training Details

  • Base Model: Qwen/Qwen3-8B
  • Training Method: Negative-Aware Training (NAT)
  • Tasks: 5 tasks (fix-git, log-summary-date-ranges, pypi-server, regex-log, cancel-async-tasks)
  • Epochs: 300
  • Learning Rate: 5e-5
  • Batch Size: 2
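
For orientation, these hyperparameters correspond to a standard transformers TrainingArguments configuration roughly as sketched below (the output path is hypothetical; the exact training script is not published):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tbench-qwen-sft-multitask-nat-v1",  # hypothetical output path
    num_train_epochs=300,
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    bf16=True,  # matches the BF16 checkpoint weights
)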

Dataset Composition

  • Total samples: 26 per epoch
  • Positive examples: 20 (4 per task)
  • Negative examples: 6 (from fix-git only)
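
A minimal sketch of how such a per-epoch mix could be assembled (the on-disk layout and record format are hypothetical):

import json, pathlib

def load_examples(path):
    # One JSON trajectory per file (hypothetical layout).
    return [json.loads(p.read_text()) for p in sorted(pathlib.Path(path).glob("*.json"))]

positive = load_examples("data/positive")  # 20 successful trajectories, 4 per task
negative = load_examples("data/negative")  # 6 failed fix-git trajectories
train_set = positive + negative            # 26 samples seen each epoch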

NAT Strategy

Negative examples are included so the model learns to avoid universal anti-patterns (a labeling sketch follows the list):

  1. Hallucinated arguments (message_title, message_description)
  2. Looping behavior after task completion
  3. Wrong command format
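
One way NAT can expose these anti-patterns is prompt conditioning: each trajectory is prefixed with a hint marking it as successful or failed, and the model is fine-tuned on both. This is a minimal sketch in that spirit; the hint strings are hypothetical, not the referenced paper's exact wording:

POSITIVE_HINT = "You are an expert agent; the following solves the task correctly."
NEGATIVE_HINT = "The following is a failed attempt at the task."

def to_training_text(trajectory: str, is_negative: bool) -> str:
    # Prefix each trajectory so the model can tell successful and
    # failed behavior apart during fine-tuning.
    hint = NEGATIVE_HINT if is_negative else POSITIVE_HINT
    return f"{hint}\n{trajectory}"

# At inference time only the positive hint is used, steering generation
# away from the anti-patterns seen under the negative hint.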

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Aznaur/tbench-qwen-sft-multitask-nat-v1",
    torch_dtype="auto",  # loads the checkpoint's native BF16 weights
    device_map="auto",   # requires accelerate
)
tokenizer = AutoTokenizer.from_pretrained("Aznaur/tbench-qwen-sft-multitask-nat-v1")
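
A minimal generation example on top of the snippet above, using the standard chat-template API (the prompt is illustrative):

messages = [{"role": "user", "content": "Show the date range covered by this log file."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))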

Performance

The model was trained for 300 epochs with NAT, with the goal of improving tool usage and avoiding the failure patterns listed above. No benchmark scores are reported here yet.

Paper Reference

Based on "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language Models as Agents" (arXiv:2402.11651).
