# FLAN-T5-Small Fine-Tuned on Red Hat Documentation

## Overview

Fine-tuned FLAN-T5-Small for question answering on Red Hat documentation, using LoRA and 4-bit quantization. Trained on `redhat-docs_dataset` (55,741 rows).
## Model Details

- Base Model: `google/flan-t5-small`
- Fine-Tuning: LoRA (`r=8`, `alpha=32`, `target_modules=["q", "v"]`)
- Quantization: 4-bit (`nf4`)
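The LoRA and quantization settings above can be written out as a `peft`/`bitsandbytes` configuration. This is a sketch assuming the standard `LoraConfig` and `BitsAndBytesConfig` APIs; it is not the exact script used to train this model.

```python
from transformers import BitsAndBytesConfig
from peft import LoraConfig, TaskType

# 4-bit NF4 quantization, matching the Model Details above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
)

# LoRA on the T5 attention query/value projections (r=8, alpha=32).
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,
    lora_alpha=32,
    target_modules=["q", "v"],
)
```

These objects would be passed to `AutoModelForSeq2SeqLM.from_pretrained(..., quantization_config=bnb_config)` and `peft.get_peft_model(model, lora_config)` respectively.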
## Dataset

- Fields: `title`, `content`, `command`, `url`
- Artifacts: `data/redhat-docs_dataset.jsonl`, `data/formatted_dataset.jsonl`, `data/tokenized_dataset.jsonl`
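The step from `redhat-docs_dataset.jsonl` to `formatted_dataset.jsonl` can be sketched with the standard library alone. The field names come from the card; the prompt template itself is an assumption, since the card does not state the exact format used.

```python
import json

def format_record(rec: dict) -> dict:
    """Turn one raw redhat-docs record into an input/target pair.

    Field names (title, content, command, url) are from the model card;
    the prompt wording is a hypothetical template, not the card's own.
    """
    prompt = (
        "Answer based on Red Hat documentation.\n"
        f"Title: {rec['title']}\n"
        f"Question: {rec['content']}"
    )
    return {"input": prompt, "target": rec.get("command") or ""}

def format_file(src: str, dst: str) -> None:
    # Stream one JSON object per line (JSONL) and write formatted pairs.
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:
            fout.write(json.dumps(format_record(json.loads(line))) + "\n")
```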
## Training

- Hardware: NVIDIA T4 GPU, CUDA 11.8
- Epochs: 2
- Effective Batch Size: 32 (4 per device × 8 gradient accumulation steps)
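The batch-size arithmetic above maps directly onto `transformers` training arguments. This is a minimal sketch of the relevant fields only; the output directory is a placeholder and other hyperparameters are omitted.

```python
from transformers import TrainingArguments

# 4 per-device × 8 accumulation steps = effective batch size of 32.
args = TrainingArguments(
    output_dir="flan-t5-small-redhat-docs",  # placeholder path
    num_train_epochs=2,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
)
```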
## Usage

Load the LoRA adapter with `peft` on top of the base model via `transformers`, then query it with Red Hat documentation questions.
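A minimal inference sketch follows. The adapter repository name is a placeholder, and the prompt template is a hypothetical choice; only the base model name and libraries are given by the card.

```python
def build_prompt(question: str) -> str:
    # Hypothetical prompt template; the card does not specify one.
    return f"Answer the following Red Hat documentation question:\n{question}"

def main() -> None:
    # Heavy imports kept inside main() so the prompt helper is importable
    # without torch/transformers/peft installed.
    import torch
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
    # Placeholder adapter repo name -- replace with the actual adapter path.
    model = PeftModel.from_pretrained(base, "your-username/flan-t5-small-redhat-docs")
    tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

    inputs = tokenizer(
        build_prompt("How do I restart a systemd service?"),
        return_tensors="pt",
    )
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Call `main()` to run inference once the adapter path points at the published weights.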
## License

MIT License. Verify the dataset's licensing separately.
## Contact

See the GitHub repository.