# CLI-LoRA-TinyLLaMA
Fine-tuned **TinyLlama-1.1B** model using **QLoRA** on a custom CLI Q&A dataset (Git, Bash, tar/gzip, grep, venv) for the Fenrir Security Internship Task.
---
## Project Overview
- **Base model**: [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
- **Fine-tuning method**: QLoRA
- **Libraries**: `transformers`, `peft`, `trl`, `datasets`
- **Training file**: [`training.ipynb`](./training.ipynb)
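The core of the QLoRA setup can be sketched as two config objects: a 4-bit quantization config for the frozen base model and a LoRA config for the trainable adapters. The hyperparameters below (`r`, `lora_alpha`, `target_modules`, dropout) are illustrative assumptions, not the exact values used in `training.ipynb`:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Low-rank adapters injected into the attention projections
lora_config = LoraConfig(
    r=16,                 # adapter rank (assumed)
    lora_alpha=32,        # scaling factor (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```

`bnb_config` is passed to `AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)`, and `lora_config` goes to `trl`'s `SFTTrainer` via its `peft_config` argument.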
---
## Objective
To fine-tune a small language model on real-world command-line Q&A data (no LLM-generated text) and build a command-line chatbot agent capable of providing accurate CLI support.
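Each Q&A pair must be rendered into the base model's chat format before training and inference. A minimal sketch, assuming TinyLlama's Zephyr-style template (`<|system|>` / `<|user|>` / `<|assistant|>` turns terminated by `</s>`); the system prompt and helper name are illustrative:

```python
def format_example(question: str, answer: str,
                   system: str = "You are a helpful CLI assistant.") -> str:
    """Render one Q&A pair in TinyLlama's Zephyr-style chat format."""
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{question}</s>\n"
        f"<|assistant|>\n{answer}</s>"
    )

sample = format_example(
    "How to stash changes in Git?",
    "Use `git stash` to save your changes temporarily.",
)
print(sample)
```

In practice, `tokenizer.apply_chat_template` yields the same layout without hand-writing the special tokens.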
---
## Files Included
- `training.ipynb`: Full training notebook (cleaned, token-free)
- `adapter_config.json`: LoRA adapter configuration
- `adapter_model.safetensors`: Trained adapter weights
- `eval_logs.json`: Sample evaluation results (accuracy, loss, etc.)
- `README.md`: This file
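To use the shipped adapter, load the base model and apply `adapter_config.json` / `adapter_model.safetensors` on top. A minimal sketch, assuming the adapter files sit in the repo root and that `transformers` and `peft` are installed (weights are downloaded from the Hugging Face Hub on first run):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Attach the trained LoRA adapter from this repository's root directory
model = PeftModel.from_pretrained(model, ".")
model.eval()
```

`model.merge_and_unload()` can then fold the adapter into the base weights for adapter-free deployment.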
---
## Results
| Metric | Value |
|--------------|---------------|
| Training Loss| *<your value>* |
| Eval Accuracy| *<your value>* |
| Epochs | *<your value>* |
---
## Sample Q&A
```bash
Q: How to stash changes in Git?
A: Use `git stash` to save your changes temporarily. Retrieve later using `git stash pop`.
```