---
library_name: transformers
tags:
- llama
- lora-merged
- math-tutor
license: llama3.1
language:
- en
base_model:
- Sashank-810/LFT_Final_FineTuned_Increased_Metrics
---
# LFT + IDC Math Tutor (LoRA-merged)
Summary: A math-tutor student model with an integrated IDC critic adapter, produced by merging the LoRA weights into the Llama-3.1-8B-Instruct base. Intended for math tutoring and doubt clarification.
## Model Details
- Base: meta-llama/Llama-3.1-8B-Instruct
- Finetuned for: math tutoring + IDC-style critique/fix
- Precision: FP16/BF16 compatible
- Hardware: Single-GPU inference recommended
## Intended Use
- Educational tutoring, step-by-step math help, critique-and-fix of student answers.
## Out-of-Scope
- Safety-sensitive, legal, medical, or any harmful/abusive use.
## How to Use (Transformers)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
name = "Sashank-810/IDC_Global_Merged"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype="auto", device_map="auto")
prompt = "Explain the derivative of sin(x)."
out = model.generate(**tok(prompt, return_tensors="pt").to(model.device), max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```
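Since the base model is instruction-tuned, prompts are best routed through the Llama 3.1 chat template. The exact critique-and-fix prompt format used during finetuning is not documented on this card, so the system/user messages below (continuing from the snippet above) are only an illustrative sketch:
```python
# Illustrative only: the system/user phrasing is an assumption, not the training format.
messages = [
    {"role": "system", "content": "You are a patient math tutor. Identify mistakes in the student's work and fix them step by step."},
    {"role": "user", "content": "A student claims d/dx sin(x) = -cos(x). Critique and correct this."},
]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))  # decode only the new tokens
```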
## How to Use (vLLM)
```bash
python -m vllm.entrypoints.api_server \
--model Sashank-810/IDC_Global_Merged \
--dtype auto \
--tensor-parallel-size 1
```
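Once the server is running, it can be queried over HTTP. The demo `api_server` entrypoint exposes a plain `/generate` endpoint (launch `vllm.entrypoints.openai.api_server` instead if you want an OpenAI-compatible API); a minimal request sketch, assuming the default port 8000:
```python
import requests

# Assumes the server launched above is listening on localhost:8000.
resp = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": "Explain the derivative of sin(x).", "max_tokens": 128, "temperature": 0.2},
)
print(resp.json())
```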
## License & Responsible Use
- Use responsibly for education; avoid harmful or malicious outputs.
---
# 📊 Evaluation Results (Llama-3.1-8B-Instruct Base vs Fine‑Tuned)
## ✅ Structured Evaluation Summary
**Total Questions:** 2617
### Base Model Performance
- **Correct:** 625
- **Accuracy:** 23.88%
### Fine‑Tuned Model Performance
- **Correct:** 916
- **Accuracy:** 35.00%
### 🎯 Improvement
- **Accuracy Gain:** +11.12 percentage points
- **Improved Answers:** 483
- **Regressed Answers:** 192 (see the bookkeeping sketch below)
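These counts come from a per-question comparison of the two runs. A minimal sketch of the bookkeeping, assuming aligned boolean correctness lists from your own evaluation logs (hypothetical inputs, not shipped with this card):
```python
def compare_runs(base_correct, ft_correct):
    """Summarize two eval runs given aligned per-question correctness booleans."""
    n = len(base_correct)
    base_acc = 100 * sum(base_correct) / n
    ft_acc = 100 * sum(ft_correct) / n
    improved = sum(f and not b for b, f in zip(base_correct, ft_correct))
    regressed = sum(b and not f for b, f in zip(base_correct, ft_correct))
    return {
        "base_accuracy_pct": round(base_acc, 2),  # 23.88 above
        "ft_accuracy_pct": round(ft_acc, 2),      # 35.00 above
        "gain_pp": round(ft_acc - base_acc, 2),   # +11.12 above
        "improved": improved,                     # 483 above
        "regressed": regressed,                   # 192 above
    }
```
As a sanity check, improved minus regressed (483 − 192 = 291) matches the net change in correct answers (916 − 625 = 291).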
---
# 📝 Text Generation Metrics
## Base Model
- **BLEU:** 38.24
- **ROUGE-1:** 0.2947
- **ROUGE-2:** 0.0934
- **ROUGE-L:** 0.2936
- **METEOR:** 0.1633
<details>
<summary>Full Base Model Metrics</summary>

```json
{
"bleu": {
"score": 38.24172039700722,
"counts": [2214, 1378, 1110, 875],
"totals": [3765, 2033, 1740, 1462],
"precisions": [58.80, 67.78, 63.79, 59.85],
"bp": 0.612276654279684,
"sys_len": 3765,
"ref_len": 5612
},
"rouge": {
"rouge1": 0.29469964396406867,
"rouge2": 0.09342261992242887,
"rougeL": 0.2935582970928785,
"rougeLsum": 0.2940696059343364
},
"meteor": {
"meteor": 0.16327044830765994
}
}
```
</details>
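One detail worth noting when comparing the two BLEU scores: the base model's outputs total 3,765 tokens against 5,612 reference tokens, so its score carries a heavy brevity penalty (bp ≈ 0.61), while the fine-tuned model generates slightly more than the references and is not penalized (bp = 1.0). The penalty follows the standard BLEU formula and can be checked directly:
```python
import math

# Standard BLEU brevity penalty: exp(1 - ref_len / sys_len) when the system
# output is shorter than the reference, else 1.0.
sys_len, ref_len = 3765, 5612
bp = math.exp(1 - ref_len / sys_len) if sys_len < ref_len else 1.0
print(bp)  # ~0.6123, matching "bp" in the JSON above
```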
---
## Fine‑Tuned Model
- **BLEU:** 59.31
- **ROUGE-1:** 0.4423
- **ROUGE-2:** 0.1247
- **ROUGE-L:** 0.4424
- **METEOR:** 0.2478
<details>
<summary>Full Fine‑Tuned Metrics</summary>

```json
{
"bleu": {
"score": 59.31334282676538,
"counts": [3324, 2048, 1600, 1201],
"totals": [5734, 3124, 2659, 2219],
"precisions": [57.97, 65.55, 60.17, 54.12],
"bp": 1.0,
"sys_len": 5734,
"ref_len": 5612
},
"rouge": {
"rouge1": 0.4423208144549374,
"rouge2": 0.1247048391679649,
"rougeL": 0.4424399985443162,
"rougeLsum": 0.4414589284956114
},
"meteor": {
"meteor": 0.24778242330127054
}
}
```
</details>
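The metric names and JSON shapes above match the output of the Hugging Face `evaluate` library, though the card does not state how they were produced. A minimal reproduction sketch under that assumption, with placeholder prediction/reference lists:
```python
import evaluate

preds = ["The derivative of sin(x) is cos(x)."]  # placeholder: model outputs
refs = ["d/dx sin(x) = cos(x)."]                 # placeholder: gold answers, aligned with preds

bleu = evaluate.load("sacrebleu").compute(predictions=preds, references=[[r] for r in refs])
rouge = evaluate.load("rouge").compute(predictions=preds, references=refs)
meteor = evaluate.load("meteor").compute(predictions=preds, references=refs)

print(bleu["score"], bleu["bp"])  # BLEU and its brevity penalty
print(rouge["rougeL"], meteor["meteor"])
```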