This is a Large Language Model (LLM) fine-tuned to solve math problems with detailed, step-by-step explanations and accurate answers. The base model is Llama 3.1 with 8 billion parameters, loaded in 4-bit and fine-tuned with QLoRA (Quantized Low-Rank Adaptation) and PEFT (Parameter-Efficient Fine-Tuning) through the Unsloth framework.
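
As a quick illustration, a 4-bit model like this can be loaded and queried with Unsloth's `FastLanguageModel`. This is a minimal sketch, not the card's official usage code: `MODEL_ID` is a placeholder (the card does not state the repo id), and the sequence length and generation settings are illustrative assumptions.

```python
from unsloth import FastLanguageModel

MODEL_ID = "your-username/your-math-solver-model"  # placeholder, replace with the actual repo id

# Load the fine-tuned model in 4-bit, matching the QLoRA setup described above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=MODEL_ID,
    max_seq_length=2048,   # assumed; use the value the model was trained with
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

prompt = "Solve step by step: If 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Running this requires a CUDA GPU with the `unsloth` package installed.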
Other Homework Solver Models include [Science_Homework_Solver_Llama318B](https://huggingface.co/justsomerandomdude264/Science_Homework_Solver_Llama318B) and [SocialScience_Homework_Solver_Llama318B](https://huggingface.co/justsomerandomdude264/SocialScience_Homework_Solver_Llama318B).
## Model Details
- **Base Model**: Llama 3.1 (8 Billion parameters)
- **Training Environment**: Google Colab (free tier), NVIDIA T4 GPU (16GB VRAM), 12GB RAM
- **Dataset Used**: TIGER-Lab/MathInstruct (Yue, X., Qu, X., Zhang, G., Fu, Y., Huang, W., Sun, H., Su, Y., & Chen, W. (2023). MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning. *arXiv preprint arXiv:2309.05653*), 560 selected math problems and solutions
- **Git Repo**: [justsomerandomdude264/Homework_Solver_LLM](https://github.com/justsomerandomdude264/Homework_Solver_LLM)
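
The QLoRA + PEFT setup listed above can be sketched as follows. This is a hedged illustration of the general technique, not the training script actually used: the base-model repo id, LoRA rank, alpha, and target modules are assumptions.

```python
from unsloth import FastLanguageModel

# Load the Llama 3.1 8B base model pre-quantized to 4-bit (QLoRA-style).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-bnb-4bit",  # assumed base repo id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters via PEFT: only the small low-rank matrices are trained,
# which is what makes fine-tuning feasible on a free-tier T4 GPU.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                                              # LoRA rank (assumed)
    lora_alpha=16,                                     # scaling factor (assumed)
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
```

The adapter-augmented `model` would then be passed to a trainer (e.g. TRL's `SFTTrainer`) together with the MathInstruct samples.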
## Capabilities