Update README.md
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct), trained on a synthetic dataset of C++ → Python code translations from Codeforces.

GitHub repo: [DemoVersion/cf-llm-finetune](https://github.com/DemoVersion/cf-llm-finetune)

Dataset creation: [DATASET.md](https://github.com/DemoVersion/cf-llm-finetune/blob/main/DATASET.md)

Training: [TRAIN.md](https://github.com/DemoVersion/cf-llm-finetune/blob/main/TRAIN.md)

Dataset on Hugging Face: [demoversion/cf-cpp-to-python-code-generation](https://huggingface.co/datasets/demoversion/cf-cpp-to-python-code-generation)

For dataset generation, training, and inference, see the [GitHub repo](https://github.com/DemoVersion/cf-llm-finetune).

**Main Medium article**: [Toward fine-tuning Llama 3.2 using PEFT for Code Generation](https://medium.com/@haddadhesam/towards-fine-tuning-llama-3-2-using-peft-for-code-generation-63e3991c26db)

**Medium article on GGUF inference**: [How to run inference with the GGUF format](https://haddadhesam.medium.com/one-file-to-rule-them-all-gguf-for-local-llm-testing-and-deployment-208b85934434)

## Model description

A lightweight LLaMA 3.2 model fine-tuned with LoRA adapters to translate competitive-programming (ICPC-style) C++ code into Python.
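Since the base model is an Instruct variant, requests follow the Llama chat-message format. The sketch below shows one way a C++ → Python translation request could be framed; the system prompt and helper function here are illustrative assumptions, not the exact prompt used during fine-tuning (see the cf-llm-finetune repo for the real prompt template):

```python
# Illustrative sketch: framing a C++ -> Python translation request as a
# chat-message list. The system prompt below is an assumption for
# demonstration; the actual training prompt lives in the
# DemoVersion/cf-llm-finetune repository.

def build_translation_messages(cpp_code: str) -> list[dict]:
    """Return a chat-message list in the structure expected by
    Llama-3.2-Instruct-style chat templates."""
    system = (
        "You are a code translator. Translate the given C++ program "
        "into an equivalent Python program."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"```cpp\n{cpp_code}\n```"},
    ]

messages = build_translation_messages(
    "#include <iostream>\nint main() { std::cout << 42; }"
)
# These messages can then be rendered with
# tokenizer.apply_chat_template(messages, add_generation_prompt=True)
# and passed to the fine-tuned model for generation.
```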