Instructions for using smangrul/starcoder-3b-hugcoder with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- PEFT
How to use smangrul/starcoder-3b-hugcoder with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder2-3b")
model = PeftModel.from_pretrained(base_model, "smangrul/starcoder-3b-hugcoder")
```

- Notebooks
- Google Colab
- Kaggle
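Since the base model is a code model trained with fill-in-the-middle (FIM) objectives, a common way to prompt it is to wrap the code before and after the cursor in FIM special tokens. The sketch below is a minimal, hypothetical helper assuming the StarCoder-family token names (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`); verify them against the checkpoint's tokenizer config before relying on them.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a StarCoder-style fill-in-the-middle prompt.

    Assumes the FIM special tokens used by the StarCoder tokenizer
    family; check the actual tokenizer's special tokens if generation
    looks off.
    """
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"


# Example: ask the model to fill in a function body. The resulting
# string would be tokenized and passed to `model.generate(...)` with
# the model loaded as in the PEFT snippet above.
prompt = build_fim_prompt("def add(a, b):\n    return ", "\n")
```

The model's completion is whatever it generates after `<fim_middle>`, so decode only the newly generated tokens when post-processing.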
Update README.md
README.md (changed):

```diff
@@ -14,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->

 # starcoder-3b-hugcoder

-This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on
+This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on [smangrul/hug_stack](https://huggingface.co/datasets/smangrul/hug_stack) dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.5545
```