Instructions for using ashwincv0112/code-llama-python-finetune2 with libraries, inference providers, notebooks, and local apps.
- Libraries
- PEFT
How to use ashwincv0112/code-llama-python-finetune2 with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base Code Llama Python model, then attach the fine-tuned adapter.
base_model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-Python-hf")
model = PeftModel.from_pretrained(base_model, "ashwincv0112/code-llama-python-finetune2")
```
- Notebooks
- Google Colab
- Kaggle
Commit ee89aff · Parent(s): 8d15c36
Update tokenizer_config.json
Files changed: tokenizer_config.json (+1 −1)
tokenizer_config.json CHANGED

```diff
@@ -31,7 +31,7 @@
   "prefix_token": "▁<PRE>",
   "sp_model_kwargs": {},
   "suffix_token": "▁<SUF>",
-  "tokenizer_class": "
+  "tokenizer_class": "CodeLlamaTokenizer",
   "unk_token": {
     "__type": "AddedToken",
     "content": "<unk>",
```
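The commit above sets `tokenizer_class` to `CodeLlamaTokenizer`, the field `AutoTokenizer` reads to decide which tokenizer implementation to instantiate. A minimal sketch of inspecting that field with the standard-library `json` module (the inline fragment below is illustrative and mirrors only the keys shown in the diff; the real `tokenizer_config.json` contains more):

```python
import json

# Illustrative fragment mirroring the fields visible in the diff;
# the actual tokenizer_config.json in the repo has additional keys.
config_text = """
{
  "prefix_token": "\\u2581<PRE>",
  "sp_model_kwargs": {},
  "suffix_token": "\\u2581<SUF>",
  "tokenizer_class": "CodeLlamaTokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<unk>"
  }
}
"""

config = json.loads(config_text)

# Without a valid tokenizer_class, AutoTokenizer cannot resolve the
# correct class when loading the model repo, which is what this commit fixes.
print(config["tokenizer_class"])
```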