Instructions for using Vo1dAbyss/DeepSeek-R1-Distill-Qwen-7B-Python-4bit-V2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Vo1dAbyss/DeepSeek-R1-Distill-Qwen-7B-Python-4bit-V2 with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "Vo1dAbyss/DeepSeek-R1-Distill-Qwen-7B-Python-4bit-V2",
    dtype="auto",
)
```

- Notebooks
- Google Colab
- Kaggle
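The R1-distilled models emit their chain of thought inside `<think>…</think>` tags before the final answer. When driving the model through Transformers directly, it can help to separate the two; a minimal sketch, assuming the standard R1 tag convention (`split_reasoning` is an illustrative helper, not part of any library):

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Split generated text into (reasoning, answer).

    DeepSeek-R1-style models wrap their chain of thought in
    <think>...</think>; everything after the closing tag is the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        # No reasoning block found: treat the whole output as the answer.
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer
```

For example, `split_reasoning("<think>2 + 2 = 4</think>The answer is 4.")` returns `("2 + 2 = 4", "The answer is 4.")`.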
- Local Apps
- Unsloth Studio
How to use Vo1dAbyss/DeepSeek-R1-Distill-Qwen-7B-Python-4bit-V2 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Vo1dAbyss/DeepSeek-R1-Distill-Qwen-7B-Python-4bit-V2 to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Vo1dAbyss/DeepSeek-R1-Distill-Qwen-7B-Python-4bit-V2 to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Vo1dAbyss/DeepSeek-R1-Distill-Qwen-7B-Python-4bit-V2 to start chatting
```
Load model with FastModel
```sh
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="Vo1dAbyss/DeepSeek-R1-Distill-Qwen-7B-Python-4bit-V2",
    max_seq_length=2048,
)
```
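FastModel returns a standard `(model, tokenizer)` pair, so generation uses the usual Transformers API. A minimal sketch of a generation helper, assuming the tokenizer ships a chat template (the Qwen-based R1 distills do); `build_messages` and `generate_reply` are illustrative names, not Unsloth APIs:

```python
def build_messages(instruction: str) -> list[dict]:
    # Chat-format wrapper expected by tokenizer.apply_chat_template.
    return [{"role": "user", "content": instruction}]


def generate_reply(model, tokenizer, instruction: str, max_new_tokens: int = 256) -> str:
    # Standard Transformers generation loop; works with the
    # (model, tokenizer) pair returned by FastModel.from_pretrained.
    inputs = tokenizer.apply_chat_template(
        build_messages(instruction),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

With the model and tokenizer loaded as above: `print(generate_reply(model, tokenizer, "Write a Python function that reverses a string."))`.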
Update README.md
README.md CHANGED

```diff
@@ -9,8 +9,17 @@ tags:
 license: apache-2.0
 language:
 - en
+datasets:
+- Vezora/Tested-143k-Python-Alpaca
+- iamtarun/python_code_instructions_18k_alpaca
+- jtatman/python-code-dataset-500k
+- flytech/python-codes-25k
+- fxmeng/CodeFeedback-Python105K
 ---
 
+A finetuned model trained on **5 datasets** with a total of **876000 rows**.
+This model was an **experiment**, as I wanted to train a model with a lot of python code and see the results.
+
 # Uploaded model
 
 - **Developed by:** Vo1dAbyss
```