# Llama-3.2-3B-Python-Finetuned-00
This model is a fine-tuned version of LLaMA-3.2-3B-Instruct, optimized specifically for high-precision Python code generation. By training on structured instruction-code pairs, the model has been transformed from a general-purpose conversational assistant into a specialized tool that generates clean, concise Python code while minimizing conversational "chatter" and filler.
## Model Details

### Model Description
- Developed by: kirubel1738
- Model type: Large Language Model (Causal LM)
- Language(s) (NLP): English (Instructions), Python (Output)
- License: Apache-2.0
- Finetuned from model: unsloth/Llama-3.2-3B-Instruct
## Uses

### Direct Use
The model is intended for developers who need a lightweight, efficient Python code generator. It follows the Alpaca instruction format and is optimized to provide functional code directly without extensive tutorials or bulky docstrings.
### Out-of-Scope Use
- General chit-chat or non-programming creative writing.
- Generation of non-Python programming languages (performance is not guaranteed).
- Production of malicious code, malware, or exploit generation.
## Bias, Risks, and Limitations
- Naming Conventions: The model occasionally uses `camelCase` due to dataset bias, rather than strictly following PEP 8 (`snake_case`).
- Algorithm Bias: The model tends to favor explicit algorithmic implementations (e.g., manual two-pointer loops for palindromes) over high-level Pythonic one-liners, reflecting an "educational/interview" bias in the training data.
- Error Handling: It may return simple error strings (e.g., `"n cannot be negative"`) instead of raising proper Python exceptions.
## How to Get Started with the Model
Use the code below to run the model in 4-bit precision for maximum efficiency:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "kirubel1738/Llama-3.2-3B-Python-Finetuned-00"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    load_in_4bit=True,  # requires the bitsandbytes package
)

# The model expects the Alpaca instruction format it was trained on.
alpaca_prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{}

### Response:
"""

instruction = "Write a Python function to check if a number is prime."
prompt = alpaca_prompt.format(instruction)

# Send inputs to wherever device_map placed the model, rather than hardcoding "cuda".
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
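On recent versions of Transformers, the bare `load_in_4bit=True` argument is deprecated in favor of passing a `quantization_config`. A minimal sketch of the equivalent setup (the `nf4` quantization type and compute dtype shown here are common defaults, not settings prescribed by this model card):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantization settings are assumptions for illustration; adjust to your hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "kirubel1738/Llama-3.2-3B-Python-Finetuned-00",
    device_map="auto",
    quantization_config=bnb_config,
)
```

The rest of the generation code is unchanged.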
## Model tree for kirubel1738/Llama-3.2-3B-Python-Finetuned-00

- Base model: meta-llama/Llama-3.2-3B-Instruct
- Finetuned from: unsloth/Llama-3.2-3B-Instruct