Instructions for using iravikr/phi1-numpy-pandas-qlora with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

## Libraries

### Transformers

How to use iravikr/phi1-numpy-pandas-qlora with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="iravikr/phi1-numpy-pandas-qlora")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("iravikr/phi1-numpy-pandas-qlora", dtype="auto")
```

## Notebooks

- Google Colab
- Kaggle

## Local Apps

### vLLM

How to use iravikr/phi1-numpy-pandas-qlora with vLLM:

Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "iravikr/phi1-numpy-pandas-qlora"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "iravikr/phi1-numpy-pandas-qlora",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/iravikr/phi1-numpy-pandas-qlora
```

### SGLang

How to use iravikr/phi1-numpy-pandas-qlora with SGLang:

Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "iravikr/phi1-numpy-pandas-qlora" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "iravikr/phi1-numpy-pandas-qlora",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "iravikr/phi1-numpy-pandas-qlora" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "iravikr/phi1-numpy-pandas-qlora",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

### Docker Model Runner

How to use iravikr/phi1-numpy-pandas-qlora with Docker Model Runner:

```shell
docker model run hf.co/iravikr/phi1-numpy-pandas-qlora
```
# Phi-1 NumPy + Pandas Fine-Tuning (QLoRA)

This repository provides a fine-tuned variant of `microsoft/phi-1` that improves Python code generation for NumPy- and Pandas-based data analysis tasks.

The model addresses a known limitation of the base Phi-1 model, which was trained primarily on Python scripts using a very limited set of standard libraries.
## Motivation

The original Phi-1 model documentation states that 99.8% of its Python training data uses only a small, fixed set of libraries.
As a result, the base model may:
- Hallucinate APIs for external libraries
- Misuse common NumPy / Pandas functions
- Struggle with real-world data analysis workflows
This project demonstrates that targeted, low-rank fine-tuning (QLoRA) can significantly mitigate this limitation.
## What This Model Learns

This fine-tuning focuses exclusively on:

- `numpy`
- `pandas`
### Covered Operations

- CSV loading (`read_csv`)
- Column selection and filtering
- GroupBy aggregations (`mean`, `sum`, `count`)
- NumPy array operations
- Pandas + NumPy interoperability (`np.where`, `.to_numpy()`)
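The covered operations above can be sketched with plain pandas/NumPy. This is an illustrative example of the target task domain (the column names and data are made up, not taken from the training set):

```python
import io

import numpy as np
import pandas as pd

# Hypothetical CSV data standing in for a file on disk.
csv_text = """name,dept,salary
Alice,Eng,100
Bob,Eng,80
Cara,Sales,60
"""

# CSV loading (read_csv)
df = pd.read_csv(io.StringIO(csv_text))

# Column selection and filtering
eng_salaries = df[df["dept"] == "Eng"]["salary"]

# GroupBy aggregation (mean)
mean_by_dept = df.groupby("dept")["salary"].mean()

# Pandas + NumPy interoperability (np.where, .to_numpy())
df["band"] = np.where(df["salary"] >= 80, "high", "low")
salary_array = df["salary"].to_numpy()
```

Prompts in this style (load, filter, aggregate, convert) are the workloads the fine-tune targets.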
### Explicitly NOT Included
- Visualization libraries (matplotlib, seaborn)
- Machine learning libraries (sklearn, torch)
- SQL, file systems, or OS operations
## Training Details

- Base model: `microsoft/phi-1`
- Fine-tuning method: QLoRA (4-bit quantization)
- LoRA rank (r): 16
- Epochs: 2–3
- Dataset size: ~1,500 instruction–code pairs
- Training type: Instruction tuning (code-only responses)
- Optimizer: paged AdamW (8-bit)
Only LoRA adapter weights were trained; the base model weights remain unchanged.
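To see why training only the adapters is cheap, here is back-of-the-envelope LoRA parameter arithmetic. The layer shapes below (2048-dim hidden size, four adapted projections per layer, 24 layers) are assumptions chosen for illustration, not values read from the actual checkpoint:

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    """A rank-r LoRA adapter approximates a (d_out x d_in) weight update
    with two small matrices: A of shape (r, d_in) and B of shape (d_out, r)."""
    return r * d_in + d_out * r

# Assumed dimensions, roughly Phi-1-sized, for illustration only.
d = 2048                                   # hidden size
per_layer = 4 * lora_params(d, d, r=16)    # q/k/v/o projections, rank 16
total_trainable = 24 * per_layer           # 24 transformer layers

full_matrix = d * d                        # params in ONE frozen projection

print(f"trainable adapter params: {total_trainable:,}")
print(f"one frozen projection:    {full_matrix:,}")
```

Under these assumptions the trainable adapters total a few million parameters, a small fraction of the ~1.3B frozen base weights, which is what makes 4-bit QLoRA fine-tuning feasible on a single GPU.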
## Repository Contents

Depending on the repository variant, this repo contains:

### Adapter Repository (Recommended)

- `adapter_model.bin` (or `.safetensors`)
- `adapter_config.json`
- Tokenizer files
### Full Model Repository (If merged)
- Full merged model weights
- Tokenizer
- No PEFT dependency required
## Usage

### Option 1: Load Adapter (Recommended)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 4-bit quantized base model
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1",
    load_in_4bit=True,
    device_map="auto",
)

# Attach the LoRA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(
    base_model,
    "iravikr/phi1-numpy-pandas-qlora",
)

tokenizer = AutoTokenizer.from_pretrained(
    "iravikr/phi1-numpy-pandas-qlora"
)
```
## Model tree for iravikr/phi1-numpy-pandas-qlora

Base model: `microsoft/phi-1`