Instructions for using Nerdsking/Nerdsking-python-coder-7B-i with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use Nerdsking/Nerdsking-python-coder-7B-i with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Nerdsking/Nerdsking-python-coder-7B-i",
    filename="nerdsking-python-coder-7B-i-Q6_K_i.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Nerdsking/Nerdsking-python-coder-7B-i with llama.cpp:
Install from brew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I

# Run inference directly in the terminal:
llama-cli -hf Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I
```
Install from WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I

# Run inference directly in the terminal:
llama-cli -hf Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I
```
Use pre-built binary
```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I

# Run inference directly in the terminal:
./llama-cli -hf Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I
```
Use Docker
```bash
docker model run hf.co/Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I
```
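Whichever install path you choose, llama-server exposes an OpenAI-compatible HTTP API (default port 8080). As a minimal sketch, assuming the server started above is still running, you can query it from Python with `requests`; the request body follows the OpenAI chat-completions schema:

```python
# Sketch (not from the model card): query the local llama-server
# OpenAI-compatible endpoint on its default port 8080.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Write a Python one-liner that reverses a string."}
        ]
    },
    timeout=120,
)
# The response mirrors the OpenAI format: choices[0].message.content
print(resp.json()["choices"][0]["message"]["content"])
```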
- LM Studio
- Jan
- vLLM
How to use Nerdsking/Nerdsking-python-coder-7B-i with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Nerdsking/Nerdsking-python-coder-7B-i"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Nerdsking/Nerdsking-python-coder-7B-i",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I
```
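Since vLLM implements the OpenAI chat-completions API, the same server can also be called with the official `openai` Python client. A minimal sketch, assuming no `--api-key` was set on the server (vLLM then accepts any placeholder key):

```python
# Sketch: call the vLLM server via the openai client (pip install openai).
# "none" is a placeholder; vLLM accepts any key unless --api-key is set.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
completion = client.chat.completions.create(
    model="Nerdsking/Nerdsking-python-coder-7B-i",
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a list in place."}
    ],
)
print(completion.choices[0].message.content)
```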
- Ollama
How to use Nerdsking/Nerdsking-python-coder-7B-i with Ollama:
```bash
ollama run hf.co/Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I
```
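Once pulled, the model can also be driven programmatically. A minimal sketch using the `ollama` Python package (an assumption; the card only shows the CLI), with the same model tag as the run command:

```python
# Sketch: chat with the pulled model via the ollama Python package
# (pip install ollama). The model tag must match the one pulled above.
import ollama

response = ollama.chat(
    model="hf.co/Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I",
    messages=[
        {"role": "user", "content": "Write a Python generator that yields Fibonacci numbers."}
    ],
)
print(response["message"]["content"])
```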
- Unsloth Studio
How to use Nerdsking/Nerdsking-python-coder-7B-i with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Nerdsking/Nerdsking-python-coder-7B-i to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Nerdsking/Nerdsking-python-coder-7B-i to start chatting
```
Using HuggingFace Spaces for Unsloth
```bash
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Nerdsking/Nerdsking-python-coder-7B-i to start chatting
```
- Pi
How to use Nerdsking/Nerdsking-python-coder-7B-i with Pi:
Start the llama.cpp server
```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I
```
Configure the model in Pi
```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to ~/.pi/agent/models.json:
```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I" }
      ]
    }
  }
}
```
Run Pi
```bash
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use Nerdsking/Nerdsking-python-coder-7B-i with Hermes Agent:
Start the llama.cpp server
```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I
```
Configure Hermes
```bash
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I
```
Run Hermes
```bash
hermes
```
- Docker Model Runner
How to use Nerdsking/Nerdsking-python-coder-7B-i with Docker Model Runner:
```bash
docker model run hf.co/Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I
```
- Lemonade
How to use Nerdsking/Nerdsking-python-coder-7B-i with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Nerdsking/Nerdsking-python-coder-7B-i:Q6_K_I
```
Run and chat with the model
```bash
lemonade run user.Nerdsking-python-coder-7B-i-Q6_K_I
```
List all available models
```bash
lemonade list
```
Model Details
Nerdsking-python-coder-7B-i is a 7B-parameter, partially uncensored model focused on coding. It was trained heavily on Python, so while it can also code in other languages, performance in those languages will not reach the level achieved with Python.
Key Characteristics:
- Parameter count: 7B
- Primary domain: Python programming
- Secondary capabilities: General coding, technical English
- Training focus: Python logic, standard library usage, algorithmic reasoning
- Alignment: Partially uncensored (developer-oriented)
Nerdsking Python Coder Family
- Nerdsking Python Coder 3B-i
- Nerdsking Python Coder 7B-i
Benchmark
After intensive refinement, Nerdsking-python-coder-7B-i achieved 86.99 on HumanEval (bf16), ranking it among the highest-performing Python-focused 7B models ever reported on HumanEval and surpassing even much larger models in that area.
Benchmark details (164 tasks):
- official HumanEval execution protocol
- test suites executed via exec()
- zero-shot pass@1
- dtype == "bfloat16"
- temperature = 0.1
- do_sample = False
- evaluated on fully merged weights
- Prompting: Chat-formatted with a fixed system prompt ("You are an expert Python coding assistant.")
- Quantization: None (unquantized weights - bf16)
The configuration above is fully disclosed to support reproducibility and fair comparison.
Note: Quantized variants (INT4/INT6) may exhibit lower HumanEval scores due to reduced numerical precision.
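To make the protocol concrete, the sketch below shows roughly what an exec()-based pass@1 check does. It is a hypothetical illustration, not the actual harness (that is the gguf-humaneval-benchmark tool linked further down, which adds sandboxing and timeouts):

```python
# Hypothetical sketch of the exec()-based check described above. Each
# HumanEval task provides a prompt, a test suite defining check(candidate),
# and an entry_point name.
def task_passes(task: dict, completion: str) -> bool:
    program = (
        task["prompt"]            # function signature + docstring
        + completion              # model-generated function body
        + "\n" + task["test"]     # defines check(candidate)
        + f"\ncheck({task['entry_point']})"
    )
    try:
        exec(program, {})         # run the generated code against the tests
        return True
    except Exception:
        return False

# zero-shot pass@1 = (number of passing tasks) / 164
```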
Comparison Table
| Model name | Approx. HumanEval Pass@1 (%) | Notes / Source |
|---|---|---|
| Nerdsking-python-coder-7B-i | 86.99 | Evaluated score (zero-shot, strict HumanEval pass@1, using unquantized bf16 weights) |
| Qwen2.5-Coder-7B | ~74โ76 | Community evaluation (OpenCompass run); figures vary by harness/settings |
| DeepSeek-Coder-6.7B | ~72โ73 | Official DeepSeek report and independent replications; close to strict HumanEval protocol |
| CodeLlama-7B | ~33โ35 | Meta technical report |
| Wizard Coder 7B* | ~57โ59 | Community benchmarks; strong instruction-following but less consistent zero-shot behavior |
Benchmark tool used
https://github.com/nerdskingcom/gguf-humaneval-benchmark
Install it using:
```bash
pip install gguf-humaneval-benchmark
```
Instructions after install:
```bash
gguf-humaneval-benchmark --help
```
S.o.n.n.
The model was treated under "s.o.n.n." (single omni neural network), a concept created by IPMN at Nerdsking.com. It is both a precise way of fine-tuning/altering existing models and a foundational concept for a broader AI architecture standard currently under active research and development.
When applied to pre-existing models, it allows:
- a parameter-preserving refinement methodology
- focused global behavioral shaping instead of task-local adapters
- avoidance of the fragmentation common in multi-adapter or task-siloed approaches
Quick Start (Inference)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nerdsking/Nerdsking-python-coder-7B-i"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto"
)

prompt = "Write a Python function that checks if a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
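Because the benchmark above used chat-formatted prompting with a fixed system prompt, a matching chat-style call looks roughly like this sketch (assuming the tokenizer ships a chat template):

```python
# Sketch: chat-formatted prompting matching the benchmark setup;
# assumes the tokenizer defines a chat template.
messages = [
    {"role": "system", "content": "You are an expert Python coding assistant."},
    {"role": "user", "content": "Write a Python function that checks if a number is prime."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```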
Ethical & Safety Notes
This model is intended for technical and research use. Due to relaxed alignment constraints, outputs should be reviewed before deployment in production or public-facing systems.
Citation
If you use this model in research and/or benchmarking, please cite:
Nerdsking-python-coder-7B-i, Iran Necho (IPMN) / Nerdsking.com