Instructions for using beowolx/CodeNinja-1.0-OpenChat-7B-GGUF with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use beowolx/CodeNinja-1.0-OpenChat-7B-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="beowolx/CodeNinja-1.0-OpenChat-7B-GGUF",
    filename="codeninja-1.0-openchat-7b.Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use beowolx/CodeNinja-1.0-OpenChat-7B-GGUF with llama.cpp:
Install with Homebrew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M
```
Install with WinGet (Windows)
```powershell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M
```
Use a pre-built binary
```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M
```
Use Docker
```bash
docker model run hf.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use beowolx/CodeNinja-1.0-OpenChat-7B-GGUF with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "beowolx/CodeNinja-1.0-OpenChat-7B-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "beowolx/CodeNinja-1.0-OpenChat-7B-GGUF",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M
```
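The vLLM server above can also be called from Python instead of curl. Below is a minimal sketch using the `openai` client package (an assumption; any OpenAI-compatible client works), assuming the server is listening on localhost:8000:

```python
# pip install openai
from openai import OpenAI

# The client requires an API key, but a local vLLM server ignores it.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="beowolx/CodeNinja-1.0-OpenChat-7B-GGUF",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(response.choices[0].message.content)
```

Because the endpoint is OpenAI-compatible, the same snippet works against a llama-server instance from the llama.cpp section after adjusting base_url (llama-server listens on port 8080 by default).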
- Ollama
How to use beowolx/CodeNinja-1.0-OpenChat-7B-GGUF with Ollama:
```bash
ollama run hf.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M
```
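Once pulled, the model can also be called programmatically. A minimal sketch using the `ollama` Python package (an assumption; the Ollama HTTP API or any OpenAI-compatible client works as well):

```python
# pip install ollama
import ollama

# Same model reference as the `ollama run` command above.
response = ollama.chat(
    model="hf.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(response["message"]["content"])
```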
- Unsloth Studio
How to use beowolx/CodeNinja-1.0-OpenChat-7B-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for beowolx/CodeNinja-1.0-OpenChat-7B-GGUF to start chatting.
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for beowolx/CodeNinja-1.0-OpenChat-7B-GGUF to start chatting.
```
Use Hugging Face Spaces for Unsloth
```bash
# No setup required.
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# and search for beowolx/CodeNinja-1.0-OpenChat-7B-GGUF to start chatting.
```
- Docker Model Runner
How to use beowolx/CodeNinja-1.0-OpenChat-7B-GGUF with Docker Model Runner:
```bash
docker model run hf.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M
```
- Lemonade
How to use beowolx/CodeNinja-1.0-OpenChat-7B-GGUF with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull beowolx/CodeNinja-1.0-OpenChat-7B-GGUF:Q4_K_M
```
Run and chat with the model
```bash
lemonade run user.CodeNinja-1.0-OpenChat-7B-GGUF-Q4_K_M
```
List all available models
```bash
lemonade list
```
CodeNinja: Your Advanced Coding Assistant
Overview
CodeNinja is an enhanced version of the renowned model openchat/openchat-3.5-1210. It has been fine-tuned through Supervised Fine-Tuning on two expansive datasets encompassing over 400,000 coding instructions. Designed to be an indispensable tool for coders, CodeNinja aims to integrate seamlessly into your daily coding routine.
Key Features
Expansive Training Database: CodeNinja has been refined with datasets from glaiveai/glaive-code-assistant-v2 and TokenBender/code_instructions_122k_alpaca_style, incorporating around 400,000 coding instructions across various languages including Python, C, C++, Rust, Java, JavaScript, and more.
Flexibility and Scalability: Available in a 7B model size, CodeNinja is adaptable for local runtime environments.
Advanced Code Completion: With a context window of 8,192 tokens, it supports comprehensive project-level code completion.
Prompt Format
CodeNinja uses the same prompt structure as OpenChat 3.5, and effective use requires adhering to this format:
```
GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:
```
🚨 Important: Ensure the use of <|end_of_turn|> as the end-of-generation token.
Adhering to this format is crucial for optimal results.
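For illustration, here is a small sketch of how raw prompts in this format can be assembled from chat messages; `build_openchat_prompt` is a hypothetical helper, not part of the model's tooling:

```python
# Hypothetical helper: renders chat messages into the OpenChat 3.5
# prompt format that CodeNinja expects.
EOT = "<|end_of_turn|>"
ROLE_HEADERS = {"user": "GPT4 Correct User", "assistant": "GPT4 Correct Assistant"}

def build_openchat_prompt(messages: list[dict]) -> str:
    prompt = "".join(
        f"{ROLE_HEADERS[m['role']]}: {m['content']}{EOT}" for m in messages
    )
    # The trailing assistant header cues the model to produce its reply.
    return prompt + "GPT4 Correct Assistant:"

print(build_openchat_prompt([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "How are you today?"},
]))
# GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:
```

When sampling from the model, also register <|end_of_turn|> as the stop (end-of-generation) token, per the warning above.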
Usage Instructions
Using LM Studio
The simplest way to engage with CodeNinja is via the quantized versions on LM Studio. Ensure you select the "OpenChat" preset, which incorporates the necessary prompt format. The preset is also available in this gist.
Using the Transformers Library
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Initialize the model
model_path = "beowolx/CodeNinja-1.0-OpenChat-7B"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

# Load the OpenChat tokenizer
tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-1210", use_fast=True)

def generate_one_completion(prompt: str):
    messages = [
        {"role": "user", "content": prompt},
    ]

    # Generate token IDs using the chat template; add_generation_prompt
    # appends the trailing "GPT4 Correct Assistant:" header.
    input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

    # Produce the completion
    generate_ids = model.generate(
        torch.tensor([input_ids]).to(model.device),
        max_length=256,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

    # Decode and trim the completion
    completion = tokenizer.decode(generate_ids[0], skip_special_tokens=True)
    completion = completion.split("\n\n\n")[0].strip()
    return completion
```
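With the helper defined, generating a completion is a single call, for example:

```python
print(generate_one_completion("Write a Python function that checks whether a number is prime."))
```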
License
CodeNinja is licensed under the MIT License, with model usage subject to the Model License.
Contact
For queries or support, please open an issue in the repository.