Instructions to use afrideva/pip-code-bandit-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use afrideva/pip-code-bandit-GGUF with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="afrideva/pip-code-bandit-GGUF")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("afrideva/pip-code-bandit-GGUF", dtype="auto")
```
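Because this repository contains GGUF files rather than standard safetensors weights, Transformers needs to dequantize a specific GGUF file on load. A minimal sketch, assuming a recent transformers version whose GGUF loader supports this architecture and the gguf package installed; the filename is the Q2_K file referenced in the llama-cpp-python example below:

```python
# pip install transformers gguf
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "afrideva/pip-code-bandit-GGUF"
gguf_file = "pip-code-bandit.Q2_K.gguf"  # filename from the llama-cpp-python example

# Transformers dequantizes the GGUF weights into a regular torch model on load
tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=gguf_file)
```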
- llama-cpp-python
How to use afrideva/pip-code-bandit-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="afrideva/pip-code-bandit-GGUF",
    filename="pip-code-bandit.Q2_K.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
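The original model card (reproduced below) prompts the model with XML-style tags such as `<question>...</question><code>`. A minimal sketch of using that format through llama-cpp-python's raw completion API, reusing the `llm` object created above:

```python
# Raw completion with the tag-based prompt format described in the model card
prompt = """<question>
Generate a python function for adding two numbers.
</question>
<code>
"""

out = llm(prompt, max_tokens=400, stop=["</code>"])
print(out["choices"][0]["text"])
```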
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use afrideva/pip-code-bandit-GGUF with llama.cpp:
Install from brew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf afrideva/pip-code-bandit-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf afrideva/pip-code-bandit-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf afrideva/pip-code-bandit-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf afrideva/pip-code-bandit-GGUF:Q4_K_M
```
Use pre-built binary
```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf afrideva/pip-code-bandit-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf afrideva/pip-code-bandit-GGUF:Q4_K_M
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf afrideva/pip-code-bandit-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf afrideva/pip-code-bandit-GGUF:Q4_K_M
```
Use Docker
```bash
docker model run hf.co/afrideva/pip-code-bandit-GGUF:Q4_K_M
```
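Once llama-server is up, it exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it from Python with the requests library, assuming the server is running on its default port 8080:

```python
# pip install requests
import requests

# llama-server's OpenAI-compatible chat endpoint (default port 8080);
# the "model" field can be omitted since the server hosts a single model
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Generate a python function for adding two numbers."}
        ]
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```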
- LM Studio
- Jan
- vLLM
How to use afrideva/pip-code-bandit-GGUF with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "afrideva/pip-code-bandit-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "afrideva/pip-code-bandit-GGUF",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
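You can also call the vLLM server from Python through any OpenAI-compatible client. A minimal sketch with the openai package, assuming the server above is running on its default port 8000:

```python
# pip install openai
from openai import OpenAI

# vLLM's OpenAI-compatible server; the API key is unused but required by the client
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="afrideva/pip-code-bandit-GGUF",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```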
- SGLang
How to use afrideva/pip-code-bandit-GGUF with SGLang:
Install from pip and serve model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "afrideva/pip-code-bandit-GGUF" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "afrideva/pip-code-bandit-GGUF",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "afrideva/pip-code-bandit-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "afrideva/pip-code-bandit-GGUF",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
- Ollama
How to use afrideva/pip-code-bandit-GGUF with Ollama:
```bash
ollama run hf.co/afrideva/pip-code-bandit-GGUF:Q4_K_M
```
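Ollama also exposes a local API (port 11434 by default), so the pulled model can be used programmatically. A minimal sketch with the ollama Python package; the model name mirrors the run command above:

```python
# pip install ollama
import ollama

response = ollama.chat(
    model="hf.co/afrideva/pip-code-bandit-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "Generate a python function for adding two numbers."}],
)
print(response["message"]["content"])
```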
- Unsloth Studio
How to use afrideva/pip-code-bandit-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for afrideva/pip-code-bandit-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for afrideva/pip-code-bandit-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for afrideva/pip-code-bandit-GGUF to start chatting
```
- Docker Model Runner
How to use afrideva/pip-code-bandit-GGUF with Docker Model Runner:
```bash
docker model run hf.co/afrideva/pip-code-bandit-GGUF:Q4_K_M
```
- Lemonade
How to use afrideva/pip-code-bandit-GGUF with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull afrideva/pip-code-bandit-GGUF:Q4_K_M
```
Run and chat with the model
```bash
lemonade run user.pip-code-bandit-GGUF-Q4_K_M
```
List all available models
```bash
lemonade list
```
pip-code-bandit-GGUF
Quantized GGUF model files for pip-code-bandit from PipableAI
Original Model Card:
pip-code-bandit
Objective
Given a goal and tools, can AI intelligently use the tools to reach the goal?
What if it has a meagre 1.3B parameters, a neuron count akin to that of an owl? Can it follow instructions and plan to reach a goal?
It can!
Releasing pip-code-bandit and pipflow
A model and a library to manage and run goal-oriented agentic systems.
Model attributes
-- number of params ~ 1.3B [2.9 GB GPU memory footprint]
-- sequence length ~ 16.3k [can go higher, but with performance degradation]
-- license: Apache 2.0
-- instruction following, RL tuned
-- tasks:
1. complex planning (plan) of sequential function calls | given a list of callables and a goal
2. corrected plan | given feedback instructions with an error
3. function calling | given a doc or code and a goal
4. code generation | given a plan and a goal
5. code generation | given a goal
6. doc generation | given code
7. code generation | given a doc
8. file parsed to JSON | given any raw data
9. SQL generation | given a schema, question, instructions and examples
How did we build it?
We used a simulator to create environments in which the model could play games to achieve goals, given a set of actions available to it. All the model could do was pick the right action and configuration to earn a positive reward. The reward policy is built around the idea of the model settling into a stable state of zero net-sum reward across both good and bad behaviour. In this setup, the model, pre-trained on code, function documentation, and similar open-source datasets, was RL-tuned for reliability and instruction following.
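The card does not publish the simulator or reward code, but one possible reading of a zero net-sum policy is that correct and incorrect actions earn rewards of equal magnitude and opposite sign, so random behaviour averages out to zero. A toy illustration of that idea (entirely hypothetical, not PipableAI's implementation):

```python
# Toy zero-net-sum reward: a hypothetical sketch, not PipableAI's code.
# Correct tool calls earn +1, incorrect ones -1, so a policy that guesses at
# random accumulates ~0 reward and only deliberate tool use scores positively.
def reward(chosen_action: str, chosen_config: dict, goal: dict) -> float:
    if chosen_action == goal["action"] and chosen_config == goal["config"]:
        return 1.0   # right tool with the right arguments
    return -1.0      # anything else is penalized symmetrically
```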
License
Completely open-source under the Apache 2.0 license.
Usage
NOTE:
If you wish to try this model without using your own GPU, we host it on our end. To run the library against the hosted model, initialize the generator as shown below:
```bash
pip3 install git+https://github.com/PipableAI/pipflow.git
```

```python
from pipflow import PipFlow

generator = PipFlow()
```
We have hosted the model at https://playground.pipable.ai/infer, so you can also make a POST request to this endpoint with the following payload:
```json
{
  "model_name": "PipableAI/pip-code-bandit",
  "prompt": "prompt",
  "max_new_tokens": "400"
}
```
```bash
curl -X 'POST' \
  'https://playground.pipable.ai/infer' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'model_name=PipableAI%2Fpip-code-bandit&prompt="YOUR PROMPT"&max_new_tokens=400'
```
Alternatively, you can directly access the UI endpoint at https://playground.pipable.ai/docs#/default/infer_infer_post.
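The same call from Python, as a minimal sketch using the requests library; the form-encoded fields mirror the curl command above, and the exact response schema is documented at the /docs endpoint linked above:

```python
# pip install requests
import requests

# Hosted inference endpoint from the model card; form-encoded like the curl example
resp = requests.post(
    "https://playground.pipable.ai/infer",
    headers={"accept": "application/json"},
    data={
        "model_name": "PipableAI/pip-code-bandit",
        "prompt": "Generate a python function for adding two numbers.",
        "max_new_tokens": "400",
    },
)
print(resp.json())
```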
Library Usage
To use the model's capabilities directly, without hand-crafting schemas and prompts, use pipflow.
For detailed usage, refer to the colab_notebook.
Model Usage
```bash
pip install transformers accelerate torch
```
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "PipableAI/pip-code-bandit",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("PipableAI/pip-code-bandit")

new_tokens = 600
prompt = """
<question>
Generate a python function for adding two numbers.
</question>
<code>
"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=new_tokens)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Extract the generated code between the <code> ... </code> tags
response = response.split("<code>")[1].split("</code>")[0]
print(response)
```
Prompt
The braced fields in the templates below are placeholders to fill in, not valid Python on their own:
```python
# Documentation generation: describe a function in one line
prompt = f"""<example_response>{--question , --query}</example_response><function_code>{code}</function_code>
<question>Give one line description of the python code above in natural language.</question>
<doc>"""

# SQL generation: answer a question against a schema
prompt = f"""<example_response>{example of some --question: , --query}</example_response><schema>{schema with cols described}</schema>
<question>Write a sql query to ....</question>
<sql>"""
```
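As a concrete illustration, here is the SQL template filled in with a hypothetical schema and question; the table, columns, and example pair are invented for this sketch, not part of the original card:

```python
# A filled-in version of the SQL prompt template above; schema and question are
# hypothetical examples, not from the original card.
schema = """CREATE TABLE employees (
    id INTEGER PRIMARY KEY,      -- unique employee id
    name TEXT,                   -- employee name
    salary REAL,                 -- annual salary in USD
    department TEXT              -- department name
);"""

example = "--question: How many employees are there? , --query: SELECT COUNT(*) FROM employees;"

prompt = f"""<example_response>{example}</example_response><schema>{schema}</schema>
<question>Write a sql query to find the average salary in the engineering department.</question>
<sql>"""
```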
Team
Avi Kothari, Gyan Ranjan, Pratham Gupta, Ritvik Aryan Kalra, Soham Acharya
Model tree for afrideva/pip-code-bandit-GGUF
Base model
PipableAI/pip-code-bandit