Instructions for using arnavgrg/phi-2-nf4-fp16-upscaled with libraries and local apps.
- Libraries
- Transformers
How to use arnavgrg/phi-2-nf4-fp16-upscaled with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="arnavgrg/phi-2-nf4-fp16-upscaled", trust_remote_code=True)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("arnavgrg/phi-2-nf4-fp16-upscaled", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("arnavgrg/phi-2-nf4-fp16-upscaled", trust_remote_code=True)
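A quick smoke test with the pipeline above might look like the following; the prompt and max_new_tokens value are illustrative choices, not values from the model card:

# Generate a short completion with the pipeline defined above.
# Prompt and max_new_tokens are illustrative assumptions.
output = pipe("Once upon a time,", max_new_tokens=64)
print(output[0]["generated_text"])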
- Local Apps
- vLLM
How to use arnavgrg/phi-2-nf4-fp16-upscaled with vLLM:
Install from pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "arnavgrg/phi-2-nf4-fp16-upscaled"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "arnavgrg/phi-2-nf4-fp16-upscaled",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
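Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch using the openai client package (the base_url matches the server above; the api_key is a placeholder, since vLLM does not check it unless configured to):

from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="arnavgrg/phi-2-nf4-fp16-upscaled",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)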
- SGLang
How to use arnavgrg/phi-2-nf4-fp16-upscaled with SGLang:
Install from pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "arnavgrg/phi-2-nf4-fp16-upscaled" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "arnavgrg/phi-2-nf4-fp16-upscaled",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
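The same request can be issued from Python with the requests library; this sketch simply mirrors the curl payload above:

import requests

# Mirror the curl example: POST a completion request to the SGLang server.
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "arnavgrg/phi-2-nf4-fp16-upscaled",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])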
Use Docker images

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "arnavgrg/phi-2-nf4-fp16-upscaled" \
    --host 0.0.0.0 \
    --port 30000

# Then call the server with the same curl command as above.
- Docker Model Runner
How to use arnavgrg/phi-2-nf4-fp16-upscaled with Docker Model Runner:
docker model run hf.co/arnavgrg/phi-2-nf4-fp16-upscaled
This is an upscaled fp16 variant of Microsoft's original Phi-2 base model, created by loading the model with nf4 4-bit quantization via bitsandbytes and then upscaling the Linear4bit layers back to fp16. The main idea is that the quantization/dequantization cost then doesn't have to be paid on every forward pass at inference time.
Note: nf4 quantization is not lossless, so the weights of the upscaled linear layers differ from the originals; this model will therefore not perform quite as well as the official base model.
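For readers who want to see what this looks like in code, here is a rough sketch reconstructed from the description above. It is not the author's actual script; the BitsAndBytesConfig settings and the dequantize_4bit call are one plausible way to realize the nf4-then-upscale step:

import torch
import bitsandbytes as bnb
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 1. Load the original Phi-2 with nf4 4-bit quantization (assumed settings).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=quant_config,
    trust_remote_code=True,
)

# 2. Dequantize each Linear4bit weight back to fp16 once, up front,
#    rather than on every forward pass.
upscaled = {}  # module name -> dequantized fp16 weight
for name, module in model.named_modules():
    if isinstance(module, bnb.nn.Linear4bit):
        upscaled[name] = bnb.functional.dequantize_4bit(
            module.weight.data, module.weight.quant_state
        ).to(torch.float16)
# The fp16 weights would then be written into a plain fp16 copy of the
# architecture and saved with save_pretrained (details omitted here).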
To use this model, you can just load it via transformers in fp16:
import torch
from transformers import AutoModelForCausalLM

# The checkpoint already stores fp16 weights, so no bitsandbytes
# quantization config is needed at load time.
model = AutoModelForCausalLM.from_pretrained(
    "arnavgrg/phi-2-nf4-fp16-upscaled",
    device_map="auto",
    torch_dtype=torch.float16,
)
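From there the model behaves like any fp16 causal LM. A brief, illustrative generation call (the tokenizer load and prompt are assumptions, reusing the prompt from the server examples above):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("arnavgrg/phi-2-nf4-fp16-upscaled")
inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))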