Instructions for using hatanp/gpt-fi with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use hatanp/gpt-fi with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="hatanp/gpt-fi")

# Or load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hatanp/gpt-fi")
model = AutoModelForCausalLM.from_pretrained("hatanp/gpt-fi")
```

- Notebooks
- Google Colab
- Kaggle
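The Transformers snippets above can be wrapped into a small generation helper. This is a sketch, not part of the model card: the sampling settings (`max_new_tokens`, `temperature`) and the helper names are illustrative assumptions, and the first call downloads the model weights.

```python
def build_generation_kwargs(max_new_tokens=50, temperature=0.7):
    # Illustrative sampling settings; not prescribed by the model card.
    return {
        "max_new_tokens": max_new_tokens,
        "do_sample": True,
        "temperature": temperature,
    }


def generate(prompt):
    # Imported lazily so build_generation_kwargs stays dependency-free.
    from transformers import pipeline

    # Downloads hatanp/gpt-fi on first use; requires network access.
    pipe = pipeline("text-generation", model="hatanp/gpt-fi")
    return pipe(prompt, **build_generation_kwargs())[0]["generated_text"]
```

Called as `generate("Olipa kerran")` (Finnish for "once upon a time"), this returns the prompt followed by a sampled continuation.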
- Local Apps
- vLLM
How to use hatanp/gpt-fi with vLLM:
Install vLLM from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "hatanp/gpt-fi"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "hatanp/gpt-fi",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker
```shell
docker model run hf.co/hatanp/gpt-fi
```
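The curl call above can also be made from Python. A minimal sketch using only the standard library, assuming a vLLM server is already running on localhost:8000; the helper names below are illustrative, not part of the vLLM API.

```python
import json
from urllib.request import Request, urlopen


def build_completion_payload(prompt, model="hatanp/gpt-fi",
                             max_tokens=512, temperature=0.5):
    # Mirrors the fields of the curl example above.
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def complete(prompt, base_url="http://localhost:8000"):
    # Sends one request to the OpenAI-compatible /v1/completions endpoint.
    req = Request(
        base_url + "/v1/completions",
        data=json.dumps(build_completion_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

`complete("Once upon a time,")` returns only the generated continuation; the full JSON response also carries usage and finish-reason metadata.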
- SGLang
How to use hatanp/gpt-fi with SGLang:
Install SGLang from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "hatanp/gpt-fi" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "hatanp/gpt-fi",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "hatanp/gpt-fi" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "hatanp/gpt-fi",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

- Docker Model Runner
How to use hatanp/gpt-fi with Docker Model Runner:
```shell
docker model run hf.co/hatanp/gpt-fi
```
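Both the vLLM server (port 8000) and the SGLang server (port 30000) expose the same OpenAI-compatible API, so the official `openai` Python client can target either one. A sketch, assuming one of the servers above is already running; the `api_key` value is a placeholder, since local servers typically do not check it, and the helper names are illustrative.

```python
def server_base_url(port, host="localhost"):
    # vLLM defaults to port 8000, SGLang to port 30000 (see above).
    return f"http://{host}:{port}/v1"


def complete(prompt, port=8000):
    # Imported lazily; requires `pip install openai` and a running server.
    from openai import OpenAI

    client = OpenAI(base_url=server_base_url(port), api_key="EMPTY")
    resp = client.completions.create(
        model="hatanp/gpt-fi",
        prompt=prompt,
        max_tokens=512,
        temperature=0.5,
    )
    return resp.choices[0].text
```

Switching backends is then just `complete("Once upon a time,", port=30000)` for SGLang instead of the vLLM default.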
Commit History
- Update README.md (0ccd035)
- Update README.md (ad00283)
- Added model versions to README.md (cf852d6)
- Update README.md (cb28c8a)
- Created placeholder readme with examples (10c1f57)
- Upload model (ca96e5a)
- Upload tokenizer (d74e7d6)
- add training and evaluation scripts (ceedef8)

All commits by Vaino Hatanpaa.