Instructions for using Chat-Error/testing01 with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Chat-Error/testing01 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Chat-Error/testing01")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Chat-Error/testing01")
model = AutoModelForCausalLM.from_pretrained("Chat-Error/testing01")
```
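Once created, the pipeline can be called directly on a prompt. A minimal sketch; the prompt and `max_new_tokens` value are illustrative assumptions, not taken from the model card:

```python
from transformers import pipeline

# Recreate the pipeline from the snippet above.
pipe = pipeline("text-generation", model="Chat-Error/testing01")

# Illustrative prompt and generation length (assumptions, not from the model card).
output = pipe("Once upon a time,", max_new_tokens=50)
print(output[0]["generated_text"])
```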
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Chat-Error/testing01 with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Chat-Error/testing01"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Chat-Error/testing01",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:
```shell
docker model run hf.co/Chat-Error/testing01
```
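The vLLM server exposes an OpenAI-compatible API, so the official `openai` Python client can be used instead of curl. A minimal sketch, assuming the pip-installed server from the first step is listening on localhost:8000:

```python
# A minimal sketch: the same completions request as the curl call above,
# issued through the openai client. Assumes the server runs on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is unused by vLLM

completion = client.completions.create(
    model="Chat-Error/testing01",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```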
- SGLang
How to use Chat-Error/testing01 with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Chat-Error/testing01" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Chat-Error/testing01",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Chat-Error/testing01" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Chat-Error/testing01",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
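The same completions request can also be sent from Python. A minimal sketch with the `requests` library, assuming the SGLang server above is listening on localhost:30000:

```python
# A minimal sketch: the curl request above, reproduced with requests.
# Assumes the SGLang server is reachable at localhost:30000.
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "Chat-Error/testing01",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```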
- Docker Model Runner

How to use Chat-Error/testing01 with Docker Model Runner:
```shell
docker model run hf.co/Chat-Error/testing01
```
The repository also includes the added-token mapping below (file size: 640 Bytes, revision 8072cde), which assigns IDs 32000-32031 to a pad token and 31 placeholder tokens:

```json
{
"<pad>": 32000,
"<dummy01>": 32001,
"<dummy02>": 32002,
"<dummy03>": 32003,
"<dummy04>": 32004,
"<dummy05>": 32005,
"<dummy06>": 32006,
"<dummy07>": 32007,
"<dummy08>": 32008,
"<dummy09>": 32009,
"<dummy10>": 32010,
"<dummy11>": 32011,
"<dummy12>": 32012,
"<dummy13>": 32013,
"<dummy14>": 32014,
"<dummy15>": 32015,
"<dummy16>": 32016,
"<dummy17>": 32017,
"<dummy18>": 32018,
"<dummy19>": 32019,
"<dummy20>": 32020,
"<dummy21>": 32021,
"<dummy22>": 32022,
"<dummy23>": 32023,
"<dummy24>": 32024,
"<dummy25>": 32025,
"<dummy26>": 32026,
"<dummy27>": 32027,
"<dummy28>": 32028,
"<dummy29>": 32029,
"<dummy30>": 32030,
"<dummy31>": 32031
}
```
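If this mapping matches the tokenizer shipped with the model, the IDs can be verified directly. A minimal sketch, assuming the tokenizer loads with `AutoTokenizer`:

```python
# A minimal sketch: checking that the added tokens resolve to the IDs listed above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Chat-Error/testing01")

print(tokenizer.convert_tokens_to_ids("<pad>"))      # expected: 32000
print(tokenizer.convert_tokens_to_ids("<dummy31>"))  # expected: 32031
```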