Instructions to use CogwiseAI/testchatexample with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use CogwiseAI/testchatexample with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="CogwiseAI/testchatexample", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("CogwiseAI/testchatexample", trust_remote_code=True, dtype="auto")
```
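For example, generating text with the pipeline is a one-liner. A minimal sketch; the prompt and generation arguments (`max_new_tokens`, `temperature`) are illustrative, not taken from the model card:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="CogwiseAI/testchatexample", trust_remote_code=True)

# Sample a continuation of an illustrative prompt
result = pipe("Once upon a time,", max_new_tokens=128, do_sample=True, temperature=0.5)
print(result[0]["generated_text"])
```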
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use CogwiseAI/testchatexample with vLLM:
Install vLLM from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "CogwiseAI/testchatexample"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CogwiseAI/testchatexample",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
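Alternatively, vLLM can run the model in-process without a server. A minimal offline-inference sketch; the prompt and sampling parameters simply mirror the curl example above:

```python
from vllm import LLM, SamplingParams

# Load the model in-process (no server needed)
llm = LLM(model="CogwiseAI/testchatexample", trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.5, max_tokens=512)

# Generate a completion for a single prompt
outputs = llm.generate(["Once upon a time,"], sampling_params)
print(outputs[0].outputs[0].text)
```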
Use Docker

```bash
docker model run hf.co/CogwiseAI/testchatexample
```
- SGLang
How to use CogwiseAI/testchatexample with SGLang:
Install SGLang from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "CogwiseAI/testchatexample" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CogwiseAI/testchatexample",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
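Because the server exposes an OpenAI-compatible API, it can also be called with the official `openai` Python client. A minimal sketch, assuming `pip install openai`; the placeholder API key is required by the client but ignored by the local server:

```python
from openai import OpenAI

# Point the client at the local SGLang server
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.completions.create(
    model="CogwiseAI/testchatexample",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```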
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "CogwiseAI/testchatexample" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CogwiseAI/testchatexample",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use CogwiseAI/testchatexample with Docker Model Runner:
```bash
docker model run hf.co/CogwiseAI/testchatexample
```
Rename requirements (1).txt to requirements.txt
Files changed:
- requirements (1).txt: +0 -10
- requirements.txt: +7 -0
requirements (1).txt (DELETED)

```diff
@@ -1,10 +0,0 @@
-peft@git+https://github.com/huggingface/peft.git@42a184f
-bitsandbytes==0.39.0
-torch==2.0.1
-transformers
-peft
-accelerate
-datasets==2.12.0
-loralib==0.1.1
-einops
-scipy
```
requirements.txt (ADDED)

```diff
@@ -0,0 +1,7 @@
+bitsandbytes==0.39.0
+torch==2.0.1
+transformers==4.30.2
+accelerate==0.20.3
+loralib==0.1.1
+einops==0.6.1
+
```
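The pinned stack (bitsandbytes 0.39.0, transformers 4.30.2, accelerate 0.20.3) is from the era when 8-bit loading was passed directly to `from_pretrained`, and einops is commonly needed by custom `trust_remote_code` model code. A hypothetical sketch of loading the model under these pinned versions, not taken from the repo itself; the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CogwiseAI/testchatexample"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
# load_in_8bit requires bitsandbytes; device_map="auto" requires accelerate
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    load_in_8bit=True,
    device_map="auto",
)

inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```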