Instructions to use kaluaim/ChatTS-14B-handler with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use kaluaim/ChatTS-14B-handler with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="kaluaim/ChatTS-14B-handler", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("kaluaim/ChatTS-14B-handler", trust_remote_code=True, dtype="auto")
```

- Notebooks
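The `messages` list passed to the pipeline follows the standard chat format: a list of role/content dicts, with roles alternating between `user` and `assistant` (optionally preceded by a `system` turn). A minimal sketch of accumulating a multi-turn conversation before each call (the `add_turn` helper is hypothetical, not part of the model repo):

```python
# Each turn is a {"role": ..., "content": ...} dict. Roles alternate
# between "user" and "assistant"; an optional "system" turn comes first.
def add_turn(messages: list, role: str, content: str) -> list:
    messages.append({"role": role, "content": content})
    return messages

messages = []
add_turn(messages, "system", "You are a helpful assistant.")
add_turn(messages, "user", "Who are you?")

# With the pipeline loaded above, the reply would then be generated with:
# reply = pipe(messages)
```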
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use kaluaim/ChatTS-14B-handler with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "kaluaim/ChatTS-14B-handler"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "kaluaim/ChatTS-14B-handler",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker

```shell
docker model run hf.co/kaluaim/ChatTS-14B-handler
```
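The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library, assuming the vLLM server started above is running on `localhost:8000` (the helper names here are illustrative, not part of any library):

```python
# Minimal stdlib client for the OpenAI-compatible /v1/chat/completions
# endpoint exposed by `vllm serve`.
import json
import urllib.request

def build_chat_request(model: str, user_content: str) -> dict:
    """Build the JSON body expected by /v1/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }

def chat(base_url: str, payload: dict) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The reply text lives in the first choice's message content.
    return body["choices"][0]["message"]["content"]

payload = build_chat_request("kaluaim/ChatTS-14B-handler", "What is the capital of France?")
# With the server running, the call would be:
# print(chat("http://localhost:8000", payload))
```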
- SGLang
How to use kaluaim/ChatTS-14B-handler with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "kaluaim/ChatTS-14B-handler" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "kaluaim/ChatTS-14B-handler",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "kaluaim/ChatTS-14B-handler" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "kaluaim/ChatTS-14B-handler",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use kaluaim/ChatTS-14B-handler with Docker Model Runner:
```shell
docker model run hf.co/kaluaim/ChatTS-14B-handler
```
Copyright 2024 Alibaba Cloud. All rights reserved.

This software contains code that was originally developed and copyrighted by Alibaba Cloud. The original code is subject to the terms and conditions of the Apache License (Version 2.0), which can be found in the accompanying LICENSE file.

ByteDance and Tsinghua University have made modifications and enhancements to the original code. The modifications are as follows:

- Fine-tuned the Qwen2.5-14B-Instruct model to produce ChatTS.
- Modified `modeling_qwen2.py` and `configuration_qwen2.py` for the ChatTS model.
- Modified the `README.md` file to provide information about the usage of the modified model.

Please note that any distribution of this software must include this NOTICE file intact, along with the original LICENSE file and any other relevant license information, to ensure compliance with all applicable copyright and licensing requirements.

ByteDance and Tsinghua University
December 2024

This NOTICE is provided to clarify the copyright status and licensing of the software, ensuring that all users and distributors are aware of their rights and obligations.