Text Generation
Transformers
TensorBoard
Safetensors
llama
Generated from Trainer
custom_code
text-generation-inference
Instructions to use flytech/togetherchat-dev-7b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use flytech/togetherchat-dev-7b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="flytech/togetherchat-dev-7b", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("flytech/togetherchat-dev-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("flytech/togetherchat-dev-7b", trust_remote_code=True)
```

- Notebooks
- Google Colab
- Kaggle
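As a quick sketch of putting the Transformers snippet above to work: the model id and `trust_remote_code` flag come from the snippet, while the prompt and generation parameters (`max_new_tokens`, `temperature`) are illustrative assumptions, not values documented for this model. The actual call downloads the 7B weights, so it is guarded behind a flag here.

```python
# Illustrative generation settings (assumptions, not from the model card).
gen_kwargs = {
    "max_new_tokens": 256,
    "temperature": 0.5,
    "do_sample": True,
}

prompt = "Write a Python function that reverses a string."

RUN_MODEL = False  # flip to True on a machine with the weights available
if RUN_MODEL:
    from transformers import pipeline

    pipe = pipeline("text-generation", model="flytech/togetherchat-dev-7b", trust_remote_code=True)
    out = pipe(prompt, **gen_kwargs)
    print(out[0]["generated_text"])
```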
- Local Apps
- vLLM
How to use flytech/togetherchat-dev-7b with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "flytech/togetherchat-dev-7b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "flytech/togetherchat-dev-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/flytech/togetherchat-dev-7b
```
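The same OpenAI-compatible `/v1/completions` request that the curl command issues can be built and sent from Python's standard library. A minimal sketch, assuming a server is already running on `localhost:8000` (the request is only sent when `SEND` is flipped on):

```python
import json
from urllib import request

# Same payload as the curl example above.
payload = {
    "model": "flytech/togetherchat-dev-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
body = json.dumps(payload).encode("utf-8")

req = request.Request(
    "http://localhost:8000/v1/completions",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

SEND = False  # flip to True once the server is up
if SEND:
    with request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["text"])
```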
- SGLang
How to use flytech/togetherchat-dev-7b with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "flytech/togetherchat-dev-7b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "flytech/togetherchat-dev-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "flytech/togetherchat-dev-7b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "flytech/togetherchat-dev-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use flytech/togetherchat-dev-7b with Docker Model Runner:
```shell
docker model run hf.co/flytech/togetherchat-dev-7b
```
Training in progress, step 960, checkpoint
last-checkpoint/adapter_model.safetensors (CHANGED)

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6cc3be11ba62ed932e5ba062221a599333547d99a3430e34a7aa1e2675b68c42
 size 40036040
```
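The changed checkpoint files are Git LFS pointer files rather than the raw weights: three text lines giving the spec version, the SHA-256 object id, and the byte size. A minimal sketch of parsing one, using the pointer text from the adapter_model.safetensors diff above:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a {key: value} dict."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>".
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:6cc3be11ba62ed932e5ba062221a599333547d99a3430e34a7aa1e2675b68c42
size 40036040
"""

fields = parse_lfs_pointer(pointer)
print(fields["oid"])   # sha256:6cc3be11...
print(fields["size"])  # 40036040
```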
last-checkpoint/optimizer.pt (CHANGED)

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:2d70ad7c2f47ec9cc0847b096720542445a3c71f383dae4fdda1bfc858d60d33
 size 20524127
```
last-checkpoint/rng_state.pth (CHANGED)

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:cb03a94214794d8ed500d90fca9e455d6a98b3eb0914a5acdce52e4d2823a8b7
 size 14575
```
last-checkpoint/scheduler.pt (CHANGED)

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:0b1031cec1b3d5c56120bdee603429b1488cc4efbb04feaeb164186c47f9797d
 size 627
```
last-checkpoint/trainer_state.json (CHANGED)

```diff
@@ -1,9 +1,9 @@
 {
   "best_metric": null,
   "best_model_checkpoint": null,
-  "epoch": 2.
+  "epoch": 2.8444444444444446,
   "eval_steps": 60,
-  "global_step":
+  "global_step": 960,
   "is_hyper_param_search": false,
   "is_local_process_zero": true,
   "is_world_process_zero": true,
@@ -202,13 +202,26 @@
       "eval_samples_per_second": 1.92,
       "eval_steps_per_second": 0.24,
       "step": 900
+    },
+    {
+      "epoch": 2.84,
+      "learning_rate": 0.0002,
+      "loss": 0.5142,
+      "step": 960
+    },
+    {
+      "epoch": 2.84,
+      "eval_runtime": 312.6866,
+      "eval_samples_per_second": 1.919,
+      "eval_steps_per_second": 0.24,
+      "step": 960
     }
   ],
   "logging_steps": 60,
   "max_steps": 1011,
   "num_train_epochs": 3,
   "save_steps": 60,
-  "total_flos": 1.
+  "total_flos": 1.5945703889043456e+17,
   "trial_name": null,
   "trial_params": null
 }
```
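The bookkeeping in this checkpoint's trainer_state.json is easy to sanity-check: global_step 960 is an exact multiple of save_steps (60), making this the 16th saved checkpoint, and the reported epoch of 2.8444444444444446 corresponds to 337.5 optimizer steps per epoch (an inferred quantity, not a value stored in the file):

```python
save_steps = 60
global_step = 960
reported_epoch = 2.8444444444444446  # from trainer_state.json

# Checkpoints are written every `save_steps` optimizer steps,
# so step 960 is the 16th saved checkpoint.
assert global_step % save_steps == 0
checkpoint_index = global_step // save_steps
print(checkpoint_index)  # 16

# epoch = global_step / steps_per_epoch; inverting recovers the
# (inferred) number of optimizer steps per epoch.
steps_per_epoch = global_step / reported_epoch
print(steps_per_epoch)  # ~337.5
assert abs(steps_per_epoch - 337.5) < 1e-9
```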