Instructions for using gbenonvi/Reply40B with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use gbenonvi/Reply40B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="gbenonvi/Reply40B", trust_remote_code=True)

# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gbenonvi/Reply40B", trust_remote_code=True, dtype="auto")
```
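Note that the direct-load path also needs a tokenizer before you can generate. A minimal sketch (device placement and generation settings are illustrative; `device_map="auto"` assumes the accelerate package is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gbenonvi/Reply40B")
model = AutoModelForCausalLM.from_pretrained(
    "gbenonvi/Reply40B", trust_remote_code=True, dtype="auto", device_map="auto"
)

inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)  # illustrative length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```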
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use gbenonvi/Reply40B with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "gbenonvi/Reply40B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "gbenonvi/Reply40B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/gbenonvi/Reply40B
```
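Because both vLLM and SGLang expose an OpenAI-compatible API, either server can also be called from Python. A minimal sketch, assuming the vLLM server above is running on localhost:8000 (for the SGLang server below, point `base_url` at port 30000 instead; requires `pip install openai`):

```python
from openai import OpenAI

# The API key is unused by a local server but required by the client.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="gbenonvi/Reply40B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```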
- SGLang
How to use gbenonvi/Reply40B with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "gbenonvi/Reply40B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "gbenonvi/Reply40B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "gbenonvi/Reply40B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "gbenonvi/Reply40B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use gbenonvi/Reply40B with Docker Model Runner:
```shell
docker model run hf.co/gbenonvi/Reply40B
```
Open-Assistant reply 40B SFT OASST-TOP1 Model
This model is a fine-tuned version of TII's reply 40B LLM. It was trained on top-1 (highest-ranked) demonstrations from the OASST dataset (exported on May 6, 2023) with an effective batch size of 144 for ~7.5 epochs, using LIMA-style dropout (p=0.3; a sketch of this scheme follows below) and a context length of 2048 tokens.
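As a rough illustration of what LIMA-style dropout means: following the LIMA paper, the residual-dropout rate rises linearly with layer depth, from 0.0 at the bottom layer up to p (here 0.3) at the top layer. The sketch below is a hypothetical illustration of that schedule, not the actual training code; the layer count of 60 is an assumption for a 40B-class model.

```python
import torch.nn as nn

def lima_dropout_rates(num_layers: int, p_max: float = 0.3) -> list[float]:
    """Per-layer dropout rates rising linearly from 0.0 at the bottom
    layer to p_max at the top layer (LIMA-style residual dropout)."""
    if num_layers == 1:
        return [p_max]
    return [p_max * i / (num_layers - 1) for i in range(num_layers)]

# Hypothetical wiring: one residual-dropout module per transformer block.
rates = lima_dropout_rates(num_layers=60, p_max=0.3)  # 60 layers assumed
residual_dropouts = [nn.Dropout(p=r) for r in rates]
```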
Model Details
- Finetuned from: tiiuae/reply-40b
- Model type: Causal decoder-only transformer language model
- Languages: English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish)
- Demo: Continuations for 250 random prompts
- Eval results: ilm-eval
- Weights & Biases: Training log (Checkpoint: 560 steps)
- License: Apache 2.0
- Contact: Open-Assistant Discord
Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
<|prompter|> and <|assistant|>. Each turn ends with a <|endoftext|> token.
Input prompt example:
```text
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the <|assistant|> token to signal that the model should
start generating the assistant reply.
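Putting this format together with the Transformers pipeline shown earlier, a minimal sketch (the sampling settings are illustrative, not tuned values from this card):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="gbenonvi/Reply40B", trust_remote_code=True)

# Wrap the user message in the model's turn markers; the trailing
# <|assistant|> token tells the model to produce the assistant reply.
prompt = (
    "<|prompter|>What is a meme, and what's the history behind this word?"
    "<|endoftext|><|assistant|>"
)

output = pipe(
    prompt,
    max_new_tokens=256,      # illustrative
    do_sample=True,
    temperature=0.5,         # illustrative
    return_full_text=False,  # return only the generated reply
)
print(output[0]["generated_text"])
```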
Configuration Details
Model:
```yaml
reply-40b:
  dtype: bf16
  log_dir: "reply_log_40b"
  learning_rate: 5e-6
  model_name: "tiiuae/reply-40b"
  deepspeed_config: configs/zero3_config_reply.json
  output_dir: reply
  weight_decay: 0.0
  max_length: 2048
  warmup_steps: 20
  gradient_checkpointing: true
  gradient_accumulation_steps: 1
  per_device_train_batch_size: 18
  per_device_eval_batch_size: 10
  eval_steps: 80
  save_steps: 80
  num_train_epochs: 8
  save_total_limit: 4
  use_flash_attention: false
  residual_dropout: 0.3
  residual_dropout_lima: true
  sort_by_length: false
  save_strategy: steps
```
Dataset:
```yaml
oasst-top1:
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
        input_file_path: 2023-05-06_OASST_labels.jsonl.gz
        val_split: 0.05
        top_k: 1
```
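For reference, the effective batch size of 144 mentioned above is consistent with this configuration under the assumption that training ran on 8 GPUs (the GPU count is not stated in the config):

```python
# Effective batch size = per-device batch size × gradient accumulation steps × GPU count.
# The first two values come from the config above; num_gpus = 8 is an assumption
# chosen so that the product matches the reported effective batch size of 144.
per_device_train_batch_size = 18
gradient_accumulation_steps = 1
num_gpus = 8  # assumed, not stated in the config

effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
assert effective_batch_size == 144
```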