Instructions to use elinas/chronos-70b-v2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use elinas/chronos-70b-v2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="elinas/chronos-70b-v2")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("elinas/chronos-70b-v2")
model = AutoModelForCausalLM.from_pretrained("elinas/chronos-70b-v2")
```
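To generate text right away, a minimal sketch along these lines should work (the prompt and sampling values are illustrative, not tuned recommendations):

```python
# Minimal generation sketch; sampling values are illustrative, not tuned.
from transformers import pipeline

pipe = pipeline("text-generation", model="elinas/chronos-70b-v2")

out = pipe(
    "Once upon a time,",
    max_new_tokens=256,
    do_sample=True,
    temperature=0.5,
)
print(out[0]["generated_text"])
```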
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use elinas/chronos-70b-v2 with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "elinas/chronos-70b-v2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "elinas/chronos-70b-v2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
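Because the server exposes an OpenAI-compatible API, you can also call it from Python with the openai client instead of curl; a minimal sketch, assuming a default local server (the API key is a placeholder; a local vLLM server does not validate it):

```python
# Minimal client sketch for the OpenAI-compatible endpoint started above.
from openai import OpenAI

# A local vLLM server does not check the API key; any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

completion = client.completions.create(
    model="elinas/chronos-70b-v2",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```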
- SGLang
How to use elinas/chronos-70b-v2 with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "elinas/chronos-70b-v2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "elinas/chronos-70b-v2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
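The SGLang server is likewise OpenAI-compatible, so the same request can be made from Python; a minimal sketch using requests, matching the launch command above:

```python
# Minimal sketch: POST to the SGLang server started above via its
# OpenAI-compatible completions endpoint.
import requests

resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "elinas/chronos-70b-v2",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```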
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "elinas/chronos-70b-v2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "elinas/chronos-70b-v2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use elinas/chronos-70b-v2 with Docker Model Runner:
```shell
docker model run hf.co/elinas/chronos-70b-v2
```
# chronos-70b-v2
This is the FP16 PyTorch / HF version of chronos-70b-v2, based on the Llama v2 base model. This version will not fit on a consumer GPU; use one of the quantized models linked below!
Big thank you to the Pygmalion team for providing compute. Reach out to me if you would like individual credit.
This model is primarily focused on chat, roleplay, and storywriting, with significantly improved reasoning and logic. It does not have any form of censorship; please use it responsibly.
Chronos can generate very long, coherent outputs, largely due to the human inputs it was trained on, and it supports a context length of up to 4096 tokens.
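As an illustrative check of how much of that window a given prompt consumes, something like the following sketch can be used (the prompt is arbitrary; the budget arithmetic is only indicative, since generated tokens share the same window):

```python
# Sketch: count how many of the 4096 context tokens a prompt uses,
# leaving the remainder as a rough budget for generation.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("elinas/chronos-70b-v2")

CONTEXT_LEN = 4096
prompt = "Once upon a time,"
n_prompt_tokens = len(tokenizer(prompt)["input_ids"])
print(f"prompt uses {n_prompt_tokens} tokens; "
      f"up to {CONTEXT_LEN - n_prompt_tokens} remain for generation")
```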
## License
This model is strictly non-commercial (cc-by-nc-4.0) use only, which takes priority over the LLAMA 2 COMMUNITY LICENSE AGREEMENT. If you'd like to discuss using it for your business, contact Elinas on Discord (elinas) or on X (Twitter) at @officialelinas.
The "Model" is completely free (ie. base model, derivates, merges/mixes) to use for non-commercial purposes as long as the the included cc-by-nc-4.0 license in any parent repository, and the non-commercial use statute remains, regardless of other models' licences. At the moment, only 70b models released will be under this license and the terms may change at any time (ie. a more permissive license allowing commercial use).
## Model Usage
This model uses Alpaca formatting, so for optimal performance, use it to start the dialogue or story; if you use a frontend like SillyTavern, ENABLE Alpaca instruction mode:
```
### Instruction:
Your instruction or question here.

### Response:
```
Not using the format will make the model perform significantly worse than intended.
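As a concrete sketch of applying the format in code (the alpaca_prompt helper and all values below are illustrative, not part of the model card):

```python
# Sketch: wrap a user instruction in the Alpaca format before generation.
from transformers import pipeline

pipe = pipeline("text-generation", model="elinas/chronos-70b-v2")

def alpaca_prompt(instruction: str) -> str:
    # Illustrative helper for the Alpaca template shown above.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

out = pipe(alpaca_prompt("Write a short scene set in a rainy city."),
           max_new_tokens=300, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```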
## Tips
Sampling settings can make a significant difference for this model, so play around with them. I was also informed by a user that, if you are using KoboldCPP, the --unbantokens flag may improve model performance significantly. I have not tested this myself, but it is something to keep in mind.
## Quantized Versions for Consumer GPU Usage
LlamaCPP Versions provided by @TheBloke