Instructions to use elinas/chronos007-70b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use elinas/chronos007-70b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="elinas/chronos007-70b")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("elinas/chronos007-70b")
model = AutoModelForCausalLM.from_pretrained("elinas/chronos007-70b")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use elinas/chronos007-70b with vLLM:
Install from pip and serve the model:

```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "elinas/chronos007-70b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "elinas/chronos007-70b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
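The curl call above can also be made from Python. A minimal client sketch for the OpenAI-compatible `/v1/completions` endpoint, assuming the vLLM server above is running on `localhost:8000` (the function names here are illustrative, not part of vLLM):

```python
import json
import urllib.request


def build_completion_request(prompt: str,
                             model: str = "elinas/chronos007-70b",
                             max_tokens: int = 512,
                             temperature: float = 0.5) -> dict:
    """Build the JSON body for the OpenAI-compatible /v1/completions endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def complete(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """POST the request and return the generated text (requires a running server)."""
    body = json.dumps(build_completion_request(prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```

The same client works against the SGLang server below by changing `base_url` to port 30000.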
- SGLang
How to use elinas/chronos007-70b with SGLang:
Install from pip and serve the model:

```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "elinas/chronos007-70b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "elinas/chronos007-70b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "elinas/chronos007-70b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "elinas/chronos007-70b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use elinas/chronos007-70b with Docker Model Runner:
```sh
docker model run hf.co/elinas/chronos007-70b
```
chronos007-70b fp16
This model is a merge of Chronos-70b-v2 and model 007 at a ratio of 0.3 using the SLERP method, with Chronos as the parent model. It is an experimental merge that improves Chronos' logical and reasoning abilities while keeping the unique prose and general writing style Chronos provides, and serves as a trial run for possible future Chronos models.
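For intuition, SLERP (spherical linear interpolation) blends two weight tensors along the great circle between them rather than along a straight line, which better preserves the geometry of the weights; at t = 0.3 the result stays closer to the parent (Chronos) while moving 30% of the angular distance toward model 007. A minimal illustrative sketch on plain vectors (not the actual merge script used for this model):

```python
import math


def slerp(w0, w1, t):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns w0 (the parent model's weights); t=0.3 moves 30% of
    the angle between the two vectors toward w1.
    """
    dot = sum(a * b for a, b in zip(w0, w1))
    norm0 = math.sqrt(sum(a * a for a in w0))
    norm1 = math.sqrt(sum(b * b for b in w1))
    cos_theta = max(-1.0, min(1.0, dot / (norm0 * norm1)))
    theta = math.acos(cos_theta)
    if theta < 1e-6:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(w0, w1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(w0, w1)]
```

In a real merge this interpolation is applied tensor by tensor across both checkpoints.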
Multiple quantized versions, including GGUF, GPTQ, and AWQ, are available below thanks to @TheBloke.
License
This model is strictly for non-commercial use only (cc-by-nc-4.0), which takes priority over the LLAMA 2 COMMUNITY LICENSE AGREEMENT. If you'd like to discuss using it for your business, contact elinas on Discord (elinas) or on X (Twitter) @officialelinas.
The "Model" is completely free (ie. base model, derivates, merges/mixes) to use for non-commercial purposes as long as the the included cc-by-nc-4.0 license in any parent repository, and the non-commercial use statute remains, regardless of other models' licences. At the moment, only 70b models released will be under this license and the terms may change at any time (ie. a more permissive license allowing commercial use).
Model Usage
This model uses Alpaca formatting, so for optimal performance, use it to start the dialogue or story. If you use a frontend like SillyTavern, ENABLE Alpaca instruction mode:

```
### Instruction:
Your instruction or question here.

### Response:
```
Not using the format will make the model perform significantly worse than intended.
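When prompting programmatically, the template can be built with a small helper (the function name here is illustrative, not part of the model card):

```python
def format_alpaca(instruction: str, response: str = "") -> str:
    """Wrap an instruction in the Alpaca template the model expects."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}"


prompt = format_alpaca("Write the opening paragraph of a mystery story.")
# Pass `prompt` to the Transformers pipeline or the completions endpoint above.
```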