Instructions to use ContextualAI/archangel_sft-csft_pythia2-8b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ContextualAI/archangel_sft-csft_pythia2-8b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ContextualAI/archangel_sft-csft_pythia2-8b")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ContextualAI/archangel_sft-csft_pythia2-8b")
model = AutoModelForCausalLM.from_pretrained("ContextualAI/archangel_sft-csft_pythia2-8b")
```
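For example, a quick generation call with the pipeline loaded above (the prompt and sampling settings are illustrative):

```python
# Generate a short continuation with the pipeline
output = pipe("Once upon a time,", max_new_tokens=50, do_sample=True, temperature=0.5)
print(output[0]["generated_text"])
```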
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ContextualAI/archangel_sft-csft_pythia2-8b with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ContextualAI/archangel_sft-csft_pythia2-8b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ContextualAI/archangel_sft-csft_pythia2-8b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:
```bash
docker model run hf.co/ContextualAI/archangel_sft-csft_pythia2-8b
```
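Either way, the server exposes an OpenAI-compatible API, so it can also be called from Python. A minimal sketch, assuming the `openai` package is installed and the server above is listening on localhost:8000:

```python
# Query the local vLLM server through its OpenAI-compatible endpoint
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # any placeholder key works locally
completion = client.completions.create(
    model="ContextualAI/archangel_sft-csft_pythia2-8b",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```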
- SGLang
How to use ContextualAI/archangel_sft-csft_pythia2-8b with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ContextualAI/archangel_sft-csft_pythia2-8b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ContextualAI/archangel_sft-csft_pythia2-8b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ContextualAI/archangel_sft-csft_pythia2-8b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ContextualAI/archangel_sft-csft_pythia2-8b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
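The SGLang server speaks the same OpenAI-compatible completions API, so the curl call above can also be issued from Python. A short sketch using the standard `requests` library, assuming the server is running on localhost:30000:

```python
# POST a completion request to the local SGLang server
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "ContextualAI/archangel_sft-csft_pythia2-8b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```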
- Docker Model Runner
How to use ContextualAI/archangel_sft-csft_pythia2-8b with Docker Model Runner:
```bash
docker model run hf.co/ContextualAI/archangel_sft-csft_pythia2-8b
```
This repo contains the model checkpoints for:
- model family pythia2-8b
- optimized with the SFT+CSFT loss
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where <|user|> corresponds to the human's role and <|assistant|> corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT loss, the tokenizer includes the additional control tokens <|good|> and <|bad|>, with corresponding entries in the embedding matrix.
To generate with these control tokens in the context, append either one to the end of the prompt.
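As a concrete illustration, here is a minimal sketch that builds a TuluV2-style prompt and appends the <|good|> control token before generating. The conversation text and sampling settings are illustrative, not part of the original recipe:

```python
# Build a TuluV2-style prompt and generate with the <|good|> control token appended
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ContextualAI/archangel_sft-csft_pythia2-8b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The human speaks first; the prompt ends with the assistant tag plus a control token.
prompt = (
    "<|user|>\n"
    "Hi! I'm looking for a cake recipe.\n"
    "<|assistant|>\n"
    "<|good|>"  # or "<|bad|>"; omit for generation without a control token
)

# The tokenizer adds the BOS token automatically; no EOS token is appended to the prompt.
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```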
Please refer to our code repository or blog, which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite our work:
```bibtex
@techreport{ethayarajh2023halos,
  author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
  title = {Human-Centered Loss Functions (HALOs)},
  institution = {Contextual AI},
  note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
  year = {2023},
}
```