Code Llama 13B with End-of-turn (EOT) Token
This is the Code Llama 13B model with an `<|end_of_turn|>` token added as token id 32016, along with other special tokens. The new token's input and output embeddings are each initialized as the mean of all existing input and output token embeddings, respectively.
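The mean-initialization scheme described above can be sketched in a few lines of NumPy. This is a generic illustration, not the actual conversion script used for this model; `add_token_embedding` is a hypothetical helper name.

```python
import numpy as np

def add_token_embedding(emb: np.ndarray) -> np.ndarray:
    # Append one new row initialized to the mean of all existing token
    # embeddings. The same scheme would be applied separately to the
    # input and output embedding matrices.
    new_row = emb.mean(axis=0, keepdims=True)
    return np.concatenate([emb, new_row], axis=0)

# Toy vocabulary: 4 tokens with embedding dimension 3.
emb = np.arange(12, dtype=np.float32).reshape(4, 3)
extended = add_token_embedding(emb)
print(extended.shape)  # (5, 3)
print(extended[-1])    # mean of the original rows
```

In the real model the same operation is repeated once per added special token, growing the vocabulary from 32016 to 32020.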
Special tokens added:

```json
{
  "<|end_of_turn|>": 32016,
  "<|verdict|>": 32017,
  "<|PAD|>": 32018,
  "<|PAD2|>": 32019
}
```
Usage with Transformers:

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("imone/CodeLlama_13B_with_EOT_token", dtype="auto")
```