Instructions to use RaphaelMourad/Mistral-Codon-v1-16M with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use RaphaelMourad/Mistral-Codon-v1-16M with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="RaphaelMourad/Mistral-Codon-v1-16M")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Codon-v1-16M")
model = AutoModelForCausalLM.from_pretrained("RaphaelMourad/Mistral-Codon-v1-16M")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use RaphaelMourad/Mistral-Codon-v1-16M with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "RaphaelMourad/Mistral-Codon-v1-16M"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RaphaelMourad/Mistral-Codon-v1-16M",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- SGLang
How to use RaphaelMourad/Mistral-Codon-v1-16M with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "RaphaelMourad/Mistral-Codon-v1-16M" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RaphaelMourad/Mistral-Codon-v1-16M",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "RaphaelMourad/Mistral-Codon-v1-16M" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RaphaelMourad/Mistral-Codon-v1-16M",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use RaphaelMourad/Mistral-Codon-v1-16M with Docker Model Runner:
```shell
docker model run hf.co/RaphaelMourad/Mistral-Codon-v1-16M
```
Model Card for Mistral-Codon-v1-16M (Mistral for coding DNA)
The Mistral-Codon-v1-16M Large Language Model (LLM) is a pretrained generative DNA sequence model with 16M parameters. It is derived from the Mixtral-8x7B-v0.1 model, simplified for DNA: the number of layers and the hidden size were reduced. The model was pretrained on 24M coding DNA sequences (3,000 bp each) from many different species (vertebrates, plants, bacteria, viruses, ...).
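The embedding example further down feeds the tokenizer a sequence of space-separated codons. A small helper like the following (hypothetical, not part of the model repository) can turn a raw coding sequence into that format:

```python
def to_codons(seq: str) -> str:
    """Split a coding DNA sequence into space-separated codons (3-mers)."""
    return " ".join(seq[i:i + 3] for i in range(0, len(seq), 3))

print(to_codons("TGATGATTGGCGCGGCTAGGATCG"))
# TGA TGA TTG GCG CGG CTA GGA TCG
```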
Model Architecture
Like Mixtral-8x7B-v0.1, it is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
- Mixture of Experts
Load the model from Hugging Face:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Codon-v1-16M", trust_remote_code=True)
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Codon-v1-16M", trust_remote_code=True)
```
Calculate the embedding of a coding sequence:

```python
codon_dna = "TGA TGA TTG GCG CGG CTA GGA TCG GCT"
inputs = tokenizer(codon_dna, return_tensors="pt")["input_ids"]
hidden_states = model(inputs)[0]  # [1, sequence_length, 256]

# Embedding with max pooling over the sequence dimension
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape)  # expected: torch.Size([256])
```
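Max pooling is only one way to collapse the per-token hidden states into a single vector; mean pooling is a common alternative. A minimal sketch on a dummy tensor (standing in for the model output, with the 256-dimensional hidden size assumed from the comment above):

```python
import torch

# Dummy hidden states in place of the model output: [1, 9 tokens, 256]
hidden_states = torch.randn(1, 9, 256)

# Max pooling over the sequence dimension (as in the snippet above)
embedding_max = torch.max(hidden_states[0], dim=0)[0]

# Mean pooling: average the token representations instead
embedding_mean = torch.mean(hidden_states[0], dim=0)

print(embedding_max.shape)   # torch.Size([256])
print(embedding_mean.shape)  # torch.Size([256])
```

Which pooling works better is task-dependent; both yield a fixed-size vector regardless of sequence length.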
Troubleshooting
Ensure you are using a stable version of Transformers (4.34.0 or newer).
Notice
Mistral-Codon-v1-16M is a pretrained base model for coding DNA.
Contact
Raphaël Mourad. raphael.mourad@univ-tlse3.fr