Text Generation
Transformers
Safetensors
mixtral
biology
protein-language-model
protein-generation
msa
multiple-sequence-alignment
few-shot-prompting
homolog-conditioned-generation
causal-lm
mixture-of-experts
text-generation-inference
Instructions for using protgpt3/ProtGPT3-MSA with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use protgpt3/ProtGPT3-MSA with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="protgpt3/ProtGPT3-MSA")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("protgpt3/ProtGPT3-MSA")
model = AutoModelForCausalLM.from_pretrained("protgpt3/ProtGPT3-MSA")
```
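The model's tags (few-shot-prompting, homolog-conditioned-generation, msa) suggest prompting with aligned homolog sequences. A minimal generation sketch follows; note that the prompt layout (the `<query>` tag, space-separated residues, and the example sequence itself) is an assumption inferred from the tokenizer vocabulary shown further below, not a documented prompt format:

```python
# Sketch: sample a protein continuation with the pipeline.
# ASSUMPTION: the prompt format below (<query> tag, one space
# between residues) is inferred from the tokenizer's WordLevel
# vocabulary and WhitespaceSplit pre-tokenizer, not from official
# documentation for this model.
from transformers import pipeline

pipe = pipeline("text-generation", model="protgpt3/ProtGPT3-MSA")

prompt = "<query> M K T A Y I A K Q R"  # hypothetical query prefix
out = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=1.0)
print(out[0]["generated_text"])
```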
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use protgpt3/ProtGPT3-MSA with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "protgpt3/ProtGPT3-MSA"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "protgpt3/ProtGPT3-MSA",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
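Because the vLLM server exposes the OpenAI completions API, the official `openai` Python client can be used instead of curl. A sketch, assuming the server above is running on localhost:8000 (the `api_key` value is a placeholder, which vLLM does not validate by default):

```python
# Sketch: query the vLLM server via its OpenAI-compatible API.
# Assumes `vllm serve` from above is listening on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="protgpt3/ProtGPT3-MSA",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```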
Use Docker

```bash
docker model run hf.co/protgpt3/ProtGPT3-MSA
```
- SGLang
How to use protgpt3/ProtGPT3-MSA with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "protgpt3/ProtGPT3-MSA" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "protgpt3/ProtGPT3-MSA",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
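SGLang serves the same OpenAI-compatible completions endpoint, so it can also be called from Python. A minimal sketch using the `requests` library, assuming the server above is up on port 30000:

```python
# Sketch: call the SGLang server's OpenAI-compatible endpoint.
# Assumes the launch_server command above is running on port 30000.
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "protgpt3/ProtGPT3-MSA",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```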
Use Docker images

```bash
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "protgpt3/ProtGPT3-MSA" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "protgpt3/ProtGPT3-MSA",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

- Docker Model Runner
How to use protgpt3/ProtGPT3-MSA with Docker Model Runner:
docker model run hf.co/protgpt3/ProtGPT3-MSA
tokenizer.json (1,584 Bytes):

```json
{
"version": "1.0",
"truncation": null,
"padding": null,
"added_tokens": [
{
"id": 0,
"content": "<|pad|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
{
"id": 1,
"content": "<|bos|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
{
"id": 2,
"content": "<|eos|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
{
"id": 3,
"content": "<unk>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
}
],
"normalizer": null,
"pre_tokenizer": {
"type": "WhitespaceSplit"
},
"post_processor": null,
"decoder": null,
"model": {
"type": "WordLevel",
"vocab": {
"<|pad|>": 0,
"<|bos|>": 1,
"<|eos|>": 2,
"<unk>": 3,
"<gap>": 4,
"<no_gap>": 5,
"<query>": 6,
"<s>": 7,
"-": 8,
"1": 9,
"2": 10,
"A": 11,
"B": 12,
"C": 13,
"D": 14,
"E": 15,
"F": 16,
"G": 17,
"H": 18,
"I": 19,
"K": 20,
"L": 21,
"M": 22,
"N": 23,
"O": 24,
"P": 25,
"Q": 26,
"R": 27,
"S": 28,
"T": 29,
"U": 30,
"V": 31,
"W": 32,
"X": 33,
"Y": 34,
"Z": 35
},
"unk_token": "<unk>"
}
}
```
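The file defines a WordLevel tokenizer with a WhitespaceSplit pre-tokenizer: input must be space-separated, and each amino-acid letter, gap character, or tag (`<query>`, `<gap>`, `<no_gap>`, ...) is a single token; anything outside the 36-entry vocabulary maps to `<unk>`. A quick sketch with the `tokenizers` library to check this behavior, assuming the repo is accessible on the Hub (the example sequence is arbitrary):

```python
# Sketch: load the WordLevel tokenizer and inspect its output.
# WhitespaceSplit means residues must be separated by spaces.
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

path = hf_hub_download("protgpt3/ProtGPT3-MSA", "tokenizer.json")
tokenizer = Tokenizer.from_file(path)

encoding = tokenizer.encode("<query> M K T A Y - - I A")
print(encoding.tokens)  # ['<query>', 'M', 'K', 'T', 'A', 'Y', '-', '-', 'I', 'A']
print(encoding.ids)     # [6, 22, 20, 29, 11, 34, 8, 8, 19, 11] per the vocab above
```

Since `post_processor` is null, no `<|bos|>` or `<|eos|>` tokens are added automatically; a caller would need to include them in the prompt string if the model expects them.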