Instructions to use GreatCaptainNemo/ProLLaMA with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use GreatCaptainNemo/ProLLaMA with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="GreatCaptainNemo/ProLLaMA")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("GreatCaptainNemo/ProLLaMA")
model = AutoModelForCausalLM.from_pretrained("GreatCaptainNemo/ProLLaMA")
```
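As a quick usage sketch (not part of the original snippet), you can call the pipeline with a prompt in ProLLaMA's instruction format, documented in the Input Format section further down this page; the sampling parameters here are illustrative assumptions:
```python
# Continue from `pipe` above; the prompt format is documented under
# "Input Format" later on this page. Sampling settings are illustrative.
result = pipe(
    "[Generate by superfamily] Superfamily=<Ankyrin repeat-containing domain superfamily>",
    max_new_tokens=256,
    do_sample=True,
    temperature=0.5,
)
print(result[0]["generated_text"])
```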
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use GreatCaptainNemo/ProLLaMA with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "GreatCaptainNemo/ProLLaMA"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "GreatCaptainNemo/ProLLaMA",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
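If you prefer Python over curl, here is a minimal sketch using the `openai` client package against the OpenAI-compatible server started above; the package, the `api_key="EMPTY"` placeholder, and the prompt (taken from the Input Format section below) are assumptions beyond the original snippet:
```python
# Minimal sketch: query the local vLLM server via its OpenAI-compatible API.
# Assumes `vllm serve` from above is running on http://localhost:8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # placeholder; vLLM does not require a real key by default
)

completion = client.completions.create(
    model="GreatCaptainNemo/ProLLaMA",
    prompt="[Generate by superfamily] Superfamily=<Ankyrin repeat-containing domain superfamily>",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```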
Use Docker
```shell
docker model run hf.co/GreatCaptainNemo/ProLLaMA
```
- SGLang
How to use GreatCaptainNemo/ProLLaMA with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "GreatCaptainNemo/ProLLaMA" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "GreatCaptainNemo/ProLLaMA",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
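The same completion call can be made from Python with only the `requests` package; a minimal sketch, assuming the SGLang server above is listening on localhost:30000 (the prompt reuses an example from the Input Format section below):
```python
# Minimal sketch: POST a completion request to the local SGLang server's
# OpenAI-compatible endpoint.
import requests

resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "GreatCaptainNemo/ProLLaMA",
        "prompt": "[Generate by superfamily] Superfamily=<Ankyrin repeat-containing domain superfamily>",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```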
Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "GreatCaptainNemo/ProLLaMA" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "GreatCaptainNemo/ProLLaMA",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
- Docker Model Runner
How to use GreatCaptainNemo/ProLLaMA with Docker Model Runner:
```shell
docker model run hf.co/GreatCaptainNemo/ProLLaMA
```
The following "Input Format" section was appended to README.md, immediately after the end of the existing inference script:
```python
    with open(args.output_file,'w') as f:
        f.write("\n".join(outputs))
    print("All the outputs have been saved in",args.output_file)
```
# Input Format:
The instructions you input to the model should follow this format:
```text
[Generate by superfamily] Superfamily=<xxx>
or
[Determine superfamily] Seq=<yyy>
```
Here are some examples of the input:
```text
[Generate by superfamily] Superfamily=<Ankyrin repeat-containing domain superfamily>
```
```text
# You can also specify the first few amino acids of the protein sequence:
[Generate by superfamily] Superfamily=<Ankyrin repeat-containing domain superfamily> Seq=<MKRVL
```
```text
[Determine superfamily] Seq=<MAPGGMPREFPSFVRTLPEADLGYPALRGWVLQGERGCVLYWEAVTEVALPEHCHAECWGVVVDGRMELMVDGYTRVYTRGDLYVVPPQARHRARVFPGFRGVEHLSDPDLLPVRKR>
```
**See [this](https://github.com/Lyu6PosHao/ProLLaMA/blob/main/superfamilies.txt) for all the available superfamilies.**
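Putting it together, here is a minimal end-to-end sketch that loads the model with Transformers (as at the top of this page) and generates from an instruction in the format above; the generation settings are illustrative assumptions, not values from the model card:
```python
# Minimal end-to-end sketch: load ProLLaMA and generate a protein sequence
# from a superfamily instruction in the documented input format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GreatCaptainNemo/ProLLaMA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "[Generate by superfamily] Superfamily=<Ankyrin repeat-containing domain superfamily>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling settings are illustrative, not taken from the model card.
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```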