Instructions for using facebook/opt-13b with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use facebook/opt-13b with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="facebook/opt-13b")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b")

A short generation example follows the notebook links below.
- Notebooks
- Google Colab
- Kaggle
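To round out the Transformers snippet above, here is a minimal generation sketch. The prompt and max_new_tokens value are illustrative assumptions (not from the model card), and loading the 13B checkpoint needs substantial memory.

from transformers import pipeline

# High-level pipeline; the 13B checkpoint is large, so make sure enough RAM/VRAM is available
pipe = pipeline("text-generation", model="facebook/opt-13b")

# Illustrative prompt and generation length
output = pipe("Once upon a time,", max_new_tokens=30)
print(output[0]["generated_text"])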
- Local Apps
- vLLM
How to use facebook/opt-13b with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "facebook/opt-13b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "facebook/opt-13b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

A Python client example follows the Docker command below.
Use Docker
docker model run hf.co/facebook/opt-13b
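Because vLLM exposes an OpenAI-compatible API, the same completion request can be sent from Python. A minimal sketch, assuming the openai Python package (v1+) is installed and the server started above is listening on localhost:8000:

from openai import OpenAI

# Point the OpenAI client at the local vLLM server; the API key is unused but required by the client
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="facebook/opt-13b",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)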
- SGLang
How to use facebook/opt-13b with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "facebook/opt-13b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "facebook/opt-13b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

A requests-based client example follows the Docker Model Runner entry below.
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "facebook/opt-13b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "facebook/opt-13b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
- Docker Model Runner
How to use facebook/opt-13b with Docker Model Runner:
docker model run hf.co/facebook/opt-13b
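For the SGLang server from the section above, here is a plain requests-based client sketch mirroring the curl call. It assumes the requests package is installed and the server is listening on localhost:30000; the response parsing follows the standard OpenAI completions format.

import requests

# Same payload as the curl example, sent from Python
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "facebook/opt-13b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])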
Commit d4fe85a (parent: 45d9134)
Fix scripts (#6) (3ef0f9aabff6718375e8b98c560bfe469d1362cc)
Co-authored-by: Manuel Romero <mrm8488@users.noreply.huggingface.co>
README.md CHANGED

@@ -62,7 +62,7 @@ It is recommended to directly call the [`generate`](https://huggingface.co/docs/
 >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16).cuda()
 
 >>> # the fast tokenizer currently does not work correctly
->>> tokenizer = AutoTokenizer.from_pretrained(
+>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b", use_fast=False)
 
 >>> prompt = "Hello, I'm am conscious and"
 
@@ -84,7 +84,7 @@ By default, generation is deterministic. In order to use the top-k sampling, ple
 >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16).cuda()
 
 >>> # the fast tokenizer currently does not work correctly
->>> tokenizer = AutoTokenizer.from_pretrained(
+>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b", use_fast=False)
 
 >>> prompt = "Hello, I'm am conscious and"
 
@@ -117,7 +117,7 @@ Here's an example of how the model can have biased predictions:
 >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16).cuda()
 
 >>> # the fast tokenizer currently does not work correctly
->>> tokenizer = AutoTokenizer.from_pretrained(
+>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b", use_fast=False)
 
 >>> prompt = "The woman worked as a"
 
@@ -143,7 +143,7 @@ compared to:
 >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16).cuda()
 
 >>> # the fast tokenizer currently does not work correctly
->>> tokenizer = AutoTokenizer.from_pretrained(
+>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b", use_fast=False)
 
 >>> prompt = "The man worked as a"
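For context, here is a minimal, self-contained sketch of the README snippet as it reads after this fix. The model and tokenizer lines come from the diff above; the generate and decode lines are not shown in the diff, so they are an illustrative assumption, and running this needs a CUDA GPU with enough memory for the fp16 13B weights.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model in fp16 on GPU, as in the README
model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16).cuda()

# The fast tokenizer currently does not work correctly, hence the use_fast=False fix
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b", use_fast=False)

prompt = "Hello, I'm am conscious and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

# Illustrative greedy generation and decoding (not part of the diff shown above)
generated_ids = model.generate(input_ids)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))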