Instructions for using facebook/opt-350m with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use facebook/opt-350m with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="facebook/opt-350m")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
```
- Notebooks
- Google Colab
- Kaggle
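Returning to the Transformers snippet above: the `text-generation` pipeline accepts standard Transformers generation keyword arguments. As a minimal sketch, the helper below (hypothetical, not part of the snippet above) bundles a few common ones so the call site stays tidy:

```python
# Hypothetical helper: bundle common generation parameters for the
# "text-generation" pipeline; the names match Transformers' generate() kwargs.
def generation_kwargs(max_new_tokens=64, temperature=0.7):
    return {
        "max_new_tokens": max_new_tokens,   # cap on newly generated tokens
        "temperature": temperature,         # sampling temperature
        "do_sample": temperature > 0,       # greedy decoding when temperature is 0
    }

# With the pipeline loaded as above, a call might look like:
#   pipe("Once upon a time,", **generation_kwargs())
print(generation_kwargs(max_new_tokens=32))
```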
- Local Apps
- vLLM
How to use facebook/opt-350m with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "facebook/opt-350m"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "facebook/opt-350m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker:
```shell
docker model run hf.co/facebook/opt-350m
```
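The curl call above can also be made from Python. A minimal sketch using only the standard library, assuming a vLLM server is already running on localhost:8000 (only the request body is constructed and checked here; the actual HTTP call, which needs a live server, is left commented out):

```python
import json
from urllib import request

# Build the OpenAI-compatible /v1/completions payload from the curl example.
def completion_payload(model, prompt, max_tokens=512, temperature=0.5):
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = completion_payload("facebook/opt-350m", "Once upon a time,")
body = json.dumps(payload).encode("utf-8")

# Sending the request (requires the vLLM server started above):
# req = request.Request(
#     "http://localhost:8000/v1/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
print(json.loads(body)["model"])
```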
- SGLang
How to use facebook/opt-350m with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "facebook/opt-350m" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "facebook/opt-350m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "facebook/opt-350m" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "facebook/opt-350m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use facebook/opt-350m with Docker Model Runner:
```shell
docker model run hf.co/facebook/opt-350m
```
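The vLLM and SGLang servers above both return OpenAI-style completion responses, so the response handling is identical for either. A minimal sketch of extracting the generated text, using a hand-written sample response since a live server is assumed:

```python
# Extract the generated text from an OpenAI-compatible /v1/completions
# response body (same shape for vLLM and SGLang).
def completion_text(response: dict) -> str:
    return response["choices"][0]["text"]

# Hand-written sample response, mirroring the OpenAI completions schema:
sample = {
    "id": "cmpl-example",
    "object": "text_completion",
    "model": "facebook/opt-350m",
    "choices": [
        {"index": 0, "text": " there was a small village.", "finish_reason": "length"}
    ],
}
print(completion_text(sample))
```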
Community discussions and pull requests for facebook/opt-350m:
- #42: Adding ONNX file of this model (opened 6 months ago by afilip)
- #41: Corrected the spelling from modedling to modeling (opened about 1 year ago by venugopalkadamba)
- #40: Adding `safetensors` variant of this model (opened over 1 year ago by SFconvertbot)
- #39: Get attention weights of input tokens (opened about 2 years ago by Reynier)
- #38: Has 331M params and not 350M params! (opened about 2 years ago by ramaseshan1)
- #37: Pretraining error (opened over 2 years ago by saivineetha)
- #36: Adding `safetensors` variant of this model (opened over 2 years ago by seungahdev)
- #35: safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer (opened over 2 years ago by saivineetha)
- #34: Adding Evaluation Results (opened over 2 years ago by leaderboard-pr-bot)
- #33: Adding `safetensors` variant of this model (opened over 2 years ago by SFconvertbot)
- #30: The last layer returns a wrong embedding dimension (opened over 2 years ago by macleginn)
- #28: Smudge error: Error downloading flax_model.msgpack (opened almost 3 years ago by julian-q)
- #27: Can anybody help me how to evaluate opt-350m model in gpt4all, please? (opened almost 3 years ago by alinh1803)
- #26: Fixed typo (opened almost 3 years ago by InfamousPlatypus)
- #25: Not match vocab size in vocal.json and config.json (opened almost 3 years ago by myleee)
- #24: Adding `safetensors` variant of this model (opened about 3 years ago by pdtgct)
- #23: NameError: name 'init_empty_weights' is not defined when using load_in_8bit=True (opened over 3 years ago by linkanjarad)
- #22: Add evaluation results on the top_en_ config and test split of futin/feed (opened over 3 years ago by autoevaluator)
- #21: Add evaluation results on the sen_en_ config and test split of futin/feed (opened over 3 years ago by autoevaluator)
- #20: Add evaluation results on the sen_en config and test split of futin/feed (opened over 3 years ago by autoevaluator)
- #19: Add evaluation results on the sen_vi config and test split of futin/feed (opened over 3 years ago by autoevaluator)
- #18: Add evaluation results on the top_en config and test split of futin/feed (opened over 3 years ago by autoevaluator)
- #17: Add evaluation results on the top_vi config and test split of futin/feed (opened over 3 years ago by autoevaluator)
- #16: How to Run from Local Install (opened over 3 years ago by PoppaDoc)
- #15: Add evaluation results on the vi_3 config and test split of futin/guess (opened over 3 years ago by autoevaluator)
- #14: Add evaluation results on the en_3 config and test split of futin/guess (opened over 3 years ago by autoevaluator)
- #13: Add evaluation results on the vi config and test split of futin/guess (opened over 3 years ago by autoevaluator)
- #12: Add evaluation results on the en config and test split of futin/guess (opened over 3 years ago by autoevaluator)
- #10: Add evaluation results on the mathemakitten--winobias_antistereotype_test config and test split of mathemakitten/winobias_antistereotype_test (opened over 3 years ago by autoevaluator)
- #9: Add evaluation results on the jeffdshen--inverse_superglue_mixedp1 config and train split of jeffdshen/inverse_superglue_mixedp1 (opened over 3 years ago by autoevaluator)
- #8: Add evaluation results on the jeffdshen--redefine_math_test0 config and train split of jeffdshen/redefine_math_test0 (opened over 3 years ago by autoevaluator)
- #7: Add evaluation results on the mathemakitten--winobias_antistereotype_test config and test split of mathemakitten/winobias_antistereotype_test (opened over 3 years ago by autoevaluator)
- #6: Add evaluation results on the mathemakitten--winobias_antistereotype_dev config and validation split of mathemakitten/winobias_antistereotype_dev (opened over 3 years ago by autoevaluator)
- #5: Remove unused `activation_dropout` (opened over 3 years ago by shijie-wu)