Instructions for using facebook/opt-350m with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use facebook/opt-350m with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="facebook/opt-350m")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
```

- Notebooks
- Google Colab
- Kaggle
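As a quick sketch of running the Transformers pipeline above end to end (the prompt and generation settings here are illustrative, not part of the official snippet):

```python
from transformers import pipeline

# Same high-level helper as in the Transformers snippet above.
pipe = pipeline("text-generation", model="facebook/opt-350m")

# Greedy decoding keeps the run deterministic; the settings are illustrative.
result = pipe("Once upon a time,", max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])
```

By default the pipeline returns the prompt followed by the model's continuation in `generated_text`.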
- Local Apps
- vLLM
How to use facebook/opt-350m with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "facebook/opt-350m"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "facebook/opt-350m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/facebook/opt-350m
```
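The curl call above can also be reproduced from Python with only the standard library. This is a sketch: the payload mirrors the request body in the vLLM section (prompt and sampling values are illustrative), and the network call is left commented out because it requires the server started with `vllm serve`:

```python
import json

# Request body for the OpenAI-compatible /v1/completions endpoint,
# mirroring the curl example above (values are illustrative).
payload = {
    "model": "facebook/opt-350m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
body = json.dumps(payload).encode("utf-8")

# To send it against a running server (uncomment once `vllm serve` is up):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])

print(json.loads(body)["model"])
```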
- SGLang
How to use facebook/opt-350m with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "facebook/opt-350m" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "facebook/opt-350m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "facebook/opt-350m" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "facebook/opt-350m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use facebook/opt-350m with Docker Model Runner:
```shell
docker model run hf.co/facebook/opt-350m
```
Smudge error: Error downloading flax_model.msgpack
#28
by julian-q - opened
Hi,
I'm trying to download opt-350m via LFS with `git clone https://huggingface.co/facebook/opt-350m`, but for some reason I get the following error:
```
Cloning into 'opt-350m'...
remote: Enumerating objects: 112, done.
remote: Total 112 (delta 0), reused 0 (delta 0), pack-reused 112
Receiving objects: 100% (112/112), 556.34 KiB | 4.49 MiB/s, done.
Resolving deltas: 100% (57/57), done.
Downloading flax_model.msgpack (662 MB)
Error downloading object: flax_model.msgpack (e70fc22): Smudge error: Error downloading flax_model.msgpack (e70fc2225b84a806461f18321e9fc0761f1cb649801abe42a5e894b0e11ad891): [e70fc2225b84a806461f18321e9fc0761f1cb649801abe42a5e894b0e11ad891] Object does not exist: [404] Object does not exist

Errors logged to /mnt/workdisk/julian/opt-350m/.git/lfs/logs/20230704T012732.345408455.log
Use `git lfs logs last` to view the log.
error: external filter 'git-lfs filter-process' failed
fatal: flax_model.msgpack: smudge filter lfs failed
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'
```
Any ideas? Thanks so much.