Instructions for using jprafael/mpt-7b-instruct-sharded with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use jprafael/mpt-7b-instruct-sharded with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="jprafael/mpt-7b-instruct-sharded", trust_remote_code=True)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jprafael/mpt-7b-instruct-sharded", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("jprafael/mpt-7b-instruct-sharded", trust_remote_code=True)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use jprafael/mpt-7b-instruct-sharded with vLLM:
Install from pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "jprafael/mpt-7b-instruct-sharded"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jprafael/mpt-7b-instruct-sharded",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
Use Docker
docker model run hf.co/jprafael/mpt-7b-instruct-sharded
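Once the pip-installed vLLM server above is running, you can also call it from Python. A minimal sketch, assuming the openai client package (v1+) and the default port 8000 used in the curl example; the prompt and sampling values are placeholders:

# Query the OpenAI-compatible vLLM endpoint from Python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM does not require a real key by default

completion = client.completions.create(
    model="jprafael/mpt-7b-instruct-sharded",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)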
- SGLang
How to use jprafael/mpt-7b-instruct-sharded with SGLang:
Install from pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "jprafael/mpt-7b-instruct-sharded" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jprafael/mpt-7b-instruct-sharded",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "jprafael/mpt-7b-instruct-sharded" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jprafael/mpt-7b-instruct-sharded",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
- Docker Model Runner
How to use jprafael/mpt-7b-instruct-sharded with Docker Model Runner:
docker model run hf.co/jprafael/mpt-7b-instruct-sharded
mpt-7b-instruct: sharded
This is a version of the mpt-7b-instruct model, sharded into 2 GB chunks for low-RAM loading (e.g., in Colab).
The weights are stored in bfloat16, so in theory you can run this on CPU, though it may take forever.
Original code and credits go to mpt-7b-storywriter-sharded.
See the community discussion on how to replicate this.
Please refer to the previously linked repo for details on usage/implementation/etc. This model was downloaded from the original repo under Apache-2.0 and is redistributed under the same license.
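For reference, re-sharding can be done with save_pretrained and a max_shard_size limit. A minimal sketch, where the source repo name and output path are assumptions (see the community discussion above for the actual steps used):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

src = 'mosaicml/mpt-7b-instruct'      # assumed original (unsharded) checkpoint
out_dir = 'mpt-7b-instruct-sharded'   # hypothetical local output directory

model = AutoModelForCausalLM.from_pretrained(
    src,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(src)

# Write the weights back out in ~2 GB shards plus the index file
model.save_pretrained(out_dir, max_shard_size='2GB')
tokenizer.save_pretrained(out_dir)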
Basic Usage
Note when using: give the model enough input text (or a clearly phrased instruction) for it to continue generating something on-topic with your prompt.
Install/upgrade packages:
pip install -U torch transformers accelerate einops
Load the model:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = 'jprafael/mpt-7b-instruct-sharded'
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
revision='8d8911ad980f48f8a791e5f5876dea891dcbc064', # optional, but a good idea
device_map='auto',
load_in_8bit=False, # install bitsandbytes, then set to True for 8-bit loading
)
model = torch.compile(model) # optional; requires PyTorch 2.x
tokenizer = AutoTokenizer.from_pretrained(model_name)
Then you can use model.generate() as you normally would; see the notebook for details.
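As a minimal generation sketch (the prompt text and sampling parameters below are just placeholders):

prompt = 'Explain what model sharding is and why it helps on low-RAM machines.'
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,  # set explicitly in case the tokenizer defines no pad token
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))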