Instructions for using stabilityai/stablelm-base-alpha-3b with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use stabilityai/stablelm-base-alpha-3b with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="stabilityai/stablelm-base-alpha-3b")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-base-alpha-3b")
model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-base-alpha-3b")
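With the pipeline loaded, generation is a single call. A minimal usage sketch (the prompt and sampling parameters below are illustrative, not from the model card):

# Generate a short continuation with the pipeline loaded above
outputs = pipe(
    "Once upon a time,",
    max_new_tokens=64,   # cap the length of the continuation
    do_sample=True,      # sample rather than decode greedily
    temperature=0.7,     # illustrative value; tune to taste
)
print(outputs[0]["generated_text"])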
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use stabilityai/stablelm-base-alpha-3b with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "stabilityai/stablelm-base-alpha-3b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "stabilityai/stablelm-base-alpha-3b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

Use Docker
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "stabilityai/stablelm-base-alpha-3b"
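Since the vLLM server speaks the OpenAI API, you can also call it from Python instead of curl. A minimal sketch, assuming the server above is running on localhost:8000 and the openai package is installed (vLLM ignores the API key by default, so any placeholder works):

from openai import OpenAI

# Point the OpenAI client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

completion = client.completions.create(
    model="stabilityai/stablelm-base-alpha-3b",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)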
- SGLang
How to use stabilityai/stablelm-base-alpha-3b with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "stabilityai/stablelm-base-alpha-3b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "stabilityai/stablelm-base-alpha-3b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "stabilityai/stablelm-base-alpha-3b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "stabilityai/stablelm-base-alpha-3b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
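The same completion request works from Python with the standard requests library. A minimal sketch, assuming the SGLang server above is running on localhost:30000:

import requests

# POST to the OpenAI-compatible completions endpoint started above
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "stabilityai/stablelm-base-alpha-3b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])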
- Docker Model Runner
How to use stabilityai/stablelm-base-alpha-3b with Docker Model Runner:
docker model run hf.co/stabilityai/stablelm-base-alpha-3b
SageMaker deployment not functional with the default example
#7 · opened by jcardenes
I followed the SageMaker deployment steps, but prediction fails with a 400 error referencing gpt_neox.
I got the same result for both the 3b and 7b models.
Default steps:
from sagemaker.huggingface import HuggingFaceModel
import boto3

iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='YourRoleName')['Role']['Arn']

hub = {
    'HF_MODEL_ID': 'stabilityai/stablelm-base-alpha-7b',
    'HF_TASK': 'conversational'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.17.0',
    pytorch_version='1.10.2',
    py_version='py38',
    env=hub,
    role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,   # number of instances
    instance_type='ml.m5.xlarge'  # ec2 instance type
)

predictor.predict({
    'inputs': "Can you please let us know more details about your "
})
Error:
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"code": 400,
"type": "InternalServerException",
"message": "\u0027gpt_neox\u0027"
}
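The "\u0027gpt_neox\u0027" in the message is the container reporting an unknown model_type: the StableLM alpha checkpoints use the GPT-NeoX architecture, which the transformers 4.17.0 inference container predates. A likely fix is to deploy with a newer Hugging Face DLC and the text-generation task. A sketch, with the version strings being assumptions to check against the DLCs available in your region:

from sagemaker.huggingface import HuggingFaceModel
import boto3

iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='YourRoleName')['Role']['Arn']

hub = {
    'HF_MODEL_ID': 'stabilityai/stablelm-base-alpha-3b',
    'HF_TASK': 'text-generation'  # causal LMs are served as text-generation
}

# use a container recent enough to know the gpt_neox architecture
huggingface_model = HuggingFaceModel(
    transformers_version='4.26.0',  # assumed available DLC version
    pytorch_version='1.13.1',       # assumed matching PyTorch version
    py_version='py39',
    env=hub,
    role=role,
)

# a GPU instance is more realistic than ml.m5.xlarge for a 3B-parameter model
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type='ml.g5.2xlarge'
)

print(predictor.predict({'inputs': "Once upon a time,"}))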