Instructions to use mtgv/MobileLLaMA-2.7B-Base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use mtgv/MobileLLaMA-2.7B-Base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="mtgv/MobileLLaMA-2.7B-Base")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mtgv/MobileLLaMA-2.7B-Base")
model = AutoModelForCausalLM.from_pretrained("mtgv/MobileLLaMA-2.7B-Base")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use mtgv/MobileLLaMA-2.7B-Base with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "mtgv/MobileLLaMA-2.7B-Base"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "mtgv/MobileLLaMA-2.7B-Base",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
```
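The same request can be made from Python using only the standard library. This is a sketch assuming the vLLM server from the step above is running on localhost:8000; the payload mirrors the curl call, and the `build_payload` / `complete` helper names are illustrative, not part of any API:

```python
# Query a running vLLM server over its OpenAI-compatible /v1/completions endpoint.
import json
import urllib.request

def build_payload(prompt: str, max_tokens: int = 512, temperature: float = 0.5) -> dict:
    """Request body matching the curl example above."""
    return {
        "model": "mtgv/MobileLLaMA-2.7B-Base",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """POST the prompt to the server and return the generated continuation."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]

# complete("Once upon a time,")  # uncomment once the vLLM server is running
```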
- SGLang
How to use mtgv/MobileLLaMA-2.7B-Base with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
	--model-path "mtgv/MobileLLaMA-2.7B-Base" \
	--host 0.0.0.0 \
	--port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "mtgv/MobileLLaMA-2.7B-Base",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
```

Use Docker images

```shell
docker run --gpus all \
	--shm-size 32g \
	-p 30000:30000 \
	-v ~/.cache/huggingface:/root/.cache/huggingface \
	--env "HF_TOKEN=<secret>" \
	--ipc=host \
	lmsysorg/sglang:latest \
	python3 -m sglang.launch_server \
	--model-path "mtgv/MobileLLaMA-2.7B-Base" \
	--host 0.0.0.0 \
	--port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "mtgv/MobileLLaMA-2.7B-Base",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
```

- Docker Model Runner
How to use mtgv/MobileLLaMA-2.7B-Base with Docker Model Runner:
```shell
docker model run hf.co/mtgv/MobileLLaMA-2.7B-Base
```
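Both the vLLM and SGLang servers above reply with an OpenAI-style completions object, so a response handler works unchanged across them. A small helper to pull out the generated text; the `sample` response below is a hand-written illustration of the schema, not real model output:

```python
# Extract the completion text from an OpenAI-style /v1/completions response.
import json

def extract_text(response_json: str) -> str:
    """Return the first completion's text from an OpenAI-style response body."""
    return json.loads(response_json)["choices"][0]["text"]

# Hand-written example of the response shape (not actual model output):
sample = json.dumps({
    "id": "cmpl-example",
    "object": "text_completion",
    "model": "mtgv/MobileLLaMA-2.7B-Base",
    "choices": [
        {"index": 0, "text": " there was a small village.", "finish_reason": "length"}
    ],
})
print(extract_text(sample))  # → " there was a small village."
```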
Model Summary
MobileLLaMA-2.7B-Base is a Transformer with 2.7 billion parameters. We downscaled LLaMA to facilitate off-the-shelf deployment. To make our work reproducible, all models are trained on 1.3T tokens from the RedPajama v1 dataset only. This benefits further research by enabling controlled experiments.
We extensively assess our models on two standard natural language benchmarks, for language understanding and common sense reasoning respectively. Experimental results show that MobileLLaMA is on par with the most recent open-source models. MobileLLaMA 2.7B also demonstrates performance competitive with INCITE 3B (V1) and OpenLLaMA 3B (V1), while being about 40% faster than OpenLLaMA 3B on a Snapdragon 888 CPU, as shown in Table 5 of our paper.
Model Sources
- Repository: https://github.com/Meituan-AutoML/MobileVLM
- Paper: https://arxiv.org/abs/2312.16886
How to Get Started with the Model
Model weights can be loaded with Hugging Face Transformers. Examples can be found on GitHub.
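As a quick start, the pipeline and direct-load snippets earlier can be combined into one generation script. This is a minimal sketch: the decoding values (`max_new_tokens`, `temperature`, `top_p`) and the helper names are illustrative assumptions to tune, not settings recommended by the model authors, and loading the model downloads several GB of weights:

```python
# Minimal end-to-end generation with MobileLLaMA-2.7B-Base.

def generation_kwargs(sample: bool = True) -> dict:
    """Decoding settings; the specific values here are assumptions to tune."""
    if sample:
        return {"do_sample": True, "temperature": 0.7, "top_p": 0.9, "max_new_tokens": 64}
    return {"do_sample": False, "max_new_tokens": 64}  # greedy decoding

def generate(prompt: str) -> str:
    """Load the model and return the decoded continuation of the prompt."""
    # Imported lazily so the heavyweight dependency loads only when generating.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("mtgv/MobileLLaMA-2.7B-Base")
    model = AutoModelForCausalLM.from_pretrained("mtgv/MobileLLaMA-2.7B-Base")
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, **generation_kwargs())
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# generate("Once upon a time,")  # uncomment to run (downloads the model weights)
```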
Datasets and Training
For our training details, please refer to our paper in section 4.1: MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices.