Transformers documentation

vLLM

vLLM is a high-throughput inference engine for serving LLMs at scale. It continuously batches requests and keeps KV cache memory compact with PagedAttention.
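To make the PagedAttention idea concrete, here is a toy sketch (not vLLM's actual code; all names are hypothetical): the KV cache is split into fixed-size blocks, and each sequence keeps a block table mapping its logical positions to physical blocks, so memory grows on demand instead of being reserved up front for the maximum length.

```python
class PagedKVCache:
    """Toy block allocator in the spirit of PagedAttention (illustrative only)."""

    def __init__(self, num_blocks: int, block_size: int = 16):
        self.block_size = block_size
        self.free = list(range(num_blocks))   # pool of physical block ids
        self.tables = {}                      # seq_id -> list of physical blocks
        self.lengths = {}                     # seq_id -> tokens written so far

    def append(self, seq_id: int) -> None:
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:          # current block full (or first token)
            self.tables.setdefault(seq_id, []).append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def free_seq(self, seq_id: int) -> None:
        # Finished sequences return their blocks to the pool immediately.
        self.free.extend(self.tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

cache = PagedKVCache(num_blocks=8, block_size=4)
for _ in range(10):                           # a 10-token sequence
    cache.append(seq_id=0)
print(len(cache.tables[0]))                   # ceil(10 / 4) = 3 blocks allocated
```

Because blocks are allocated per token rather than per maximum sequence length, many more sequences fit in the same memory budget, which is what enables continuous batching at scale.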

Set model_impl="transformers" to load a model using the Transformers modeling backend.

from vllm import LLM

llm = LLM(model="meta-llama/Llama-3.2-1B", model_impl="transformers")
outputs = llm.generate(["The capital of France is"])
# Each result is a RequestOutput; print the generated text.
print(outputs[0].outputs[0].text)

Pass --model-impl transformers to the vllm serve command for online serving.

vllm serve meta-llama/Llama-3.2-1B \
    --task generate \
    --model-impl transformers

vLLM uses AutoConfig.from_pretrained() to load a model’s config.json file from the Hub or your Hugging Face cache. It checks the architectures field against its internal model registry to determine which vLLM model class to load. If the model isn’t in the registry, vLLM calls AutoModel.from_config() to load the Transformers model implementation.
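The lookup described above can be sketched as follows. This is a simplified illustration, not vLLM's actual code: the registry contents are stand-ins, and the config is built in code rather than downloaded, so the snippet exercises only the `architectures` check and the `AutoModel.from_config()` fallback path.

```python
from transformers import AutoModel, LlamaConfig

# Hypothetical stand-in for vLLM's internal model registry.
VLLM_REGISTRY = {"MixtralForCausalLM", "Qwen2ForCausalLM"}

# A tiny Llama config built in code instead of fetching config.json from the Hub.
config = LlamaConfig(
    architectures=["LlamaForCausalLM"],
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    num_key_value_heads=4,
    vocab_size=1000,
)

arch = config.architectures[0]
if arch in VLLM_REGISTRY:
    print(f"use vLLM's native implementation of {arch}")
else:
    # Fallback: instantiate the Transformers implementation from the config alone.
    model = AutoModel.from_config(config)
    print(type(model).__name__)
```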

Setting model_impl="transformers" bypasses the vLLM model registry and loads directly from Transformers. vLLM replaces most model modules (MoE, attention, linear, etc.) with its own optimized versions.
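The module-replacement step can be illustrated with a minimal sketch. This is not vLLM's implementation; `OptimizedLinear` is a hypothetical stand-in for a vLLM-optimized layer, and the traversal shows only the general pattern of swapping submodules while preserving their weights.

```python
import torch
from torch import nn

class OptimizedLinear(nn.Linear):
    """Hypothetical stand-in for an optimized linear layer (illustrative only)."""

def swap_linears(model: nn.Module) -> int:
    """Recursively replace every nn.Linear with OptimizedLinear, keeping weights."""
    count = 0
    for name, module in model.named_children():
        if isinstance(module, nn.Linear):
            new = OptimizedLinear(
                module.in_features, module.out_features,
                bias=module.bias is not None,
            )
            new.load_state_dict(module.state_dict())  # preserve existing weights
            setattr(model, name, new)
            count += 1
        else:
            count += swap_linears(module)
    return count

mlp = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
n = swap_linears(mlp)
print(n)  # 2 linear layers replaced
```

Because the swapped-in layers keep the original state dict, the model's outputs are unchanged; in real vLLM the replacements additionally bring fused kernels and paged KV cache support.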

AutoTokenizer.from_pretrained() loads the tokenizer files. vLLM caches some tokenizer internals to reduce per-request overhead during inference. Model weights are downloaded from the Hub in safetensors format.
