Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -180,7 +180,7 @@ response = processor.tokenizer.decode(outputs[0, input_ids.shape[1]:], skip_spec
  | Name | Description | Docs | Notebook |
  |------|-------------|------|----------|
  | [Transformers](https://github.com/huggingface/transformers) | Simple inference with direct access to model internals. | <a href="https://docs.liquid.ai/lfm/inference/transformers#vision-models">Link</a> | <a href="https://colab.research.google.com/drive/1WVQpf4XrHgHFkP0FnlZfx2nK8PugvQNZ?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
- | [vLLM](https://github.com/vllm-project/vllm) | High-throughput production deployments with GPU. | coming soon | coming soon |
+ | [vLLM](https://github.com/vllm-project/vllm) | High-throughput production deployments with GPU. | coming soon | <a href="https://colab.research.google.com/drive/1sUfQlqAvuAVB4bZ6akYVQPGmHtTDUNpF?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
  | [llama.cpp](https://github.com/ggml-org/llama.cpp) | Cross-platform inference with CPU offloading. | <a href="https://docs.liquid.ai/lfm/inference/llama-cpp#vision-models">Link</a> | <a href="https://colab.research.google.com/drive/1q2PjE6O_AahakRlkTNJGYL32MsdUcj7b?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |

  ## 🔧 Fine-tuning
@@ -216,4 +216,4 @@ If you are interested in custom solutions with edge deployment, please contact [
  journal={arXiv preprint arXiv:2511.23404},
  year={2025}
  }
- ```
+ ```
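Until the vLLM docs page lands (the Docs column above still says "coming soon"), a minimal offline-inference sketch may be useful alongside the newly linked notebook. This is not the notebook's code: it assumes vLLM support for this checkpoint is available, and the model ID, image URL, and prompt below are placeholders to substitute with this repository's values.

```python
# Minimal vLLM offline-inference sketch for a vision-language model.
# Assumptions: vLLM support for this checkpoint has landed, and the
# model ID / image URL / prompt are placeholders -- substitute the
# values from this repository.
from vllm import LLM, SamplingParams

llm = LLM(model="LiquidAI/LFM2-VL-1.6B")  # placeholder model ID

# llm.chat() applies the model's chat template and routes the image
# through vLLM's multimodal input pipeline.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": "https://example.com/image.jpg"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

outputs = llm.chat(messages, SamplingParams(temperature=0.1, max_tokens=256))
print(outputs[0].outputs[0].text)
```

For the high-throughput serving the table describes, the same checkpoint can typically be exposed over vLLM's OpenAI-compatible server with `vllm serve <model-id>` and queried with standard chat-completion requests.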