Model Card for Gemma 4 LiteRT-LM
A tiny, randomly initialized Gemma 4 model exported to LiteRT-LM format, intended for testing. It was created with the following code:
```python
# pip install litert-torch
from litert_torch.generative.export_hf.export import export

export(
    model="optimum-intel-internal-testing/tiny-random-gemma4",
    output_dir="output",
    externalize_embedder=True,
    # Pass an empty string to disable quantization; None still falls back to a default recipe
    quantization_recipe="",
    use_jinja_template=False,
)
```
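The `quantization_recipe=""` argument above works around a common API pattern: when `None` means "use the default", an empty string is needed to opt out explicitly. The sketch below is a hypothetical illustration of that pattern (`resolve_recipe` and the default name are made up; this is not the litert-torch implementation):

```python
# Hypothetical helper illustrating the None-vs-empty-string sentinel pattern.
def resolve_recipe(recipe=None, default="dynamic_int8"):
    # None -> the caller made no choice, so fall back to the default recipe.
    if recipe is None:
        return default
    # "" -> the caller explicitly opted out; normalize to "no quantization".
    return recipe or None

print(resolve_recipe())    # no argument: the default recipe applies
print(resolve_recipe(""))  # empty string: quantization is disabled
```

This is why passing `None` to `export` would not disable quantization, while `""` does.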