Instructions for using prithivida/all-MiniLM-L6-v2-gguf with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use prithivida/all-MiniLM-L6-v2-gguf with Transformers (a fuller embedding sketch follows the notebook links below):

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("prithivida/all-MiniLM-L6-v2-gguf", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
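Below is a minimal sketch of turning the raw AutoModel outputs into sentence embeddings. Mean pooling followed by L2 normalization is the documented recipe for the upstream all-MiniLM-L6-v2 model; that it carries over unchanged to this GGUF repackaging, and that the repo loads via the page's own `AutoModel` snippet, are assumptions.

```python
# Hedged sketch: sentence embeddings via mean pooling (upstream all-MiniLM-L6-v2 recipe).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "prithivida/all-MiniLM-L6-v2-gguf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, dtype="auto")  # as in the snippet above

sentences = ["This is an example sentence.", "Each sentence is converted."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings, masking out padding positions.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = F.normalize(embeddings, p=2, dim=1)  # unit-length vectors
print(embeddings.shape)  # expected (2, 384) for a MiniLM-L6-H384 encoder
```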
Downloads last month: 72
Hardware compatibility
- 4-bit
- 8-bit
- 16-bit
- 32-bit
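Because this is a GGUF repo, a quantized file can also be run directly with a GGUF runtime. The sketch below uses llama-cpp-python in embedding mode; the file name is hypothetical (substitute whichever 4-, 8-, 16-, or 32-bit file the repo actually ships), and llama.cpp support for this BERT-style encoder is an assumption.

```python
# Hedged sketch: GGUF embeddings via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="all-MiniLM-L6-v2.Q8_0.gguf",  # hypothetical 8-bit file name
    embedding=True,  # embedding mode instead of text generation
)

result = llm.create_embedding("This is an example sentence.")
vector = result["data"][0]["embedding"]
print(len(vector))  # expected 384 for this encoder
```

Lower-bit files trade a small amount of embedding quality for a smaller download and lower memory use; the 16- and 32-bit files stay closest to the original weights.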
Inference Providers

This model isn't deployed by any Inference Provider. You can ask for provider support on the Hugging Face Hub.
Model tree for prithivida/all-MiniLM-L6-v2-gguf
Base model: nreimers/MiniLM-L6-H384-uncased