Instructions for using biomisc/mlx_retrieverapp_orig with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- MLX
How to use biomisc/mlx_retrieverapp_orig with MLX:
```shell
# Install the Hugging Face Hub client (with the hf_xet backend)
pip install "huggingface_hub[hf_xet]"
# Download the model from the Hub into a local directory
huggingface-cli download --local-dir mlx_retrieverapp_orig biomisc/mlx_retrieverapp_orig
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
RetrieverApp orig model, version 1.8 (trained with curated labels)
A QLoRA fine-tuned, 4-bit quantized, MLX-compatible RetrieverApp model based on Llama3-8B-IT.
It is fine-tuned to predict the data_type and organism tags of GEO series data.
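Before the model can tag a GEO series, its metadata has to be serialized into a prompt. Below is a minimal sketch of such a prompt builder; the instruction wording, the metadata fields, and the `build_prompt` helper are illustrative assumptions, not the prompt format the model was actually trained on (consult the model card's training details for that):

```python
# Hypothetical prompt builder for the RetrieverApp tagging task.
# The instruction text and metadata fields are assumptions for
# illustration; the real trained prompt format may differ.
def build_prompt(title: str, summary: str) -> str:
    """Serialize GEO series metadata into a tagging prompt."""
    return (
        "Predict the data_type and organism tags for this GEO series.\n"
        f"Title: {title}\n"
        f"Summary: {summary}\n"
        "Tags:"
    )

prompt = build_prompt(
    title="RNA-seq of mouse liver under fasting",
    summary="Total RNA sequencing of C57BL/6 liver tissue.",
)
```

The resulting string could then be passed as the prompt to an MLX text-generation loop (for example, the `mlx-lm` package's generate utilities) pointed at the downloaded `mlx_retrieverapp_orig` directory.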
- Downloads last month: 14
- Model size: 1B params
- Tensor types: F16, U32
- Hardware compatibility: 4-bit
- Inference Providers: this model isn't deployed by any Inference Provider.