Model Card for TinyMistral-248M-v2-4bit

A model for use with unsloth, for running on cheap hardware à la Kaggle and Colab free tiers.

Unsloth brings some great ideas about optimization and integration, and it is very easy to get started with on commodity hardware.

Locutusque and Felladrin in particular are building some wonderful and quite useful small models.

Unsloth's examples have focused on TinyLlama, hence this focus on a great alternative, the TinyMistral series.

Model Details

The model can be loaded through unsloth; rather than over-explaining here, check out the unsloth repository and documentation.

You get the picture: small models are easier to optimize, and this level of optimization is needed for adoption, distillation, and reduced environmental impact.

Model Description

This isn't a useful model on its own. It uses unsloth's FastLanguageModel loader, which handles a great deal of behind-the-scenes organization and complexity.
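As a minimal sketch of what loading looks like, assuming a CUDA-capable runtime with unsloth installed (the repository id and `max_seq_length` below are illustrative assumptions, not guarantees):

```python
def load_4bit_model(model_name="jtatman/TinyMistral-248M-v2-4bit"):
    """Sketch: load the adapter in 4-bit via unsloth's FastLanguageModel.

    The import is deferred inside the function because unsloth requires a
    GPU environment; the repo id above is an assumption for illustration.
    """
    from unsloth import FastLanguageModel  # requires GPU + unsloth install

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=model_name,
        max_seq_length=2048,  # assumption: a typical context length setting
        dtype=None,           # let unsloth pick float16/bfloat16 for the GPU
        load_in_4bit=True,    # bitsandbytes 4-bit quantized load
    )
    return model, tokenizer
```

On Kaggle or Colab free tier, this is the whole entry point; unsloth handles the quantization config and device placement behind the scenes.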

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

  • Developed by: All credit to Locutusque
  • Funded by: Copious pocket lint
  • Shared by: jtatman
  • Model type: PEFT/bitsandbytes 4-bit-ready adapter
  • Language(s) (NLP): None
  • License: Apache 2.0
  • Finetuned from model: Locutusque/TinyMistral-248M-v2
  • Not fine-tuned: a direct save of a 4-bit adapter load, to reduce memory requirements (obviously not 4-bit on disk, as that serialization format is not yet implemented in bitsandbytes)
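To give a rough sense of why the 4-bit load matters on free-tier hardware, here is back-of-the-envelope weight memory for 248M parameters at different precisions (a simplification that ignores quantization block overhead, activations, and optimizer state):

```python
PARAMS = 248_000_000  # TinyMistral-248M parameter count

def weight_bytes(params, bits):
    """Approximate bytes needed to hold the weights at a given bit width."""
    return params * bits // 8

# Compare full precision, half precision, and a 4-bit quantized load.
for name, bits in [("fp32", 32), ("fp16", 16), ("4-bit", 4)]:
    print(f"{name}: {weight_bytes(PARAMS, bits) / 1e9:.2f} GB")
# fp32 ≈ 0.99 GB, fp16 ≈ 0.50 GB, 4-bit ≈ 0.12 GB
```

Even for a model this small, the 4-bit load leaves far more headroom for context and batch size on a free-tier GPU.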

Uses

Use with the unsloth library only.

Out-of-Scope Use

Will not load via on-demand inference or through native transformers - yet.

Safetensors model size: 0.3B params; tensor types: F32, F16, U8