GGUF quants of futurehouse/ether0

Link to preprint/paper

As this model was trained primarily on SMILES strings of organic molecules, conversations with it will typically contain many of them. I therefore recommend Q8_0, Q6_K, Q5_K_M, or Q5_K_S, provided your bandwidth and hardware give acceptable performance at those sizes. The perplexity of these quants should be good enough.

Quantized using llama.cpp b5602 (commit 745aa5319b9930068aff5e87cf5e9eef7227339b).
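For reference, a minimal way to run one of these quants locally with the llama.cpp CLI. The filename below is an assumed example, not necessarily the exact name in this repo; substitute whichever size you downloaded.

```shell
# Minimal sketch: run a downloaded quant with llama-cli from llama.cpp b5602.
# ether0-Q5_K_M.gguf is an assumed filename; use the file you actually downloaded.
# -ngl 99 offloads all layers to the GPU if VRAM allows; -c sets the context length.
./llama-cli -m ether0-Q5_K_M.gguf -ngl 99 -c 8192 \
  -p "Propose a molecule containing a benzene ring and a carboxylic acid group, as a SMILES string."
```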

The importance matrix was generated with calibration_datav3.txt.

All quants were generated/calibrated with the imatrix, including the K quants.
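The imatrix calibration and quantization flow described above roughly corresponds to the following llama.cpp commands (input/output filenames are assumptions for illustration):

```shell
# Sketch of the imatrix-calibrated quantization flow (llama.cpp b5602 tools).
# File names are assumed examples.

# 1) Compute the importance matrix from the calibration text:
./llama-imatrix -m ether0-bf16.gguf -f calibration_datav3.txt -o imatrix.dat

# 2) Quantize with that imatrix (applied to the K quants as well):
./llama-quantize --imatrix imatrix.dat ether0-bf16.gguf ether0-Q5_K_M.gguf Q5_K_M
```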

Quantized from BF16.
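Producing the BF16 source GGUF from the original checkpoint is typically done with llama.cpp's conversion script; a sketch assuming the original model has been downloaded to a local directory `./ether0`:

```shell
# Sketch: convert a locally downloaded copy of the original HF checkpoint
# (assumed to be in ./ether0) to a BF16 GGUF before quantizing.
python convert_hf_to_gguf.py ./ether0 --outtype bf16 --outfile ether0-bf16.gguf
```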

Model size: 24B params
Architecture: llama

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
