---
base_model:
- futurehouse/ether0
---
# GGUF quants of futurehouse/ether0
This model was trained primarily on SMILES strings of organic molecules, so conversations with it will typically contain many of them. For that reason I recommend Q8_0, Q6_K, Q5_K_M, or Q5_K_S if your bandwidth allows for those sizes and they perform acceptably for you; the perplexity of these quants should be good enough to preserve SMILES fidelity.
Quantized using llama.cpp b5602 (commit 745aa5319b9930068aff5e87cf5e9eef7227339b).
The importance matrix was generated with calibration_datav3.txt.
All quants were generated/calibrated with the imatrix, including the K quants.
Quantized from BF16.
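For reference, a pipeline like the one described above can be sketched with the llama.cpp tools; the GGUF file names below are placeholders, not the actual file names in this repo:

```shell
# Generate the importance matrix from the BF16 model using the
# calibration file mentioned above (output path is a placeholder).
./llama-imatrix -m ether0-BF16.gguf -f calibration_datav3.txt -o ether0.imatrix

# Produce an imatrix-calibrated quant (here Q5_K_M) from the BF16 GGUF.
./llama-quantize --imatrix ether0.imatrix ether0-BF16.gguf ether0-Q5_K_M.gguf Q5_K_M
```

The `--imatrix` flag is what applies the calibration data to all quant types, including the K quants.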