MFANN 8b version 0.8


llama-3-8b fine-tuned on the MFANN dataset as it stood on 5/5/2024; the dataset is ever-expanding, so later versions will track newer snapshots.

Benchmark results:

Average: 68.52
ARC: 63.23
HellaSwag: 84.06
MMLU: 66.94
TruthfulQA: 59.91
WinoGrande: 72.45
GSM8K: 64.52
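The headline figure can be checked directly: it is the unweighted mean of the six benchmark scores, rounded to two decimal places. A minimal sketch (scores copied from the results above):

```python
# Benchmark scores as reported for MFANN 8b v0.8.
scores = {
    "ARC": 63.23,
    "HellaSwag": 84.06,
    "MMLU": 66.94,
    "TruthfulQA": 59.91,
    "WinoGrande": 72.45,
    "GSM8K": 64.52,
}

# The average is the unweighted mean of the six benchmarks.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # → 68.52
```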

Format: GGUF
Model size: 8B params
Architecture: llama


Dataset used to train netcat420/MFANNv0.8-GGUF: the MFANN dataset.