# eaddario/Dolphin3.0-R1-Mistral-24B-GGUF
Text Generation · GGUF · English · quant · experimental · conversational
Dataset: eaddario/imatrix-calibration · arXiv: 2406.17415 · License: apache-2.0
Branch: main · 251 GB · 1 contributor · History: 40 commits

Latest commit: Update README.md by eaddario (71f46be, verified), 11 months ago
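Each quantized variant in the listing below can be fetched on its own rather than cloning the full 251 GB repository. A minimal sketch using the `huggingface_hub` Python library; the filename is taken from the listing, while `local_dir` is an arbitrary choice, not something this repo prescribes:

```python
from huggingface_hub import hf_hub_download

# Download a single quantized variant instead of the whole repo.
# repo_id and filename come directly from the file listing below.
model_path = hf_hub_download(
    repo_id="eaddario/Dolphin3.0-R1-Mistral-24B-GGUF",
    filename="Dolphin3.0-R1-Mistral-24B-Q4_K_M.gguf",
    local_dir="models",  # assumed target directory, adjust as needed
)
print(model_path)
```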
| Name | Size | Last commit | Updated |
|---|---|---|---|
| imatrix/ | – | Regenerate imatrices | 11 months ago |
| logits/ | – | Regenerate base model logits | 11 months ago |
| scores/ | – | Add GGUF internal file structure | 11 months ago |
| .gitattributes | 1.65 kB | Update .gitattributes | about 1 year ago |
| .gitignore | 6.78 kB | Update .gitignore | about 1 year ago |
| Dolphin3.0-R1-Mistral-24B-F16.gguf | 47.2 GB | Convert to GGUF @ F16 | about 1 year ago |
| Dolphin3.0-R1-Mistral-24B-IQ3_M.gguf | 10.8 GB | Layer-wise quantization IQ3_M | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-IQ3_S.gguf | 10.2 GB | Layer-wise quantization IQ3_S | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-IQ4_NL.gguf | 13.1 GB | Layer-wise quantization IQ4_NL | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-Q3_K_L.gguf | 11 GB | Layer-wise quantization Q3_K_L | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-Q3_K_M.gguf | 10.4 GB | Layer-wise quantization Q3_K_M | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-Q3_K_S.gguf | 9.78 GB | Layer-wise quantization Q3_K_S | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-Q4_K_M.gguf | 13.1 GB | Layer-wise quantization Q4_K_M | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-Q4_K_S.gguf | 12.6 GB | Layer-wise quantization Q4_K_S | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-Q5_K_M.gguf | 15.6 GB | Layer-wise quantization Q5_K_M | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-Q5_K_S.gguf | 14.9 GB | Layer-wise quantization Q5_K_S | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-Q6_K.gguf | 19.3 GB | Layer-wise quantization Q6_K | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-Q8_0.gguf | 23 GB | Layer-wise quantization Q8_0 | 11 months ago |
| README.md | 21.3 kB | Update README.md | 11 months ago |