eaddario/OLMo-2-1124-7B-Instruct-GGUF
Tags: Text Generation · GGUF · Dataset: eaddario/imatrix-calibration · English · quant · experimental · conversational
arXiv: 2501.00656 · 2411.15124 · 2406.17415
License: apache-2.0
Branch: main · Repository size: 94.2 GB
1 contributor · History: 22 commits
Latest commit: eaddario, "Update README.md" (7541a4a, verified), 12 months ago
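The files listed below can be fetched directly from the Hub. As a minimal sketch, the snippet below builds the standard Hugging Face `resolve` download URL for a file in this repository; for scripted downloads, `huggingface_hub.hf_hub_download` is the usual route. The choice of `Q4_K_M` as the example file is arbitrary.

```python
# Sketch: construct the direct-download URL for a file in this repo.
# The /resolve/{revision}/{filename} route is Hugging Face's standard
# direct-download URL pattern for repository files.

REPO_ID = "eaddario/OLMo-2-1124-7B-Instruct-GGUF"
REVISION = "main"

def resolve_url(filename: str, repo_id: str = REPO_ID, revision: str = REVISION) -> str:
    """Return the direct-download URL for a file in a Hugging Face model repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

print(resolve_url("OLMo-2-1124-7B-Instruct-Q4_K_M.gguf"))
```

Equivalently, `hf_hub_download(repo_id=REPO_ID, filename=...)` downloads and caches the file locally.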
File                                     Size     Last commit message                Age
imatrix/                                 -        Generate imatrices                 12 months ago
logits/                                  -        Generate base model logits         12 months ago
scores/                                  -        Add GGUF internal file structure   12 months ago
.gitattributes                           1.6 kB   Update .gitattributes              12 months ago
.gitignore                               6.78 kB  Add .gitignore                     12 months ago
OLMo-2-1124-7B-Instruct-F16.gguf         14.6 GB  Convert safetensor to GGUF @ F16   12 months ago
OLMo-2-1124-7B-Instruct-IQ3_M.gguf       3.35 GB  Layer-wise quantization IQ3_M      12 months ago
OLMo-2-1124-7B-Instruct-IQ3_S.gguf       3.11 GB  Layer-wise quantization IQ3_S      12 months ago
OLMo-2-1124-7B-Instruct-IQ4_NL.gguf      3.97 GB  Layer-wise quantization IQ4_NL     12 months ago
OLMo-2-1124-7B-Instruct-Q3_K_L.gguf      3.43 GB  Layer-wise quantization Q3_K_L     12 months ago
OLMo-2-1124-7B-Instruct-Q3_K_M.gguf      3.24 GB  Layer-wise quantization Q3_K_M     12 months ago
OLMo-2-1124-7B-Instruct-Q3_K_S.gguf      2.99 GB  Layer-wise quantization Q3_K_S     12 months ago
OLMo-2-1124-7B-Instruct-Q4_K_M.gguf      4.01 GB  Layer-wise quantization Q4_K_M     12 months ago
OLMo-2-1124-7B-Instruct-Q4_K_S.gguf      3.88 GB  Layer-wise quantization Q4_K_S     12 months ago
OLMo-2-1124-7B-Instruct-Q5_K_M.gguf      4.83 GB  Layer-wise quantization Q5_K_M     12 months ago
OLMo-2-1124-7B-Instruct-Q5_K_S.gguf      4.68 GB  Layer-wise quantization Q5_K_S     12 months ago
OLMo-2-1124-7B-Instruct-Q6_K.gguf        5.9 GB   Layer-wise quantization Q6_K       12 months ago
OLMo-2-1124-7B-Instruct-Q8_0.gguf        7.27 GB  Layer-wise quantization Q8_0       12 months ago
README.md                                21.3 kB  Update README.md                   12 months ago
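The file sizes above give a rough feel for each quantization's average bits per weight. As a sketch, the parameter count can be inferred from the F16 file (14.6 GB at 2 bytes per weight gives roughly 7.3B weights); this is an estimate, not an official figure, and GGUF files also carry metadata, so the derived numbers are approximate.

```python
# Rough effective bits-per-weight for each quant, derived from the file
# sizes in the listing above. The parameter count is an ASSUMPTION inferred
# from the F16 file: 14.6e9 bytes / 2 bytes per F16 weight ~= 7.3e9 weights.

PARAMS = 14.6e9 / 2  # inferred weight count, not an official figure

sizes_gb = {
    "F16": 14.6, "Q8_0": 7.27, "Q6_K": 5.9, "Q5_K_M": 4.83, "Q5_K_S": 4.68,
    "Q4_K_M": 4.01, "IQ4_NL": 3.97, "Q4_K_S": 3.88, "Q3_K_L": 3.43,
    "IQ3_M": 3.35, "Q3_K_M": 3.24, "IQ3_S": 3.11, "Q3_K_S": 2.99,
}

def bits_per_weight(size_gb: float, n_params: float = PARAMS) -> float:
    """Convert a file size (decimal GB) to average bits per weight."""
    return size_gb * 1e9 * 8 / n_params

for name, gb in sizes_gb.items():
    print(f"{name:8s} {bits_per_weight(gb):5.2f} bpw")
```

By this estimate F16 comes out at exactly 16.0 bpw (which is what anchors the inferred parameter count), Q8_0 near 8 bpw, and the Q4/IQ4 variants near 4.3 bpw.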