cortexso/olmo-2

Tags: Text Generation · GGUF · cortex.cpp · conversational
Paper: arxiv:2501.00656
License: other
olmo-2 · 133 GB · 4 contributors · History: 13 commits

Latest commit: e7ea6af (verified) by Minh141120, "Update README.md", 11 months ago
| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitattributes | 2.96 kB | Upload folder using huggingface_hub | 11 months ago |
| README.md | 1.38 kB | Update README.md | 11 months ago |
| metadata.yml | 53 Bytes | Update metadata.yml | 11 months ago |
| model.yml | 806 Bytes | Upload model.yml with huggingface_hub | about 1 year ago |
| olmo-2-1124-13b-instruct-q2_k.gguf | 5.26 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-13b-instruct-q3_k_l.gguf | 7.37 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-13b-instruct-q3_k_m.gguf | 6.78 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-13b-instruct-q3_k_s.gguf | 6.1 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-13b-instruct-q4_k_m.gguf | 8.35 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-13b-instruct-q4_k_s.gguf | 7.91 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-13b-instruct-q5_k_m.gguf | 9.76 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-13b-instruct-q5_k_s.gguf | 9.5 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-13b-instruct-q6_k.gguf | 11.3 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-13b-instruct-q8_0.gguf | 14.6 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-7b-instruct-q2_k.gguf | 2.86 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-7b-instruct-q3_k_l.gguf | 3.95 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-7b-instruct-q3_k_m.gguf | 3.65 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-7b-instruct-q3_k_s.gguf | 3.3 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-7b-instruct-q4_k_m.gguf | 4.47 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-7b-instruct-q4_k_s.gguf | 4.25 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-7b-instruct-q5_k_m.gguf | 5.21 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-7b-instruct-q5_k_s.gguf | 5.08 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-7b-instruct-q6_k.gguf | 5.99 GB | Upload folder using huggingface_hub | 11 months ago |
| olmo-2-1124-7b-instruct-q8_0.gguf | 7.76 GB | Upload folder using huggingface_hub | 11 months ago |
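As a rough guide for choosing among the quantizations listed above, a small helper can pick the largest GGUF file that fits a given memory budget. This is a minimal sketch: the sizes are taken from the 7B entries in the listing, and the helper name (`pick_quant`) is hypothetical. Note that actual memory use at inference time exceeds the file size (KV cache, context buffers), so treat the budget as approximate and leave headroom.

```python
# File sizes (GB) of the 7B instruct GGUF quants, from the listing above.
QUANTS_7B = {
    "q2_k": 2.86,
    "q3_k_s": 3.30,
    "q3_k_m": 3.65,
    "q3_k_l": 3.95,
    "q4_k_s": 4.25,
    "q4_k_m": 4.47,
    "q5_k_s": 5.08,
    "q5_k_m": 5.21,
    "q6_k": 5.99,
    "q8_0": 7.76,
}

def pick_quant(budget_gb: float, quants: dict = QUANTS_7B):
    """Return the largest quant whose file size fits the budget, or None."""
    fitting = {name: size for name, size in quants.items() if size <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_quant(6.0))  # q6_k (5.99 GB) is the largest fit at or under 6 GB
print(pick_quant(2.0))  # None: even q2_k (2.86 GB) exceeds 2 GB
```

Higher-bit quants (q6_k, q8_0) generally preserve more quality at the cost of size; the K-quant variants (`_s`/`_m`/`_l`) trade size against quality within the same bit width.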