CoNDeNse-AI



CoNDeNse

Compress the knowledge. Keep the capability.

CoNDeNse is a research org built around one idea: small models don't have to be dumb. We take compact, efficient model architectures and train them on the reasoning traces and outputs of models many times their size — distilling capability downward without bloating parameter counts upward.
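The core idea, training a small student on a large teacher's outputs, is classic knowledge distillation. A minimal stdlib-only sketch of the standard temperature-scaled KL objective (Hinton et al., 2015) is below; this is illustrative, not CoNDeNse's actual training code, and the function names and temperature value are assumptions:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax: higher T spreads probability mass,
    # exposing the teacher's "dark knowledge" about wrong-but-plausible classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # so gradients keep a comparable magnitude across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2
```

In practice this term is usually mixed with an ordinary cross-entropy loss on ground-truth labels; the loss is zero when student and teacher logits induce identical distributions and grows as they diverge.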

The name says it all: Condense. Take what's big. Make it small. Lose as little as possible.


Philosophy

  • No fluff. We don't chase benchmarks with tricks. We train honestly and report honestly.
  • Smol is serious. A 0.6B model that reasons is more useful than a 70B model you can't run.
  • Quality data > more data. Every dataset we use is curated, filtered, and purposefully scoped.
  • Reproducibility first. If you can't replicate it, it didn't happen.

Support CoNDeNse

CoNDeNse is a solo research effort. There's no lab, no grant, no GPU cluster behind this — just genuine curiosity and a conviction that small models deserve better training.

The best way to support the work right now is simple: download and use the models. Every download signals that this direction matters. If a model works well for you, star the repo, share it, or drop a comment on the model card.

If you want to go further — contributions, dataset suggestions, or collaboration ideas — open an issue or reach out directly.


License

All released models inherit the license of their respective base models. Dataset usage follows the terms set by the original dataset authors. Training code is released under the MIT License.


CoNDeNse — because the best model is the one that actually runs.
