---
license: llama4
library_name: transformers
base_model:
- meta-llama/Llama-4-Scout-17B-16E
---

<p align="center">
  <img src="images/RoadXpert-logo.png" alt="Logo" width="40%">
</p>


# RoadXpert-AI-V1-109B MoE

[Blog Post](https://www.deepcogito.com/research/cogito-v2-preview)

The Cogito v2 LLMs are instruction-tuned generative models. All models are released under an open license for commercial use.

- Cogito v2 models are hybrid reasoning models: each model can either answer directly (like a standard LLM) or self-reflect before answering (like a reasoning model).
- The LLMs are trained using **Iterated Distillation and Amplification (IDA)**, a scalable and efficient alignment strategy for superintelligence based on iterative self-improvement.
- The models have been optimized for coding, STEM, instruction following, and general helpfulness, and have significantly stronger multilingual, coding, and tool-calling capabilities than size-equivalent counterparts.
  - In both standard and reasoning modes, Cogito v2-preview models outperform their size-equivalent counterparts on common industry benchmarks.
- The model is trained in over 30 languages and supports long contexts (up to 10M tokens).
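The mode switch described above happens at inference time through the prompt. Below is a minimal sketch of assembling a chat request for each mode, assuming the reasoning toggle works as in the earlier Cogito v1 release (a `system` message containing `Enable deep thinking subroutine.`); verify the exact convention for v2 against the blog post:

```python
def build_messages(prompt: str, deep_thinking: bool = False) -> list[dict]:
    """Assemble a chat request; optionally prepend the (assumed) reasoning trigger."""
    messages = []
    if deep_thinking:
        # Assumption: self-reflection is enabled via this system prompt,
        # as in Cogito v1 -- check the blog post for v2's exact convention.
        messages.append({"role": "system", "content": "Enable deep thinking subroutine."})
    messages.append({"role": "user", "content": prompt})
    return messages

# Standard mode: a single user turn, answered directly.
standard = build_messages("What is the capital of France?")

# Reasoning mode: the trigger is prepended as a system message.
reasoning = build_messages("What is the capital of France?", deep_thinking=True)
```

The resulting `messages` list can be passed to the tokenizer's `apply_chat_template` or to a Transformers `text-generation` pipeline as shown in the Usage section.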

# Evaluations
Here is the model performance on some standard industry benchmarks:

<p align="left">
  <img src="images/cogito-v2-109b-benchmarks.png" alt="Benchmark results" width="90%">
</p>

For detailed evaluations, please refer to the [Blog Post](https://www.deepcogito.com/research/cogito-v2-preview). 

# Usage
Here is a minimal snippet for usage with Transformers. This is a sketch: the model id below is assumed from the blog post's naming and should be replaced with this repository's actual id, and running a 109B MoE model requires a multi-GPU setup.

```python
import torch
from transformers import pipeline

# Assumed model id -- replace with this repository's id on the Hugging Face Hub.
model_id = "deepcogito/cogito-v2-preview-llama-109B-MoE"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Give me a short introduction to large language models."},
]

outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1]["content"])
```

## License
This repository and the model weights are licensed under the [Llama 4 Community License Agreement](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE) (Llama models' default license agreement). 
All fine-tuning, dataset curation, and optimization have been performed by **Arijit Rakshit (2025)** and others.

## Contact
If you would like to reach out to our team, send an email to [contact@deepcogito.com](mailto:contact@deepcogito.com).