---
library_name: transformers
tags:
- CoT
- Code
license: apache-2.0
language:
- en
- zh
- ko
- ru
- de
base_model: Qwen/Qwen2.5-7B-Instruct
model_name: streamerbtw1002/Nexuim-R1-7B-Instruct
revision: main
---

## Model Details  

**Model Name:** streamerbtw1002/Nexuim-R1-7B-Instruct

**Developed by:** [James Phifer](https://nexusmind.tech/) (NexusMind.tech)  
**Funded by:** [Tristian](https://shuttleai.com/) (Shuttle.ai)  
**License:** Apache-2.0  
**Finetuned from:** Qwen/Qwen2.5-7B-Instruct  
**Architecture:** Transformer-based LLM  

### Overview  
This model is designed to work through complex mathematical questions using Chain of Thought (CoT) reasoning.  

- **Capabilities:**  
  - General-purpose LLM  
  - Strong performance on multi-step reasoning tasks  
  - Aims to respond to requests ethically and to refuse requests that could cause harm  

- **Limitations:**  
  - Not evaluated extensively  
  - May generate incorrect or biased outputs in certain contexts  

## Training Details  

**Dataset:** Trained on a **120k-line** CoT dataset for mathematical reasoning.  
**Training Hardware:** 1x NVIDIA A100 80 GB GPU (provided by Tristian at Shuttle.ai)  

## Evaluation  

**Status:** Not formally tested yet.  
**Preliminary Results:**  
- Provides detailed, well-structured answers  
- Performs well on long-form mathematical problems  

## Usage  
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "streamerbtw1002/Nexuim-R1-7B-Instruct"

# Use AutoModelForCausalLM rather than AutoModel: it attaches the
# language-modeling head required for text generation.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision="main",
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # place weights on the available device(s)
)
tokenizer = AutoTokenizer.from_pretrained(model_id, revision="main")
```
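
Once loaded, inference follows the usual `transformers` chat workflow. The snippet below is a minimal sketch that assumes the tokenizer ships with a standard Qwen2.5-style chat template (inherited from the base model); the prompt is illustrative only.

```python
# Minimal inference sketch (assumes the tokenizer carries a chat template,
# as Qwen2.5-based models typically do).
messages = [
    {"role": "user", "content": "If a train travels 120 km in 1.5 hours, "
                                "what is its average speed? Think step by step."}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```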