legolasyiu committed on
Commit b0516e0 · verified · 1 Parent(s): 15a95c5

Update README.md

Files changed (1):
  1. README.md +11 -32
README.md CHANGED
@@ -11,11 +11,9 @@ language:
 - en
 ---
 
- # ReasomingCore‑3B
-
- **ReasomingCore‑3B** is a multilingual, reasoning‑enhanced large language model developed by EpitemeAI. Pretrained on vast amounts of publicly available data and instruction‑tuned to excel at nuanced reasoning, dialogue management, retrieval, and summarization tasks, it often outperforms many current open source and proprietary conversational models on a range of industry benchmarks.
- Supervised fine tuned with ReasoningCore-3B-0 with reasoning dataset.
+ # ReasoningCore‑3B-0
+
+ **ReasoningCore‑3B** is a multilingual, reasoning‑enhanced large language model developed by EpitemeAI. Pretrained on vast amounts of publicly available data and instruction‑tuned to excel at nuanced reasoning, dialogue management, retrieval, and summarization tasks, it often outperforms many current open source and proprietary conversational models on a range of industry benchmarks. Fine‑tuned with a reasoning dataset.
 
 ---
@@ -23,11 +21,11 @@ Supervised fine tuned with ReasoningCore-3B-0 with reasoning dataset.
 
 - **Model Developer:** EpitemeAI
 - **Model Architecture:**
- ReasomingCore‑3B is an auto‑regressive language model built on an optimized transformer architecture. It incorporates specialized reasoning pathways and has been fine‑tuned using both supervised learning and reinforcement learning with human feedback (RLHF) to align with human expectations for clarity, accuracy, and safety in complex tasks.
+ ReasoningCore‑3B is an auto‑regressive language model built on an optimized transformer architecture. It incorporates specialized reasoning pathways and has been fine‑tuned using both supervised learning and reinforcement learning with human feedback (RLHF) to align with human expectations for clarity, accuracy, and safety in complex tasks.
 
 | | Training Data | Params | Input Modalities | Output Modalities | Context Length | GQA | Shared Embeddings | Token Count | Knowledge Cutoff |
 |--------------------------------|--------------------------------------------------|--------|-----------------------|------------------------------|----------------|-----|-------------------|----------------|-------------------|
- | **ReasomingCore‑3B (text only)** | A new mix of publicly available online data. | 3B | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
+ | **ReasoningCore‑3B (text only)** | A new mix of publicly available online data. | 3B | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
 
 - **Supported Languages:**
   Officially supports English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. While the pretraining included a broader range of languages, additional languages can be fine‑tuned in compliance with the community license and acceptable use policies.
@@ -55,12 +53,9 @@ Supervised fine tuned with ReasoningCore-3B-0 with reasoning dataset.
 
 ## How to Use
 
- ReasomingCore‑3B can be integrated using popular machine learning frameworks. Two primary methods are provided:
-
- ### Use with Transformers
-
- Use system prompt
+ ReasoningCore‑3B can be integrated using popular machine learning frameworks. Two primary methods are provided:
+
+ ### Use system prompt
 ```bash
 SYSTEM_PROMPT = """
 Respond in the following format:
@@ -73,6 +68,8 @@ Respond in the following format:
 """
 ```
 
+ ### Use with Transformers
+
 Ensure you have transformers version 4.43.0 or later installed:
 
 ```bash
@@ -90,24 +87,6 @@ pipe = pipeline(
 )
 print(pipe("The secret to effective reasoning is"))
 ```
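
For chat‑style use, the system prompt above is typically passed as a `system` message alongside the user prompt. A minimal sketch, pure Python with no model call; the `SYSTEM_PROMPT` value here is a placeholder, since the full template is elided in the diff above:

```python
# Placeholder only: the real SYSTEM_PROMPT (shown partially above) specifies
# the required response format.
SYSTEM_PROMPT = "Respond in the following format: ..."

def build_messages(user_prompt: str) -> list[dict]:
    """Build chat-style messages for transformers' text-generation pipeline."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("The secret to effective reasoning is")
# With the pipeline created above, these messages would then be passed to it.
```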
-
- ### Test output 'aha moment'
- prompt: Continue the fibonnaci sequence: 1, 1, 2, 3, 5, 8,
-
- ```python
- Okay, so I need to continue the Fibonacci sequence. Let me start by recalling what the Fibonacci sequence is. It's a series of numbers where each number is the sum of the two preceding ones, usually starting with 0 and 1. Wait, but the problem says it starts with 1, 1, 2, 3, 5, 8. Hmm, that's a bit different. Maybe it's a variation or something. Let me check.
-
- Wait, the standard Fibonacci sequence starts with 0 and 1, but this one starts with 1 and 1. So maybe it's a shifted version? Let me see. If it starts with 1, 1, then the next number would be 1 + 1 = 2, which is correct. Then 2 + 1 = 3, 3 + 2 = 5, 5 + 3 = 8. So yes, that's the same as the standard sequence. So the next numbers would be 8 + 5 = 13, 13 + 8 = 21, 21 + 13 = 34, 34 + 21 = 55, 55 + 34 = 89, 89 + 55 = 144, and so on.
-
- But wait, the problem says it's the Fibonacci sequence. So maybe I should just stick with the standard sequence. Let me list them out again:
-
- 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144,...
-
- Yes, that seems right. So the next number would be 89 + 55 = 144. Then 144 + 89 = 233, 233 + 144 = 377, 377 + 233 = 610, 610 + 377 = 987, 987 + 610 = 1597, 1597 + 987 = 2584, 2584 + 1597 = 4181, 4181 + 2584 = 6765, 6765 + 4181 = 10946, 10946 + 6765 = 17711, 17711 + 10946 = 28657, 28657 + 17711 = 46368, 46368 + 28657 = 75025, 75025 + 46368 = 121393, 121393 + 75025 = 196418, 196418 + 121393 = 317811, 317811 + 196418 = 514229, 514229 + 317811 = 832040, 832040 + 514229 = 1346869, 1346869 + 832040 = 2176909, 2176909 + 1346869 = 3523778, 3523778 + 217
- ```
-
 ## For Mathematical problems
 Please use "Please reason step by step, and put your final answer within \boxed{}" in system prompt
 
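Since answers are requested inside `\boxed{}`, a small helper can pull the final answer out of a completion. A minimal sketch; the sample completion string below is hypothetical, for illustration only:

```python
import re
from typing import Optional

# System prompt recommended above for mathematical problems.
MATH_SYSTEM_PROMPT = (
    "Please reason step by step, and put your final answer within \\boxed{}"
)

def extract_boxed_answer(completion: str) -> Optional[str]:
    """Return the contents of the last \\boxed{...} in a completion, if any."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", completion)
    return matches[-1] if matches else None

# Hypothetical completion text, for illustration only.
sample = "8 + 5 = 13, so the next term is \\boxed{13}."
print(extract_boxed_answer(sample))
```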
@@ -117,7 +96,7 @@ Please use "Please reason step by step, and put your final answer within \boxed{
 
 ### Responsible Deployment
 
 #### Approach:
- - **ReasomingCore‑3B** is a foundational technology that includes built‑in safety guardrails. Developers are encouraged to integrate additional safeguards tailored to their specific applications.
+ - **ReasoningCore‑3B** is a foundational technology that includes built‑in safety guardrails. Developers are encouraged to integrate additional safeguards tailored to their specific applications.
 
 #### System‑Level Safety:
 - The model is designed to be deployed as part of a broader system that implements safety measures (e.g., Prompt Guard, Code Shield) to ensure outputs remain safe even under adversarial conditions.
@@ -163,7 +142,7 @@ Please use "Please reason step by step, and put your final answer within \boxed{
 
 ### Ethical Considerations and Limitations
 
 #### Core Values:
- - **ReasomingCore‑3B** is built on the values of openness, inclusivity, and helpfulness. It is designed to respect user autonomy and foster free thought and expression while mitigating potential harm.
+ - **ReasoningCore‑3B** is built on the values of openness, inclusivity, and helpfulness. It is designed to respect user autonomy and foster free thought and expression while mitigating potential harm.
 
 #### Testing and Limitations:
 - Despite extensive testing across diverse scenarios, the model may occasionally produce inaccurate, biased, or objectionable outputs. Developers must perform additional safety testing and integrate further safeguards as needed.
@@ -177,7 +156,7 @@ Please use "Please reason step by step, and put your final answer within \boxed{
 
 ### Conclusion
 
- **ReasomingCore‑3B** represents a significant advancement in multilingual, reasoning‑enhanced language models. Optimized for tasks requiring deep reasoning, contextual understanding, and safe, helpful interactions, it offers a powerful tool for both commercial and research applications. We invite developers and researchers to explore its capabilities and contribute to building secure, innovative AI systems.
+ **ReasoningCore‑3B** represents a significant advancement in multilingual, reasoning‑enhanced language models. Optimized for tasks requiring deep reasoning, contextual understanding, and safe, helpful interactions, it offers a powerful tool for both commercial and research applications. We invite developers and researchers to explore its capabilities and contribute to building secure, innovative AI systems.
 
 For further details, questions, or feedback, please email episteme.ai@proton.me
 
 