Update README.md

Supervised fine-tuned from ReasoningCore-3B-0 on a reasoning dataset.
- **Model Developer:** EpitemeAI
- **Model Architecture:** ReasoningCore‑3B is an auto‑regressive language model built on an optimized transformer architecture. It incorporates specialized reasoning pathways and has been fine‑tuned using both supervised learning and reinforcement learning from human feedback (RLHF) to align with human expectations for clarity, accuracy, and safety in complex tasks.

| Training Data | Params | Input Modalities | Output Modalities | Context Length | GQA | Shared Embeddings | Token Count | Knowledge Cutoff |
|---------------|--------|------------------|-------------------|----------------|-----|-------------------|-------------|------------------|
## How to Use

ReasoningCore‑3B can be integrated using popular machine learning frameworks. Two primary methods are provided:
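A minimal sketch of the first method, the high-level `pipeline` API from Hugging Face `transformers`. The repo id `EpitemeAI/ReasoningCore-3B` and the chat-message format are assumptions for illustration; substitute the actual values from the model card.

```python
# Sketch: querying ReasoningCore-3B via Hugging Face transformers.
# ASSUMPTION: the repo id "EpitemeAI/ReasoningCore-3B" is illustrative --
# use the actual model id from the model card.

# The step-by-step reasoning prompt recommended by this README.
SYSTEM_PROMPT = r"Please reason step by step, and put your final answer within \boxed{}."

def build_messages(question: str) -> list:
    """Wrap a user question in a chat-style message list with the reasoning system prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

def generate(question: str, model_id: str = "EpitemeAI/ReasoningCore-3B") -> str:
    """Method 1: high-level pipeline. Downloads the model weights on first call."""
    from transformers import pipeline  # lazy import; requires `pip install transformers`

    generator = pipeline("text-generation", model=model_id)
    out = generator(build_messages(question), max_new_tokens=256)
    return out[0]["generated_text"]
```

The second method would load the weights directly with `AutoModelForCausalLM.from_pretrained` and `AutoTokenizer.from_pretrained` for finer control over generation parameters.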
## Alignment

Special alignment for Christianity, creation, and mathematics.

Please use "Please reason step by step, and put your final answer within \boxed{}." in your prompt.
### Responsible Deployment

#### Approach:
- **ReasoningCore‑3B** is a foundational technology that includes built‑in safety guardrails. Developers are encouraged to integrate additional safeguards tailored to their specific applications.

#### System‑Level Safety:
- The model is designed to be deployed as part of a broader system that implements safety measures (e.g., Prompt Guard, Code Shield) to ensure outputs remain safe even under adversarial conditions.
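The system-level pattern above can be sketched as a thin wrapper that screens both the prompt and the completion before anything reaches the user. The `flagged` check is a hypothetical placeholder standing in for a real safety classifier such as Prompt Guard or Code Shield, not their actual APIs:

```python
# Sketch of system-level safety: screen input and output around the model.
# `flagged` is a hypothetical stand-in for a production safety classifier
# (e.g., Prompt Guard for prompts, Code Shield for generated code).

BLOCKLIST = ("ignore previous instructions",)  # toy heuristic for this sketch

def flagged(text: str) -> bool:
    """Stand-in safety check; replace with a real classifier in production."""
    lowered = text.lower()
    return any(pattern in lowered for pattern in BLOCKLIST)

def safe_generate(prompt: str, generate) -> str:
    """Call `generate` only if the prompt passes, and screen the completion too."""
    if flagged(prompt):
        return "Request declined by input filter."
    completion = generate(prompt)
    if flagged(completion):
        return "Response withheld by output filter."
    return completion
```

Keeping the filters outside the model means they keep working even if an adversarial prompt manages to steer the model itself.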
### Ethical Considerations and Limitations

#### Core Values:
- **ReasoningCore‑3B** is built on the values of openness, inclusivity, and helpfulness. It is designed to respect user autonomy and foster free thought and expression while mitigating potential harm.

#### Testing and Limitations:
- Despite extensive testing across diverse scenarios, the model may occasionally produce inaccurate, biased, or objectionable outputs. Developers must perform additional safety testing and integrate further safeguards as needed.
### Conclusion

**ReasoningCore‑3B** represents a significant advancement in multilingual, reasoning‑enhanced language models. Optimized for tasks requiring deep reasoning, contextual understanding, and safe, helpful interactions, it offers a powerful tool for both commercial and research applications. We invite developers and researchers to explore its capabilities and contribute to building secure, innovative AI systems.

For further details, questions, or feedback, please email episteme.ai@proton.me