Update README.md
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 language: zh
 tags:
--
+- slml
 - knowledge-distillation
 - dark
 - 2b
@@ -57,6 +57,18 @@ Dark_slm_i1 is developed with a strong commitment to ethical AI principles. The
 * **Privacy**: Designed to support on-device processing to minimize data transfer and enhance user privacy.
 * **Environmental Impact**: Contributing to more sustainable AI solutions due to its significantly lower energy consumption.
 
+## Security
+
+The security of Dark_slm_i1 is powered by **GuardianNet**, an advanced cloud security service for AI models. GuardianNet provides comprehensive protection through multiple layers of security:
+
+* **Real-time Monitoring**: Continuously monitors model behavior and API interactions to detect anomalies and potentially malicious activity.
+* **Adversarial Attack Detection**: Employs state-of-the-art algorithms to identify and mitigate adversarial attacks, including prompt injection and model-evasion techniques.
+* **Content Safety Filtering**: Implements robust content moderation to prevent the generation of harmful, unethical, or dangerous outputs.
+* **Secure Deployment Framework**: Provides tools and protocols for secure model deployment, including access control, encryption, and audit logging.
+* **Threat Intelligence Integration**: Leverages global threat intelligence to stay ahead of emerging security vulnerabilities and attack vectors.
+
+GuardianNet's cloud-based security architecture complements the model's on-device privacy advantages, providing enterprise-grade security without compromising the model's efficiency or performance.
+
 ## Training Data
 
 Dark_slm_i1 was trained using a diverse dataset, with a focus on achieving broad language understanding while optimizing for its compact architecture. The knowledge distillation process involved transferring learned representations from a larger Dark-SLM model, which was trained on a massive corpus of text and code data. Specific details regarding the exact composition and size of the training datasets for both the teacher and student models will be provided in future updates and research papers.
@@ -66,6 +78,7 @@ Dark_slm_i1 was trained using a diverse dataset, with a focus on achieving broad
 * **Parameters**: 2.273 billion
 * **Architecture**: AXI (self-developed AGI architecture, built from scratch)
 * **Training Framework**: PyTorch, TensorFlow
+* **Security Infrastructure**: GuardianNet AI Security Cloud
 
 ## Evaluation Results
 
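The knowledge-distillation step described under Training Data can be sketched in PyTorch (the README's stated training framework). This is a minimal illustration of classic soft-label distillation, not Dark_slm_i1's actual recipe: the KL-divergence objective, temperature, and blending weight are standard textbook choices assumed here for clarity.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Soft-label knowledge distillation: blend a KL term on
    temperature-softened logits with cross-entropy on hard labels."""
    soft_targets = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_preds = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between softened teacher and student distributions;
    # the T^2 factor keeps gradient scale comparable across temperatures.
    kd = F.kl_div(soft_preds, soft_targets, log_target=True,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage: batch of 4, vocabulary of 10. Teacher logits carry no
# gradient; only the student is updated.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

In a real distillation run the teacher would be the larger Dark-SLM model run in `torch.no_grad()` mode over the training corpus, with the student's 2B-parameter weights receiving the gradients.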
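As a rough sanity check on the on-device claim, the weight-memory footprint implied by the stated 2.273-billion-parameter count can be estimated directly. This is a back-of-the-envelope calculation: the byte widths are standard precisions, but Dark_slm_i1's actual deployment precision is not stated in the README and is assumed here.

```python
# Raw weight storage implied by the parameter count, ignoring
# activations and KV cache (which add to the total at inference time).
PARAMS = 2.273e9  # parameter count from the Technical Specifications

def weights_gib(num_params, bytes_per_param):
    """Weight storage in GiB for a given numeric precision."""
    return num_params * bytes_per_param / 2**30

for name, nbytes in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1)]:
    print(f"{name}: {weights_gib(PARAMS, nbytes):.2f} GiB")
```

At fp16 this works out to roughly 4.2 GiB of weights, which is feasible on high-end mobile or edge hardware; an int8-quantized deployment (about 2.1 GiB) would fit more comfortably.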