- text-generation-inference
---

## **Model Card for RabbitRedux**

**Model Details**

- **Model Name**: RabbitRedux
- **Model ID**: rabbitredux-v1
- **License**: MIT
- **Base Models**:
  - replit/replit-code-v1_5-3b

**Model Description**

RabbitRedux is a code generation model designed to assist developers by generating code snippets, completing code blocks, and providing context-aware suggestions. It builds on Replit's Code v1.5 and WhiteRabbitNeo's Llama-series models to generate code across multiple programming languages.

**Training Data**

The model was trained on a diverse set of datasets, including:

- **Wordlists**: A collection of programming language keywords, syntax, and coding patterns.
- **CyberExploitDB**: A database of cybersecurity exploits and related code examples.
- **Pentesting Dataset**: A collection of penetration testing scripts and tools.
- **Shell Commands**: A repository of common Unix/Linux shell commands and scripts.

These datasets were sourced from the following Canstralian repositories:

- Canstralian/Wordlists
- Canstralian/CyberExploitDB

**Intended Use**

RabbitRedux is designed for:

- **Code Completion**: Helping developers by suggesting code completions in real time.
- **Code Generation**: Creating boilerplate code or entire functions based on user inputs.
- **Educational Use**: Serving as a learning tool for exploring coding patterns and best practices.
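
A minimal usage sketch for the code-completion use case, assuming the model is published on the Hugging Face Hub and loadable with the `transformers` library. The repo id `Canstralian/RabbitRedux` and the generation settings are illustrative assumptions, not confirmed by this model card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def complete_code(prompt: str, model_id: str = "Canstralian/RabbitRedux",
                  max_new_tokens: int = 64) -> str:
    """Return the model's continuation of a partial code snippet.

    The model_id above is an assumed Hub repo id for illustration only.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example call (downloads model weights on first use):
# print(complete_code("def fibonacci(n):\n    "))
```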

**Performance Metrics**

RabbitRedux's performance is evaluated using the following metrics:

- **Accuracy**: Measures the correctness of generated code snippets.
- **Code Evaluation**: Assesses the functionality and efficiency of generated code by executing it.
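
As an illustrative sketch only (this is not the card's official evaluation harness), the two metrics above could be computed as an exact-match accuracy over reference snippets plus an execution-based functional check:

```python
def exact_match_accuracy(generated: list, references: list) -> float:
    """Fraction of generated snippets that exactly match their reference."""
    assert len(generated) == len(references) and references
    hits = sum(g.strip() == r.strip() for g, r in zip(generated, references))
    return hits / len(references)

def passes_functional_check(snippet: str, test_expr: str) -> bool:
    """Execute a generated snippet, then evaluate a boolean test expression
    against the names it defines (a minimal 'code evaluation' check)."""
    namespace = {}
    try:
        exec(snippet, namespace)                 # run the generated code
        return bool(eval(test_expr, namespace))  # check expected behavior
    except Exception:
        return False

# exact_match_accuracy(["x = 1", "y = 2"], ["x = 1", "y = 3"])            -> 0.5
# passes_functional_check("def add(a, b): return a + b", "add(2, 3) == 5") -> True
```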

**Ethical Considerations**

RabbitRedux is intended to provide accurate and helpful code suggestions. However, users should:

- **Review Generated Code**: Always validate and test generated code to ensure it meets security and performance standards.
- **Avoid Sensitive Inputs**: Do not input sensitive or proprietary information into the model, to prevent data leakage.
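
One lightweight precaution for the second point, sketched under the assumption of a few common secret formats (the patterns below are illustrative examples, not an exhaustive or official safeguard), is to scrub obvious secrets from a prompt before it leaves the user's machine:

```python
import re

# Illustrative patterns only; they will not catch every kind of sensitive input.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(api[_-]?key|token|password)\b\s*[:=]\s*\S+"),
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),  # email addresses
]

def scrub_prompt(prompt: str) -> str:
    """Replace likely secrets in a prompt with a [REDACTED] placeholder."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

# scrub_prompt("api_key = sk-123")  -> "[REDACTED]"
```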

**Limitations**

While RabbitRedux is highly capable, it may:

- **Generate Inaccurate Code**: Occasionally produce code with errors or inefficiencies.
- **Lack Contextual Awareness**: Fail to fully understand the broader context of a project, leading to less relevant suggestions.

**Future Improvements**

Future updates will include:

- **Expanded Language Support**: Adding support for more programming languages.
- **Improved Contextual Understanding**: Enhancing the model's ability to generate context-aware code.

**Acknowledgments**

We would like to thank the Canstralian community for contributing the datasets used in training, and the open-source community for developing the base models.

**References**

- [Replit Code v1.5 Model Card](https://huggingface.co/replit/replit-code-v1_5-3b)
- [WhiteRabbitNeo Llama-3.1 Model Cards](https://huggingface.co/WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B)
- [Canstralian GitHub Repositories](https://github.com/canstralian)

This model card provides an overview of RabbitRedux, detailing its capabilities, performance, and considerations for usage.