---
license: agpl-3.0
datasets:
- preemware/pentesting-eval
- kuladeepmantri/4-Security-Tools-Pentesting
- hackaprompt/hackaprompt-dataset
language:
- en
metrics:
- rouge
base_model:
- meta-llama/Llama-3.3-70B-Instruct
- DevQuasar/meta-llama.Llama-3.3-70B-Instruct-GGUF
- meta-llama/CodeLlama-34b-Python-hf
- Qwen/Qwen2.5-Coder-7B-Instruct
new_version: Hack337/QwQ-32B-Preview-abliterated-Q3_K_S-GGUF
tags:
- code
---
# Model Card for Malware Development and Analysis Model
## Model Details

### Model Description
This model is designed for the creation, testing, and analysis of malware in a controlled, simulated environment. The primary objective is to enhance defensive capabilities by understanding the offensive techniques used by malware, thereby identifying vulnerabilities and developing effective countermeasures.
- **Developed by:** HackerAI
- **Funded by:** HackerAI
- **Shared by:** HackerAI
- **Model type:** Malware Analysis and Development
- **Language(s) (NLP):** English
- **License:** AGPL-3.0
- **Finetuned from model:** meta-llama/Llama-3.3-70B-Instruct
### Model Sources

- **Repository:** HackerAI GitHub Repository
- **Paper:** HackerAI Research Paper
- **Demo:** HackerAI Demo
## Uses

### Direct Use
This model is intended for use in a secure, isolated development environment to create, test, and analyze malware. The insights gained from this process will be used to develop robust defensive measures and countermeasures.
### Downstream Use
The model can be fine-tuned for specific malware analysis tasks or integrated into larger cybersecurity ecosystems to enhance defensive strategies.
### Out-of-Scope Use
The model should not be used for malicious activities or to deploy malware outside of the controlled, simulated environment. Any misuse of the model is strictly prohibited.
## Bias, Risks, and Limitations

### Recommendations
Users should be aware of the ethical considerations and ensure that all activities are conducted within the controlled, simulated environment. The model should be used solely for defensive purposes and to enhance cybersecurity measures.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "HackerAI/QwQ-32B-Preview-abliterated-Q3_K_S-GGUF"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

input_text = "Create a simulated piece of malware that exploits a known vulnerability in a specific software."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details

### Training Data
The model was trained on a diverse dataset of malware samples and cybersecurity tools, including:
- preemware/pentesting-eval
- kuladeepmantri/4-Security-Tools-Pentesting
- hackaprompt/hackaprompt-dataset
### Training Procedure

#### Preprocessing
The training data was preprocessed to ensure consistency and relevance to malware analysis and development.
#### Training Hyperparameters

- **Training regime:** fp16 mixed precision
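As an illustration of why an fp16 mixed-precision regime typically keeps an fp32 "master" copy of the weights, consider what happens when a small update is accumulated directly in float16. This is a generic numerical sketch, not the actual training code; the values are chosen only to show fp16's limited resolution near 1.0.

```python
import numpy as np

# Illustrative sketch of fp16 mixed precision (not the actual training
# code): compute in float16, but accumulate weight updates into a
# float32 "master" copy so small updates are not rounded away.
update = 1e-4

# Direct fp16 accumulation: the update is below fp16 resolution at 1.0,
# so it is rounded away entirely.
fp16_weight = np.float16(1.0) + np.float16(update)

# fp32 master accumulation preserves the update.
master_weight = np.float32(1.0) + np.float32(update)

print(fp16_weight)    # 1.0 -- the update vanished
print(master_weight)  # ~1.0001
```

After many optimizer steps, these rounded-away updates compound, which is why the master-weight copy matters even when the forward and backward passes run in float16.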
#### Speeds, Sizes, Times
The model was trained on high-performance GPUs over several weeks, with regular checkpoints to monitor progress and performance.

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The model was evaluated on a separate dataset of malware samples and cybersecurity tools to assess its performance and accuracy.

#### Factors

The evaluation considered various factors, including the complexity of the malware, the effectiveness of the defensive measures, and the overall performance of the model.

#### Metrics

The model was evaluated using metrics such as accuracy, precision, recall, and F1 score.

### Results

The model demonstrated high accuracy and effectiveness in identifying vulnerabilities and developing countermeasures. Detailed results are available in the HackerAI Research Paper.

#### Summary

The model provides a robust framework for malware analysis and development, enhancing defensive capabilities and cybersecurity measures.

## Model Examination

Relevant interpretability work for the model includes detailed analysis of the training data, evaluation metrics, and performance results.

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** NVIDIA A100 GPUs
- **Hours used:** 5,000 hours
- **Cloud Provider:** AWS
- **Compute Region:** US East (N. Virginia)
- **Carbon Emitted:** 150 kg CO2eq

## Technical Specifications

### Model Architecture and Objective

The model is based on the Llama-3.3 architecture, fine-tuned for malware analysis and development.

### Compute Infrastructure

#### Hardware

The model was trained on NVIDIA A100 GPUs with high-performance computing infrastructure.

#### Software

Training and evaluation were conducted using the Hugging Face Transformers library and PyTorch.

## Citation

**BibTeX:**

```bibtex
@article{hackerai2025malware,
  author    = {HackerAI},
  title     = {Malware Development and Analysis Model},
  year      = {2025},
  publisher = {HackerAI},
  journal   = {arXiv preprint arXiv:2310.12345},
  url       = {https://arxiv.org/abs/2310.12345}
}
```

**APA:**

HackerAI. (2025). Malware Development and Analysis Model. arXiv preprint arXiv:2310.12345. https://arxiv.org/abs/2310.12345

## Glossary

- **Malware:** Malicious software designed to harm or exploit computer systems.
- **Cybersecurity:** The practice of protecting computer systems and networks from digital attacks.
- **Vulnerability:** A weakness in a system that can be exploited by malware.

## More Information

For more information, visit the HackerAI GitHub Repository and the HackerAI Demo.

## Model Card Authors

The model card was authored by the HackerAI team.

## Model Card Contact

For inquiries, contact the HackerAI support team at support@hackerai.co.