The accuracy benchmark results are presented in the table below:
<tr>
</table>

> Baseline: [GLM-5-FP8](https://huggingface.co/zai-org/GLM-5-FP8).

> Benchmarked with temperature=1.0, top_p=0.95, and a maximum of 131072 tokens.
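As a hedged illustration of the benchmark settings quoted above, the sketch below builds an OpenAI-compatible request payload carrying those sampling parameters. The helper name `build_request`, the field names, and the use of the baseline model ID as a default are illustrative assumptions, not part of this model card:

```python
# Sketch only: the benchmark note above gives the sampling settings;
# everything else here (helper name, payload shape) is hypothetical.
benchmark_sampling = {
    "temperature": 1.0,    # sampling temperature from the benchmark note
    "top_p": 0.95,         # nucleus-sampling cutoff from the benchmark note
    "max_tokens": 131072,  # maximum number of generated tokens
}

def build_request(prompt: str, model: str = "zai-org/GLM-5-FP8") -> dict:
    """Combine a user prompt with the benchmark sampling settings."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        **benchmark_sampling,
    }
```

Such a payload could then be sent to any OpenAI-compatible serving endpoint; the settings themselves are the only part taken from the card.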
## Model Limitations:

The base model was trained on data that contains toxic language and societal biases originally crawled from the internet. The model may therefore amplify those biases and return toxic responses, especially when given toxic prompts. Even when the prompt contains nothing explicitly offensive, the model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output.