Usage and limitations summarization
README.md
The Tiny-toxic-detector is designed to classify comments for toxicity.
* Language Ambiguity
  * The Tiny-toxic-detector may struggle with ambiguous or nuanced language, as any other model would. Even though benchmarks like Toxigen evaluate the model’s performance on ambiguous language, it may still misclassify comments where toxicity is not clearly defined.
### Summarization
This model is a great fit when resources are constrained or fast inference is important, but like any AI classification model, it can be wrong. As such, we discourage using this model in an automated system with no human oversight. As outlined in the paper, the model can also over-rely on individual words rather than the context as a whole, so please keep this in mind as well.
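One way to keep a human in the loop is to act automatically only on confident predictions and route everything in between to a moderator. The sketch below is illustrative, not part of this repository: `classify_toxicity` is a hypothetical stand-in for the model's inference call (replaced here with a placeholder heuristic so the example runs on its own), and the threshold values are arbitrary assumptions.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    comment: str
    score: float
    action: str  # "allow", "review", or "block"


def classify_toxicity(comment: str) -> float:
    """Hypothetical stand-in for the real model; returns a toxicity
    score in [0, 1]. A placeholder heuristic keeps the sketch runnable."""
    return 0.9 if "hate" in comment.lower() else 0.1


def moderate(comment: str, low: float = 0.3, high: float = 0.8) -> Decision:
    """Auto-handle only confident predictions; defer the rest to a human."""
    score = classify_toxicity(comment)
    if score >= high:
        action = "block"   # confidently toxic: safe to act automatically
    elif score <= low:
        action = "allow"   # confidently non-toxic
    else:
        action = "review"  # ambiguous middle band: route to a moderator
    return Decision(comment, score, action)
```

With thresholds like these, only clear-cut cases are handled automatically, while the uncertain middle band goes to a human reviewer, which mitigates both the language-ambiguity limitation and the word-over-context caveat described above.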