---
license: apache-2.0
---
# Model Card for text_prompt_safety_checker

## Model Description

This is a safety checker model designed to detect NSFW words in text. The intended use case is a text-to-image generator.
9
+
10
+ This model is based on the [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) model and has been fine-tuned for binary classification.
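
A minimal sketch of calling such a fine-tuned classifier with the Hugging Face `transformers` pipeline follows. The repository id `HowToSD/text_prompt_safety_checker`, the `LABEL_1` → NSFW label mapping, and the 0.5 decision threshold are assumptions made for illustration, not details stated in this card; check the model's `config.json` (`id2label`) for the actual mapping.

```python
def is_nsfw(label: str, score: float, threshold: float = 0.5) -> bool:
    """Interpret one text-classification output.

    Assumes LABEL_1 is the positive (NSFW) class; verify this against
    the model's id2label mapping before relying on it.
    """
    return label == "LABEL_1" and score >= threshold


if __name__ == "__main__":
    import os

    # Guarded so the model download only runs on demand.
    if os.environ.get("RUN_SAFETY_DEMO"):
        from transformers import pipeline  # pip install transformers torch

        classifier = pipeline(
            "text-classification",
            model="HowToSD/text_prompt_safety_checker",  # assumed repo id
        )
        out = classifier("a photo of a cat on a windowsill")[0]
        verdict = "blocked" if is_nsfw(out["label"], out["score"]) else "allowed"
        print(f"{out['label']} ({out['score']:.3f}) -> {verdict}")
```

A text-to-image frontend would call `is_nsfw` on each classifier result before generation and refuse or sanitize prompts flagged as positive.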

## Training Data

The model was fine-tuned on custom textual training data, with NSFW text instances as positive examples and SFW text instances as negative examples.

## Model Code

Inference code is available at [https://github.com/HowToSD/cremage/tree/main/modules/text_prompt_safety_checker](https://github.com/HowToSD/cremage/tree/main/modules/text_prompt_safety_checker). The model license specified in this card does not apply to that source code.

## Intended Use

- This model is intended for detecting NSFW words in text prompts.

## Limitations

- Classification with this model is not foolproof and may not catch every NSFW word or semantic concept in a prompt. By choosing to use this model, users acknowledge and accept these limitations and the associated risks.

## License

The model weights are licensed under the Apache License 2.0. The license applies only to the model weights and not to the model code.