Upload folder using huggingface_hub
hate_speech_demo.py CHANGED (+11 −10)
@@ -789,16 +789,17 @@ def create_gradio_app():
             </ul>

             <h2>How it works</h2>
-            <
-
-
-
-
-
-
-
-
-
+            <p><strong>Document-Grounded Evaluations</strong>: Every rating is directly tied to our
+            <a href="#" onclick="openPolicyPopup(); return false;">
+            hate speech policy document
+            </a>, which makes our system far superior to other solutions that lack transparent decision criteria.
+            </p>
+
+            <p><strong>Adaptable Policies</strong>: The policy document serves as a starting point and can be easily adjusted to meet your specific requirements. As policies evolve, the system immediately adapts without requiring retraining.</p>
+
+            <p><strong>Clear Rationales</strong>: Each evaluation includes a detailed explanation referencing specific policy sections, allowing users to understand exactly why content was flagged or approved.</p>
+
+            <p><strong>Continuous Improvement</strong>: The system learns from feedback, addressing any misclassifications by improving retrieval accuracy over time.</p>

             <p>Our approach combines Contextual's state-of-the-art
             <a href='https://contextual.ai/blog/introducing-instruction-following-reranker/' target='_blank'>steerable reranker</a>,
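The added copy describes a document-grounded flow: retrieve the most relevant section of the hate speech policy, then emit a rating that cites that section as its rationale. A minimal sketch of that idea is below; the section names, the word-overlap retriever, and the `evaluate` helper are all illustrative placeholders, not Contextual's actual reranker API.

```python
# Toy document-grounded moderation check. Every rating is tied to a policy
# section, which doubles as the rationale. Policy text and scoring are
# illustrative assumptions, not the demo's real policy document.

POLICY_SECTIONS = {
    "S1: Slurs": "Content containing slurs targeting protected groups is prohibited.",
    "S2: Threats": "Threats of violence against a person or group are prohibited.",
    "S3: Counter-speech": "Quoting or condemning hate speech is permitted.",
}

def retrieve_section(text: str) -> tuple[str, int]:
    """Toy retriever: score each section by word overlap with the input."""
    words = set(text.lower().split())
    scores = {
        sid: len(words & set(body.lower().split()))
        for sid, body in POLICY_SECTIONS.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

def evaluate(text: str) -> dict:
    """Return a rating plus the policy section that grounds it."""
    section_id, score = retrieve_section(text)
    flagged = score > 0 and "prohibited" in POLICY_SECTIONS[section_id]
    return {
        "rating": "flagged" if flagged else "allowed",
        "section": section_id,
        "rationale": POLICY_SECTIONS[section_id],
    }

print(evaluate("a direct threat of violence against a group"))
```

Because the rationale is just the retrieved section text, editing `POLICY_SECTIONS` changes system behavior immediately with no retraining, which is the "Adaptable Policies" property the copy describes; a production system would replace the overlap score with an instruction-following reranker.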