Text Classification
Transformers
Safetensors
Portuguese
bert
Eval Results (legacy)
text-embeddings-inference
Instructions to use Silly-Machine/TuPy-Bert-Base-Multilabel with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Silly-Machine/TuPy-Bert-Base-Multilabel with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Silly-Machine/TuPy-Bert-Base-Multilabel")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Silly-Machine/TuPy-Bert-Base-Multilabel")
model = AutoModelForSequenceClassification.from_pretrained("Silly-Machine/TuPy-Bert-Base-Multilabel")
```

- Notebooks
- Google Colab
- Kaggle
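Since this is a multilabel classifier, each category gets an independent sigmoid probability rather than a softmax over all classes, so several labels can fire on the same text. The sketch below illustrates that post-processing step in plain Python; the label list is taken from the model card's category enumeration, but the authoritative `id2label` mapping lives in the model's own config, so treat the names and ordering here as an assumption.

```python
import math

# Assumed label set, copied from the model card's category list; the
# real mapping comes from the model's config.json (id2label).
LABELS = ["ageism", "aporophobia", "body shame", "capacitism",
          "LGBTphobia", "political", "racism", "religious intolerance",
          "misogyny", "xenophobia"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_predict(logits, threshold=0.5):
    """Map raw per-label logits to the set of active labels.

    Unlike single-label softmax classification, a multilabel head applies
    an independent sigmoid per label, so zero, one, or many labels can be
    active for a single input.
    """
    return [label for label, z in zip(LABELS, logits)
            if sigmoid(z) >= threshold]

# Example: strongly positive logits for two categories, negative elsewhere.
logits = [-3.0, -2.5, -4.0, -3.2, 2.1, -1.0, 1.7, -2.0, -3.5, -2.8]
print(multilabel_predict(logits))  # ['LGBTphobia', 'racism']
```

The same thresholding applies to the logits returned by `AutoModelForSequenceClassification` above; the 0.5 cutoff is a common default, not something the model card specifies.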
Commit 51a4b2a (parent: 08d021f) · Update README.md

README.md CHANGED
```diff
@@ -40,8 +40,10 @@ model-index:
 ## Introduction
 
 
-Tupy-BERT-Base is a fine-tuned BERT model designed specifically for multilabel classification of hate speech in Portuguese.
-Derived from the [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased),
+Tupy-BERT-Base-Multilabel is a fine-tuned BERT model designed specifically for multilabel classification of hate speech in Portuguese.
+Derived from the [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased),
+TuPy-Base is a refined solution for addressing categorical hate speech concerns (ageism, aporophobia, body shame, capacitism, LGBTphobia, political,
+racism, religious intolerance, misogyny, and xenophobia).
 For more details or specific inquiries, please refer to the [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/).
 
 The efficacy of Language Models can exhibit notable variations when confronted with a shift in domain between training and test data.
```