Tags: Text Classification, Transformers, Safetensors, Portuguese, bert, Eval Results (legacy), text-embeddings-inference
Instructions for using Silly-Machine/TuPy-Bert-Base-Binary-Classifier with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Silly-Machine/TuPy-Bert-Base-Binary-Classifier with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Silly-Machine/TuPy-Bert-Base-Binary-Classifier")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Silly-Machine/TuPy-Bert-Base-Binary-Classifier")
model = AutoModelForSequenceClassification.from_pretrained("Silly-Machine/TuPy-Bert-Base-Binary-Classifier")
```

- Notebooks
- Google Colab
- Kaggle
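When loading the model directly rather than through the pipeline, you post-process the raw logits yourself. The sketch below shows that step on stand-in logits, so no model download is needed to follow the logic; the index-to-label mapping (`0 = "not hate"`, `1 = "hate"`) is an assumption here and should be confirmed against `model.config.id2label` on the real model.

```python
import math

# Hypothetical label mapping (an assumption; verify with model.config.id2label).
id2label = {0: "not hate", 1: "hate"}

def classify(logits):
    """Turn a pair of raw logits into a (label, probability) prediction."""
    # softmax: exponentiate and normalize so the scores sum to 1
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # pick the index of the highest-probability class
    pred = max(range(len(probs)), key=probs.__getitem__)
    return id2label[pred], probs[pred]

# Stand-in logits of the kind AutoModelForSequenceClassification
# would produce for one input sentence.
label, score = classify([1.2, -0.8])
print(label, round(score, 3))  # prints "not hate 0.881"
```

With the real model, the same logic applies to `model(**tokenizer(text, return_tensors="pt")).logits` for each input.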
Commit: eb94cc9 (parent: 086ec26)
Update README.md

README.md CHANGED
```diff
@@ -40,9 +40,9 @@ model-index:
 ## Introduction
 
 
-
+TuPy-Bert-Base-Binary-Classifier is a fine-tuned BERT model designed specifically for binary classification of hate speech in Portuguese.
 Derived from the [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased),
-TuPy-Base is a refined solution for addressing binary hate speech concerns (hate or not hate).
+TuPy-Bert-Base-Binary-Classifier is a refined solution for addressing binary hate speech concerns (hate or not hate).
 For more details or specific inquiries, please refer to the [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/).
 
 The efficacy of Language Models can exhibit notable variations when confronted with a shift in domain between training and test data.
```