Text Classification
Transformers
PyTorch
Safetensors
Portuguese
Trained with AutoTrain
Eval Results (legacy)
Instructions for using inctdd/told_br_binary_sm with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use inctdd/told_br_binary_sm with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="inctdd/told_br_binary_sm")
```

```python
# Load model directly (AutoModelForSequenceClassification keeps the
# classification head, which a plain AutoModel would discard)
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inctdd/told_br_binary_sm")
model = AutoModelForSequenceClassification.from_pretrained("inctdd/told_br_binary_sm", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
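The pipeline above returns a label/score pair, while loading the model directly yields raw logits that must be passed through a softmax. A framework-free sketch with dummy logits (in practice these come from `model(**inputs).logits`, and the index-to-label mapping lives in the model config's `id2label`):

```python
import math

# Dummy binary-classification logits standing in for model(**inputs).logits;
# real values come from a forward pass over tokenized text.
logits = [1.2, -0.8]

# Softmax: exponentiate, then normalize to probabilities
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Predicted class is the index with the highest probability
predicted = max(range(len(probs)), key=probs.__getitem__)
```

Here `predicted` is 0, with a probability of roughly 0.88 for that class.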
- Xet hash: 44ff6f5c13db75994a74168d44980e2d535c67259b3dd10b482ed271c0c9bb26
- SHA256: 8a1751bae989c77812333acc37d615f67aa3a01f091de1f9f14d514835bc81db
- Size of remote file: 436 MB
Xet efficiently stores large files inside Git by splitting them into unique content-defined chunks, which deduplicates repeated data and accelerates uploads and downloads.
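Xet's actual chunking algorithm and parameters are not described here; as a toy illustration of the content-defined idea, a rolling value over the bytes can decide chunk boundaries, so identical regions in different files produce identical chunks regardless of their byte offsets:

```python
# Toy sketch of content-defined chunking (illustrative only; Xet's real
# algorithm differs). A boundary is declared whenever a cheap rolling
# hash over the data hits a mask condition.

def chunk(data: bytes, min_len: int = 16, mask: int = 0xFF) -> list[bytes]:
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF  # cheap rolling-style hash
        if i - start >= min_len and (h & mask) == 0:
            chunks.append(data[start : i + 1])  # close the chunk here
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])  # trailing remainder
    return chunks
```

Because boundaries depend only on local content, inserting bytes near the start of a file shifts at most a few chunks instead of re-chunking everything, which is what makes deduplicated storage effective.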