Tags: Text Classification · Transformers · PyTorch · Safetensors · English · bert · sentence-similarity · text-embeddings-inference
Instructions to use dennlinger/bert-wiki-paragraphs with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use dennlinger/bert-wiki-paragraphs with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="dennlinger/bert-wiki-paragraphs")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("dennlinger/bert-wiki-paragraphs")
model = AutoModelForSequenceClassification.from_pretrained("dennlinger/bert-wiki-paragraphs")
```

- Notebooks
- Google Colab
- Kaggle
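Inputs to this model appear to be two text segments joined with `[SEP]`, as in the discussion below. A minimal sketch of scoring such a pair with the pipeline loaded above (the example sentences are illustrative; `top_k=None` is a standard pipeline parameter that returns scores for all labels rather than only the best one):

```python
# Sketch: score a segment pair with the text-classification pipeline.
# Assumes the model expects two segments joined by "[SEP]"; the example
# text is illustrative, not from the model's documentation.
from transformers import pipeline

pipe = pipeline("text-classification", model="dennlinger/bert-wiki-paragraphs")

text = "Planes fly. They use fuel. [SEP] Tigers have teeth."
# top_k=None returns a score for every label, not just the top prediction
scores = pipe(text, top_k=None)
print(scores)
```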
The model always predicts 1.
#3
by drmeir - opened
I am trying to give example inputs using the web UI, and the model always predicts 1. Here is the JSON output for the input `Planes fly. They use fuel. [SEP] Tigers have teeth.`:
```json
[
  [
    {
      "label": 1,
      "score": 0.9046787023544312
    },
    {
      "label": 0,
      "score": 0.09532131254673004
    }
  ]
]
```
Am I missing something?
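For reference, the widget reports a score per label, and the list is sorted so the predicted label comes first. A small sketch of reading the prediction out of output shaped like the JSON above (the numbers are copied verbatim from the response):

```python
# Sketch: extract the predicted label from widget output shaped like the
# JSON above. Scores are the two class probabilities and sum to ~1.0.
result = [
    [
        {"label": 1, "score": 0.9046787023544312},
        {"label": 0, "score": 0.09532131254673004},
    ]
]

scores = result[0]
predicted = max(scores, key=lambda s: s["score"])
print(predicted["label"])  # 1 for this input
total = sum(s["score"] for s in scores)  # probabilities sum to ~1.0
```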
drmeir changed discussion title from The model does not seem to work at all. to The model always predicts 1.