---
pipeline_tag: text-classification
metrics:
- accuracy
license: mit
datasets:
- mteb/twentynewsgroups-clustering
language:
- en
---
# BERT Text Classification Model
This is a simple text classification model built on BERT (`bert-base-uncased`).
## Usage
To use the model, call the `classify_text` function defined below with a text input; it returns the index of the predicted class.
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Load the pre-trained BERT tokenizer and model.
# Note: `bert-base-uncased` ships with a randomly initialized classification
# head, so fine-tune the model on a labeled dataset before relying on its
# predictions.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')

def classify_text(text):
    """Return the predicted class index for a piece of text."""
    inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True)
    with torch.no_grad():  # inference only, no gradients needed
        outputs = model(**inputs)
    probabilities = outputs.logits.softmax(dim=1)
    return probabilities.argmax(dim=1).item()

# Example usage
text = "This is a positive review."
predicted_class = classify_text(text)
print("Predicted class:", predicted_class)
```
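
The function returns a class index, not a human-readable name. A fine-tuned checkpoint exposes its label names via `model.config.id2label`; as a minimal sketch, assuming a hypothetical two-class sentiment mapping, the index can be translated back to a label like this:

```python
# Hypothetical label mapping for illustration; a fine-tuned checkpoint
# provides the real one via model.config.id2label.
id2label = {0: "negative", 1: "positive"}

def index_to_label(predicted_class):
    # Fall back to a generic placeholder name for unknown indices,
    # mirroring the transformers default of LABEL_0, LABEL_1, ...
    return id2label.get(predicted_class, f"LABEL_{predicted_class}")

print(index_to_label(1))  # -> positive
```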