Instructions for using CareerNinja/BERT_2_Labels with the Transformers library.
How to use CareerNinja/BERT_2_Labels with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="CareerNinja/BERT_2_Labels")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("CareerNinja/BERT_2_Labels")
model = AutoModelForSequenceClassification.from_pretrained("CareerNinja/BERT_2_Labels")
```
- Number of epochs: 5
- Dataset size: 5.5k samples (train/validation)
- Number of labels: 2
- Thresholding: True
- Thresholding value: 0.7
Below is the function that applies thresholding to the output logits.
```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("CareerNinja/BERT_2_Labels")
model = AutoModelForSequenceClassification.from_pretrained("CareerNinja/BERT_2_Labels")

def get_prediction(text):
    # Tokenize and move the input tensors to the model's device
    encoding = tokenizer(text, return_tensors="pt", padding="max_length",
                         truncation=True, max_length=128)
    encoding = {k: v.to(model.device) for k, v in encoding.items()}
    outputs = model(**encoding)
    logits = outputs.logits
    # Convert logits to per-label probabilities
    sigmoid = torch.nn.Sigmoid()
    probs = sigmoid(logits.squeeze().cpu()).detach().numpy()
    label = np.argmax(probs, axis=-1)
    # Return label 1 only when its probability clears the 0.7 threshold
    if label == 1 and probs[1] > 0.7:
        return 1
    return 0
```
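The decision rule above can be sketched in isolation, without loading the model. This is a minimal NumPy-only illustration of the same logic (sigmoid, argmax, then the 0.7 cutoff on label 1); `apply_threshold` is a hypothetical helper name, not part of the model card.

```python
import numpy as np

def apply_threshold(logits, threshold=0.7):
    # Sigmoid turns raw logits into per-label probabilities
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    label = int(np.argmax(probs))
    # Label 1 is returned only if its probability exceeds the threshold
    if label == 1 and probs[1] > threshold:
        return 1
    return 0

apply_threshold([0.2, 2.0])  # confident positive -> 1
apply_threshold([0.2, 0.5])  # argmax is 1 but prob below 0.7 -> 0
apply_threshold([2.0, 0.2])  # argmax is 0 -> 0
```

Note that with thresholding, a sample whose argmax is label 1 can still be rejected and mapped to label 0 when the model is not confident enough.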