How to use polodealvarado/dynquery with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-classification", model="polodealvarado/dynquery")

# Load the model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("polodealvarado/dynquery", dtype="auto")
```

DyREx-inspired dynamic label queries via cross-attention over text tokens.
This model encodes texts and candidate labels into a shared embedding space using BERT, enabling classification into arbitrary categories without retraining for new labels.
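The dynamic-query idea can be sketched roughly as follows: each candidate-label embedding acts as a query that cross-attends over the text's contextual token embeddings, yielding a label-conditioned representation that is then scored independently per label. This is a minimal illustrative sketch, not the model's actual architecture; the single attention head, the linear scoring vector `w_score`, and the sigmoid head are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_scores(label_queries, token_embeddings, w_score):
    """Score each candidate label against the text via single-head cross-attention.

    label_queries:    (num_labels, d) - one query vector per candidate label
    token_embeddings: (seq_len, d)    - contextual token embeddings (e.g. from BERT)
    w_score:          (d,)            - illustrative linear scoring head (assumed)
    """
    d = label_queries.shape[-1]
    # Each label attends over all text tokens
    attn = softmax(label_queries @ token_embeddings.T / np.sqrt(d))  # (num_labels, seq_len)
    label_repr = attn @ token_embeddings                             # (num_labels, d)
    logits = label_repr @ w_score                                    # (num_labels,)
    return 1.0 / (1.0 + np.exp(-logits))  # independent per-label scores in (0, 1)

rng = np.random.default_rng(0)
d, seq_len, num_labels = 768, 12, 4
scores = cross_attention_scores(
    rng.standard_normal((num_labels, d)),
    rng.standard_normal((seq_len, d)),
    rng.standard_normal(d) / np.sqrt(d),
)
print(scores.shape)  # (4,)
```

Because each label is scored independently, the label set can change at inference time without retraining, which is what makes the zero-shot setting work.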
| Parameter | Value |
|---|---|
| Base model | bert-base-uncased |
| Model variant | dynquery |
| Training steps | 1000 |
| Batch size | 2 |
| Learning rate | 2e-05 |
| Trainable params | 111,844,608 |
| Training time | 383.0s |
Trained on polodealvarado/zeroshot-classification.
| Metric | Score |
|---|---|
| Precision | 0.7704 |
| Recall | 0.9773 |
| F1 Score | 0.8616 |
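As a quick sanity check, the reported F1 score is consistent with the precision and recall above, since F1 is their harmonic mean:

```python
# Harmonic mean of the reported precision and recall
precision, recall = 0.7704, 0.9773
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8616
```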
```python
from models.dynquery import DynQueryModel

model = DynQueryModel.from_pretrained("polodealvarado/dynquery")
predictions = model.predict(
    texts=["The stock market crashed yesterday."],
    labels=[["Finance", "Sports", "Biology", "Economy"]],
)
print(predictions)
# [{"text": "...", "scores": {"Finance": 0.98, "Economy": 0.85, ...}}]
```