---
language:
- en
license: mit
library_name: transformers
pipeline_tag: zero-shot-classification
tags:
- zero-shot
- multi-label
- text-classification
- pytorch
metrics:
- precision
- recall
- f1
base_model: bert-base-uncased
datasets:
- polodealvarado/zeroshot-classification
---

# Zero-Shot Text Classification — spanclass

GLiNER-inspired span-attentive classification with top-K span selection. This model encodes texts and candidate labels into a shared embedding space using BERT, enabling classification into arbitrary categories without retraining for new labels.

## Training Details

| Parameter | Value |
|-----------|-------|
| Base model | `bert-base-uncased` |
| Model variant | `spanclass` |
| Training steps | 1000 |
| Batch size | 2 |
| Learning rate | 2e-05 |
| Trainable params | 111,254,017 |
| Training time | 374.1 s |

## Dataset

Trained on [polodealvarado/zeroshot-classification](https://huggingface.co/datasets/polodealvarado/zeroshot-classification).

## Evaluation Results

| Metric | Score |
|--------|-------|
| Precision | 0.9277 |
| Recall | 0.9503 |
| F1 Score | 0.9388 |

## Usage

```python
from models.spanclass import SpanClassModel

model = SpanClassModel.from_pretrained("polodealvarado/spanclass")

predictions = model.predict(
    texts=["The stock market crashed yesterday."],
    labels=[["Finance", "Sports", "Biology", "Economy"]],
)
print(predictions)
# [{"text": "...", "scores": {"Finance": 0.98, "Economy": 0.85, ...}}]
```
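### How shared-embedding scoring works

The shared-embedding-space mechanism behind zero-shot classification can be illustrated with a toy scoring function. This is a minimal sketch with made-up names (`zero_shot_scores`, the fixed sigmoid temperature, and the 2-D vectors are all illustrative assumptions); the actual model derives its representations from BERT with span attention, not from hand-built vectors.

```python
import numpy as np

def zero_shot_scores(text_emb: np.ndarray, label_embs: np.ndarray, labels: list) -> dict:
    """Score one text embedding against each label embedding (multi-label)."""
    # Normalize so the dot product equals cosine similarity.
    t = text_emb / np.linalg.norm(text_emb)
    L = label_embs / np.linalg.norm(label_embs, axis=1, keepdims=True)
    sims = L @ t
    # Independent sigmoids rather than a softmax: each label is scored on
    # its own, so several labels can score high at once (multi-label).
    probs = 1.0 / (1.0 + np.exp(-5.0 * sims))  # temperature 5 is arbitrary
    return dict(zip(labels, probs.tolist()))

# Toy 2-D embeddings: the text vector points in the "Finance" direction.
text = np.array([0.9, 0.1])
label_vecs = np.array([[1.0, 0.0],   # Finance
                       [0.0, 1.0]])  # Sports
print(zero_shot_scores(text, label_vecs, ["Finance", "Sports"]))
```

Because every label is scored independently against the text in the same space, supporting a new category only requires encoding its name; no retraining is involved, which is what the model card means by "arbitrary categories".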