---
license: mit
language:
  - en
base_model:
  - google-bert/bert-base-uncased
pipeline_tag: text-classification
---

# News Relevancy Classifiers

## bert-ft-v2


## Model Description

- Purpose: This model was trained for a specific research task; it is not a commercial product and should not be used for commercial purposes.
- Architecture: bert-base-uncased
- Fine-tuning task: Four-class English healthcare and AI news-headline relevancy classification
- Dataset: ~254 English headlines (2024–2025) manually labeled into:
  - 0 — Not Relevant
  - 1 — Least Relevant
  - 2 — Highly Relevant
  - 3 — Most Relevant
- HF Repo: cloud0day3/bert-ft-v2 (latest v3 checkpoint, 6 June 2025)
- Date Trained: 2025-06-06

## Model Inputs

- A raw English headline (string), truncated/padded to 96 tokens.
- Tokenization is handled by the bundled vocab.txt, tokenizer_config.json, and special_tokens_map.json.

## Model Outputs

- A single integer label (0–3), mapped to human-readable categories:

  ```python
  LABELS = {
      0: "Not Relevant",
      1: "Least Relevant",
      2: "Highly Relevant",
      3: "Most Relevant"
  }
  ```
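Running the classifier itself requires downloading the checkpoint from the Hub; as a minimal, self-contained sketch of the post-processing step, this maps a 4-way logits vector to its category (the `label_from_logits` helper is hypothetical and not part of the repo):

```python
# Hedged sketch: converting the classifier head's logits into a label.
# In a real pipeline the logits would come from transformers'
# AutoModelForSequenceClassification loaded from cloud0day3/bert-ft-v2;
# that step is omitted here to keep the example self-contained.

LABELS = {
    0: "Not Relevant",
    1: "Least Relevant",
    2: "Highly Relevant",
    3: "Most Relevant",
}

def label_from_logits(logits):
    """Return (index, name) for the highest-scoring of the four classes."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return best, LABELS[best]

# Example with hypothetical logits for one headline:
idx, name = label_from_logits([-1.2, 0.3, 2.1, 0.8])
print(idx, name)  # 2 Highly Relevant
```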
    

## Intended Use

- Primary: Automatically assign a relevancy score to healthcare and AI English news headlines so that downstream pipelines (e.g., filtering, ranking) can operate without manual triage.

Examples of use:

- Pre-filtering a news aggregation feed to capture healthcare and AI news.
- Prioritizing headlines for editorial review.
- Input to summarization/retrieval pipelines.
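The pre-filtering use case can be sketched as follows; `classify` is a hypothetical stand-in for a call to the fine-tuned model (a real pipeline would run the BERT classifier there), and `filter_feed` keeps only the headlines rated Highly or Most Relevant:

```python
# Hedged sketch of feed pre-filtering, assuming a classify() helper that
# returns the model's integer label (0-3) for a headline.

def classify(headline: str) -> int:
    # Placeholder scoring only: a real implementation would tokenize the
    # headline and run the fine-tuned BERT checkpoint here.
    return 3 if "AI" in headline else 0

def filter_feed(headlines, min_label=2):
    """Keep only headlines labeled 2 (Highly Relevant) or 3 (Most Relevant)."""
    return [h for h in headlines if classify(h) >= min_label]

feed = ["AI model detects sepsis earlier", "Local team wins championship"]
print(filter_feed(feed))  # ['AI model detects sepsis earlier']
```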

## Out-of-Scope Uses

- Any non-English text.
- Multi-sentence inputs or full articles (the model is tuned on single-sentence headlines).
- Tasks other than healthcare-tech relevancy (e.g., sentiment analysis, topic modeling).
- High-risk decision making without human oversight (e.g., emergency alerts).