---
language: en
tags:
- reranking
- information-retrieval
- lifelog
- transformers
- modernbert
license: apache-2.0
datasets:
- private
base_model: answerdotai/ModernBERT-base
pipeline_tag: text-classification
library_name: transformers
---

# lifelog_reranking_modernbert

This model is a reranker built on top of ModernBERT-base, fine-tuned to re-rank candidate passages for lifelog retrieval tasks.

## Model Details

- **Architecture:** ModernBERT-base + classification head (1 output logit)
- **Objective:** Binary relevance classification (relevant vs. non-relevant)
- **Loss:** BCEWithLogitsLoss
- **Inputs:** Text pairs (query, candidate_document) joined with `[SEP]`
- **Outputs:** A single score (logit); a higher score indicates greater relevance
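
Because the model is trained with BCEWithLogitsLoss, the raw logit can be mapped to a relevance probability with a sigmoid. A minimal sketch (the helper name `logit_to_probability` is illustrative, not part of the model's API):

```python
import torch

def logit_to_probability(logit: float) -> float:
    # The model emits one raw logit per (query, document) pair;
    # sigmoid maps it to [0, 1], matching the BCEWithLogitsLoss objective.
    return torch.sigmoid(torch.tensor(logit)).item()
```

For example, a logit of `0.0` corresponds to a probability of `0.5`; larger logits approach `1.0`. For pure ranking, sorting by raw logits gives the same order as sorting by probabilities.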

## Intended Use

The model is designed for reranking lifelog retrieval results, but can also be adapted to other query-document ranking tasks.

## Example

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "linhtran222/lifelog_reranking_modernbert"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

query = "What did I eat for lunch yesterday?"
doc = "You ate sushi and miso soup at a Japanese restaurant."

# Join the pair with [SEP], mirroring the training-time input format.
inputs = tokenizer(f"{query} [SEP] {doc}", truncation=True, return_tensors="pt")

with torch.no_grad():
    # One logit per pair; higher means more relevant.
    score = model(**inputs).logits.squeeze().item()

print("Relevance score:", score)
```
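
In a retrieval pipeline the model is typically applied to a whole candidate list at once. A minimal batch-reranking sketch (the `rerank` helper is illustrative, not part of the released API; `tokenizer` and `model` are loaded as above):

```python
import torch

def rerank(query, candidates, tokenizer, model):
    """Score each candidate against the query and return them best-first."""
    # Join each pair with [SEP], mirroring the training-time input format.
    texts = [f"{query} [SEP] {doc}" for doc in candidates]
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        # logits has shape (num_candidates, 1); squeeze to a flat score list.
        scores = model(**inputs).logits.squeeze(-1).tolist()
    # Sort candidates by descending relevance score.
    return sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
```

Padding the batch lets all pairs be scored in a single forward pass, which is usually much faster than looping over candidates one at a time.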