# BioHiCL-Large: Hierarchical Multi-Label Contrastive Biomedical Retriever
## Overview
BioHiCL-large is a biomedical dense retriever trained with hierarchical MeSH supervision to capture fine-grained semantic relationships between biomedical texts.
Unlike traditional dense retrievers trained with binary relevance signals, BioHiCL models semantic similarity using structured multi-label supervision derived from the MeSH ontology, enabling it to capture partial semantic overlap between documents.
> ⚠️ **Important:** Please ensure that the `transformers` version matches exactly (4.57.3), as other versions may lead to compatibility issues or unexpected behavior.
## Key Features
- Hierarchical supervision: Leverages MeSH ontology to encode structured biomedical semantics
- Multi-label similarity learning: Captures graded semantic overlap beyond binary relevance
- Contrastive + regression training: Aligns embedding similarity with label similarity
- Efficient: ~0.3B parameters, suitable for deployment on a single GPU
- Domain-adapted retriever: Fine-tuned from a strong general-purpose bi-encoder
## Model Details
- Model type: Bi-encoder (dense retriever)
- Backbone: BAAI/bge-large-en-v1.5
- Parameters: ~0.3B
- Fine-tuning: LoRA (merged into base model)
- Max input length: 512 tokens
- Training data: Biomedical abstracts annotated with MeSH labels (e.g., BioASQ-derived corpora)
## Intended Use
This model is intended for biomedical information retrieval tasks such as:
- Scientific literature search (e.g., PubMed-style retrieval)
- Biomedical document ranking
- Query–abstract semantic matching
- Benchmark evaluation on BEIR biomedical subsets
## How It Works
During training, BioHiCL aligns two similarity signals:
- Embedding similarity (SimE): cosine similarity between document embeddings
- Label similarity (SimL): cosine similarity over weighted MeSH multi-label vectors
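
The alignment idea can be illustrated with a small, self-contained sketch (NumPy only). The embedding values, MeSH label weights, and the squared-error regression term below are hypothetical stand-ins chosen for illustration, not the model's actual training code:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical document embeddings produced by the bi-encoder
emb_a = np.array([0.20, 0.90, 0.10])
emb_b = np.array([0.25, 0.85, 0.05])

# Hypothetical weighted MeSH multi-label vectors for the same two documents
mesh_a = np.array([1.0, 0.5, 0.0, 0.25])
mesh_b = np.array([1.0, 0.0, 0.5, 0.25])

sim_e = cosine_sim(emb_a, emb_b)    # embedding similarity (SimE)
sim_l = cosine_sim(mesh_a, mesh_b)  # label similarity (SimL)

# One way to pull SimE toward SimL: a squared-error regression term
reg_loss = (sim_e - sim_l) ** 2
print(f"SimE={sim_e:.3f}  SimL={sim_l:.3f}  loss={reg_loss:.4f}")
```

Because SimL is graded rather than binary, documents that share only part of their MeSH annotations receive an intermediate target similarity instead of being treated as strict negatives.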
## Requirements
- Python >= 3.8
- `transformers==4.57.3`
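
A minimal setup command, assuming a pip-based environment (the `beir` package is additionally required for the BEIR evaluation example in this card):

```shell
pip install "transformers==4.57.3" beir
```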
## Usage (BEIR Evaluation)
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval.models import SentenceBERT
from beir.retrieval.search.dense import DenseRetrievalExactSearch
from beir.retrieval.evaluation import EvaluateRetrieval

# 1. Download and load the SciFact dataset
dataset = "scifact"
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/" + dataset + ".zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_path).load(split="test")

# 2. Load BioHiCL-large as a BEIR dense retriever
model_name = "LunaLan07/BioHiCL-large"
model = SentenceBERT(model_name)
retriever = DenseRetrievalExactSearch(model, batch_size=16)

# 3. Retrieve the top-k documents per query using cosine similarity
top_k = 10  # top 10 documents per query
results = retriever.search(corpus, queries, top_k=top_k, score_function="cos_sim")

# 4. Evaluate nDCG, MAP, recall, and precision at several cutoffs
k_values = [1, 3, 5, 10]
ndcg, _map, recall, precision = EvaluateRetrieval.evaluate(qrels, results, k_values=k_values)
```
## Citation
If you use this model, please cite:
```bibtex
@inproceedings{lan2026biohicl,
  title={BioHiCL: Hierarchical Multi-Label Contrastive Learning for Biomedical Retrieval with MeSH Labels},
  author={Lan, Mengfei and Zheng, Lecheng and Kilicoglu, Halil},
  booktitle={ACL 2026},
  year={2026}
}
```