---
library_name: transformers
license: cc-by-4.0
language:
- en
pipeline_tag: feature-extraction
architectures:
- BertModel
tags:
- embedding
- retrieval
---

Model Card for e5-small-unsupervised-bge-all-datasets-hf

This model is a BERT-based encoder fine-tuned for feature extraction in information retrieval. It is intended to generate passage embeddings that improve the recall or re-ranking stages of a retrieval system. It was introduced in the paper "Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval".

Model Details

  • Developed by: Junyu Luo et al.
  • Model type: BERT
  • Language(s) (NLP): English
  • License: CC-BY-4.0
  • Finetuned from model: E5 Small Unsupervised BGE All Datasets HF

Model Sources

[More Information Needed]

Uses

This model is designed to generate embeddings for passages in information retrieval systems. It can be used directly for passage retrieval or fine-tuned for specific tasks.

Direct Use

Generate passage embeddings for retrieval or re-ranking.
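Once passage embeddings have been computed, retrieval or re-ranking reduces to a nearest-neighbor search over similarity scores. The following sketch illustrates the idea with random placeholder vectors standing in for real query/passage embeddings (the dimension 384 is an assumption matching a small BERT/E5 hidden size):

```python
import torch
import torch.nn.functional as F

# Toy example: rank candidate passages against a query by cosine similarity.
# In practice these vectors come from the encoder; random tensors are used
# here only to keep the sketch self-contained.
torch.manual_seed(0)
query_emb = torch.randn(1, 384)      # one query embedding
passage_embs = torch.randn(5, 384)   # five candidate passage embeddings

scores = F.cosine_similarity(query_emb, passage_embs)   # shape (5,)
ranking = torch.argsort(scores, descending=True)        # best passage first
```

A real system would precompute `passage_embs` for the whole corpus (often with an approximate nearest-neighbor index) and encode only the query at search time.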

Downstream Use

This model can be fine-tuned for specific retrieval tasks or plugged into a larger information retrieval system to improve performance.
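Dense retrievers of this kind are commonly fine-tuned with a contrastive (InfoNCE) objective over in-batch negatives. The sketch below is a generic illustration of that recipe, not the paper's exact training procedure; all names, shapes, and the temperature value are hypothetical:

```python
import torch
import torch.nn.functional as F

def info_nce(q, p, temperature=0.05):
    """Contrastive loss with in-batch negatives: row i of q should match
    row i of p; every other row in the batch acts as a negative."""
    q = F.normalize(q, dim=-1)
    p = F.normalize(p, dim=-1)
    logits = q @ p.T / temperature          # (B, B) similarity matrix
    labels = torch.arange(q.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Placeholder query/positive-passage embeddings for an 8-example batch.
torch.manual_seed(0)
loss = info_nce(torch.randn(8, 384), torch.randn(8, 384))
```

In training, `q` and `p` would be the encoder outputs for queries and their labeled positive passages; the quality of the negatives (the off-diagonal entries) is exactly what hard-negative relabeling aims to improve.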

Out-of-Scope Use

This model is not intended for use in generating text or for any tasks other than feature extraction for information retrieval.

Bias, Risks, and Limitations

The model's performance is dependent on the quality of the training data. It may exhibit biases present in the original training data or the relabeled data used in fine-tuning.

How to Get Started with the Model

Refer to the original paper repository for the full pipeline. A minimal example of computing passage embeddings:

from transformers import AutoTokenizer, AutoModel
import torch

model_name = 'WhereIsAI/e5-small-unsupervised-bge-all-datasets-hf'
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True).to(device)

text = "This is a sample passage."
inputs = tokenizer([text], return_tensors="pt", max_length=512, truncation=True).to(device)
with torch.no_grad():
    output = model(**inputs)
embeddings = output.last_hidden_state[:, 0, :]  # [CLS] token embedding, shape (batch, hidden_size)

Training Details

Training Data

The model was fine-tuned using a semi-supervised approach on a mix of labeled and unlabeled data. See the paper for more details.

Training Procedure

[More Information Needed]

Preprocessing

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation

@misc{luo2024semievol,
    title={SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation},
    author={Junyu Luo and Xiao Luo and Xiusi Chen and Zhiping Xiao and Wei Ju and Ming Zhang},
    year={2024},
    eprint={2410.14745},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2410.14745},
}

Model Card Authors

Niels Drost (Hugging Face)