---
dataset_info:
  features:
  - name: question_id
    dtype: int64
  - name: question
    dtype: string
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: validation
    num_bytes: 7885383.303442969
    num_examples: 20000
  - name: train
    num_bytes: 3571579491.696557
    num_examples: 9058734
  download_size: 2330139568
  dataset_size: 3579464875
configs:
- config_name: default
  data_files:
  - split: validation
    path: data/validation-*
  - split: train
    path: data/train-*
---
## Dataset Description
This dataset is crafted for training a binary classification model to determine whether a given text passage answers a specific user query. Its primary purpose is to enhance our search engine by filtering out irrelevant passages, ensuring that users receive accurate and helpful responses to their questions.
## Background and Motivation
In our search engine, users submit queries and receive multiple passages as results. Not all retrieved passages effectively answer the user's question, leading to a suboptimal user experience. To address this, we need a model capable of assessing each passage's relevance to the query, allowing us to present only the most pertinent information.
## Source Dataset
The dataset is based on Microsoft's MS MARCO V2.1, accessed via Hugging Face Datasets. MS MARCO is a large-scale corpus designed for machine reading comprehension and question answering, containing real, anonymized user queries and corresponding passages from web documents.
## Dataset Construction
**Original Format:** Each sample in MS MARCO V2.1 consists of:
- A query (user's question).
- A set of 10 passages retrieved for that query.
- Labels indicating whether each passage was selected as an answer.
**Transformation Process:**
- Reshaping: We transformed the dataset to suit a binary classification task by iterating over each passage in every sample.
- Sample Creation: For each query-passage pair, we created a new sample with:
  - `question_id`: Unique identifier for the query.
  - `question`: The user's query.
  - `text`: The passage text.
  - `label`: Binary label (1 if the passage answers the question, 0 otherwise).
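As an illustrative sketch of the reshaping step (using a toy record; the field names `passages`, `passage_text`, and `is_selected` follow the layout of the public Hugging Face MS MARCO loader and may differ slightly from our internal copy):

```python
def reshape(sample):
    """Expand one MS MARCO-style record into binary query-passage pairs."""
    pairs = []
    for text, selected in zip(sample["passages"]["passage_text"],
                              sample["passages"]["is_selected"]):
        pairs.append({
            "question_id": sample["query_id"],
            "question": sample["query"],
            "text": text,
            "label": 1 if selected else 0,
        })
    return pairs

# Toy record mimicking the original nested structure
sample = {
    "query_id": 42,
    "query": "what is the boiling point of water",
    "passages": {
        "passage_text": ["Water boils at 100 °C at sea level.",
                         "The Nile is the longest river in Africa."],
        "is_selected": [1, 0],
    },
}
rows = reshape(sample)  # two flat query-passage samples, labels 1 and 0
```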
**Dataset Splitting:**
- Combined the original train and validation splits, excluding the test split.
- Shuffled the combined dataset to ensure randomness.
- Validation Set Size: determined with a standard sample-size calculation for a proportion, to ensure reliable validation metrics:
  - Accuracy assumption: 90%
  - Margin of error: 0.5%
  - Confidence level: 98% (z-score of 2.326)
  - Calculated validation size: approximately 20,000 samples
- Split the dataset into:
  - Training Set: the remaining samples after allocating 20,000 to validation.
  - Validation Set: 20,000 samples for model evaluation.
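The validation size above follows the standard sample-size formula for estimating a proportion, n = z² · p · (1 − p) / E²; a quick check (the `combined` dataset in the commented sketch is a hypothetical name for the merged train+validation `Dataset`):

```python
import math

# Sample-size formula for estimating a proportion: n = z^2 * p * (1 - p) / E^2
p = 0.90   # assumed model accuracy
E = 0.005  # margin of error
z = 2.326  # z-score for a 98% confidence level

n = math.ceil(z**2 * p * (1 - p) / E**2)
print(n)  # 19477, rounded up to 20,000 in practice

# Shuffle-and-split sketch with the `datasets` library
# (`combined` is the hypothetical merged dataset):
#
#   combined = combined.shuffle(seed=42)
#   validation = combined.select(range(20_000))
#   train = combined.select(range(20_000, len(combined)))
```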
## Dataset Features
- `question_id` (int64): Unique identifier for each query.
- `question` (string): The user's query.
- `text` (string): The text of the passage.
- `label` (int64): Binary indicator (1 if the passage answers the question, 0 otherwise).
## Intended Use
- Primary Task: Train a binary classification model to predict whether a passage answers a given query.
- Application: Integrate the model into our search engine pipeline to filter out non-relevant passages, improving the overall quality and relevance of search results.
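As a minimal, self-contained illustration of the filtering idea (a TF-IDF plus logistic-regression stand-in trained on toy pairs, not the production model; the `[SEP]` joining convention is an assumption):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy (query, passage, label) triples standing in for the real training data
train_pairs = [
    ("what is the boiling point of water", "Water boils at 100 C at sea level.", 1),
    ("what is the boiling point of water", "The Nile is a river in Africa.", 0),
    ("capital of france", "Paris is the capital of France.", 1),
    ("capital of france", "Mount Everest is the tallest mountain.", 0),
]
X = [f"{q} [SEP] {t}" for q, t, _ in train_pairs]
y = [label for _, _, label in train_pairs]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(X, y)

def filter_passages(query, passages, threshold=0.5):
    """Keep only passages the classifier scores as answering the query."""
    probs = clf.predict_proba([f"{query} [SEP] {p}" for p in passages])[:, 1]
    return [p for p, pr in zip(passages, probs) if pr >= threshold]
```

In the search pipeline, `filter_passages` would sit between retrieval and result rendering, dropping passages below the relevance threshold.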
## Considerations
- Class Balance: The dataset may be imbalanced due to the nature of the original labels. It's important to consider this during model training (e.g., using class weights or resampling techniques).
- Data Quality: Passages are sourced from real-world search data and may contain noise or irrelevant information typical of web content.
- Licensing: Ensure compliance with MS MARCO's licensing terms when using this dataset for development or distribution.
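For the class-balance point, one common remedy is balanced class weights; a sketch with scikit-learn (the 1-in-10 positive rate is illustrative, mirroring the 10-passages-per-query structure rather than measured label statistics):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Illustrative labels: assume roughly 1 relevant passage per 10 retrieved
labels = np.array([1] + [0] * 9)

weights = compute_class_weight(class_weight="balanced",
                               classes=np.array([0, 1]), y=labels)
weight_map = dict(zip([0, 1], weights))
# "balanced" weight = n_samples / (n_classes * class_count)
# -> {0: 10 / (2 * 9) ~ 0.556, 1: 10 / (2 * 1) = 5.0}
```

The resulting `weight_map` can be passed to most classifiers (e.g. `class_weight=` in scikit-learn, or converted to per-sample weights for a loss function).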
## Conclusion
This dataset aligns closely with our goal of improving answerability assessment in search results. By leveraging real user queries and associated passages, the trained model will be well-suited to judge the relevance of passages retrieved by our search engine, ultimately enhancing user satisfaction by providing accurate and relevant answers.
## Additional Notes
- Dataset Location: Stored internally within our dataset repository for easy access and version control.
- Reproducibility: The notebook used to create this dataset is available and contains all steps for dataset generation, allowing for updates or modifications as needed.
- Future Work: Consider exploring sequence classification tasks or more complex models to further improve answerability predictions.