Update README.md
README.md CHANGED

@@ -26,3 +26,70 @@ configs:
  - split: train
    path: data/train-*
---

**Dataset Description**

This dataset is crafted for training a binary classification model to determine whether a given text passage answers a specific user query. Its primary purpose is to enhance our search engine by filtering out irrelevant passages, ensuring that users receive accurate and helpful responses to their questions.

**Background and Motivation**

In our search engine, users submit queries and receive multiple passages as results. Not all retrieved passages effectively answer the user's question, leading to a suboptimal user experience. To address this, we need a model capable of assessing each passage's relevance to the query, allowing us to present only the most pertinent information.

**Source Dataset**

The dataset is based on the [MS MARCO V2.1](https://github.com/zhouyonglong/MSMARCOV2) dataset from Microsoft, accessed via [Hugging Face Datasets](https://huggingface.co/datasets/microsoft/ms_marco) (version 2.1). MS MARCO is a large-scale corpus designed for machine reading comprehension and question answering tasks, containing real anonymized user queries and corresponding passages from web documents.

**Dataset Construction**

- **Original Format**: Each sample in MS MARCO V2.1 consists of:
  - A **query** (the user's question).
  - A set of **10 passages** retrieved for that query.
  - **Labels** indicating whether each passage was selected as an answer.

- **Transformation Process**:
  - **Reshaping**: We transformed the dataset to suit a binary classification task by iterating over each passage in every sample.
  - **Sample Creation**: For each query-passage pair, we created a new sample with:
    - `question_id`: Unique identifier for the query.
    - `question`: The user's query.
    - `text`: The passage text.
    - `label`: Binary label (`1` if the passage answers the question, `0` otherwise).

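The reshaping step above can be sketched as follows. This is an illustrative sketch, not the exact notebook code: the field names (`query_id`, `query`, `passages["passage_text"]`, `passages["is_selected"]`) follow the Hugging Face `ms_marco` v2.1 schema and should be verified against the loaded dataset.

```python
def to_pairs(batch):
    """Flatten (query, N passages) samples into binary query-passage pairs."""
    out = {"question_id": [], "question": [], "text": [], "label": []}
    for qid, query, passages in zip(
        batch["query_id"], batch["query"], batch["passages"]
    ):
        for text, selected in zip(passages["passage_text"], passages["is_selected"]):
            out["question_id"].append(str(qid))
            out["question"].append(query)
            out["text"].append(text)
            out["label"].append(int(selected))
    return out

# Tiny illustrative batch (two passages instead of ten, values made up):
batch = {
    "query_id": [1185869],
    "query": ["what is rba"],
    "passages": [{
        "passage_text": ["The RBA is Australia's central bank ...",
                         "Results-based accountability ..."],
        "is_selected": [1, 0],
    }],
}
pairs = to_pairs(batch)

# Against the real dataset, this would be applied along the lines of:
#   load_dataset("microsoft/ms_marco", "v2.1")["train"].map(
#       to_pairs, batched=True, remove_columns=...)
```
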
- **Dataset Splitting**:
  - Combined the original **train** and **validation** splits, excluding the **test** split.
  - Shuffled the combined dataset to ensure randomness.
  - **Validation Set Size**: Determined by a sample-size calculation to ensure reliable validation metrics:
    - **Accuracy Assumption**: 90%
    - **Margin of Error**: 0.5%
    - **Confidence Level**: 98% (z-score of 2.326)
    - **Calculated Validation Size**: Approximately 20,000 samples
  - Split the dataset into:
    - **Training Set**: Remaining samples after allocating 20,000 to validation.
    - **Validation Set**: 20,000 samples for model evaluation.

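As a sanity check, the validation size follows the standard sample-size formula for estimating a proportion, n = z² · p · (1 − p) / e², with the values listed above:

```python
# Sample-size calculation for the validation split.
z = 2.326   # z-score for 98% confidence
p = 0.90    # assumed model accuracy
e = 0.005   # desired margin of error

n = z ** 2 * p * (1 - p) / e ** 2
print(round(n))  # 19477, rounded up to the 20,000 used here

# The split itself (sketch, assuming Hugging Face `datasets`):
#   combined = concatenate_datasets([train, validation]).shuffle(seed=42)
#   splits = combined.train_test_split(test_size=20_000)
```
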
**Dataset Features**

- `question_id` (string): Unique identifier for each query.
- `question` (string): The user's query.
- `text` (string): The text of the passage.
- `label` (int): Binary indicator (`1` if the passage answers the question, `0` otherwise).

**Intended Use**

- **Primary Task**: Train a binary classification model to predict whether a passage answers a given query.
- **Application**: Integrate the model into our search engine pipeline to filter out non-relevant passages, improving the overall quality and relevance of search results.

**Considerations**

- **Class Balance**: The labels are likely imbalanced: of the ten passages retrieved per query, typically only one or two are marked as selected, so negative samples dominate. Account for this during training (e.g., with class weights or resampling techniques).
- **Data Quality**: Passages are sourced from real-world search data and may contain noise or irrelevant information typical of web content.
- **Licensing**: Ensure compliance with MS MARCO's licensing terms when using this dataset for development or distribution.

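One simple way to handle the imbalance noted above is inverse-frequency class weighting. A minimal sketch, where the 9:1 label ratio is only illustrative of the roughly one-selected-in-ten structure:

```python
from collections import Counter

# Illustrative labels: roughly one selected passage per ten retrieved.
labels = [0] * 9 + [1]

counts = Counter(labels)
# Inverse-frequency weights: weight_c = total / (num_classes * count_c),
# so the rarer positive class gets a proportionally larger weight.
weights = {c: len(labels) / (len(counts) * n) for c, n in counts.items()}
# These weights can then be passed to a weighted loss during training.
```
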
**Conclusion**

This dataset aligns closely with our goal of improving answerability assessment in search results. By leveraging real user queries and associated passages, the trained model will be well suited to judge the relevance of passages retrieved by our search engine, ultimately enhancing user satisfaction by providing accurate and relevant answers.

**Additional Notes**

- **Dataset Location**: Stored internally within our dataset repository for easy access and version control.
- **Reproducibility**: The notebook used to create this dataset is available and contains all steps for dataset generation, allowing for updates or modifications as needed.
- **Future Work**: Consider exploring sequence classification tasks or more complex models to further improve answerability predictions.