Update README.md
    path: triplets/train-*
  - split: test
    path: triplets/test-*
license: apache-2.0
---
This product search dataset compiles multiple open-source product search datasets that can be used for representation learning tasks.
| Home Depot | bstds/home_depot | Home Depot |
| Crowdflower | napsternxg/kaggle_crowdflower_ecommerce_search_relevance | Crowdflower |
### Schema

### Document

To standardize attributes across sources with varying availability, we use a template that is filled in from whichever product fields are present.
```python
# Build the document text from whichever fields a source provides.
if kwargs.get("title"):
    template = f"""**product title**: {kwargs.get('title')}\n"""
else:
    template = ""
if kwargs.get("category"):
    # Category levels are separated by " > ".
    template += f"""**product category**: {kwargs.get('category').replace(" / ", " > ")}\n"""
if kwargs.get("attributes"):
    # Attributes are rendered as a bulleted list.
    template += """**product attributes**:\n"""
    for k, v in kwargs.get("attributes").items():
        template += f""" - **{k}**: {v}\n"""

if kwargs.get("description"):
    template += f"""**product description**: {kwargs.get('description')}"""
```
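For reference, here is the same template logic wrapped in a callable with a worked example; the function name `render_document` and the sample product values are illustrative, not part of the dataset:

```python
def render_document(**kwargs) -> str:
    """Hypothetical wrapper around the template logic above."""
    if kwargs.get("title"):
        template = f"**product title**: {kwargs.get('title')}\n"
    else:
        template = ""
    if kwargs.get("category"):
        # Category levels are separated by " > ".
        template += f"""**product category**: {kwargs.get('category').replace(" / ", " > ")}\n"""
    if kwargs.get("attributes"):
        # Attributes become a bulleted list.
        template += "**product attributes**:\n"
        for k, v in kwargs.get("attributes").items():
            template += f" - **{k}**: {v}\n"
    if kwargs.get("description"):
        template += f"**product description**: {kwargs.get('description')}"
    return template


doc = render_document(
    title="Cordless Drill 18V",
    category="Tools / Power Tools",
    attributes={"voltage": "18V"},
)
print(doc)
```

For the sample above, the `" / "` separator in the category is rewritten to `" > "` and each attribute becomes one bullet line; fields that are absent are simply skipped.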
The dataset has two subsets:

- `Pairs`
- `Triplets`
### Pairs

- `Query`: the user query.
- `Document`: the product retrieved by the system.
- `Relevance`: the relevance grade of the `<query, document>` pair.
Each source has its own logic for sampling queries, documents, and relevance assessments.
Most sources are manually graded by a group of annotators, except for `Marqo/marqo-GS-10M`, whose pairs are the top 100 products retrieved by the system. I recommend reading the individual sources for a deeper understanding of their methodology.
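For illustration, a `Pairs` record with the three fields above might look like the following; the values and the 0-to-1 relevance scale are invented, since each source uses its own grading scheme:

```python
# A hypothetical `Pairs` record (values invented for illustration).
pair = {
    "query": "cordless drill",
    "document": "**product title**: Cordless Drill 18V\n**product category**: Tools > Power Tools\n",
    "relevance": 1.0,
}

# One common use: threshold relevance to select positive pairs for training.
positives = [p for p in [pair] if p["relevance"] >= 0.5]
```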
### Triplets
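The triplets subset pairs each query with a relevant and a non-relevant document. As a minimal sketch (field names, relevance scale, and sample values are assumed for illustration, not taken from the dataset), graded pairs can be mined into triplets like this:

```python
from itertools import product


def mine_triplets(pairs, threshold=0.5):
    """Cross every relevant document with every non-relevant one, per query."""
    triplets = []
    for query in {p["query"] for p in pairs}:
        graded = [p for p in pairs if p["query"] == query]
        positives = [p["document"] for p in graded if p["relevance"] >= threshold]
        negatives = [p["document"] for p in graded if p["relevance"] < threshold]
        for pos, neg in product(positives, negatives):
            triplets.append({"query": query, "positive": pos, "negative": neg})
    return triplets


# Invented graded pairs: one query with a positive and a negative document,
# and one query with only a positive (which therefore yields no triplet).
pairs = [
    {"query": "cordless drill", "document": "doc_a", "relevance": 1.0},
    {"query": "cordless drill", "document": "doc_b", "relevance": 0.0},
    {"query": "garden hose", "document": "doc_c", "relevance": 1.0},
]
triplets = mine_triplets(pairs)
```

A query without at least one document on each side of the threshold contributes no triplets, which is why triplet counts are typically much smaller than pair counts.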
### Train

| Dataset | Queries | Documents | Pairs |
| --- | --- | --- | --- |