---
dataset_info:
- config_name: pairs
  features:
  - name: query
    dtype: string
  - name: document
    dtype: string
  - name: relevance
    dtype: float64
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 2565164850
    num_examples: 5571429
  - name: test
    num_bytes: 730814746
    num_examples: 1462128
  download_size: 1234904598
  dataset_size: 3295979596
- config_name: triplets
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  - name: negative
    dtype: string
  - name: margin
    dtype: float64
  - name: source
    dtype: string
  - name: metadata
    dtype: string
  splits:
  - name: train
    num_bytes: 265774223
    num_examples: 955038
  - name: test
    num_bytes: 99868063
    num_examples: 361622
  download_size: 5258297
  dataset_size: 365642286
configs:
- config_name: pairs
  data_files:
  - split: train
    path: pairs/train-*
  - split: test
    path: pairs/test-*
- config_name: triplets
  data_files:
  - split: train
    path: triplets/train*
  - split: test
    path: triplets/test*
license: apache-2.0
---

This product search dataset compiles multiple open-source product search datasets into a single collection that can be used for representation learning tasks.

### Sources
| Dataset | Repo ID | Source |
|-------------|---------|--------|
| Google | [Marqo/marqo-GS-10M](https://huggingface.co/datasets/Marqo/marqo-GS-10M) | Google Shopping |
| Amazon | [tasksource/esci](https://huggingface.co/datasets/tasksource/esci) | Amazon ESCI |
| Wayfair | [napsternxg/wands](https://huggingface.co/datasets/napsternxg/wands) | Wayfair |
| Home Depot | [bstds/home_depot](https://huggingface.co/datasets/bstds/home_depot) | Home Depot |
| Crowdflower | [napsternxg/kaggle_crowdflower_ecommerce_search_relevance](https://huggingface.co/datasets/napsternxg/kaggle_crowdflower_ecommerce_search_relevance) | Crowdflower |

### Schema

### Document

Because sources differ in which product attributes are available, the document text is built with a template that is applied to whatever product information each source provides.

```python
def format_document(**kwargs):
    """Build the document text from whichever product fields are available."""
    template = ""
    if kwargs.get("title"):
        template += f"**product title**: {kwargs.get('title')}\n"
    if kwargs.get("category"):
        # Normalize category separators: "A / B" -> "A > B"
        template += f"""**product category**: {kwargs.get('category').replace(" / ", " > ")}\n"""
    if kwargs.get("attributes"):
        template += "**product attributes**:\n"
        for k, v in kwargs.get("attributes").items():
            template += f" - **{k}**: {v}\n"
    if kwargs.get("description"):
        template += f"**product description**: {kwargs.get('description')}"
    return template
```

The dataset has two configurations, each with `train` and `test` splits:
 - `pairs`
 - `triplets`

### Pairs

 - `query`: the user query.
 - `document`: the product that was retrieved by the system.
 - `relevance`: the relevance of the `<query, document>` pair.

Each source has its own logic for sampling queries and documents and for assessing relevance.
Most sources are manually graded by a group of annotators; the exception is `Marqo/marqo-GS-10M`, which instead uses the top 100 products retrieved by the system. I recommend reading the individual sources for a deeper understanding of their methodology.

This format undergoes no filtering, and all `<query, document, relevance>` scores are kept as in the original sources.
They can be used directly for training sentence similarity tasks that expect `<sentence 1, sentence 2, score>` inputs.
Scores generally follow a 0-3 range, normalized across sources, but are not fully calibrated to each source's individual distribution.
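As a minimal sketch of that mapping (the field names follow the `pairs` schema above; the rescaling to [0, 1] is an illustrative choice, not part of the dataset), a pairs row can be turned into a `<sentence 1, sentence 2, score>` training tuple like so:

```python
def pair_to_example(row, max_relevance=3.0):
    """Map a `pairs` row to (sentence1, sentence2, score).

    Assumes relevance follows the 0-3 range described above; dividing by
    `max_relevance` to get a [0, 1] score is illustrative, not prescribed
    by the dataset.
    """
    return (row["query"], row["document"], row["relevance"] / max_relevance)

# Example with a made-up row:
row = {"query": "oak coffee table",
       "document": "**product title**: Oak Coffee Table",
       "relevance": 3.0}
print(pair_to_example(row))
# ('oak coffee table', '**product title**: Oak Coffee Table', 1.0)
```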


### Triplets

 - `anchor`: the query.
 - `positive`: a document relevant to the anchor.
 - `negative`: a document less relevant to the anchor than the positive.
 - `margin`: the score margin between the positive and the negative (see the individual sources for how it is computed).
 - `source`: the originating dataset.
 - `metadata`: additional information about the triplet, stored as a string.

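One common way to consume triplets that carry a margin is a margin-MSE-style objective, where the model's score gap between positive and negative is regressed toward the stored margin. The sketch below is an assumption about usage, not a loss prescribed by this dataset:

```python
def margin_mse(score_pos, score_neg, margin):
    """Margin-MSE style loss for one triplet: penalize deviation of the
    model's score gap (positive minus negative) from the stored margin.
    Illustrative formulation only; check each source's methodology."""
    gap = score_pos - score_neg
    return (gap - margin) ** 2

# Made-up model scores for one triplet:
print(margin_mse(score_pos=0.9, score_neg=0.2, margin=0.5))  # ≈ 0.04
```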
### Train

Per-source statistics for the `pairs` configuration:

| Dataset     | Queries | Documents | Pairs    |
|-------------|---------|-----------|----------|
| Google      | 77,288  | 2,202,907 | 3,926,764|
| Amazon      | 99,408  | 985,476   | 1,420,372|
| Wayfair     | 477     | 38,854    | 140,068  |
| Home Depot  | 11,795  | 54,360    | 74,067   |
| Crowdflower | 261     | 9,912     | 10,158   |

### Test

| Dataset     | Queries | Documents | Pairs    |
|-------------|---------|-----------|----------|
| Google      | 19,564  | 748,386   | 981,204  |
| Amazon      | 30,947  | 364,004   | 434,234  |
| Wayfair     | 477     | 25,317    | 46,690   |