---
dataset_info:
  - config_name: pairs
    features:
      - name: query
        dtype: string
      - name: document
        dtype: string
      - name: relevance
        dtype: float64
      - name: source
        dtype: string
    splits:
      - name: train
        num_bytes: 2565164850
        num_examples: 5571429
      - name: test
        num_bytes: 730814746
        num_examples: 1462128
    download_size: 1234904598
    dataset_size: 3295979596
  - config_name: triplets
    features:
      - name: anchor
        dtype: string
      - name: positive
        dtype: string
      - name: negative
        dtype: string
      - name: margin
        dtype: float64
      - name: source
        dtype: string
      - name: metadata
        dtype: string
    splits:
      - name: train
        num_bytes: 26826437400
        num_examples: 28941558
      - name: test
        num_bytes: 26826437400
        num_examples: 6722877
    download_size: 583323916
    dataset_size: 2399240463
configs:
  - config_name: pairs
    data_files:
      - split: train
        path: pairs/train-*
      - split: test
        path: pairs/test-*
  - config_name: triplets
    data_files:
      - split: train
        path: triplets/train*
      - split: test
        path: triplets/test*
license: apache-2.0
---

This product search dataset compiles multiple open-source product search datasets and can be used for representation learning tasks.

## Sources

| Dataset | Repo ID | Source |
| --- | --- | --- |
| Google | `Marqo/marqo-GS-10M` | Google Shopping |
| Amazon | `tasksource/esci` | Amazon ESCI |
| Wayfair | `napsternxg/wands` | Wayfair |
| Home Depot | `bstds/home_depot` | Home Depot |
| Crowdflower | `napsternxg/kaggle_crowdflower_ecommerce_search_relevance` | Crowdflower |

## Schema

### Document

Because sources differ in which product attributes are available, we standardize documents with a template that is filled in from whatever product information exists:

```python
def format_document(**kwargs) -> str:
    """Render the available product fields into a single document string."""
    template = ""
    if kwargs.get("title"):
        template = f"**product title**: {kwargs.get('title')}\n"
    if kwargs.get("category"):
        template += f"**product category**: {kwargs.get('category').replace(' / ', ' > ')}\n"
    if kwargs.get("attributes"):
        template += "**product attributes**:\n"
        for k, v in kwargs.get("attributes").items():
            template += f" - **{k}**: {v}\n"
    if kwargs.get("description"):
        template += f"**product description**: {kwargs.get('description')}"
    return template
```

The dataset has two configurations:

- Pairs
- Triplets

## Pairs

- **Query**: the user query.
- **Document**: the product that was retrieved by the system.
- **Relevance**: the relevance grade of the <query, document> pair.
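For illustration, a single pairs record has the following shape. The field values below are invented; only the schema (`query`, `document`, `relevance`, `source`) comes from the dataset:

```python
# Hypothetical example of one record in the pairs configuration.
# Values are invented for illustration; only the field names and
# types match the dataset schema.
pair = {
    "query": "cordless drill",
    "document": "**product title**: 20V Cordless Drill/Driver Kit\n",
    "relevance": 3.0,  # graded relevance, roughly on a 0-3 scale
    "source": "home_depot",
}

assert set(pair) == {"query", "document", "relevance", "source"}
```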

Each source has its own logic for sampling queries, documents, and relevance assessments. Most sources are manually graded by a group of annotators, except for `Marqo/marqo-GS-10M`, which consists of the top 100 products retrieved by the system. We recommend reading the individual sources for a deeper understanding of their methodology.

This configuration undergoes no filtering: all <query, document, relevance> records are kept as they appear in the original sources. They can be used directly for sentence-similarity training setups that take <sentence 1, sentence 2, score>. Scores generally follow a 0-3 range, normalized across sources, but are not fully calibrated to each source's individual distribution.
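Many similarity-style training objectives expect scores in [0, 1], so the 0-3 grades need rescaling first. A minimal sketch, assuming a simple linear divide-by-3 scheme (this normalization is an illustrative choice, not part of the dataset):

```python
def normalize_relevance(score: float, max_grade: float = 3.0) -> float:
    """Map a 0-3 relevance grade onto [0, 1] for similarity-style losses.

    The linear rescaling is an assumption for illustration; the dataset
    card only states that grades roughly follow a 0-3 range.
    """
    # Clamp first, in case a source falls slightly outside the range.
    return min(max(score, 0.0), max_grade) / max_grade

# Rescale a few (query, document, relevance) rows with invented values.
rows = [
    ("cordless drill", "20V Cordless Drill/Driver Kit", 3.0),
    ("cordless drill", "Corded Hammer Drill", 1.0),
]
scaled = [(q, d, normalize_relevance(r)) for q, d, r in rows]
```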

## Triplets

### Train

| Dataset | Queries | Documents | Pairs |
| --- | ---: | ---: | ---: |
| Google | 77,288 | 2,202,907 | 3,926,764 |
| Amazon | 99,408 | 985,476 | 1,420,372 |
| Wayfair | 477 | 38,854 | 140,068 |
| Home Depot | 11,795 | 54,360 | 74,067 |
| Crowdflower | 261 | 9,912 | 10,158 |

### Test

| Dataset | Queries | Documents | Pairs |
| --- | ---: | ---: | ---: |
| Google | 19,564 | 748,386 | 981,204 |
| Amazon | 30,947 | 364,004 | 434,234 |
| Wayfair | 477 | 25,317 | 46,690 |
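The card does not spell out how the <anchor, positive, negative, margin> triplets are derived. One common recipe is to pair, for each query, a higher-graded document with a lower-graded one and use the grade difference as the margin; the sketch below implements that generic recipe and is only an assumption about how such triplets can be produced, not necessarily the procedure used here:

```python
from itertools import combinations


def triplets_from_graded_pairs(records):
    """Build <anchor, positive, negative, margin> triplets from graded pairs.

    `records` is an iterable of (query, document, relevance) tuples. For each
    query, every pair of documents with unequal grades yields one triplet
    whose margin is the grade difference. This mirrors a common mining
    recipe; it is an illustrative assumption, not the dataset's method.
    """
    by_query = {}
    for query, document, relevance in records:
        by_query.setdefault(query, []).append((document, relevance))

    triplets = []
    for query, docs in by_query.items():
        for (doc_a, rel_a), (doc_b, rel_b) in combinations(docs, 2):
            if rel_a == rel_b:
                continue  # no preference signal between equally graded docs
            pos, neg = (doc_a, doc_b) if rel_a > rel_b else (doc_b, doc_a)
            triplets.append({
                "anchor": query,
                "positive": pos,
                "negative": neg,
                "margin": abs(rel_a - rel_b),
            })
    return triplets
```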