SPLADE for E-Commerce Search
A SPLADE sparse encoder fine-tuned on Amazon ESCI for e-commerce product search. It achieves a +28% relative improvement in nDCG@10 over BM25 on in-domain product retrieval.
Benchmark Results
Amazon ESCI (In-Domain)
| Model | nDCG@10 | vs BM25 |
|---|---|---|
| BM25 (baseline) | 0.305 | — |
| SPLADE (off-the-shelf) | 0.326 | +7% |
| This model | 0.389 | +28% |
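The "vs BM25" figures are relative nDCG@10 gains over the BM25 baseline; a quick sanity check of the arithmetic:

```python
def relative_gain(model_ndcg: float, baseline_ndcg: float) -> int:
    """Relative improvement over the baseline, as a rounded percentage."""
    return round((model_ndcg - baseline_ndcg) / baseline_ndcg * 100)

bm25 = 0.305
print(relative_gain(0.326, bm25))  # off-the-shelf SPLADE -> 7
print(relative_gain(0.389, bm25))  # this model -> 28
```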
Cross-Domain Generalization
| Dataset | nDCG@10 | vs BM25 |
|---|---|---|
| WANDS (Wayfair) | 0.355 | +8% |
| Home Depot | 0.384 | +10% |
Model Description
This is a SPLADE Sparse Encoder model fine-tuned from distilbert/distilbert-base-uncased on the esci dataset using the sentence-transformers library. It maps sentences and paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
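Since only a small fraction of the 30522 dimensions are non-zero, embeddings can be stored as index-to-weight maps and scored with a sparse dot product. A minimal sketch with toy weights (the indices and values below are illustrative, not real model outputs):

```python
# Toy sparse vectors keyed by vocabulary index (illustrative weights only).
query = {1996: 0.8, 3424: 1.2, 2341: 0.5}
doc   = {1996: 0.6, 3424: 0.9, 7890: 1.1, 42: 0.3}

def sparse_dot(a: dict, b: dict) -> float:
    """Dot product over the (small) set of shared non-zero dimensions."""
    if len(a) > len(b):
        a, b = b, a  # iterate over the smaller vector
    return sum(w * b[i] for i, w in a.items() if i in b)

score = sparse_dot(query, doc)  # 0.8*0.6 + 1.2*0.9 = 1.56
```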
Model Details
Model Description
- Model Type: SPLADE Sparse Encoder
- Base model: distilbert/distilbert-base-uncased
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 30522 dimensions
- Similarity Function: Dot Product
- Training Dataset: esci
- Languages: en, ja, es
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Sparse Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sparse Encoders on Hugging Face
Full Model Architecture
```
SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'DistilBertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
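The SpladePooling step above turns the MLM head's per-token logits into one vocabulary-sized sparse vector via a log-saturated ReLU followed by max pooling over tokens. A pure-Python sketch of that step (toy logits, illustrative only):

```python
import math

def splade_pooling(mlm_logits):
    """Collapse per-token MLM logits (seq_len x vocab_size) into a single
    vocab_size vector via log(1 + relu(x)) then max over tokens, mirroring
    the SpladePooling module with pooling_strategy='max'."""
    vocab_size = len(mlm_logits[0])
    return [max(math.log1p(max(token_logits[j], 0.0)) for token_logits in mlm_logits)
            for j in range(vocab_size)]

# Tiny example: 3 tokens over a 5-word vocabulary.
logits = [[ 2.0, -1.0, 0.0, 0.5, -3.0],
          [ 0.0,  1.0, 0.0, 4.0, -1.0],
          [-2.0,  0.5, 0.0, 1.0,  0.0]]
vec = splade_pooling(logits)  # negative logits contribute nothing
```

The ReLU keeps the vector non-negative and the log dampens very large logits, which is what makes the output behave like sparse term weights.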
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("thierrydamiba/splade-ecommerce-esci")

# Run inference
sentences = [
    'magnetic screen door',
    '[Flux Phenom] | Flux Phenom Magnetic Screen Door - Retractable Mesh with Self Sealing Magnets - Keeps Nature Out | The Flux Phenom magnetic screen door is made for any household. The instant screen door installs in just a few minutes in any doorway of your home. It keeps irritants out, lets fresh air in, and allow | 🔨Installs in an Instant: Our magnetic door screen comes with everything you need to install it quickly; black all metal thumbtacks, a large roll of hook and loop backing, plus video tutorial | 🚪Fits Doorways Up to 38x82 Inches: Door net works on fixed, sliding, metal or wood doors as long as they measure up to 38x82 inches. Important: Measure your door before ordering to ensure fit | ↔️ Opens and Closes Like Magic: Our retractable screen door features a middle seam lined with 26 magnets for walking through any doorway with ease',
    '[HP] | HP VH240a 23.8-Inch Full HD 1080p IPS LED Monitor with Built-In Speakers and VESA Mounting, Rotating Portrait & Landscape, Tilt, and HDMI & VGA Ports (1KL30AA) - Black | RESOLUTION & PANEL — 23.8-inch Full HD monitor (1920 x 1080p at 60 Hz) with 16:9 aspect ratio and an anti-glare matte IPS LED-backlit panel (2 million pixels, 16.7 million colors) | RESPONSE TIME — 5ms with overdrive for a smooth picture that looks crisp and fluid without motion blur | BUILT-IN SPEAKERS — Integrated audio speakers provide great sound for your content (2 watts per channel)',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 38.9048,  40.5171,  21.8987],
#         [ 40.5171, 177.9286,  54.3905],
#         [ 21.8987,  54.3905, 183.6391]])
```
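At corpus scale, sparse embeddings are usually served from an inverted index so that scoring only touches the posting lists of the query's non-zero terms. A dict-based sketch with toy vectors and hypothetical document ids (not real model outputs):

```python
from collections import defaultdict

# Toy sparse document vectors: {vocab_index: weight}.
docs = {
    "screen_door": {10: 1.4, 22: 0.9, 31: 0.2},
    "hp_monitor":  {5: 1.1, 22: 0.3, 40: 0.8},
}

# Inverted index: vocab index -> [(doc_id, weight), ...]
index = defaultdict(list)
for doc_id, vec in docs.items():
    for term, weight in vec.items():
        index[term].append((doc_id, weight))

def search(query_vec: dict) -> list:
    """Score documents by sparse dot product, visiting only the posting
    lists of the query's non-zero terms."""
    scores = defaultdict(float)
    for term, q_weight in query_vec.items():
        for doc_id, d_weight in index[term]:
            scores[doc_id] += q_weight * d_weight
    return sorted(scores.items(), key=lambda kv: -kv[1])

results = search({10: 1.0, 22: 0.5})  # "screen_door" scores 1.4 + 0.45
```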
Training Details
Training Dataset
esci
- Dataset: esci at 8113b17
- Size: 100,000 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:

| | anchor | positive |
|---|---|---|
| type | string | string |
| details | min: 3 tokens, mean: 8.1 tokens, max: 34 tokens | min: 5 tokens, mean: 199.23 tokens, max: 383 tokens |

- Samples:
  - anchor: bathroom fan without light
    positive: [Panasonic] | Panasonic FV-20VQ3 WhisperCeiling 190 CFM Ceiling Mounted Fan | WhisperCeiling fans feature a totally enclosed condenser motor and a double-tapered, dolphin-shaped bladed blower wheel to quietly move air | Designed to give you continuous, trouble-free operation for many years thanks in part to its high-quality components and permanently lubricated motors which wear at a slower pace | Detachable adaptors, firmly secured duct ends, adjustable mounting brackets (up to 26-in), fan/motor units that detach easily from the housing and uncomplicated wiring all lend themselves to user-friendly installation
  - anchor: revent 80 cfm
    positive: [Homewerks] | Homewerks 7141-80 Bathroom Fan Integrated LED Light Ceiling Mount Exhaust Ventilation, 1.1 Sones, 80 CFM | OUTSTANDING PERFORMANCE: This Homewerk's bath fan ensures comfort in your home by quietly eliminating moisture and humidity in the bathroom. This exhaust fan is 1.1 sones at 80 CFM which means it’s able to manage spaces up to 80 square feet and is very quiet.. | BATH FANS HELPS REMOVE HARSH ODOR: When cleaning the bathroom or toilet, harsh chemicals are used and they can leave an obnoxious odor behind. Homewerk’s bathroom fans can help remove this odor with its powerful ventilation | BUILD QUALITY: Designed to be corrosion resistant with its galvanized steel construction featuring a modern style round shape and has an 4000K Cool White Light LED Light. AC motor.
  - anchor: revent 80 cfm
    positive: [Homewerks] | Homewerks 7140-80 Bathroom Fan Ceiling Mount Exhaust Ventilation, 1.5 Sones, 80 CFM, White | OUTSTANDING PERFORMANCE: This Homewerk's bath fan ensures comfort in your home by quietly eliminating moisture and humidity in the bathroom. This exhaust fan is 1. 5 sone at 110 CFM which means it’s able to manage spaces up to 110 square feet | BATH FANS HELPS REMOVE HARSH ODOR: When cleaning the bathroom or toilet, harsh chemicals are used and they can leave an obnoxious odor behind. Homewerk’s bathroom fans can help remove this odor with its powerful ventilation | BUILD QUALITY: Designed to be corrosion resistant with its galvanized steel construction featuring a grille modern style.
- Loss: SpladeLoss with these parameters:

```json
{
    "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score', gather_across_devices=False)",
    "document_regularizer_weight": 3e-05,
    "query_regularizer_weight": 5e-05
}
```
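The two regularizer weights above scale FLOPS-style sparsity penalties on document and query activations. A minimal sketch of the FLOPS regularizer, assuming the standard formulation (sum over vocabulary dimensions of the squared mean absolute activation across the batch):

```python
def flops_reg(batch):
    """FLOPS regularizer: sum_j (mean_i |w_ij|)^2 over vocab dimensions.
    Penalizing the squared *average* activation of each dimension drives
    rarely useful dimensions toward zero, sparsifying the index."""
    n = len(batch)
    vocab_size = len(batch[0])
    return sum((sum(abs(row[j]) for row in batch) / n) ** 2
               for j in range(vocab_size))

# Toy batch of 2 embeddings over a 3-word vocabulary.
batch = [[0.0, 2.0, 0.0],
         [0.0, 1.0, 3.0]]
reg = flops_reg(batch)  # 0^2 + 1.5^2 + 1.5^2 = 4.5
```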
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 32
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- fp16: True
- batch_sampler: no_duplicates
- router_mapping: {'anchor': 'query', 'positive': 'document'}
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- parallelism_config: None
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- project: huggingface
- trackio_space_id: trackio
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: no
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: True
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
- router_mapping: {'anchor': 'query', 'positive': 'document'}
- learning_rate_mapping: {}
Training Logs
| Epoch | Step | Training Loss |
|---|---|---|
| 0.032 | 100 | 335.7698 |
| 0.064 | 200 | 1.6791 |
| 0.096 | 300 | 0.5408 |
| 0.128 | 400 | 0.4655 |
| 0.16 | 500 | 0.458 |
| 0.192 | 600 | 0.4366 |
| 0.224 | 700 | 0.3779 |
| 0.256 | 800 | 0.371 |
| 0.288 | 900 | 0.3352 |
| 0.32 | 1000 | 0.3661 |
| 0.352 | 1100 | 0.3196 |
| 0.384 | 1200 | 0.3385 |
| 0.416 | 1300 | 0.2944 |
| 0.448 | 1400 | 0.3257 |
| 0.48 | 1500 | 0.293 |
| 0.512 | 1600 | 0.3034 |
| 0.544 | 1700 | 0.2971 |
| 0.576 | 1800 | 0.2905 |
| 0.608 | 1900 | 0.2819 |
| 0.64 | 2000 | 0.2598 |
| 0.672 | 2100 | 0.2804 |
| 0.704 | 2200 | 0.2585 |
| 0.736 | 2300 | 0.2527 |
| 0.768 | 2400 | 0.2643 |
| 0.8 | 2500 | 0.2649 |
| 0.832 | 2600 | 0.2685 |
| 0.864 | 2700 | 0.2821 |
| 0.896 | 2800 | 0.2465 |
| 0.928 | 2900 | 0.2426 |
| 0.96 | 3000 | 0.2658 |
| 0.992 | 3100 | 0.2381 |
Framework Versions
- Python: 3.11.10
- Sentence Transformers: 5.2.0
- Transformers: 4.57.3
- PyTorch: 2.9.1+cu128
- Accelerate: 1.12.0
- Datasets: 4.4.1
- Tokenizers: 0.22.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
SpladeLoss
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
SparseMultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
FlopsLoss
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}