---
language:
- en
- de
license: cc-by-4.0
task_categories:
- text-classification
- other
tags:
- entity-resolution
- record-linkage
- cross-system-matching
- enterprise-data
- benchmark
- rag
- information-extraction
- multilingual
- neurips-2026
pretty_name: CrossER
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/splits/train.json
  - split: validation
    path: data/splits/val.json
  - split: test
    path: data/splits/test.json
---

# CrossER: A Benchmark for Context-Dependent Cross-System Entity Resolution

[License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
[NeurIPS 2026 Evaluations & Datasets Track](https://neurips.cc/Conferences/2026/CallForEvaluationsDatasets)

**CrossER** is a benchmark for context-dependent cross-system entity resolution where surface features are deliberately misleading. Match pairs average only **0.29 string similarity** (names look unrelated), while non-match pairs average **0.94 similarity** (names look identical).

In real enterprises, matching `Product 4418` to `Maltodextrin DE20 Grade A` requires consulting migration runbooks, classification guides, and Slack threads, not string similarity. CrossER measures this "context gap" across three evaluation modes.
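
To see how misleading surface form is here, a minimal sketch using Python's standard-library `difflib` on the example above (the similarity measure is illustrative; the 0.29 / 0.94 averages quoted above may be computed with a different metric, and the second pair below is purely hypothetical):

```python
from difflib import SequenceMatcher

def surface_similarity(a: str, b: str) -> float:
    # Plain character-level string similarity, case-insensitive.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# The true match from the example above: the names share almost no surface form.
print(surface_similarity("Product 4418", "Maltodextrin DE20 Grade A"))

# A hypothetical near-duplicate: very high surface similarity, yet it could still be a non-match.
print(surface_similarity("Maltodextrin DE20 Grade A", "Maltodextrin DE20 Grade B"))
```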

## Dataset Summary

| | Metric | Value | |
| |--------|-------| |
| | Total Entities | 688 | |
| | Total Pairs | 1,800 | |
| | Match / No-Match / Ambiguous | 800 / 800 / 200 | |
| | Source Systems | 5 | |
| | Entity Types | 4 | |
| | Languages | English, German | |
| | Signal Documents | 8 | |
| | Noise Documents | 110 | |
| | Oracle Context Records | 875 | |

## Headline Results

| | Method | CrossER-Easy | CrossER-Full | CrossER-Hard | |
| |--------|-------------|-------------|-------------| |
| | String Matching | 0.741 | 0.363 | 0.000 | |
| | Fuzzy Matching | 0.771 | 0.455 | 0.000 | |
| | Embedding Matching | 0.964 | 0.559 | 0.000 | |
| | Attribute Matching | **1.000** | 0.729 | 0.000 | |
| | SBERT (multilingual) | 0.843 | 0.604 | 0.222 | |
| | LLM Zero-Shot | -- | 0.090 | 0.000 | |
| | LLM + RAG (BM25) | 0.848 | 0.632 | 0.200 | |
| | LLM + Oracle | **1.000** | **1.000** | **1.000** | |

No-context methods score **0.00 F1** on hard pairs. Oracle context closes the gap completely. RAG bridges it only partially; retrieval quality is the bottleneck.
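
For reference, a no-context baseline can be as simple as thresholding surface similarity over the test pairs and writing predictions in the submission format described under "Prediction Format" below. This is only a sketch, not the benchmark's official baseline code: the field names `left_name` and `right_name` and the 0.8 threshold are assumptions for illustration; check `data/pairs.json` for the actual schema.

```python
import json
from difflib import SequenceMatcher

# ASSUMPTION: each test pair carries "pair_id" plus entity-name fields; the names
# "left_name" / "right_name" used here are placeholders. Inspect data/splits/test.json.
with open("data/splits/test.json", encoding="utf-8") as f:
    test_pairs = json.load(f)

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

predictions = []
for pair in test_pairs:
    score = similarity(pair["left_name"], pair["right_name"])
    predictions.append({
        "pair_id": pair["pair_id"],
        "predicted_label": "match" if score >= 0.8 else "no_match",  # arbitrary threshold
    })

with open("predictions.json", "w", encoding="utf-8") as f:
    json.dump(predictions, f, indent=2)
```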

## Evaluation Modes

| | Mode | Description | |
| |------|-------------| |
| | **No Context** | Entity pairs only — what's possible from attributes alone | |
| | **Raw Context** | 118 enterprise documents (8 signal + 110 noise) — realistic RAG | |
| | **Oracle Context** | 875 structured migration records — upper bound | |
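
In the Raw Context mode, the LLM + RAG (BM25) baseline in the results table retrieves from the 118 raw documents. Below is a minimal retrieval sketch; it assumes the documents are plain-text files under `data/context/raw/` (see the layout under "Dataset Structure") and uses the third-party `rank_bm25` package, which is not bundled with the dataset. The query string is illustrative.

```python
from pathlib import Path
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# ASSUMPTION: signal and noise documents are readable text files under data/context/raw/.
doc_paths = [p for p in sorted(Path("data/context/raw").rglob("*")) if p.is_file()]
docs = [p.read_text(encoding="utf-8") for p in doc_paths]

# Index all 118 documents with BM25 over a naive whitespace tokenization.
bm25 = BM25Okapi([d.lower().split() for d in docs])

# Retrieve context for one entity pair; the query text here is just an example.
query = "Product 4418 Maltodextrin DE20 Grade A".lower().split()
for doc in bm25.get_top_n(query, docs, n=3):
    print(doc[:200])  # preview of each retrieved document
```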

## Named Subsets

| | Subset | Pairs | Description | |
| |--------|-------|-------------| |
| | **CrossER-Easy** | 257 | Easy matches + obvious negatives; F1 ceiling = 1.000 | |
| | **CrossER-Medium** | 262 | Medium-difficulty pairs; F1 ceiling = 0.776 | |
| | **CrossER-Hard** | 203 | Hard matches + adversarial negatives + ambiguous; F1 ceiling = 0.000 (no-context) | |
| | **CrossER-Full** | 722 | All test pairs | |

## Source Systems

| | System | Role | Naming Style | |
| |--------|------|-------------| |
| | SAP_TC2 | Primary ERP (NA HQ) | Formal English | |
| | SAP_CFIN | Financial consolidation | Internal codes / abbreviations | |
| | SAP_APAC | APAC regional ERP | Abbreviated with region prefix | |
| | LEGACY_ERP | Decommissioned (2019) | Cryptic category codes | |
| | SHAREPOINT | Tax/compliance reference | Authoritative long names | |

## Dataset Structure

```
data/
├── entities.json          # 688 entities across 5 systems
├── pairs.json             # 1,800 pairs with difficulty tiers
├── splits/                # train (40%) / val (20%) / test (40%)
├── subsets/               # CrossER-Easy, -Medium, -Hard, -Full
└── context/
    ├── raw/documents/     # 8 signal documents
    ├── raw/noise/         # 110 noise documents
    └── structured/        # oracle_context.json (875 records)
```

## Quick Start

```python
from datasets import load_dataset

# Load train/val/test splits
ds = load_dataset("smurthy5/CrossER")

# Load a named subset
import json, requests
easy = json.loads(requests.get(
    "https://huggingface.co/datasets/smurthy5/CrossER/resolve/main/data/subsets/crosser_easy.json"
).text)
```
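
To avoid guessing at field names, you can inspect the schema and a sample record of the loaded splits directly (a quick check, not part of any official workflow):

```python
print(ds)                   # split names and sizes
print(ds["test"].features)  # column names and types
print(ds["test"][0])        # one example pair
```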

## Prediction Format

```json
[
  {"pair_id": "pair_0001", "predicted_label": "match"},
  {"pair_id": "pair_0002", "predicted_label": "no_match"}
]
```

Valid labels: `match`, `no_match`, `ambiguous`.
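
A minimal scoring sketch, assuming the gold label is stored under a `label` field keyed by `pair_id` in the test split (the field name is an assumption, and the generation repository may ship a scorer that aggregates differently). It reports F1 on the `match` class:

```python
import json

# ASSUMPTION: test records carry "pair_id" and a gold "label"; verify the actual field names.
with open("data/splits/test.json", encoding="utf-8") as f:
    gold = {r["pair_id"]: r["label"] for r in json.load(f)}
with open("predictions.json", encoding="utf-8") as f:
    preds = {r["pair_id"]: r["predicted_label"] for r in json.load(f)}

tp = sum(1 for pid, p in preds.items() if p == "match" and gold.get(pid) == "match")
fp = sum(1 for pid, p in preds.items() if p == "match" and gold.get(pid) != "match")
fn = sum(1 for pid, g in gold.items() if g == "match" and preds.get(pid) != "match")

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(f"P={precision:.3f}  R={recall:.3f}  F1={f1:.3f}")
```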

## Reproducibility

The dataset can be regenerated end-to-end from the source repository with a fixed seed:

```bash
git clone https://github.com/nihalgunu/CrossER
cd CrossER
pip install -r requirements.txt
python -m generate.generate_all --seed 42
```

## Citation

```bibtex
@inproceedings{crosser2026,
  author    = {Gunukula, Nihal and Murthy, Sameer},
  title     = {{CrossER: A Benchmark for Context-Dependent Cross-System Entity Resolution}},
  booktitle = {NeurIPS 2026 Evaluations \& Datasets Track},
  year      = {2026},
  url       = {https://huggingface.co/datasets/smurthy5/CrossER}
}
```

## License

- **Code**: Apache 2.0
- **Data**: CC BY 4.0

---

[Phyvant](https://phyvant.com) · [GitHub](https://github.com/nihalgunu/CrossER) · [Paper (NeurIPS 2026)](https://neurips.cc/Conferences/2026/CallForEvaluationsDatasets)