Fused Patent + arXiv Technical Clustering Dataset (Deterministic, Quality-Gated)
Overview
This dataset is the output of a zero-touch technical clustering pipeline built over a fused corpus of patent text and arXiv-style research text.
The pipeline is fully deterministic from ingest through release and is designed to run end-to-end without manual curation or mid-run intervention. All artifacts, cluster assignments, and release decisions are derived from the same run.
This is not a curated dataset. It is a large-scale fused technical corpus that has been deterministically analyzed and quality-gated to isolate the portion that behaves like a semantic clustering dataset.
Key Stats
- Total labeled rows: 9,063,272
- Raw clusters (pre-filter): 422
- Release clusters (post-filter): 147
- Retained rows: 3,881,329
- Retention rate: 42.82%
- Shards: 91 (labels / embeddings / chunks)
- Size: 20+ GB compressed
Pipeline Summary
The dataset was produced by a staged, resumable pipeline with Postgres acting as a control plane.
Core stages
- Ingest and normalize fused patent + arXiv text
- Chunk-level embedding
- Embedding clustering
- Shard-level processing with persistent state
- Reducer-tree merge into global clusters
- Global assignment + BM25 artifact generation (sketched below)
- Deterministic inspection and release gating
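The BM25 artifact stage above can be pictured with a small sketch. This uses the `rank_bm25` package as an illustrative choice; the card does not name the pipeline's actual BM25 implementation, and the sample chunks below are invented.

```python
# Illustrative BM25 sketch using rank_bm25; the pipeline's real BM25
# implementation and tokenization are not specified in this card.
from rank_bm25 import BM25Okapi

chunks = [
    "wireless communication system with antenna array",
    "semiconductor substrate with epitaxial layer",
]
tokenized = [c.lower().split() for c in chunks]

bm25 = BM25Okapi(tokenized)                     # index over retained chunks
scores = bm25.get_scores("antenna array".split())
print(scores)                                   # one relevance score per chunk
```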
System Design
The pipeline is built to operate under real constraints (long runtimes, memory pressure, interruptions), not ideal notebook conditions.
Control plane (Postgres)
- Task leasing and discovery
- Heartbeats and worker liveness
- Stage state tracking (not-ready / running / done / failed)
- Reducer-tree coordination and staged unblocking
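A minimal sketch of what task leasing against such a control plane can look like, using psycopg 3. The `tasks` table, its columns, and the extra `ready` state are assumptions for illustration; the card only names the states not-ready / running / done / failed.

```python
# Hypothetical control-plane sketch (psycopg 3). The tasks table, its
# columns, and the 'ready' state are illustrative assumptions.
import psycopg

LEASE_SQL = """
UPDATE tasks
   SET state = 'running', worker_id = %(worker)s, heartbeat_at = now()
 WHERE id = (
     SELECT id FROM tasks
      WHERE state = 'ready'          -- upstream stages complete
      ORDER BY id
      LIMIT 1
        FOR UPDATE SKIP LOCKED       -- lets many workers lease concurrently
 )
RETURNING id, payload;
"""

def lease_task(conn: psycopg.Connection, worker: str):
    """Atomically claim one runnable task; returns None if nothing is free."""
    with conn.cursor() as cur:
        cur.execute(LEASE_SQL, {"worker": worker})
        row = cur.fetchone()
    conn.commit()
    return row
```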
Failure-aware execution
Distinguishes between:
- true OOM
- bad allocation
- killed process
- general memory pressure

Recovery mechanisms:
- Descending batch ladder (deterministic step-down on failure; sketched below)
- Proactive downshifting based on resource pressure
- Resumable state across interruptions
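A minimal sketch of the descending batch ladder, assuming the worker can classify memory failures; the ladder sizes, exception class, and function names are illustrative:

```python
# Hypothetical batch-ladder sketch; ladder sizes and the failure class are
# illustrative, not the pipeline's actual configuration.
BATCH_LADDER = [4096, 2048, 1024, 512, 256]  # deterministic step-down

class MemoryPressure(RuntimeError):
    """Stand-in for the OOM / bad-alloc / kill signals the worker detects."""

def run_with_ladder(items, process_batch):
    rung = 0
    i = 0
    while i < len(items):
        batch = items[i : i + BATCH_LADDER[rung]]
        try:
            process_batch(batch)
            i += len(batch)              # only advance on success
        except MemoryPressure:
            if rung + 1 == len(BATCH_LADDER):
                raise                    # ladder exhausted: surface the failure
            rung += 1                    # deterministic step-down, then retry
```

Because the step-down sequence is fixed, a rerun that hits the same failures lands on the same batch sizes, which is what keeps recovery deterministic.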
Reducer-tree merge
- Progressive level-by-level reduction
- Final stage unblocked only after upstream completion
- Prevents global merge bottlenecks
- Avoids downstream fan-out gaps
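A minimal sketch of the level-by-level reduction; `merge` stands in for the pipeline's actual cluster-merge operator, which is not specified here:

```python
# Hypothetical reducer-tree sketch: shard outputs are merged pairwise,
# level by level, so no single task has to merge everything at once.
def reduce_tree(shard_outputs, merge):
    level = list(shard_outputs)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            pair = level[i : i + 2]
            # a lone trailing item is carried up to the next level unchanged
            nxt.append(merge(pair[0], pair[1]) if len(pair) == 2 else pair[0])
        level = nxt    # the next level unblocks only once this one is done
    return level[0]
```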
Deterministic Quality Gating
The raw clustering output was not treated as valid by default.
A full deterministic inspection pass across all 422 clusters produced:
- 147 coherent clusters
- 107 mixed clusters
- 168 metadata-heavy clusters
Filtering decision
For the release dataset:
- Kept: coherent clusters only
- Dropped: mixed + metadata-heavy clusters
This was done without:
- re-embedding
- hand labeling
- manual cluster editing
- modifying the original run
All decisions are reproducible from pipeline outputs.
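Reproducing the gate can be as simple as joining the inspection output to the label shards. A sketch, where the file names and column names are illustrative stand-ins for the pipeline's actual outputs:

```python
# Hypothetical gating sketch; 'inspection.csv' and the column names are
# illustrative stand-ins for the pipeline's inspection output.
import pandas as pd

inspection = pd.read_csv("inspection.csv")       # cluster_id, category, ...
keep = set(inspection.loc[inspection["category"] == "coherent", "cluster_id"])

labels = pd.read_parquet("labels/")              # row_id, cluster_id
release = labels[labels["cluster_id"].isin(keep)]
print(f"retained {len(release)} / {len(labels)} rows "
      f"({len(release) / len(labels):.2%})")
```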
Metadata Leakage
A large portion of the clusters was dominated by ingestion or wrapper fields such as:
- `source_file`
- `record_hash`
- `raw_meta_json`
- `authors_parsed`
- `published_date`
- similar structural tokens
These are not errors in the source data, but they degrade semantic clustering if left unfiltered.
Explicit detection and removal of these clusters is a core part of the release process.
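One deterministic way to flag such clusters is to compare a cluster's top terms against the wrapper-field vocabulary. A sketch, with an illustrative threshold:

```python
# Hypothetical detector: a cluster whose top terms are mostly wrapper-field
# tokens is flagged as metadata-heavy. The 0.5 threshold is illustrative.
WRAPPER_TOKENS = {
    "source_file", "record_hash", "raw_meta_json",
    "authors_parsed", "published_date",
}

def is_metadata_heavy(top_terms, threshold=0.5):
    hits = sum(1 for t in top_terms if t in WRAPPER_TOKENS)
    return hits / max(len(top_terms), 1) >= threshold
```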
Dataset Structure
The release package includes filtered artifacts aligned to the retained clusters:
- `labels/` — cluster assignments
- `chunks/` — source text chunks
- `embeddings/` — embedding vectors
- `microclusters/` — original microcluster outputs (for provenance)
- `global/` — cluster summaries, BM25 artifacts, reference data
All components are consistent with the same filtered subset.
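A minimal access sketch using `huggingface_hub`; the repo id below is a placeholder, not this dataset's actual id, and the patterns simply mirror the directory layout above:

```python
# Hypothetical access sketch; replace the placeholder repo id with this
# dataset's actual id. Fetches only the filtered release artifacts needed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="user/fused-patent-arxiv-clusters",   # placeholder
    repo_type="dataset",
    allow_patterns=["labels/*", "global/*"],      # fetch only what you need
)
print(local_dir)
```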
What This Dataset Is
- A deterministically derived technical clustering dataset
- A fused patent + research corpus with broad technical coverage
- A quality-gated subset of a larger clustering run
- A reproducible artifact tied to a single pipeline execution
What This Dataset Is Not
- Not manually curated
- Not hand-labeled
- Not cleaned via ad-hoc scripts
- Not a “perfect” semantic dataset
- Not independent of its pipeline (the pipeline defines it)
Example Cluster Themes
Cluster naming was derived deterministically from top terms. Example themes include:
- wireless communication systems
- semiconductor substrates and layers
- chemical compounds and formulations
- neural networks and data processing
- vehicle control systems
- signal processing and circuits
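A sketch of deriving names like those above deterministically from top terms; TF-IDF is an illustrative stand-in, since the card does not specify the exact term-scoring method:

```python
# Hypothetical naming sketch; the pipeline's actual term scoring is not
# specified here, so per-cluster TF-IDF is used as an illustrative stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer

def name_clusters(docs, labels, top_k=3):
    """Return a deterministic name per cluster from its top-weighted terms."""
    names = {}
    for cid in sorted(set(labels)):
        cluster_docs = [d for d, l in zip(docs, labels) if l == cid]
        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(cluster_docs)
        weights = X.sum(axis=0).A1                 # total weight per term
        top = weights.argsort()[::-1][:top_k]      # highest-weighted terms
        names[cid] = " ".join(vec.get_feature_names_out()[i] for i in top)
    return names
```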
Intended Use
- Retrieval / RAG experiments
- Technical topic clustering
- Cross-domain similarity analysis
- Large-scale embedding evaluation
- Downstream filtering / refinement pipelines
Notes
This dataset represents the release-grade subset of the full run. The original unfiltered output (422 clusters) is intentionally not presented as the primary artifact.