# Indexing Fixtures Dataset

This repository serves as a comprehensive test dataset for validating indexing functionality across diverse file types and edge cases. It ensures that our indexing pipeline can handle real-world document scenarios and maintain consistent performance across all supported formats.
## Dataset Structure

The dataset is organized into directories based on file types, specific issues, testing purposes, and problem topics:

### File Type Categories

- `pdf/` - PDF documents (208 files) - Primary document format testing
- `word/` - Microsoft Word documents (.doc, .docx) - Word processing files
- `excel/` - Excel spreadsheets (.xls, .xlsx) - Structured data files
- `powerpoint/` - PowerPoint presentations (.pptx) - Presentation files
- `csv/` - Comma-separated values files - Tabular data
- `tsv/` - Tab-separated values files - Alternative tabular format
- `images/` - Image files (.jpg, .png, .webp) - Visual content
- `htms/` - HTML and XML files - Web content formats
- `zipped/` - Compressed archives (.zip, .7z) - Archive handling
### Issue-Based Categories

- `taishin-problems/` - Specific client issues and edge cases
  - Files with parsing errors (#209, #215)
  - Text duplication problems
  - Chunk explosion scenarios
  - Unprocessable entity errors (422)
- `no-chunks/` - Image-based PDFs that produce no chunks with the docling parser
  - Government forms and official documents (scanned/image format)
  - Passport application materials (require VLM processing)
  - Solution: convert PDF pages to PNG and extract information using Vision Language Models
- `sensitive/` - Privacy-sensitive test files
  - Identity documents
  - Personal information samples
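The `no-chunks/` fallback described above can be sketched as a simple routing rule. Note that `route_document` and its arguments are hypothetical stand-ins for the real parser output and vision-model call, not names from the actual pipeline:

```python
def route_document(chunks, page_images, extract_with_vlm):
    """Fall back to VLM extraction when the text parser yields no chunks.

    `chunks` is the (possibly empty) list of text chunks from the PDF
    parser; `page_images` are the pages rendered to PNG; `extract_with_vlm`
    is a callable that turns one page image into extracted text.
    """
    if chunks:  # normal text-based PDF: keep the parser output
        return chunks
    # image-based PDF: extract text page by page with a vision model
    return [extract_with_vlm(img) for img in page_images]


# Usage with a dummy VLM that just labels each page:
fake_vlm = lambda img: f"text from {img}"
print(route_document([], ["p1.png", "p2.png"], fake_vlm))
# → ['text from p1.png', 'text from p2.png']
print(route_document(["chunk-a"], ["p1.png"], fake_vlm))
# → ['chunk-a']
```

The key design point is that the fallback only triggers on an empty parse, so ordinary text PDFs never pay the (much higher) VLM cost.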
### Purpose-Based Categories

- `QA/` - Quality assurance test files (74 files)
  - Mixed format validation
  - Corporate reports and multimedia
- `paper/` - Academic papers and research documents
  - Scientific publication formats
  - Citation and reference handling
- `passport/` - Government document processing
  - Official forms and procedures
  - Multi-language content
- `donghua/` - University-specific documents
  - Academic administrative files
  - Course regulations and procedures
### Special Categories

- `excluded/` - Files intentionally excluded from processing
  - Alternative formats (`mht/`)
  - JSON test data
- `benchmark_results/` - Performance testing data
  - CSV files with benchmark metrics
  - Model evaluation records
## Naming Convention

Directory and file names follow a systematic approach based on:

- Issue Tracking: Files prefixed with issue numbers (e.g., `#209_ERROR_`, `#215_PDFθ§£ζηζεδΈζ·ιθ€_`)
- Topic Classification: Grouped by subject matter (passport, donghua, taishin)
- File Type: Organized by format for systematic testing
- Purpose: Categorized by intended use case (QA, benchmarks, problems)
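Scripts that group fixtures by issue can parse the prefix convention with a small regex. The pattern below is a sketch inferred from the examples in this README, not an official specification:

```python
import re

# Matches fixture names like "#209_ERROR_sample.pdf" and captures the issue number.
ISSUE_PREFIX = re.compile(r"^#(\d+)_")

def issue_number(filename: str):
    """Return the leading issue number of a fixture filename, or None."""
    m = ISSUE_PREFIX.match(filename)
    return int(m.group(1)) if m else None

print(issue_number("#209_ERROR_sample.pdf"))  # → 209
print(issue_number("regular_report.docx"))    # → None
```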
## Technical Details

### Git LFS Configuration

Large files are managed through Git LFS, including:

- All binary formats (PDF, Office documents, images, audio, video)
- Compressed archives
- Model files and data exports
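A representative `.gitattributes` fragment for this kind of tracking might look like the following; the exact patterns in this repository may differ:

```
*.pdf  filter=lfs diff=lfs merge=lfs -text
*.docx filter=lfs diff=lfs merge=lfs -text
*.xlsx filter=lfs diff=lfs merge=lfs -text
*.pptx filter=lfs diff=lfs merge=lfs -text
*.png  filter=lfs diff=lfs merge=lfs -text
*.jpg  filter=lfs diff=lfs merge=lfs -text
*.zip  filter=lfs diff=lfs merge=lfs -text
*.7z   filter=lfs diff=lfs merge=lfs -text
```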
### File Coverage

- Total Files: 400+ test files
- Format Coverage: 20+ file types
- Size Range: From small forms to large multimedia files
- Language Support: Multi-language content (Chinese, English)
## Usage

This dataset is designed for:

- Continuous Integration: Automated testing of indexing pipelines
- Regression Testing: Ensuring fixes don't break existing functionality
- Performance Benchmarking: Measuring processing speed and accuracy
- Edge Case Validation: Testing problematic file scenarios
- Format Compatibility: Verifying support across all file types
## Quality Assurance

Each directory serves specific testing purposes:

- Positive Tests: Standard files that should process successfully
- Negative Tests: Files that should fail gracefully (`excluded/`)
- Edge Cases: Problematic files for robustness testing (`taishin-problems/`)
- VLM Processing Tests: Image-based PDFs requiring vision model extraction (`no-chunks/`)
- Performance Tests: Large files for scalability validation
## Problem Categories

The dataset includes known challenging scenarios:

- Text duplication in PDF parsing
- Chunk explosion (excessive segmentation)
- Encoding issues with special characters
- Unprocessable entity errors
- Large file handling
- Multi-format archives
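A chunk-explosion check, for instance, can be as simple as a chunks-per-page ratio. The threshold below is an illustrative guess, not a value from the actual pipeline:

```python
def is_chunk_explosion(num_chunks: int, num_pages: int, max_per_page: int = 50) -> bool:
    """Flag documents whose segmentation is suspiciously fine-grained.

    The 50-chunks-per-page default is illustrative; tune it against
    the fixtures in this dataset.
    """
    if num_pages <= 0:
        raise ValueError("num_pages must be positive")
    return num_chunks / num_pages > max_per_page

print(is_chunk_explosion(1200, 3))  # → True  (400 chunks/page)
print(is_chunk_explosion(90, 3))    # → False (30 chunks/page)
```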
## Contributing

When adding new test files:

- Place the file in the appropriate directory based on type/purpose
- Use descriptive naming with issue references if applicable
- Update this README if adding new categories
- Ensure proper Git LFS tracking for binary files
## Tags

`dataset` `indexing` `testing` `qa` `document-processing` `nlp` `huggingface`