CrawlEval
Resources and tools for evaluating the performance and behavior of web crawling systems.
Overview
CrawlEval provides a comprehensive suite of tools and datasets for evaluating web crawling systems, with a particular focus on HTML pattern extraction and content analysis. The project includes:
- A curated dataset of web pages with ground truth patterns
- Tools for fetching and analyzing web content
- Evaluation metrics and benchmarking capabilities
Dataset
The dataset is designed to test and benchmark web crawling systems' ability to extract structured data from HTML. It includes:
- Raw HTML files with various structures and complexities
- Ground truth PagePattern JSON files
- Metadata about each example (query, complexity, etc.)
See the dataset documentation for detailed information about the dataset structure and usage.
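Each example's metadata can be read into a small typed record. The field names below follow the dataset's per-example metadata columns (id, name, description, url, created_at, size, and DOM statistics); the sample values are invented purely for illustration:

```python
import json
from dataclasses import dataclass

@dataclass
class ExampleMetadata:
    """One metadata record for a dataset example."""
    id: str
    name: str
    description: str
    url: str
    created_at: str
    size: int
    dom_nodes: int
    text_nodes: int
    link_nodes: int
    image_nodes: int
    max_nesting_level: int

def load_metadata(raw: str) -> ExampleMetadata:
    # Parse a single metadata JSON object into a typed record.
    return ExampleMetadata(**json.loads(raw))

# Illustrative record (values are made up):
sample = '''{"id": "ex-001", "name": "news-article",
"description": "Simple article page",
"url": "https://example.com/article",
"created_at": "2024-01-01T00:00:00Z",
"size": 20480, "dom_nodes": 350, "text_nodes": 120,
"link_nodes": 40, "image_nodes": 5, "max_nesting_level": 12}'''
meta = load_metadata(sample)
```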
Tools
Web Page Fetcher (fetch_webpage.py)
A powerful tool for collecting and analyzing web pages for evaluation purposes.
Key features:
- Fetches web pages with full JavaScript rendering via Selenium
- Extracts and analyzes metadata (DOM structure, nesting levels, etc.)
- Deduplicates content using SHA-256 hashing
- Deduplicates URLs after normalization
- Processes multiple URLs in parallel
- Tracks progress and produces detailed logging
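The two deduplication steps above can be sketched in a few lines. This is a minimal illustration, not the fetcher's actual implementation; the function names and the exact normalization rules (lowercased host, dropped fragment, trimmed trailing slash) are assumptions:

```python
import hashlib
from urllib.parse import urlsplit, urlunsplit

def content_hash(html: str) -> str:
    # SHA-256 over the page body: identical content maps to one digest.
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def normalize_url(url: str) -> str:
    # Lowercase scheme/host, drop the fragment, trim the trailing slash,
    # so trivially different URLs dedupe to the same key.
    parts = urlsplit(url)
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       parts.path.rstrip("/") or "/", parts.query, ""))

seen_hashes: set[str] = set()
seen_urls: set[str] = set()

def is_new(url: str, html: str) -> bool:
    # Skip pages whose normalized URL or content digest was seen before.
    key, digest = normalize_url(url), content_hash(html)
    if key in seen_urls or digest in seen_hashes:
        return False
    seen_urls.add(key)
    seen_hashes.add(digest)
    return True
```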
Usage:
python -m crawleval.fetch_webpage --batch urls.txt [options]
Options:
- --dir DIR: Base directory for storing data
- --list-hashes: Display the content hash index
- --list-urls: Display the URL index
- --save-results FILE: Save batch processing results to a JSON file
- --workers N: Number of parallel workers (default: 4)
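The --workers option fans the URL list out over a worker pool. A minimal sketch of that pattern using the standard library, with a stub fetch function standing in for the real Selenium-backed fetcher:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(url: str) -> dict:
    # Stub for illustration; the real tool drives a Selenium browser here.
    return {"url": url, "ok": True}

def fetch_batch(urls: list[str], workers: int = 4) -> list[dict]:
    # Submit every URL to a thread pool and collect results as they finish,
    # mirroring the --workers N option.
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(fetch, u): u for u in urls}
        for fut in as_completed(futures):
            results.append(fut.result())
    return results
```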
Contributing
We welcome contributions to improve the dataset and tools. Please see the dataset documentation for guidelines on adding new examples.