---
license: mit
---

# CrawlEval

Resources and tools for evaluating the performance and behavior of web crawling systems.

## Overview

CrawlEval provides a comprehensive suite of tools and datasets for evaluating web crawling systems, with a particular focus on HTML pattern extraction and content analysis. The project includes:

1. A curated dataset of web pages with ground truth patterns
2. Tools for fetching and analyzing web content
3. Evaluation metrics and benchmarking capabilities
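
As one concrete (and purely illustrative) example of such a metric, an extraction run could be scored against ground truth with set precision and recall; the function below is a sketch of that idea, not a documented CrawlEval metric.

```python
def precision_recall(extracted: set[str], truth: set[str]) -> tuple[float, float]:
    """Score a set of extracted items against the ground-truth items."""
    hits = len(extracted & truth)  # items both extracted and expected
    precision = hits / len(extracted) if extracted else 0.0
    recall = hits / len(truth) if truth else 0.0
    return precision, recall

p, r = precision_recall({"alpha", "beta", "gamma"}, {"alpha", "beta", "delta"})
print(p, r)  # both 2/3: two of three extracted and two of three expected items match
```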

## Dataset

The dataset is designed to test and benchmark web crawling systems' ability to extract structured data from HTML. It includes:

- Raw HTML files with various structures and complexities
- Ground truth PagePattern JSON files
- Metadata about each example (query, complexity, etc.)
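
To make the pairing concrete, here is a sketch of loading one example's HTML, pattern, and metadata together. The file names, the `item_selector` field, and the metadata keys are illustrative assumptions, not the actual dataset schema; the sketch builds its own tiny example so it is self-contained.

```python
import json
import tempfile
from pathlib import Path

# Build a hypothetical example on disk -- the real layout may differ.
root = Path(tempfile.mkdtemp())
(root / "0001.html").write_text("<ul><li>alpha</li><li>beta</li></ul>")
(root / "0001.pattern.json").write_text(
    json.dumps({"item_selector": "ul > li"})  # stand-in PagePattern content
)
(root / "0001.meta.json").write_text(
    json.dumps({"query": "list items", "complexity": "low"})
)

def load_example(root: Path, example_id: str) -> dict:
    """Pair a raw HTML file with its ground-truth pattern and metadata."""
    return {
        "html": (root / f"{example_id}.html").read_text(),
        "pattern": json.loads((root / f"{example_id}.pattern.json").read_text()),
        "meta": json.loads((root / f"{example_id}.meta.json").read_text()),
    }

example = load_example(root, "0001")
```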
23
+
24
+ See the [dataset documentation](crawleval/README.md) for detailed information about the dataset structure and usage.
25
+
26
+ ## Tools
27
+
28
+ ### Web Page Fetcher (`fetch_webpage.py`)
29
+
30
+ A powerful tool for collecting and analyzing web pages for evaluation purposes.
31
+
32
+ Key features:
33
+ - Fetches web pages with proper JavaScript rendering using Selenium
34
+ - Extracts and analyzes metadata (DOM structure, nesting levels, etc.)
35
+ - Content deduplication using SHA-256 hashing
36
+ - URL deduplication with normalization
37
+ - Parallel processing of multiple URLs
38
+ - Progress tracking and detailed logging
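
The two deduplication steps can be sketched as follows; this is a minimal illustration of the approach (hash the content, normalize the URL, skip anything already seen), not the actual implementation in `fetch_webpage.py`.

```python
import hashlib
from urllib.parse import urlsplit, urlunsplit

def content_hash(html: str) -> str:
    """SHA-256 digest used to detect pages with identical content."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def normalize_url(url: str) -> str:
    """Normalize a URL so trivial variants map to one dedup key."""
    parts = urlsplit(url)
    return urlunsplit((
        parts.scheme.lower(),
        parts.netloc.lower(),
        parts.path.rstrip("/") or "/",
        parts.query,
        "",  # drop fragments: they never reach the server
    ))

seen_urls, seen_hashes = set(), set()

def is_new(url: str, html: str) -> bool:
    """Record a fetch; return False if the URL or content was seen before."""
    key, digest = normalize_url(url), content_hash(html)
    if key in seen_urls or digest in seen_hashes:
        return False
    seen_urls.add(key)
    seen_hashes.add(digest)
    return True
```

For example, `https://Example.com/a/` and `https://example.com/a` normalize to the same key, so only the first fetch is kept.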

Usage:

```bash
python -m crawleval.fetch_webpage --batch urls.txt [options]
```

Options:

- `--dir DIR`: Base directory for storing data
- `--list-hashes`: Display the content hash index
- `--list-urls`: Display the URL index
- `--save-results FILE`: Save batch processing results to a JSON file
- `--workers N`: Number of parallel workers (default: 4)
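
`--workers N` corresponds to a standard worker-pool pattern; a minimal sketch of that pattern is below, with a stand-in `fetch` function since the real fetcher drives Selenium.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> tuple[str, str]:
    # Stand-in for the real Selenium-backed fetch in fetch_webpage.py.
    return url, f"<html>{url}</html>"

urls = [f"https://example.com/page/{i}" for i in range(8)]

# Four parallel workers, mirroring `--workers 4`.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(fetch, urls))
```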

## Contributing

We welcome contributions to improve the dataset and tools. Please see the [dataset documentation](crawleval/README.md) for guidelines on adding new examples.