---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- agent
pretty_name: real-pbt
size_categories:
- 10K<n<100K
---
# RealPBT: A Dataset of 50,000+ PBTs Captured from Real-World Code

A large-scale dataset of property-based tests (PBTs) extracted from real-world, permissively licensed GitHub repositories.
Each PBT comes with overlapping unit tests and information about the functions it tests.

This data was scraped by [Benchify](https://www.benchify.com/). We scraped Hypothesis PBTs for about 24 hours, and TypeScript PBTs for about 8 hours, using our own proprietary GitHub scraper. In each case we turned the scraper off when, anecdotally, it seemed to have hit an asymptote in finding new PBTs. Because the choice of *when* to stop was unscientific, the relative sizes of these subsets should not be read as a measurement of each framework's popularity (absolute or relative), although they probably do roughly reflect it.

**Note**: This dataset consists of multiple `jsonl` files. The HuggingFace dataset viewer only shows the first one, containing Python functions under test. To see the rest, look [here](https://huggingface.co/datasets/Benchify/realpbt/tree/main).

## Dataset Description

This dataset contains code examples from thousands of GitHub repositories, focusing on property-based testing with [Hypothesis](https://hypothesis.readthedocs.io/en/latest/) (Python) and [fast-check](https://fast-check.dev/) (TypeScript).
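
For readers unfamiliar with the style, here is a minimal Hypothesis PBT of the kind this dataset collects (an illustrative example we wrote for this card, not a record from the dataset):

```python
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sorted_is_idempotent(xs):
    # A classic property: sorting twice should equal sorting once.
    assert sorted(sorted(xs)) == sorted(xs)

# Calling the decorated function runs the property over many generated inputs.
test_sorted_is_idempotent()
```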

### Dataset Statistics

- **Property-Based Tests (PBTs)**: 60,628 tests
  - Python PBTs: 54,345 (with detailed metrics, overlapping unit tests, and dependency information)
  - TypeScript PBTs: 6,283 (without the detailed metrics, overlapping unit tests, or dependency information)
- **Unit Tests**: 6,343,790 (Python only)
- **Functions**: 6,845,964 (Python only)
- **Repositories**: 27,746+ GitHub repos

## Dataset Structure

The dataset consists of four JSONL files (one JSON object per line):

### 1. Python Property-Based Tests (`pbts.jsonl`)

Each record contains:
- `id`: Unique test identifier
- `name`: Test function name
- `code`: Complete test source code
- `language`: Programming language (always "python")
- `source_file`: File path within the repository
- `start_line`, `end_line`: Line numbers in source file
- `dependencies`: List of test dependencies (Python only)
- `repo`: Repository metadata
  - `name`: Repository name
  - `url`: GitHub URL
  - `license`: License type
  - `stars`: GitHub stars
  - `forks`: Fork count
- `metrics`: Code quality metrics (Python only) from [Radon](https://radon.readthedocs.io/)
  - `loc`: Lines of code
  - `sloc`: Source lines of code
  - `lloc`: Logical lines of code
  - `comments`: Comment lines
  - `avg_complexity`: Average cyclomatic complexity
  - `max_complexity`: Maximum cyclomatic complexity
  - `maintainability_index`: Maintainability score (0-100)
  - `halstead_difficulty`: Halstead difficulty metric
  - `halstead_effort`: Halstead effort metric
- `summary`: AI-generated natural language description of test behavior (generated with `4o-mini`)
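
Records can be read line by line with the standard `json` module. A minimal sketch, assuming a local download of `pbts.jsonl` and using the field names from the schema above:

```python
import json
from pathlib import Path

def load_pbts(path="pbts.jsonl", limit=3):
    """Yield up to `limit` parsed records from a JSONL file (one JSON object per line)."""
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            if i >= limit:
                return
            yield json.loads(line)

# Only attempt to read if the file has actually been downloaded.
if Path("pbts.jsonl").exists():
    for rec in load_pbts():
        print(rec["name"], rec["repo"]["url"], rec["metrics"]["avg_complexity"])
```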

### 2. TypeScript Property-Based Tests (`pbts_typescript.jsonl`)

Each record contains:
- `id`: Unique test identifier
- `name`: Test function name
- `code`: Complete test source code
- `language`: Programming language (always "typescript")
- `source_file`: File path within repository
- `start_line`, `end_line`: Line numbers (null - not available)
- `dependencies`: List of test dependencies (empty - no dependency analysis performed)
- `repo`: Repository metadata
  - `name`: Repository name
  - `url`: GitHub URL
  - `license`: License type
  - `stars`: GitHub stars
  - `forks`: Fork count
- `metrics`: Code quality metrics (null - not available)
- `summary`: AI-generated natural language description of test behavior
- `mode`: Testing framework used (always "fast-check")

### 3. Unit Tests (`unit_tests.jsonl`)

Each record contains:
- `id`: Unique test identifier
- `name`: Test function name
- `code`: Complete test source code
- `language`: Programming language (always "python")
- `source_file`: File path within repository
- `start_line`, `end_line`: Line numbers
- `repo`: Repository metadata (same structure as PBTs)

### 4. Functions (`functions.jsonl`)

Each record contains:
- `id`: Unique function identifier
- `name`: Function name
- `code`: Complete function source code
- `language`: Programming language (always "python")
- `source_file`: File path within repository
- `start_line`, `end_line`: Line numbers
- `repo`: Repository metadata (same structure as PBTs)
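
All four files share the same `repo` metadata structure, so one way to line up PBTs with functions or unit tests from the same project is to group each file's records by `repo.name`. Note this pairing is heuristic: the dataset does not expose an explicit join key between files.

```python
import json
from collections import defaultdict

def index_by_repo(path):
    """Group JSONL records by repository name.

    Grouping on the shared `repo.name` field is a heuristic way to pair
    records across files; there is no explicit cross-file join key.
    """
    by_repo = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            by_repo[rec["repo"]["name"]].append(rec)
    return by_repo
```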

## Language Detection

**Python code validation:**
1. Uses Python's AST (Abstract Syntax Tree) parser
2. Attempts to parse code using `ast.parse()`
3. On success, labels as "python"

**TypeScript code validation:**
1. Checks for fast-check framework patterns (`fc.property`, `fc.assert`)
2. Validates basic syntax structure
3. Verifies balanced brackets and parentheses
4. On success, labels as "typescript"

The dataset includes Python (89.6%) and TypeScript (10.4%) PBTs.
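
The bracket-balance check in step 3 of TypeScript validation can be sketched as a simple stack scan (a conceptual sketch, not the scraper's actual code; real source with brackets inside strings or comments would need a smarter tokenizer):

```python
# Closing bracket -> expected opening bracket.
PAIRS = {")": "(", "]": "[", "}": "{"}

def brackets_balanced(code: str) -> bool:
    """Return True if (, [, { are balanced and properly nested.

    A heuristic: ignores brackets inside string literals and comments.
    """
    stack = []
    for ch in code:
        if ch in "([{":
            stack.append(ch)
        elif ch in PAIRS:
            if not stack or stack.pop() != PAIRS[ch]:
                return False
    return not stack
```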

## Code Metrics

The Python PBT records include code quality metrics:

- **Cyclomatic Complexity**: Measures code path complexity
- **Maintainability Index**: 0-100 score (higher is better)
- **Halstead Metrics**: Metrics measuring code difficulty and effort

## License Information

Each record includes the repository's license. Common licenses in this dataset:

- MIT
- Apache-2.0
- BSD-3-Clause
- GPL variants

We only extracted code from repositories with licenses we considered permissive. If you believe we made a mistake (either including a repository whose license does not allow this kind of use, or incorrectly determining a repository's license), please don't hesitate to let us know and we will update the dataset accordingly.

Always check individual record licenses before use.
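
A downstream filter on record licenses might look like the sketch below. The allowed set is illustrative (substitute whatever set your use case permits), and license strings are assumed to be SPDX-style identifiers stored in the `repo.license` field:

```python
import json

# Illustrative allow-list; adjust to your own legal requirements.
ALLOWED = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def filter_by_license(path, allowed=ALLOWED):
    """Keep only records whose repository license is in `allowed`."""
    kept = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            if rec["repo"].get("license") in allowed:
                kept.append(rec)
    return kept
```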

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{realPBT,
  title={{RealPBT}: 50,000+ PBTs Captured from Real-World Code},
  author={von Hippel, Max and Boehs, Evan and Ginesin, Jake},
  year={2026},
  publisher={HuggingFace},
  note={Work supported by Benchify, Inc.},
  howpublished={\url{https://huggingface.co/datasets/Benchify/realpbt}}
}
```

### Acknowledgments

We gratefully acknowledge the following contributors who made this dataset possible:

- **[Max von Hippel](https://mxvh.pl)** - Led the project and performed data cleaning, dependency analysis, and data publication
- **[Evan Boehs](https://boehs.org/)** and **[Jake Ginesin](https://jakegines.in/about)** - Developed and implemented the web scraper for collecting property-based tests from open-source repositories
- **[Juan Castaño](https://www.linkedin.com/in/jfcastano)** - Set up and managed the database infrastructure and AWS instances used for large-scale scraping operations
- **[The Dartmouth DALI Lab](https://dali.dartmouth.edu/)** - Extended the scraper to support TypeScript property-based tests using the fast-check framework
    - **Sekpey Herbert Setor Kwame** - Helped with TypeScript PBT scraping as a DALI Lab intern

## Contact

For questions, concerns, etc., please contact max@benchify.com or maxvh@hey.com.