grandchildrice committed on
Commit aacc2c0 · verified · 1 Parent(s): cddd0b9

Update defi @ 2026-05-08T04:25:45Z

Files changed (3)
  1. README.md +96 -0
  2. defi/manifest.json +26 -0
  3. defi/train.parquet +3 -0
README.md ADDED
@@ -0,0 +1,96 @@
+ ---
+ language:
+ - en
+ license: mit
+ pretty_name: "SPECA Vulnerability Reports"
+ tags:
+ - "security"
+ - "smart-contracts"
+ - "vulnerabilities"
+ - "audit"
+ ---
+
+ # SPECA Vulnerability Reports
+
+ A normalized corpus of public smart-contract audit findings, curated by
+ the [SPECA](https://github.com/NyxFoundation/speca) project. Each row is
+ one expert-judged audit issue, deduplicated across public competitive-audit
+ platforms.
+
+ The dataset is split into one **config per security domain** (`defi`,
+ `lending`, `oracle`, …). Hugging Face auto-detects each `<domain>/`
+ folder as a config; pick the domain you want at load time.
+
+ ## Loading
+
+ ```python
+ from datasets import load_dataset
+
+ # A specific domain (recommended).
+ ds = load_dataset("NyxFoundation/vulnerability-reports", "defi", split="train")
+
+ # Or, from inside the SPECA repo:
+ from scripts.datasets.load import load_findings
+ df = load_findings(domain="defi").to_pandas()
+ ```
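
Once loaded, rows can be narrowed with an ordinary predicate over the fields documented in the schema. A minimal sketch using made-up rows (not real findings), so it runs without downloading anything; with a real split, `ds.filter` from the `datasets` library takes the same kind of predicate:

```python
# Illustrative rows carrying the same `severity` field as the dataset
# schema (hypothetical data, not real findings).
rows = [
    {"id": "code4rena:demo-contest:1", "severity": "High"},
    {"id": "sherlock:demo-contest:2", "severity": "Medium"},
]

# With a loaded split this would be:
#   high = ds.filter(lambda row: row["severity"] == "High")
high = [row for row in rows if row["severity"] == "High"]
```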
+
+ ## Schema
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `id` | string | Stable key, `<platform>:<contest-slug>:<issue_id>`. |
+ | `source_platform` | string | One of `code4rena`, `sherlock`, `codehawks`. |
+ | `contest` | string | Slugified contest identifier. |
+ | `issue_id` | string | Platform-local issue / finding id, `#`-stripped. |
+ | `severity` | string | `High` / `Medium` / `Low` / `Info`. |
+ | `title` | string | Verbatim issue title from the upstream platform. |
+ | `description` | string | Verbatim issue body. |
+ | `source_url` | string | Best-effort upstream link (deterministic for code4rena; may be empty for others — check the scraper output if you need a guaranteed URL). |
+ | `domain` | string | Matches the config name (`defi`, `lending`, …). |
+ | `scraped_at` | string | ISO 8601 UTC timestamp of the underlying scrape that produced this row. |
+
+ Per-build state for each domain — row count, scrape time, source CSVs,
+ `speca_commit` — lives in `<domain>/manifest.json` rather than this card,
+ so re-publishing one domain doesn't churn another's metadata.
+
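
The stable `id` key can be split back into its schema components. A minimal sketch; `parse_finding_id` is a hypothetical helper for illustration, not part of the SPECA codebase:

```python
# Hypothetical helper illustrating the documented id layout:
#   <platform>:<contest-slug>:<issue_id>
def parse_finding_id(finding_id: str) -> dict:
    # Split on the first two colons only, in case an issue id
    # ever contains a colon itself.
    platform, contest, issue_id = finding_id.split(":", 2)
    return {"source_platform": platform, "contest": contest, "issue_id": issue_id}

parts = parse_finding_id("code4rena:2024-01-example:123")
# → {'source_platform': 'code4rena', 'contest': '2024-01-example', 'issue_id': '123'}
```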
+ ## Provenance
+
+ All findings originate from public competitive-audit platforms:
+
+ - [Code4rena](https://code4rena.com/) — contest reports published openly
+ on GitHub under the `code-423n4` org.
+ - [Sherlock](https://audits.sherlock.xyz/) — contest reports published
+ via Sherlock's public site and their `sherlock-audit` GitHub org.
+ - [CodeHawks](https://codehawks.cyfrin.io/) — contest reports published
+ on the CodeHawks platform.
+
+ This dataset redistributes the publicly accessible finding metadata
+ (title, description, severity, source URL). The `source_url` column
+ points back to the upstream record. If you are the original author of a
+ finding and want it removed, please open an issue on the
+ [SPECA repository](https://github.com/NyxFoundation/speca/issues).
+
+ ## How this dataset is built
+
+ The build pipeline lives in
+ [`scripts/datasets/`](https://github.com/NyxFoundation/speca/tree/main/scripts/datasets)
+ of the SPECA repo:
+
+ 1. `scripts/scrape_*.py` pulls raw data from each platform into CSVs
+ under `benchmarks/data/defi_audit_reports/`.
+ 2. `scripts/datasets/build_derived.py` normalizes those CSVs (and the
+ curated `csv/similar_audit_findings.csv`) into the schema above and
+ emits a parquet + manifest under `dist/datasets/<domain>/`.
+ 3. `scripts/datasets/publish_hf.py` pushes the parquet plus this README
+ to `NyxFoundation/vulnerability-reports` on the `main` branch, scoped
+ to the published domain's folder.
+
+ The `.github/workflows/datasets-publish.yml` workflow runs the last two
+ steps on a self-hosted runner via `workflow_dispatch`.
+
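
The per-domain scoping in step 3 can be illustrated as a mapping from local build artifacts to repo paths. This is a hypothetical sketch of the idea, not the actual `publish_hf.py` interface:

```python
# Hypothetical helper: map local build artifacts to their destination
# paths inside the dataset repo, scoped to one domain's folder.
def domain_repo_paths(domain: str) -> dict:
    return {
        f"dist/datasets/{domain}/train.parquet": f"{domain}/train.parquet",
        f"dist/datasets/{domain}/manifest.json": f"{domain}/manifest.json",
        "README.md": "README.md",  # the shared dataset card
    }

paths = domain_repo_paths("defi")
# Each (local, remote) pair would then be uploaded to
# NyxFoundation/vulnerability-reports on the `main` branch, e.g. via
# huggingface_hub — only the chosen domain's folder is touched.
```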
+ ## License
+
+ The schema, build code, and dataset card are MIT-licensed. Original
+ finding text is reproduced for research / educational use under the
+ public terms of each upstream platform; consult the source URL for the
+ authoritative version.
defi/manifest.json ADDED
@@ -0,0 +1,26 @@
+ {
+ "domain": "defi",
+ "n_rows": 3909,
+ "parquet_bytes": 4603495,
+ "parquet_path": "train.parquet",
+ "platforms_included": [
+ "code4rena",
+ "codehawks",
+ "sherlock"
+ ],
+ "rows_by_platform": {
+ "code4rena": 3570,
+ "codehawks": 76,
+ "sherlock": 263
+ },
+ "rows_by_severity": {
+ "High": 1299,
+ "Medium": 2610
+ },
+ "scraped_at": "2026-05-08T04:25:45Z",
+ "severities_included": [],
+ "sources": [
+ "csv/similar_audit_findings.csv"
+ ],
+ "speca_commit": "6e1d29e214a95365dfef2b79d7b9e42d93ea0947"
+ }
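
The per-platform and per-severity breakdowns in a manifest should each account for every row. A quick sanity check one might run over a parsed manifest (a sketch, assuming only the keys shown above; `check_manifest` is a hypothetical helper, not shipped with SPECA):

```python
# Hypothetical consistency check over a parsed <domain>/manifest.json.
def check_manifest(manifest: dict) -> None:
    # Both breakdowns must sum to the total row count.
    assert sum(manifest["rows_by_platform"].values()) == manifest["n_rows"]
    assert sum(manifest["rows_by_severity"].values()) == manifest["n_rows"]

# The defi manifest above passes: 3570 + 76 + 263 == 1299 + 2610 == 3909.
check_manifest({
    "n_rows": 3909,
    "rows_by_platform": {"code4rena": 3570, "codehawks": 76, "sherlock": 263},
    "rows_by_severity": {"High": 1299, "Medium": 2610},
})
```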
defi/train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92fbdbde11430fb6ba138cf03e238054c251bcb0f29c4fc24f9e60057b30229f
+ size 4603495
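
`train.parquet` is stored in the repo as a Git LFS pointer like the one above: three `<key> <value>` lines. Parsing one takes only a few lines of Python (an illustrative sketch, not an official LFS tool):

```python
def parse_lfs_pointer(text: str) -> dict:
    # Each non-empty line of a pointer file is "<key> <value>".
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:92fbdbde11430fb6ba138cf03e238054c251bcb0f29c4fc24f9e60057b30229f
size 4603495"""

info = parse_lfs_pointer(pointer)
# The `size` field matches `parquet_bytes` in defi/manifest.json.
```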