tamnd committed on
Commit 58ab935 · verified · 1 Parent(s): 3f02c95

Add 2012-04 (129,931 items)

Browse files
Files changed (3)
  1. README.md +284 -61
  2. data/2012/2012-04.parquet +3 -0
  3. stats.csv +3 -1
README.md CHANGED
@@ -3,9 +3,11 @@ license: odc-by
3
  task_categories:
4
  - text-generation
5
  - feature-extraction
 
 
6
  language:
7
  - en
8
- pretty_name: Hacker News Open Index
9
  size_categories:
10
  - 10M<n<100M
11
  tags:
@@ -13,6 +15,10 @@ tags:
13
  - forum
14
  - text
15
  - parquet
16
  configs:
17
  - config_name: default
18
  data_files:
@@ -24,85 +30,224 @@ configs:
24
  path: today/*.parquet
25
  ---
26
 
27
- # Hacker News Open Index
28
 
29
- > Every Hacker News item since 2006, updated every 5 minutes ready for training and retrieval
30
 
31
- ## What is it?
32
 
33
- This dataset contains the full Hacker News archive: **3802518 items** spanning 1970-01 to 2026-03-14 10:12 UTC, published as monthly Parquet files with 5-minute live blocks for today.
34
 
35
- Data includes stories, comments, Ask HN, Show HN, jobs, polls, and poll options — all fields preserved exactly as posted.
36
 
37
- ## Dataset Stats
38
 
39
  | Metric | Value |
40
- |--------|-------|
41
- | Total items | 3802518 |
42
- | Historical months | 65 |
43
- | First month | 1970-01 |
44
- | Last committed month | 2012-03 |
45
- | Total size | 793.5 MB |
46
- | Last updated | 2026-03-14 10:12 UTC |
47
 
48
- ## File Layout
49
50
  ```
51
- data/
52
- 2006/2006-10.parquet ← first HN month
53
- ...
54
- 2026/2026-02.parquet
55
- today/
56
- 2026-03-14_00_00.parquet ← 5-min live blocks
57
- 2026-03-14_00_05.parquet
58
- ...
59
- stats.csv ← one row per committed month
60
- stats_today.csv ← one row per committed 5-min block
61
  ```
62
 
63
- ## How to Use
64
 
65
  ### Python (datasets)
66
 
67
  ```python
68
  from datasets import load_dataset
69
 
70
- # Stream the full history
71
  ds = load_dataset("open-index/hacker-news", split="train", streaming=True)
72
  for item in ds:
73
  print(item["id"], item["type"], item["title"])
74
 
75
- # Load a single month
76
  ds = load_dataset(
77
  "open-index/hacker-news",
78
- data_files="data/2024/2024-01.parquet",
79
  split="train",
80
  )
 
81
  ```
82
 
83
- ### DuckDB
84
 
85
- ```sql
86
- -- All stories from 2024
87
- SELECT id, by, title, score, url
88
- FROM read_parquet('hf://datasets/open-index/hacker-news/data/2024/*.parquet')
89
- WHERE type = 'story'
90
- ORDER BY score DESC
91
- LIMIT 20;
92
 
93
- -- Live blocks for today
94
- SELECT id, by, title, time
95
- FROM read_parquet('hf://datasets/open-index/hacker-news/today/*.parquet')
96
- ORDER BY id DESC
97
- LIMIT 50;
98
  ```
99
 
100
- ### huggingface_hub
101
 
102
  ```python
103
  from huggingface_hub import snapshot_download
104
 
105
- folder = snapshot_download(
 
106
  "open-index/hacker-news",
107
  repo_type="dataset",
108
  local_dir="./hn/",
@@ -110,28 +255,106 @@ folder = snapshot_download(
110
  )
111
  ```
112
113
  ## Schema
114
115
  | Column | Type | Description |
116
  |--------|------|-------------|
117
- | `id` | int64 | Item ID (monotonically increasing) |
118
- | `deleted` | bool | Soft-deleted flag |
119
- | `type` | string | story, comment, ask, show, job, poll, pollopt |
120
- | `by` | string | Username of author |
121
- | `time` | DateTime | Post timestamp (UTC) |
122
- | `text` | string | HTML body (comments, Ask HN, jobs) |
123
- | `dead` | bool | Flagged/killed by moderators |
124
- | `parent` | int64 | Parent item ID (for comments) |
125
- | `poll` | int64 | Poll item ID (for pollopts) |
126
- | `kids` | Array(int64) | Direct child item IDs |
127
- | `url` | string | External URL (stories) |
128
- | `score` | int64 | Points |
129
- | `title` | string | Story/Ask/Show/Job title |
130
- | `parts` | Array(int64) | Poll option IDs |
131
- | `descendants` | int64 | Total descendant comment count |
132
 
133
  ## License
134
 
135
  Released under the **Open Data Commons Attribution License (ODC-By) v1.0**.
136
- Original content is subject to the rights of its respective authors.
137
- Hacker News data is provided by Y Combinator.
3
  task_categories:
4
  - text-generation
5
  - feature-extraction
6
+ - text-classification
7
+ - question-answering
8
  language:
9
  - en
10
+ pretty_name: Hacker News Complete Archive
11
  size_categories:
12
  - 10M<n<100M
13
  tags:
 
15
  - forum
16
  - text
17
  - parquet
18
+ - community
19
+ - tech
20
+ - comments
21
+ - live-updated
22
  configs:
23
  - config_name: default
24
  data_files:
 
30
  path: today/*.parquet
31
  ---
32
 
33
+ # Hacker News Complete Archive
34
 
35
+ Every Hacker News item since 2006, updated every 5 minutes. Stories, comments, Ask HN, Show HN, jobs, and polls — spanning **2006-10** to **2012-05** (47,332,789 items in the full archive), with **4,071,134 items** committed so far.
36
 
37
+ This dataset mirrors the full [Hacker News](https://news.ycombinator.com) archive as monthly Parquet files, with 5-minute live blocks for today's activity. All fields are preserved exactly as they appear in the [HN API](https://github.com/HackerNews/API).
38
 
39
+ ---
40
 
41
+ ## At a Glance
42
 
43
+ | Metric | Committed | Full Archive |
44
+ |--------|----------:|------------:|
45
+ | Items | 4,071,134 | 47,332,789 |
46
+ | Months | 67 | |
47
+ | First month | 2006-10 | 2006-10 |
48
+ | Last committed | 2012-05 | |
49
+ | Size | 859.0 MB | |
50
+ | Contributors | | 1,085,135 |
51
+ | Stories | | 6,032,585 |
52
+ | Comments | | 41,264,455 |
53
+ | Last updated | 2026-03-14 10:21 UTC | |
54
+
55
+ ---
56
+
57
+ ## Growth Over Time
58
+
59
+ Items committed to this dataset by year:
60
+
61
+ ```
62
+ 2006 █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 62
63
+ 2007 ██░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 93.8K
64
+ 2008 ███████░░░░░░░░░░░░░░░░░░░░░░░ 320.9K
65
+ 2009 █████████████░░░░░░░░░░░░░░░░░ 608.4K
66
+ 2010 ██████████████████████░░░░░░░░ 1.0M
67
+ 2011 ██████████████████████████████ 1.4M
68
+ 2012 ██████████████░░░░░░░░░░░░░░░░ 637.5K
69
+ ```
70
+
71
+ ---
72
+
73
+ ## Content Breakdown
74
+
75
+ | Type | Count | Share |
76
+ |------|------:|------:|
77
+ | comment | 41,264,455 | 87.2% |
78
+ | story | 6,032,585 | 12.7% |
79
+ | job | 18,065 | 0.0% |
80
+ | poll | 2,239 | 0.0% |
81
+ | pollopt | 15,445 | 0.0% |
82
+
83
+ **84.8%** of stories link to external URLs. The remainder are text posts (Ask HN, Show HN, Launch HN).
84
+
85
+ Average discussion depth: **23.9 comments** per story (max: 9,275).
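
The type breakdown above can be reproduced with a single aggregation over the monthly files (a sketch; scanning the full committed archive over the network takes a while):

```sql
-- Item counts and shares by type
SELECT
    type,
    count(*) AS count,
    round(100.0 * count(*) / sum(count(*)) OVER (), 1) AS share_pct
FROM read_parquet('hf://datasets/open-index/hacker-news/data/*/*.parquet')
GROUP BY type
ORDER BY count DESC;
```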
86
+
87
+ ---
88
+
89
+ ## Story Scores
90
 
91
  | Metric | Value |
92
+ |--------|------:|
93
+ | Average score | 1.5 |
94
+ | Median score | 0 |
95
+ | Highest score ever | 6,015 |
96
+ | Stories with 100+ points | 175,896 |
97
+ | Stories with 1,000+ points | 2,169 |
98
+
99
+ Most stories receive a handful of points, but the distribution has a long tail. The top 0.03% of stories (1,000+ points) represent some of the most widely discussed content in tech.
100
+
101
+ ---
102
+
103
+ ## Most-Shared Domains
104
+
105
+ The most frequently linked domains across all stories:
106
 
107
+ | # | Domain | Stories |
108
+ |--:|--------|--------:|
109
+ | 1 | github.com | 196,963 |
110
+ | 2 | www.youtube.com | 134,704 |
111
+ | 3 | medium.com | 124,515 |
112
+ | 4 | www.nytimes.com | 77,608 |
113
+ | 5 | en.wikipedia.org | 54,362 |
114
+ | 6 | techcrunch.com | 54,178 |
115
+ | 7 | twitter.com | 50,477 |
116
+ | 8 | arstechnica.com | 47,040 |
117
+ | 9 | www.theguardian.com | 44,227 |
118
+ | 10 | www.bloomberg.com | 37,766 |
119
 
120
+ ---
121
+
122
+ ## Most Active Story Submitters
123
+
124
+ Users ranked by total story submissions:
125
+
126
+ | # | User | Stories |
127
+ |--:|------|--------:|
128
+ | 1 | rbanffy | 36,762 |
129
+ | 2 | Tomte | 26,154 |
130
+ | 3 | tosh | 24,003 |
131
+ | 4 | bookofjoe | 20,548 |
132
+ | 5 | mooreds | 20,330 |
133
+ | 6 | pseudolus | 19,901 |
134
+ | 7 | PaulHoule | 18,930 |
135
+ | 8 | todsacerdoti | 18,880 |
136
+ | 9 | ingve | 17,042 |
137
+ | 10 | thunderbong | 15,945 |
138
+ | 11 | jonbaer | 14,161 |
139
+ | 12 | rntn | 13,410 |
140
+ | 13 | doener | 12,779 |
141
+ | 14 | Brajeshwar | 12,274 |
142
+ | 15 | LinuxBender | 11,058 |
143
+
144
+ ---
145
+
146
+ ## Quick Start
147
+
148
+ ### DuckDB
149
+
150
+ The fastest way to explore the dataset. DuckDB reads Parquet natively from Hugging Face (the `hf://` scheme is provided by the `httpfs` extension, which recent DuckDB versions load automatically):
151
+
152
+ ```sql
153
+ -- Top 20 highest-scored stories of all time
154
+ SELECT id, title, by, score, url, time
155
+ FROM read_parquet('hf://datasets/open-index/hacker-news/data/*/*.parquet')
156
+ WHERE type = 'story' AND title != ''
157
+ ORDER BY score DESC
158
+ LIMIT 20;
159
+ ```
160
+
161
+ ```sql
162
+ -- Monthly submission volume for 2024
163
+ SELECT
164
+ date_trunc('month', time) AS month,
165
+ count(*) AS items,
166
+ count(*) FILTER (WHERE type = 'story') AS stories,
167
+ count(*) FILTER (WHERE type = 'comment') AS comments
168
+ FROM read_parquet('hf://datasets/open-index/hacker-news/data/2024/*.parquet')
169
+ GROUP BY month
170
+ ORDER BY month;
171
  ```
172
+
173
+ ```sql
174
+ -- Most discussed stories (by comment count) in the last year
175
+ SELECT id, title, by, score, descendants AS comments, url, time
176
+ FROM read_parquet('hf://datasets/open-index/hacker-news/data/2025/*.parquet')
177
+ WHERE type = 'story' AND descendants > 0
178
+ ORDER BY descendants DESC
179
+ LIMIT 20;
180
+ ```
181
+
182
+ ```sql
183
+ -- Who posts the most Ask HN questions?
184
+ SELECT by, count(*) AS posts
185
+ FROM read_parquet('hf://datasets/open-index/hacker-news/data/*/*.parquet')
186
+ WHERE type = 'story' AND title LIKE 'Ask HN:%'
187
+ GROUP BY by
188
+ ORDER BY posts DESC
189
+ LIMIT 20;
190
  ```
191
 
192
+ ```sql
193
+ -- Domain popularity over time
194
+ SELECT
195
+ extract(year FROM time) AS year,
196
+ regexp_extract(url, 'https?://([^/]+)', 1) AS domain,
197
+ count(*) AS stories
198
+ FROM read_parquet('hf://datasets/open-index/hacker-news/data/*/*.parquet')
199
+ WHERE type = 'story' AND url != ''
200
+ GROUP BY year, domain
201
+ QUALIFY row_number() OVER (PARTITION BY year ORDER BY stories DESC) <= 5
202
+ ORDER BY year, stories DESC;
203
+ ```
204
 
205
  ### Python (datasets)
206
 
207
  ```python
208
  from datasets import load_dataset
209
 
210
+ # Stream the full history (no download required)
211
  ds = load_dataset("open-index/hacker-news", split="train", streaming=True)
212
  for item in ds:
213
  print(item["id"], item["type"], item["title"])
214
 
215
+ # Load a specific year into memory
216
  ds = load_dataset(
217
  "open-index/hacker-news",
218
+ data_files="data/2024/*.parquet",
219
  split="train",
220
  )
221
+ print(f"{len(ds):,} items in 2024")
222
  ```
223
 
224
+ ### Python (pandas + DuckDB)
225
 
226
+ ```python
227
+ import duckdb
228
+
229
+ conn = duckdb.connect()
230
 
231
+ # Compute score percentiles
232
+ df = conn.sql("""
233
+ SELECT
234
+ percentile_disc(0.50) WITHIN GROUP (ORDER BY score) AS p50,
235
+ percentile_disc(0.90) WITHIN GROUP (ORDER BY score) AS p90,
236
+ percentile_disc(0.99) WITHIN GROUP (ORDER BY score) AS p99,
237
+ percentile_disc(0.999) WITHIN GROUP (ORDER BY score) AS p999
238
+ FROM read_parquet('hf://datasets/open-index/hacker-news/data/*/*.parquet')
239
+ WHERE type = 'story'
240
+ """).df()
241
+ print(df)
242
  ```
243
 
244
+ ### Download Specific Files
245
 
246
  ```python
247
  from huggingface_hub import snapshot_download
248
 
249
+ # Download only 2024 data
250
+ snapshot_download(
251
  "open-index/hacker-news",
252
  repo_type="dataset",
253
  local_dir="./hn/",
 
255
  )
256
  ```
257
 
258
+ ```bash
259
+ # CLI: download a single month
260
+ huggingface-cli download open-index/hacker-news \
261
+ data/2024/2024-01.parquet \
262
+ --repo-type dataset --local-dir ./hn/
263
+ ```
264
+
265
+ ---
266
+
267
  ## Schema
268
 
269
+ Every Parquet file shares the same schema, matching the [HN API](https://github.com/HackerNews/API) item format:
270
+
271
  | Column | Type | Description |
272
  |--------|------|-------------|
273
+ | `id` | int64 | Unique item ID (monotonically increasing) |
274
+ | `deleted` | bool | `true` if soft-deleted by author or moderators |
275
+ | `type` | string | One of: `story`, `comment`, `job`, `poll`, `pollopt` |
276
+ | `by` | string | Username of the author |
277
+ | `time` | timestamp | Post timestamp (UTC) |
278
+ | `text` | string | HTML body text (comments, Ask HN, jobs) |
279
+ | `dead` | bool | `true` if flagged/killed by moderators |
280
+ | `parent` | int64 | Parent item ID (comments only) |
281
+ | `poll` | int64 | Associated poll ID (poll options only) |
282
+ | `kids` | list\<int64\> | Direct child item IDs |
283
+ | `url` | string | External URL (stories with links) |
284
+ | `score` | int64 | Points (upvotes minus downvotes) |
285
+ | `title` | string | Title text (stories, jobs, polls) |
286
+ | `parts` | list\<int64\> | Poll option item IDs (polls only) |
287
+ | `descendants` | int64 | Total comment count in the discussion tree |
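
As a rough illustration, a stdlib-only check that a decoded item record matches these columns (the helper and the sample record are illustrative, not part of the dataset tooling):

```python
# Expected columns for one HN item, with coarse Python-side types.
# `object` accepts any representation (e.g. `time` may decode as a
# datetime or an epoch int depending on the reader).
SCHEMA = {
    "id": int, "deleted": bool, "type": str, "by": str,
    "time": object, "text": str, "dead": bool, "parent": int,
    "poll": int, "kids": list, "url": str, "score": int,
    "title": str, "parts": list, "descendants": int,
}

def validate_item(item: dict) -> list:
    """Return a list of problems; empty means the item matches the schema."""
    problems = []
    for key, value in item.items():
        if key not in SCHEMA:
            problems.append(f"unexpected column: {key}")
        elif value is not None and not isinstance(value, SCHEMA[key]):
            problems.append(
                f"{key}: expected {SCHEMA[key].__name__}, got {type(value).__name__}"
            )
    return problems

story = {"id": 1, "type": "story", "by": "pg", "title": "Y Combinator",
         "url": "http://ycombinator.com", "score": 61, "descendants": 15}
print(validate_item(story))  # → []
```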
288
+
289
+ ---
290
+
291
+ ## Data Architecture
292
+
293
+ ### File Layout
294
+
295
+ ```
296
+ open-index/hacker-news/
297
+ data/
298
+ 2006/2006-10.parquet # first HN month with data
299
+ 2006/2006-12.parquet
300
+ 2007/2007-01.parquet
301
+ ...
302
+ 2026/2026-02.parquet # most recent complete month
303
+ today/
304
+ 2026-03-14_00_00.parquet # 5-minute live blocks
305
+ 2026-03-14_00_05.parquet
306
+ ...
307
+ stats.csv # one row per committed month
308
+ stats_today.csv # one row per committed 5-min block
309
+ README.md
310
+ ```
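
Given this layout, the path of any file is predictable; tiny helpers like these (illustrative, not part of the dataset tooling) can build them:

```python
def month_path(year: int, month: int) -> str:
    """Repo-relative path of a monthly Parquet file."""
    return f"data/{year}/{year}-{month:02d}.parquet"

def block_path(date: str, hour: int, minute: int) -> str:
    """Path of a 5-minute live block (minute is a multiple of 5)."""
    return f"today/{date}_{hour:02d}_{minute:02d}.parquet"

print(month_path(2012, 4))             # → data/2012/2012-04.parquet
print(block_path("2026-03-14", 0, 5))  # → today/2026-03-14_00_05.parquet
```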
311
+
312
+ ### Update Pipeline
313
+
314
+ This dataset is maintained by an automated pipeline:
315
+
316
+ 1. **Historical backfill** — On first run, fetches every month from 2006-10 to the most recent complete month. Each month is committed as a single Parquet file. Months already committed (tracked in `stats.csv`) are skipped, making the process resumable.
317
+
318
+ 2. **Live polling** — After backfill, polls the source every 5 minutes for new items. Each batch is committed as `today/YYYY-MM-DD_HH_MM.parquet`.
319
+
320
+ 3. **Day rollover** — At midnight UTC, all of today's 5-minute blocks are merged into the monthly Parquet file using DuckDB, then the individual blocks are deleted.
321
+
322
+ Data is sourced from the [ClickHouse Playground](https://sql.clickhouse.com), which mirrors the official HN Firebase API.
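
The skip logic in step 1 reduces to diffing a month range against what `stats.csv` already records; a stdlib-only sketch (the function name and sample rows are illustrative):

```python
import csv
import io

def months_to_fetch(stats_csv: str, first=(2006, 10), last=(2012, 5)):
    """Return (year, month) pairs in [first, last] not yet in stats.csv."""
    done = {(int(r["year"]), int(r["month"]))
            for r in csv.DictReader(io.StringIO(stats_csv))}
    wanted = []
    y, m = first
    while (y, m) <= last:
        if (y, m) not in done:
            wanted.append((y, m))
        m += 1
        if m > 12:
            y, m = y + 1, 1
    return wanted

# Pretend only late 2006 is committed; the next run picks up 2007-01.
stats = "year,month,count\n2006,10,62\n2006,11,0\n2006,12,870\n"
print(months_to_fetch(stats, first=(2006, 10), last=(2007, 1)))
# → [(2007, 1)]
```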
323
+
324
+ ### Parquet Compression
325
+
326
+ All files use **Zstandard (zstd)** compression at level 22, the maximum, for minimal file size. Monthly files are sorted by `id` for efficient range scans.
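
The day-rollover compaction described above could be sketched in DuckDB roughly as follows (illustrative only; the `COMPRESSION_LEVEL` option needs a recent DuckDB release, and a real merge would also union in the month's existing file before replacing it):

```sql
-- Compact a day's 5-minute blocks into one zstd-22 file, sorted by id
COPY (
    SELECT * FROM read_parquet('today/2026-03-14_*.parquet')
    ORDER BY id
) TO 'merged/2026-03-14.parquet'
  (FORMAT PARQUET, COMPRESSION ZSTD, COMPRESSION_LEVEL 22);
```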
327
+
328
+ ---
329
+
330
+ ## Dataset Stats
331
+
332
+ Detailed per-month statistics are available in `stats.csv`:
333
+
334
+ | Column | Description |
335
+ |--------|-------------|
336
+ | `year`, `month` | Calendar month |
337
+ | `lowest_id`, `highest_id` | Item ID range in this file |
338
+ | `count` | Number of items |
339
+ | `dur_fetch_s` | Seconds to fetch from source |
340
+ | `dur_commit_s` | Seconds to commit to Hugging Face |
341
+ | `size_bytes` | Parquet file size |
342
+ | `committed_at` | ISO 8601 commit timestamp |
343
+
344
+ ```sql
345
+ -- Query the stats directly
346
+ SELECT * FROM read_csv_auto('hf://datasets/open-index/hacker-news/stats.csv')
347
+ ORDER BY year, month;
348
+ ```
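
Without DuckDB, the same file can be totaled with the standard library alone (a sketch using the `count` column described above):

```python
import csv

def total_items(path: str) -> int:
    """Sum the per-month item counts recorded in stats.csv."""
    with open(path, newline="") as f:
        return sum(int(row["count"]) for row in csv.DictReader(f))
```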
349
+
350
+ ---
351
 
352
  ## License
353
 
354
  Released under the **Open Data Commons Attribution License (ODC-By) v1.0**.
355
+
356
+ Original content is subject to the rights of its respective authors. Hacker News data is provided by [Y Combinator](https://www.ycombinator.com). This dataset is an independent mirror — it is not affiliated with or endorsed by Y Combinator.
357
+
358
+ ---
359
+
360
+ *Last updated: 2026-03-14 10:21 UTC*
data/2012/2012-04.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aecddff0f7ab68068430894a8d0e46b1956a4dcd780a3a763e96a8a368b934f9
3
+ size 33171230
stats.csv CHANGED
@@ -63,4 +63,6 @@ year,month,lowest_id,highest_id,count,dur_fetch_s,dur_commit_s,size_bytes,commit
63
  2011,12,3297497,3412253,114578,2,7,25620474,2026-03-14T10:12:20Z
64
  2012,1,3412254,3535903,123265,3,15,32611036,2026-03-14T10:12:30Z
65
  2012,2,3535904,3650342,114197,3,6,25435313,2026-03-14T10:12:49Z
66
- 2012,3,3650343,3782006,131425,2,0,33789898,2026-03-14T10:12:57Z
63
  2011,12,3297497,3412253,114578,2,7,25620474,2026-03-14T10:12:20Z
64
  2012,1,3412254,3535903,123265,3,15,32611036,2026-03-14T10:12:30Z
65
  2012,2,3535904,3650342,114197,3,6,25435313,2026-03-14T10:12:49Z
66
+ 2012,3,3650343,3782006,131425,2,12,33789898,2026-03-14T10:12:57Z
67
+ 2012,4,3782007,3912227,129931,3,0,33171230,2026-03-14T10:20:32Z
68
+ 2012,5,3912228,4051124,138685,8,0,35491708,2026-03-14T10:21:15Z