emanuelevivoli committed
Commit c5114ab · verified · 1 Parent(s): 09f4913

Add files using upload-large-folder tool
README.md ADDED
@@ -0,0 +1,195 @@
+ ---
+ license: cc0-1.0
+ task_categories:
+ - visual-question-answering
+ - image-to-text
+ tags:
+ - comics
+ - metadata
+ - book-level
+ - tiny-dataset
+ - testing
+ size_categories:
+ - n<1K
+ ---
+
+ # Comic Books Tiny Dataset v0 - Books (Testing)
+
+ **A small dataset of book-level metadata for rapid development and testing.**
+
+ ⚠️ **This is a TINY dataset** for testing only. For production, use `comix_v0_books`.
+
+ ## Dataset Description
+
+ - **Total Books**: 113 (train split; see `_info.json`)
+ - **Format**: WebDataset (tar files)
+ - **Content**: Metadata only (NO images)
+ - **License**: Public Domain (CC0-1.0)
+ - **Purpose**: Fast testing and development
+
+ ## What's Included
+
+ Each book has:
+ - `{book_id}.json` - Book metadata with page references
+
+ ## Purpose
+
+ This dataset provides book-level metadata to group pages from **comix_v0_tiny_pages**.
+
+ **Workflow**:
+ 1. Download `comix_v0_tiny_pages` (with images)
+ 2. Download `comix_v0_tiny_books` (metadata only)
+ 3. Use a WebDataset pipeline to group pages by book (see the sketch below)
+
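+ Step 3 can run directly against the downloaded shards. Below is a minimal sketch, assuming the shards from this repo have been fetched locally and the third-party `webdataset` package is installed; the shard filename comes from `_info.json` in this repo.
+
+ ```python
+ import json
+
+ import webdataset as wds  # third-party: pip install webdataset
+
+ # Shard name as listed in _info.json; adjust the path to wherever you downloaded it.
+ ds = wds.WebDataset("comix-books-train-0000.tar")
+
+ for sample in ds:
+     # Each sample bundles the files sharing one key, here "{book_id}.json" (as bytes).
+     book_data = json.loads(sample["json"])
+     print(book_data["book_id"], "->", len(book_data["pages"]), "page refs")
+ ```
+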
+ ## Quick Start
+
+ ```python
+ from datasets import load_dataset
+ import json
+
+ # Load tiny books dataset
+ books = load_dataset(
+     "emanuelevivoli/comix_v0_tiny_books",
+     split="train",
+     streaming=True
+ )
+
+ # Iterate through books
+ for book in books:
+     book_data = json.loads(book["json"])
+
+     book_id = book_data["book_id"]
+     total_pages = book_data["book_metadata"]["total_pages"]
+
+     # Get page references
+     for page_ref in book_data["pages"]:
+         page_id = page_ref["page_id"]
+         dataset = page_ref["dataset"]  # "comix_v0_tiny_pages"
+         tar_file = page_ref["tar_file"]
+         files = page_ref["files"]  # json, jpg, seg.npz (if available)
+
+     print(f"Book {book_id}: {total_pages} pages")
+ ```
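+
+ Note: `streaming=True` iterates samples shard-by-shard without materializing the whole dataset on disk first. Depending on your `datasets` version, the `"json"` field may already arrive decoded as a dict, in which case the `json.loads` call can be dropped.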
+
+ ## Dataset Structure
+
+ ### Book JSON Schema (v0)
+
+ ```json
+ {
+   "book_id": "c00004",
+   "split": "train",
+   "book_metadata": {
+     "series_title": null,
+     "issue_number": null,
+     "publication_date": null,
+     "publisher": null,
+     "total_pages": 68,
+     "license_status": "Public Domain",
+     "digital_source": "Digital Comic Museum"
+   },
+   "pages": [
+     {
+       "page_number": 0,
+       "page_id": "c00004_p000",
+       "dataset": "comix_v0_tiny_pages",
+       "tar_file": "comix-v0-tiny-pages-00000.tar",
+       "files": {
+         "json": "c00004_p000.json",
+         "jpg": "c00004_p000.jpg",
+         "seg.npz": "c00004_p000.seg.npz"
+       }
+     },
+     ...
+   ],
+   "segments": [],   // v1: story segments (to be added)
+   "characters": []  // v1: character bank (to be added)
+ }
+ ```
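+
+ Since each page reference names its shard, the schema makes it easy to work out which `comix_v0_tiny_pages` shards a book spans before downloading anything. A hedged sketch, continuing from `book_data` as parsed in the Quick Start:
+
+ ```python
+ from collections import defaultdict
+
+ # Group a book's page references by the pages-dataset shard that holds them.
+ pages_by_tar = defaultdict(list)
+ for page_ref in book_data["pages"]:
+     pages_by_tar[page_ref["tar_file"]].append(page_ref["page_id"])
+
+ for tar_file, page_ids in sorted(pages_by_tar.items()):
+     print(f"{tar_file}: {len(page_ids)} pages")
+ ```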
+
+ ## Data Splits
+
+ | Split | Books |
+ |-------|-------|
+ | Train | 113 |
+ | Validation | not included in v0 |
+ | Test | not included in v0 |
+ | **Total** | **113** |
+
+ Counts come from `_info.json`, which defines only a `train` split (two shards of 100 and 13 samples).
+
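+ A minimal sketch for reading those counts programmatically, assuming `huggingface_hub` is installed:
+
+ ```python
+ import json
+
+ from huggingface_hub import hf_hub_download
+
+ # _info.json lives at the root of this dataset repo.
+ info_path = hf_hub_download(
+     repo_id="emanuelevivoli/comix_v0_tiny_books",
+     filename="_info.json",
+     repo_type="dataset",
+ )
+ with open(info_path) as f:
+     info = json.load(f)
+
+ for split_name, split_meta in info["splits"].items():
+     print(split_name, split_meta["num_samples"])  # train 113
+ ```
+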
+ ## Use Cases
+
+ ✅ **Testing**: Rapid iteration on book-level structure
+ ✅ **Development**: Quick validation of page grouping
+ ✅ **Debugging**: Small dataset for troubleshooting
+ ✅ **Prototyping**: Fast experimentation
+
+ ❌ **NOT for**: Training production models
+
+ ## Version 0 (Primordial)
+
+ This is a **primordial v0** with:
+ - ✅ Basic structure
+ - ✅ Page references
+ - ⏳ Empty `book_metadata` fields (to be populated in v1)
+ - ⏳ Empty `segments` (to be added in v1)
+ - ⏳ Empty `characters` (to be added in v1)
+
+ **Future (v1+)** will include:
+ - Bibliographic information (series, issue, publisher)
+ - Story segments with summaries
+ - Character banks with appearances
+
+ ## Companion Dataset
+
+ **comix_v0_tiny_pages**: Individual pages with images
+
+ ## Full Dataset
+
+ For production use: **comix_v0_books** (~19K books)
+
+ ## Example: Group Pages by Book
+
+ ```python
+ from datasets import load_dataset
+ import json
+
+ # Load both datasets (non-streaming, so books can be indexed)
+ pages = load_dataset("emanuelevivoli/comix_v0_tiny_pages", split="train")
+ books = load_dataset("emanuelevivoli/comix_v0_tiny_books", split="train")
+
+ # Create a page index keyed by page_id
+ page_index = {json.loads(p["json"])["page_id"]: p for p in pages}
+
+ # Get a book and its pages
+ book = books[0]
+ book_data = json.loads(book["json"])
+
+ book_pages = []
+ for page_ref in book_data["pages"]:
+     page_id = page_ref["page_id"]
+     if page_id in page_index:
+         book_pages.append(page_index[page_id])
+
+ print(f"Book {book_data['book_id']}: {len(book_pages)} pages loaded")
+ ```
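+
+ Building the dict index up front makes each page lookup O(1), at the cost of loading all page samples into memory; for this tiny dataset that cost is negligible.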
+
+ ## Citation
+
+ ```bibtex
+ @dataset{comix_v0_tiny_books_2025,
+   title={Comic Books Tiny Dataset v0 - Books},
+   author={Emanuele Vivoli},
+   year={2025},
+   publisher={Hugging Face},
+   note={Testing dataset - metadata only},
+   url={https://huggingface.co/datasets/emanuelevivoli/comix_v0_tiny_books}
+ }
+ ```
+
+ ## License
+
+ Public Domain (CC0-1.0) - Digital Comic Museum
+
+ ## Updates
+
+ - **v0 (2025-11-18)**: Initial release
+   - 113 books (train split)
+   - Primordial version with placeholder fields
+   - For testing only
_info.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "name": "comix_v0_tiny_books",
+   "splits": {
+     "train": {
+       "name": "train",
+       "filenames": [
+         "comix-books-train-0000.tar",
+         "comix-books-train-0001.tar"
+       ],
+       "shard_lengths": [
+         100,
+         13
+       ],
+       "num_samples": 113
+     }
+   }
+ }
comix-books-train-0000.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f2ad3ff79f75f96efb9bd239f91e0460aca60a8d61fee0e61d94450a7d225e08
+ size 1105920
comix-books-train-0001.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76b4bfb3e333630b0e18e67f76867dba63fd621ca268ad8db361148330a2497e
+ size 153600