---
pretty_name: English Characters Image Dataset
license: mit
language: en
---
6
+
7
+
8
+ # English Characters Image Dataset (A-Z, a-z, 0-9)
9
+
10
+ This dataset contains high-resolution (128x128 pixels) grayscale images of English characters, including uppercase letters (A-Z), lowercase letters (a-z), and digits (0-9). Each character is available in 80,000 to 100,000 unique font styles, making it one of the most comprehensive resources for character-level image modeling.
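As a rough back-of-envelope on scale (assuming ~85,000 fonts per character, the midpoint of the quoted range — the exact count varies per character):

```python
import string

# 10 digits + 26 uppercase + 26 lowercase = 62 character classes
classes = string.digits + string.ascii_uppercase + string.ascii_lowercase
num_classes = len(classes)

fonts_per_class = 85_000  # assumed midpoint of the 80,000-100,000 range
total_images = num_classes * fonts_per_class  # roughly 5.27 million images
```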
11
+
12
+ ---
13
+ ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/666c3d6489e21df7d4a02805/phal2bDh0c1lAej9JmFo8.gif)
14
+
15
+ ## Dataset Description

The images in this dataset were generated by rendering over **85,000 unique fonts** collected from various sources on the internet. Each character (e.g., 'A', 'b', '5') has its own ZIP file; when extracted, it contains tens of thousands of stylized grayscale images of that character, all uniformly resized to **128x128 pixels**.

This dataset is ideal for projects involving:
- Training **conditional diffusion models** to generate text or characters.
- Building **OCR systems** or **character classification models**.
- Exploring **font-based generative models** or **representation learning** at the character level.

Unlike similar datasets that offer only 32x32 images with limited stylistic variation, this dataset provides significantly higher resolution and diversity, enabling more robust model training and experimentation.

---

## Motivations and Use Cases

The initial motivation for creating this dataset was to train a **diffusion-based generative model** capable of rendering high-quality text in a variety of styles: first train models on individual characters (the smallest building blocks of written language), then scale up to full text generation.

Another use case is **text recognition and classification**. Access to such a wide range of styles can help models generalize to the enormous number of font variations found in real-world images across the internet.

### Important Note

This dataset is **not fully cleaned**. Some fonts render symbolic or decorative glyphs instead of standard characters. These instances are relatively rare, but they may affect use cases that require strict visual fidelity to the character.

---

## Folder Structure

Each ZIP file (e.g., `A.zip`, `7.zip`, `Z.zip`) contains a folder of grayscale PNG images, all sized 128x128 pixels. After extraction:

```
A/
├── font_001_A.png
├── font_002_A.png
...
```
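Because the dataset is not fully cleaned, a quick structural sanity check after extraction can be useful. The sketch below (the helper names are illustrative, not part of the dataset) reads each image's dimensions straight from the PNG IHDR header using only the standard library, so no imaging dependency is required:

```python
import struct
from pathlib import Path

def png_size(path):
    """Return (width, height) of a PNG file, or None if it is not a valid PNG."""
    with open(path, "rb") as f:
        header = f.read(24)
    # A PNG starts with an 8-byte signature; width/height live at bytes 16-24
    # of the IHDR chunk, stored as big-endian 32-bit integers.
    if len(header) < 24 or header[:8] != b"\x89PNG\r\n\x1a\n":
        return None
    width, height = struct.unpack(">II", header[16:24])
    return width, height

def check_folder(folder, expected=(128, 128)):
    """Return the paths of any PNGs whose size differs from `expected`."""
    return [p for p in Path(folder).glob("*.png") if png_size(p) != expected]
```

For example, `check_folder("/content/extracted/A")` should return an empty list when every image in the folder is 128x128.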

---

## How to Use This Dataset

You can download and extract any character's ZIP file from the Hugging Face Hub using the `huggingface_hub` library.

### Installation (if needed)

```bash
pip install huggingface_hub
```

### Python Code to Download and Extract a Character Folder

```python
from huggingface_hub import hf_hub_download
import zipfile
import os

def download_and_extract(character: str, repo_id: str, dest_root: str = "/content/extracted"):
    """Download `<character>.zip` from the dataset repo and extract it to `dest_root/<character>`."""
    os.makedirs(dest_root, exist_ok=True)
    zip_filename = f"{character}.zip"
    zip_path = hf_hub_download(repo_id=repo_id, filename=zip_filename, repo_type="dataset")

    extract_path = os.path.join(dest_root, character)
    os.makedirs(extract_path, exist_ok=True)

    with zipfile.ZipFile(zip_path, "r") as zip_ref:
        zip_ref.extractall(extract_path)

    return extract_path
```

### Example Usage

```python
extracted_path = download_and_extract(
    character="A",
    repo_id="Mayank022/English_Characters_Images",
)
```

You can then use `extracted_path` as the data source for your training loop or dataset class.
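As a starting point for such a dataset class, here is a minimal standard-library sketch (the `build_index` helper is illustrative, not part of `huggingface_hub`) that flattens extracted character folders into `(image_path, label)` pairs, using each folder's name as the label:

```python
from pathlib import Path

def build_index(root="/content/extracted"):
    """Collect (image_path, label) pairs from extracted character folders."""
    samples = []
    for char_dir in sorted(Path(root).iterdir()):
        if not char_dir.is_dir():
            continue
        label = char_dir.name  # the folder name is the character itself
        for img in sorted(char_dir.glob("*.png")):
            samples.append((str(img), label))
    return samples
```

The resulting list can be wrapped directly by a framework-specific dataset class that loads each image lazily by path.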

---

### Download All Characters (0-9, A-Z)

```python
chars = [str(i) for i in range(10)] + [chr(c) for c in range(65, 91)]  # '0'-'9' + 'A'-'Z'
for ch in chars:
    download_and_extract(character=ch, repo_id="Mayank022/English_Characters_Images")
```
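The loop above covers digits and uppercase letters only. Per the dataset description, lowercase characters have their own ZIP files as well (e.g., 'b'). The sketch below extends the list to all 62 classes, assuming lowercase archives follow the same `<character>.zip` naming, and routes lowercase extractions to a separate root so that `A/` and `a/` cannot collide on case-insensitive filesystems (the macOS and Windows defaults):

```python
import string

# Full character list: 10 digits + 26 uppercase + 26 lowercase = 62 classes.
all_chars = list(string.digits + string.ascii_uppercase + string.ascii_lowercase)

def download_all(repo_id="Mayank022/English_Characters_Images"):
    """Download every character, reusing download_and_extract() from above."""
    for ch in all_chars:
        # Lowercase goes to its own root: on a case-insensitive filesystem,
        # extracting both "A" and "a" into one directory would collide.
        dest = "/content/extracted_lower" if ch.islower() else "/content/extracted"
        download_and_extract(character=ch, repo_id=repo_id, dest_root=dest)
```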

---

## Final Notes

This dataset is part of an ongoing effort to support high-quality text generation and recognition research with generative models such as diffusion. Contributions and feedback are welcome to help improve and expand the dataset.