basavyr committed on
Commit f2f19b4 · 1 Parent(s): fe2aefa

update docs

Files changed (2):
  1. AGENTS.md +34 -5
  2. README.md +36 -190
AGENTS.md CHANGED
@@ -32,8 +32,7 @@ python -m py_compile scripts/*.py
 
 ## 🎯 Code Style Guidelines
 
-### Python Version & Structure
-- **Python Version**: 3.12 (see `.python-version`)
 - **Code Layout**: All Python modules in `scripts/` directory
 - **Entry Points**: Each module can be run as `__main__`
 
@@ -127,13 +126,15 @@ def check_image_sizes(data_dir: str = "data", num_samples: int = 10) -> None:
 ### Dependencies
 - **Core**: torch, pandas, PIL (Pillow), pathlib
 - **Optional**: torchvision for transforms
 - Keep dependencies minimal and well-justified
 
 ## 📁 Core Files
 - **`scripts/pytorch_dataloader.py`** - Main dataloader with type safety & error handling
-- **`scripts/utils.py`** - Data inspection utilities (debug, sizes, PyTorch memory analysis)
 - **`scripts/classes.py`** - ImageNet-100 class definitions
 - **`data/`** - Parquet files (train/validation splits)
 
 ## 🔧 Development Workflow
 1. Test changes with `python scripts/pytorch_dataloader.py`
@@ -147,7 +148,35 @@ def check_image_sizes(data_dir: str = "data", num_samples: int = 10) -> None:
 - Dataset files are large and stored in Git LFS
 - Always validate parquet file operations
 - Image decoding can fail - handle exceptions
-- Memory usage is ~24MB total - efficient for most systems
 - Maintain compatibility with PyTorch training workflows
 - Preserve existing API contracts when modifying dataloader
-- Use type hints for better IDE support and debugging
 
 ## 🎯 Code Style Guidelines
 
+### Code Structure
 - **Code Layout**: All Python modules in `scripts/` directory
 - **Entry Points**: Each module can be run as `__main__`
 
 ### Dependencies
 - **Core**: torch, pandas, PIL (Pillow), pathlib
 - **Optional**: torchvision for transforms
+- **Future PT-only**: torch, PIL (Pillow), pathlib (pandas eliminated)
 - Keep dependencies minimal and well-justified
 
 ## 📁 Core Files
 - **`scripts/pytorch_dataloader.py`** - Main dataloader with type safety & error handling
+- **`scripts/utils.py`** - Data inspection utilities (debug, sizes, memory analysis)
 - **`scripts/classes.py`** - ImageNet-100 class definitions
 - **`data/`** - Parquet files (train/validation splits)
+- **Future**: `data/train.pt`, `data/validation.pt` (planned PT conversion)
 
 ## 🔧 Development Workflow
 1. Test changes with `python scripts/pytorch_dataloader.py`
 
 - Dataset files are large and stored in Git LFS
 - Always validate parquet file operations
 - Image decoding can fail - handle exceptions
+- Memory usage is efficient with lazy loading
 - Maintain compatibility with PyTorch training workflows
 - Preserve existing API contracts when modifying dataloader
+- Use type hints for better IDE support and debugging
+
+## 🔄 PT File Conversion (Planned)
+
+### Conversion Workflow
+```python
+# One-time conversion script (backlog)
+def convert_parquet_to_pt(data_dir, split):
+    dataset = ImageNet100Parquet(data_dir, split, transform)
+    all_images = torch.stack([dataset[i][0] for i in range(len(dataset))])
+    all_labels = torch.tensor([dataset[i][1] for i in range(len(dataset))])
+    torch.save({'images': all_images, 'labels': all_labels}, f'{data_dir}/{split}.pt')
+```
+
+### PT Dataset Class
+```python
+class ImageNet100PT(Dataset):
+    def __init__(self, data_dir: str, split: str = "train"):
+        data = torch.load(f"{data_dir}/{split}.pt")
+        self.images = data['images']
+        self.labels = data['labels']
+
+    def __len__(self): return len(self.images)
+    def __getitem__(self, idx): return self.images[idx], self.labels[idx]
+```
+
+### File Patterns
+- **Current**: `data/train-*.parquet`, `data/validation-*.parquet`
+- **Future**: `data/train.pt`, `data/validation.pt`
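The planned PT layout above can be sanity-checked with a small round-trip sketch. This is illustrative only: it saves a tiny stand-in bundle of zero tensors (not the real dataset) in the planned `{'images', 'labels'}` format, reloads it with the same `ImageNet100PT`-style logic, and batches it, so shapes can be verified without the parquet files. `PTDataset` is a hypothetical minimal stand-in, not a class from this repo.

```python
import os
import tempfile

import torch
from torch.utils.data import Dataset, DataLoader

class PTDataset(Dataset):
    """Minimal stand-in for the planned ImageNet100PT class."""
    def __init__(self, path: str):
        data = torch.load(path)
        self.images = data["images"]
        self.labels = data["labels"]

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.labels[idx]

# Save a tiny bundle in the planned {'images', 'labels'} layout.
path = os.path.join(tempfile.mkdtemp(), "train.pt")
torch.save({"images": torch.zeros(4, 3, 224, 224),
            "labels": torch.tensor([0, 1, 2, 3])}, path)

# Reload and batch it exactly as a training loop would.
ds = PTDataset(path)
x, y = next(iter(DataLoader(ds, batch_size=2)))
print(len(ds), tuple(x.shape), tuple(y.shape))  # 4 (2, 3, 224, 224) (2,)
```

The same check should pass unchanged against real converted files once the backlog item lands.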
README.md CHANGED
@@ -1,202 +1,45 @@
----
-dataset_info:
-  features:
-  - name: image
-    dtype: image
-  - name: label
-    dtype:
-      class_label:
-        names:
-          '0': bonnet, poke bonnet
-          '1': green mamba
-          '2': langur
-          '3': Doberman, Doberman pinscher
-          '4': gyromitra
-          '5': Saluki, gazelle hound
-          '6': vacuum, vacuum cleaner
-          '7': window screen
-          '8': cocktail shaker
-          '9': garden spider, Aranea diademata
-          '10': garter snake, grass snake
-          '11': carbonara
-          '12': pineapple, ananas
-          '13': computer keyboard, keypad
-          '14': tripod
-          '15': komondor
-          '16': >-
-            American lobster, Northern lobster, Maine lobster, Homarus
-            americanus
-          '17': bannister, banister, balustrade, balusters, handrail
-          '18': honeycomb
-          '19': tile roof
-          '20': papillon
-          '21': boathouse
-          '22': stinkhorn, carrion fungus
-          '23': jean, blue jean, denim
-          '24': Chihuahua
-          '25': Chesapeake Bay retriever
-          '26': robin, American robin, Turdus migratorius
-          '27': tub, vat
-          '28': Great Dane
-          '29': rotisserie
-          '30': bottlecap
-          '31': throne
-          '32': little blue heron, Egretta caerulea
-          '33': rock crab, Cancer irroratus
-          '34': Rottweiler
-          '35': lorikeet
-          '36': Gila monster, Heloderma suspectum
-          '37': head cabbage
-          '38': car wheel
-          '39': coyote, prairie wolf, brush wolf, Canis latrans
-          '40': moped
-          '41': milk can
-          '42': mixing bowl
-          '43': toy terrier
-          '44': chocolate sauce, chocolate syrup
-          '45': rocking chair, rocker
-          '46': wing
-          '47': park bench
-          '48': ambulance
-          '49': football helmet
-          '50': leafhopper
-          '51': cauliflower
-          '52': pirate, pirate ship
-          '53': purse
-          '54': hare
-          '55': lampshade, lamp shade
-          '56': fiddler crab
-          '57': standard poodle
-          '58': Shih-Tzu
-          '59': pedestal, plinth, footstall
-          '60': gibbon, Hylobates lar
-          '61': safety pin
-          '62': English foxhound
-          '63': chime, bell, gong
-          '64': >-
-            American Staffordshire terrier, Staffordshire terrier, American pit
-            bull terrier, pit bull terrier
-          '65': bassinet
-          '66': wild boar, boar, Sus scrofa
-          '67': theater curtain, theatre curtain
-          '68': dung beetle
-          '69': hognose snake, puff adder, sand viper
-          '70': Mexican hairless
-          '71': mortarboard
-          '72': Walker hound, Walker foxhound
-          '73': red fox, Vulpes vulpes
-          '74': modem
-          '75': slide rule, slipstick
-          '76': walking stick, walkingstick, stick insect
-          '77': cinema, movie theater, movie theatre, movie house, picture palace
-          '78': meerkat, mierkat
-          '79': kuvasz
-          '80': obelisk
-          '81': harmonica, mouth organ, harp, mouth harp
-          '82': sarong
-          '83': mousetrap
-          '84': hard disc, hard disk, fixed disk
-          '85': American coot, marsh hen, mud hen, water hen, Fulica americana
-          '86': reel
-          '87': pickup, pickup truck
-          '88': iron, smoothing iron
-          '89': tabby, tabby cat
-          '90': ski mask
-          '91': vizsla, Hungarian pointer
-          '92': laptop, laptop computer
-          '93': stretcher
-          '94': Dutch oven
-          '95': African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus
-          '96': boxer
-          '97': gasmask, respirator, gas helmet
-          '98': goose
-          '99': borzoi, Russian wolfhound
-  splits:
-  - name: train
-    num_bytes: 8091813320.875
-    num_examples: 126689
-  - name: validation
-    num_bytes: 314447246
-    num_examples: 5000
-  download_size: 8406986315
-  dataset_size: 8406260566.875
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
-  - split: validation
-    path: data/validation-*
-task_categories:
-- image-classification
-size_categories:
-- 100K<n<1M
----
-
 # ImageNet-100 PyTorch Dataloader
 
-A streamlined PyTorch implementation for loading ImageNet-100 dataset from parquet files. This repository provides efficient dataloaders for both training and validation, perfect for computer vision tasks.
 
 ## 🚀 Quick Start
 
-### Basic Usage
-
 ```python
 from scripts.pytorch_dataloader import ImageNet100Parquet
 from torch.utils.data import DataLoader
 from torchvision import transforms
 
-# Define transforms (resize to 224x224 for most models)
 transform = transforms.Compose([
     transforms.Resize((224, 224)),
     transforms.ToTensor(),
 ])
 
-# Create datasets
 train_dataset = ImageNet100Parquet("data", "train", transform)
-test_dataset = ImageNet100Parquet("data", "validation", transform)
 
-# Create dataloaders
 train_loader = DataLoader(train_dataset, batch_size=8, shuffle=True)
-test_loader = DataLoader(test_dataset, batch_size=8, shuffle=False)
-
-# Use in your training loop
-for x, y_true in train_loader:
-    # x.shape: [batch_size, 3, 224, 224]
-    # y_true.shape: [batch_size]
-    pass
 
-for x, y_true in test_loader:
-    # Same structure for validation
     pass
 ```
 
 ## 📊 Dataset Details
 
 - **Classes**: 100 ImageNet classes (balanced)
-- **Training samples**: 126,689 images
-- **Validation samples**: 5,000 images
-- **Original image sizes**: Variable (mostly ~160px on shorter side)
-- **Standard output**: Resized to 224x224 (configurable)
 
 ## 🛠️ Utilities
 
 ### Data Inspection
 
-Use the built-in utilities to understand your data structure:
-
 ```bash
-# Run all utilities
-python scripts/utils.py
-
-# Debug data structure only
-python scripts/utils.py debug
-
-# Check image sizes only
-python scripts.utils.py sizes
-
-# Analyze memory usage (PyTorch tensor-based)
-python scripts.utils.py memory
 ```
 
 ### Programmatic Usage
@@ -204,49 +47,50 @@ python scripts.utils.py memory
 ```python
 from scripts.utils import debug_structure, check_image_sizes, analyze_memory_usage
 
-# Inspect parquet file structure
-debug_structure()
-
-# Analyze image dimensions and PyTorch tensor memory usage
-check_image_sizes(num_samples=20)
-analyze_memory_usage(batch_size=32, num_batches=5)
 ```
 
 ## 🎯 Key Features
 
 - **Efficient Loading**: Direct parquet file reading with proper image decoding
-- **Memory Optimized**: ~24MB total memory usage with lazy loading during iteration
 - **Robust Error Handling**: Comprehensive validation and error messages
 - **Type Safe**: Full type hints for better IDE support and debugging
 - **Flexible Transforms**: Easy to customize preprocessing pipeline
-- **Data Inspection**: Built-in utilities for understanding dataset structure and memory usage
 - **PyTorch Native**: Seamless integration with PyTorch training workflows
 
 ## 📝 Data Format
 
-Images are stored in parquet files with the following structure:
 ```python
 {
-    'image': {
-        'bytes': b'\x89PNG...',  # Raw image bytes
-        'path': None
-    },
     'label': 0  # Integer class label (0-99)
 }
 ```
 
 ## 🔧 Configuration
 
-You can easily modify the dataloader for different needs:
-
 ```python
 # Different image sizes
 transform = transforms.Compose([
-    transforms.Resize((256, 256)),  # For models expecting 256x256
    transforms.ToTensor(),
 ])
 
-# Custom preprocessing
 transform = transforms.Compose([
     transforms.Resize((224, 224)),
     transforms.RandomHorizontalFlip(),
@@ -259,16 +103,18 @@ transform = transforms.Compose([
 
 ## 📚 Dataset Information
 
 - **Homepage**: https://github.com/HobbitLong/CMC
 - **Paper**: [Contrastive multiview coding](https://arxiv.org/abs/1906.05849)
-- **Based on**: Original ImageNet-1k with 100 randomly selected classes
 
 ## 📄 License
 
-This dataset follows the original ImageNet license terms. Use only for non-commercial research and educational purposes.
 
 ## 🙏 Acknowledgments
 
-- Original ImageNet team for the dataset
-- 🤗 Transformers for the parquet format reference
-- [CMC paper](https://arxiv.org/abs/1906.05849) for the ImageNet-100 subset definition
 # ImageNet-100 PyTorch Dataloader
 
+Streamlined PyTorch implementation for ImageNet-100 from parquet files. Efficient dataloaders for training and validation.
 
 ## 🚀 Quick Start
 
 ```python
 from scripts.pytorch_dataloader import ImageNet100Parquet
 from torch.utils.data import DataLoader
 from torchvision import transforms
 
 transform = transforms.Compose([
     transforms.Resize((224, 224)),
     transforms.ToTensor(),
 ])
 
 train_dataset = ImageNet100Parquet("data", "train", transform)
+val_dataset = ImageNet100Parquet("data", "validation", transform)
 
 train_loader = DataLoader(train_dataset, batch_size=8, shuffle=True)
+val_loader = DataLoader(val_dataset, batch_size=8, shuffle=False)
 
+for x, y in train_loader:  # x: [batch, 3, 224, 224], y: [batch]
     pass
 ```
 
 ## 📊 Dataset Details
 
 - **Classes**: 100 ImageNet classes (balanced)
+- **Training**: 126,689 images
+- **Validation**: 5,000 images
+- **Image sizes**: Variable (standard output: 224×224)
 
 ## 🛠️ Utilities
 
 ### Data Inspection
 
 ```bash
+python scripts/utils.py         # Run all utilities
+python scripts/utils.py debug   # Debug structure only
+python scripts/utils.py sizes   # Check image sizes only
+python scripts/utils.py memory  # Analyze memory usage
 ```
 
 ### Programmatic Usage
 
 ```python
 from scripts.utils import debug_structure, check_image_sizes, analyze_memory_usage
 
+debug_structure()                    # Inspect parquet structure
+check_image_sizes(num_samples=20)    # Analyze image dimensions
+analyze_memory_usage(batch_size=32)  # Memory usage analysis
 ```
 
 ## 🎯 Key Features
 
 - **Efficient Loading**: Direct parquet file reading with proper image decoding
+- **Memory Optimized**: Lazy loading with efficient tensor memory usage
 - **Robust Error Handling**: Comprehensive validation and error messages
 - **Type Safe**: Full type hints for better IDE support and debugging
 - **Flexible Transforms**: Easy to customize preprocessing pipeline
+- **Data Inspection**: Built-in utilities for dataset structure analysis
 - **PyTorch Native**: Seamless integration with PyTorch training workflows
 
+## 🔄 Future PT File Support
+
+Planned conversion to eliminate pandas dependency:
+```python
+# Future usage (backlog item)
+train_dataset = ImageNet100PT("data", "train")  # Direct torch.load()
+val_dataset = ImageNet100PT("data", "validation")
+```
+
 ## 📝 Data Format
 
+Parquet structure:
 ```python
 {
+    'image': {'bytes': b'\x89PNG...', 'path': None},
     'label': 0  # Integer class label (0-99)
 }
 ```
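A row in this layout can be decoded with PIL straight from the raw bytes. The snippet below is an illustrative sketch: it fabricates a 1×1 PNG in memory to stand in for a real parquet row (real rows come from the data files), then decodes it the way a dataloader would.

```python
import io

from PIL import Image

# Build a stand-in row (a real row comes from the parquet files).
buf = io.BytesIO()
Image.new("RGB", (1, 1), color=(255, 0, 0)).save(buf, format="PNG")
row = {"image": {"bytes": buf.getvalue(), "path": None}, "label": 0}

# PNG payloads start with the \x89PNG magic shown in the structure above.
assert row["image"]["bytes"].startswith(b"\x89PNG")

# Decode: wrap the raw bytes in a buffer and open with PIL.
img = Image.open(io.BytesIO(row["image"]["bytes"])).convert("RGB")
print(img.size, row["label"])  # (1, 1) 0
```

Any transform pipeline (e.g. the ones under Configuration below) can then be applied to `img` before batching.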
 
 ## 🔧 Configuration
 
 ```python
 # Different image sizes
 transform = transforms.Compose([
+    transforms.Resize((256, 256)),
     transforms.ToTensor(),
 ])
 
+# Custom preprocessing
 transform = transforms.Compose([
     transforms.Resize((224, 224)),
     transforms.RandomHorizontalFlip(),
 
 ## 📚 Dataset Information
 
+- **Classes**: 100 ImageNet classes (balanced)
+- **Training**: 126,689 images
+- **Validation**: 5,000 images
 - **Homepage**: https://github.com/HobbitLong/CMC
 - **Paper**: [Contrastive multiview coding](https://arxiv.org/abs/1906.05849)
 
 ## 📄 License
 
+Original ImageNet license terms. Non-commercial research and educational use only.
 
 ## 🙏 Acknowledgments
 
+- Original ImageNet team
+- 🤗 Transformers (parquet format reference)
+- [CMC paper](https://arxiv.org/abs/1906.05849) (ImageNet-100 subset)