davanstrien HF Staff Claude Opus 4.5 committed on
Commit
1160c58
·
1 Parent(s): e9aa104

Add image classifier training tutorial and template


- train-image-classifier.py: Fine-tune ViT for image classification
- Works as interactive tutorial AND batch script
- Includes HF Jobs documentation with GPU flavor table
- Uses beans dataset by default (fast to train)

- _template.py: Minimal template for community adoption

- README.md: Added new scripts, recipes, best practices

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Files changed (3)
  1. README.md +92 -9
  2. _template.py +138 -0
  3. train-image-classifier.py +529 -0
README.md CHANGED
@@ -23,17 +23,22 @@ This makes them perfect for tutorials and educational content where you want use
 | Script | Description |
 |--------|-------------|
 | `getting-started.py` | Introduction to UV scripts and HF datasets |
+| `train-image-classifier.py` | Fine-tune a Vision Transformer on image classification |
+| `_template.py` | Minimal template for creating your own notebooks |

 ## Usage

 ### Run as a script

 ```bash
-# Run directly from Hugging Face
-uv run https://huggingface.co/datasets/uv-scripts/marimo/raw/main/getting-started.py --help
-
-# Load a dataset and show info
+# Get dataset info
 uv run https://huggingface.co/datasets/uv-scripts/marimo/raw/main/getting-started.py --dataset squad
+
+# Train an image classifier
+uv run https://huggingface.co/datasets/uv-scripts/marimo/raw/main/train-image-classifier.py \
+  --dataset beans \
+  --epochs 3 \
+  --output-repo your-username/beans-vit
 ```

 ### Run interactively
@@ -45,15 +50,16 @@ cd marimo

 # Open in marimo editor (--sandbox auto-installs dependencies)
 uvx marimo edit --sandbox getting-started.py
+uvx marimo edit --sandbox train-image-classifier.py
 ```

-### Run on HF Jobs
+### Run on HF Jobs (GPU)

 ```bash
-hf jobs uv run --flavor cpu-basic \
-  -e HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \
-  https://huggingface.co/datasets/uv-scripts/marimo/raw/main/getting-started.py \
-  --dataset squad
+# Train image classifier with GPU
+hf jobs uv run --flavor l4x1 --secrets HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/marimo/raw/main/train-image-classifier.py \
+  -- --dataset beans --output-repo your-username/beans-vit --epochs 5 --push-to-hub
 ```

 ## Why Marimo?
@@ -63,6 +69,83 @@
 - **Self-contained**: Inline dependencies via PEP 723 metadata
 - **Dual-mode**: Same file works as notebook and script

+## Create Your Own Marimo UV Script
+
+Use `_template.py` as a starting point:
+
+```bash
+# Clone and copy the template
+git clone https://huggingface.co/datasets/uv-scripts/marimo
+cp marimo/_template.py my-notebook.py
+
+# Edit interactively
+uvx marimo edit --sandbox my-notebook.py
+
+# Test as script
+uv run my-notebook.py --help
+```
+
+## Recipes
+
+### Add explanation (notebook only)
+
+```python
+mo.md("""
+## This is a heading
+
+This text explains what's happening. Only shows in interactive mode.
+""")
+```
+
+### Show output in both modes
+
+```python
+# print() shows in terminal (script) AND cell output (notebook)
+print(f"Loaded {len(data)} items")
+```
+
+### Interactive control with CLI fallback
+
+```python
+# Parse CLI args first
+parser = argparse.ArgumentParser()
+parser.add_argument("--count", type=int, default=10)
+args, _ = parser.parse_known_args()
+
+# Create UI control with CLI default
+slider = mo.ui.slider(1, 100, value=args.count, label="Count")
+
+# Use it - works in both modes
+count = slider.value  # UI value in notebook, CLI value in script
+```
+
+### Show visuals (notebook only)
+
+```python
+# mo.md() with images, mo.ui.table(), etc. only display in notebook
+mo.ui.table(dataframe)
+
+# For script mode, also print summary
+print(f"DataFrame has {len(df)} rows")
+```
+
+### Conditional notebook-only code
+
+```python
+# Check if running interactively
+if hasattr(mo, "running_in_notebook") and mo.running_in_notebook():
+    # Heavy visualization only in notebook
+    show_complex_plot(data)
+```
+
+## Best Practices
+
+1. **Always include `print()` for important output** - It works in both modes
+2. **Use argparse for all configuration** - CLI args work everywhere
+3. **Add `mo.md()` explanations between steps** - Makes tutorials readable
+4. **Test in script mode first** - Ensure it works without interactivity
+5. **Keep dependencies minimal** - Add `marimo` plus only what you need
+
 ## Learn More

 - [Marimo documentation](https://docs.marimo.io/)
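
The "Interactive control with CLI fallback" recipe works because `argparse.parse_known_args()` tolerates argv it does not recognize (such as marimo's own flags) instead of erroring out. A minimal stdlib-only sketch of that behavior — no marimo required; the function name and argv values are illustrative, not from the repo:

```python
import argparse


def resolve_count(argv):
    """Parse --count from argv, collecting any flags marimo might inject as 'unknown'."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--count", type=int, default=10)
    args, unknown = parser.parse_known_args(argv)
    return args.count, unknown


# Script mode: the user passes --count explicitly
print(resolve_count(["--count", "25"]))  # (25, [])

# Notebook mode: marimo's own argv lands in 'unknown', the default survives
print(resolve_count(["--sandbox", "nb.py"]))  # (10, ['--sandbox', 'nb.py'])
```

Plain `parse_args()` would abort on `--sandbox`, which is why the recipe uses `parse_known_args()`.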
_template.py ADDED
@@ -0,0 +1,138 @@

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "marimo",
#     # Add your dependencies here, e.g.:
#     # "datasets",
#     # "transformers",
#     # "torch",
# ]
# ///
"""
Your Notebook Title

Brief description of what this notebook does.

Two ways to run:
- Tutorial: uvx marimo edit --sandbox your-notebook.py
- Script: uv run your-notebook.py --your-args
"""

import marimo

app = marimo.App(width="medium")


# =============================================================================
# Cell 1: Import marimo
# This cell is required - it imports marimo for use in other cells
# =============================================================================
@app.cell
def _():
    import marimo as mo

    return (mo,)


# =============================================================================
# Cell 2: Introduction (notebook mode only)
# Use mo.md() for explanations that only show in interactive mode
# =============================================================================
@app.cell
def _(mo):
    mo.md(
        """
        # Your Notebook Title

        Explain what this notebook does and why it's useful.

        **Two ways to run:**
        - **Tutorial**: `uvx marimo edit --sandbox your-notebook.py`
        - **Script**: `uv run your-notebook.py --your-args`
        """
    )
    return


# =============================================================================
# Cell 3: Configuration
# Pattern: argparse for CLI + mo.ui for interactive controls
# Interactive controls fall back to CLI defaults
# =============================================================================
@app.cell
def _(mo):
    import argparse

    # Parse CLI args (works in both modes)
    parser = argparse.ArgumentParser(description="Your script description")
    parser.add_argument("--input", default="default_value", help="Input parameter")
    parser.add_argument("--count", type=int, default=10, help="Number of items")
    args, _ = parser.parse_known_args()

    # Interactive controls (shown in notebook mode)
    # These use CLI args as defaults, so script mode still works
    input_control = mo.ui.text(value=args.input, label="Input")
    count_control = mo.ui.slider(1, 100, value=args.count, label="Count")

    mo.hstack([input_control, count_control])
    return argparse, args, count_control, input_control, parser


# =============================================================================
# Cell 4: Resolve values
# Use interactive values if set, otherwise fall back to CLI args
# print() shows output in BOTH modes (script stdout + notebook console)
# =============================================================================
@app.cell
def _(args, count_control, input_control):
    # Resolve values (interactive takes precedence)
    input_value = input_control.value or args.input
    count_value = count_control.value or args.count

    # print() works in both modes - shows in terminal for scripts,
    # shows in cell output for notebooks
    print(f"Input: {input_value}")
    print(f"Count: {count_value}")
    return count_value, input_value


# =============================================================================
# Cell 5: Your main logic
# This is where you do the actual work
# =============================================================================
@app.cell
def _(count_value, input_value, mo):
    mo.md(
        """
        ## Processing

        Explain what this step does...
        """
    )

    # Your processing code here
    results = [f"{input_value}_{i}" for i in range(count_value)]
    print(f"Generated {len(results)} results")
    return (results,)


# =============================================================================
# Cell 6: Display results
# Use mo.md() or mo.ui.table() for rich display in notebook mode
# Use print() for output that shows in both modes
# =============================================================================
@app.cell
def _(mo, results):
    # Show in notebook mode (rich display)
    mo.md("### Results\n\n- " + "\n- ".join(results[:5]) + "\n- ...")

    # Also print for script mode
    print(f"First 5 results: {results[:5]}")
    return


# =============================================================================
# Entry point - required for script mode
# =============================================================================
if __name__ == "__main__":
    app.run()
```
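
The template's Cell 4 resolves each setting with `control.value or args.x` — the interactive value wins when set, otherwise the CLI/default value is used. A tiny sketch of that precedence and its falsy-value caveat (the function name is illustrative, not part of the template):

```python
def resolve(ui_value, cli_value):
    """Interactive value wins when set; otherwise fall back to the CLI/default value.

    Caveat: `or` treats 0 and "" as unset. That is acceptable in the template
    because its slider minimum is 1 and its text default is non-empty.
    """
    return ui_value or cli_value


print(resolve("typed-in", "default_value"))  # typed-in
print(resolve("", "default_value"))          # default_value (empty text box falls back)
print(resolve(42, 10))                       # 42
```

If a control legitimately needs 0 or "" as a value, test against `None` instead of relying on truthiness.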
train-image-classifier.py ADDED
@@ -0,0 +1,529 @@

````python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "marimo",
#     "datasets",
#     "transformers",
#     "torch",
#     "torchvision",
#     "huggingface-hub",
#     "evaluate",
#     "accelerate",
#     "scikit-learn",
# ]
# ///
"""
Train an Image Classifier

This marimo notebook fine-tunes a Vision Transformer (ViT) for image classification.

Two ways to run:
- Tutorial: uvx marimo edit --sandbox train-image-classifier.py
- Script: uv run train-image-classifier.py --dataset beans --output-repo user/my-model

On HF Jobs (GPU):
    hf jobs uv run --flavor l4x1 --secrets HF_TOKEN \
        https://huggingface.co/datasets/uv-scripts/marimo/raw/main/train-image-classifier.py \
        -- --dataset beans --output-repo user/beans-vit --epochs 5
"""

import marimo

__generated_with = "0.19.6"
app = marimo.App(width="medium")


@app.cell
def _():
    import marimo as mo
    return (mo,)


@app.cell
def _(mo):
    mo.md("""
    # Train an Image Classifier

    This notebook fine-tunes a Vision Transformer (ViT) for image classification.

    **Two ways to run:**
    - **Tutorial**: `uvx marimo edit --sandbox train-image-classifier.py`
    - **Script**: `uv run train-image-classifier.py --dataset beans --output-repo user/my-model`

    The same code powers both experiences!
    """)
    return


@app.cell
def _(mo):
    mo.md("""
    ## Running on HF Jobs (GPU)

    This notebook can run on [Hugging Face Jobs](https://huggingface.co/docs/hub/jobs) for GPU training.
    No local GPU needed - just run:

    ```bash
    hf jobs uv run --flavor l4x1 --secrets HF_TOKEN \\
        https://huggingface.co/datasets/uv-scripts/marimo/raw/main/train-image-classifier.py \\
        -- --dataset beans --output-repo your-username/beans-vit --epochs 5 --push-to-hub
    ```

    **GPU Flavors:**

    | Flavor | GPU | VRAM | Best for |
    |--------|-----|------|----------|
    | `l4x1` | L4 | 24GB | Most fine-tuning tasks |
    | `a10gx1` | A10G | 24GB | Slightly faster than L4 |
    | `a100x1` | A100 | 40GB | Large models, big batches |

    **Key flags:**
    - `--secrets HF_TOKEN` - Passes your HF token for pushing models
    - `--` - Separates `hf jobs` args from script args
    - `--push-to-hub` - Actually pushes the model (otherwise it is just saved locally)

    **Tip:** Start with the `beans` dataset and 1-3 epochs to test, then scale up!
    """)
    return


@app.cell
def _(mo):
    mo.md("""
    ## Step 1: Configuration

    Set up training parameters. In interactive mode, use the controls below.
    In script mode, pass command-line arguments.
    """)
    return


@app.cell
def _(mo):
    import argparse

    # Parse CLI args (works in both modes)
    parser = argparse.ArgumentParser(description="Fine-tune ViT for image classification")
    parser.add_argument(
        "--dataset",
        default="beans",
        help="HF dataset name (must be an image classification dataset)",
    )
    parser.add_argument(
        "--model",
        default="google/vit-base-patch16-224-in21k",
        help="Pretrained model to fine-tune",
    )
    parser.add_argument(
        "--output-repo",
        default=None,
        help="Where to push the trained model (e.g., user/my-model)",
    )
    parser.add_argument("--epochs", type=int, default=3, help="Number of training epochs")
    parser.add_argument("--batch-size", type=int, default=16, help="Batch size")
    parser.add_argument("--lr", type=float, default=5e-5, help="Learning rate")
    parser.add_argument(
        "--push-to-hub",
        action="store_true",
        default=False,
        help="Push model to Hub after training",
    )
    args, _ = parser.parse_known_args()

    # Interactive controls (shown in notebook mode)
    dataset_input = mo.ui.text(value=args.dataset, label="Dataset")
    model_input = mo.ui.text(value=args.model, label="Model")
    output_input = mo.ui.text(value=args.output_repo or "", label="Output Repo")
    epochs_input = mo.ui.slider(1, 20, value=args.epochs, label="Epochs")
    batch_size_input = mo.ui.dropdown(
        options=["8", "16", "32", "64"], value=str(args.batch_size), label="Batch Size"
    )
    lr_input = mo.ui.dropdown(
        options=["1e-5", "2e-5", "5e-5", "1e-4"],
        value=f"{args.lr:.0e}".replace("e-0", "e-"),
        label="Learning Rate",
    )

    mo.vstack(
        [
            mo.hstack([dataset_input, model_input]),
            mo.hstack([output_input]),
            mo.hstack([epochs_input, batch_size_input, lr_input]),
        ]
    )
    return (
        args,
        batch_size_input,
        dataset_input,
        epochs_input,
        lr_input,
        model_input,
        output_input,
    )


@app.cell
def _(
    args,
    batch_size_input,
    dataset_input,
    epochs_input,
    lr_input,
    model_input,
    output_input,
):
    # Resolve values (interactive takes precedence)
    dataset_name = dataset_input.value or args.dataset
    model_name = model_input.value or args.model
    output_repo = output_input.value or args.output_repo
    num_epochs = epochs_input.value or args.epochs
    batch_size = int(batch_size_input.value) if batch_size_input.value else args.batch_size
    learning_rate = float(lr_input.value) if lr_input.value else args.lr

    print("Configuration:")
    print(f"  Dataset: {dataset_name}")
    print(f"  Model: {model_name}")
    print(f"  Output: {output_repo or '(not pushing to Hub)'}")
    print(f"  Epochs: {num_epochs}, Batch Size: {batch_size}, LR: {learning_rate}")
    return (
        batch_size,
        dataset_name,
        learning_rate,
        model_name,
        num_epochs,
        output_repo,
    )


@app.cell
def _(mo):
    mo.md("""
    ## Step 2: Load Dataset

    We'll load an image classification dataset from the Hub.
    The `beans` dataset is small (~1,000 images) and trains quickly - perfect for learning!
    """)
    return


@app.cell
def _(dataset_name, mo):
    from datasets import load_dataset

    print(f"Loading dataset: {dataset_name}...")
    dataset = load_dataset(dataset_name, trust_remote_code=True)
    print(f"Train: {len(dataset['train']):,} samples")
    print(f"Test: {len(dataset['test']):,} samples")

    # Get label info
    label_feature = dataset["train"].features["label"]
    labels = label_feature.names if hasattr(label_feature, "names") else None
    num_labels = (
        label_feature.num_classes
        if hasattr(label_feature, "num_classes")
        else len(set(dataset["train"]["label"]))
    )

    print(f"Labels ({num_labels}): {labels}")

    mo.md(f"**Loaded {len(dataset['train']):,} training samples with {num_labels} classes**")
    return dataset, labels, num_labels


@app.cell
def _(dataset, labels, mo):
    # Show sample images (notebook mode only)
    import base64
    from io import BytesIO

    def image_to_base64(img, max_size=150):
        """Convert a PIL image to base64 for HTML display."""
        img_copy = img.copy()
        img_copy.thumbnail((max_size, max_size))
        buffered = BytesIO()
        img_copy.save(buffered, format="PNG")
        return base64.b64encode(buffered.getvalue()).decode()

    # Get 6 random sample images
    samples = dataset["train"].shuffle(seed=42).select(range(6))

    images_html = []
    for sample in samples:
        img_b64 = image_to_base64(sample["image"])
        label_name = labels[sample["label"]] if labels else sample["label"]
        images_html.append(
            f"""
            <div style="text-align: center; margin: 5px;">
                <img src="data:image/png;base64,{img_b64}" style="border-radius: 8px;"/>
                <br/><small>{label_name}</small>
            </div>
            """
        )

    mo.md(f"""
    ### Sample Images
    <div style="display: flex; flex-wrap: wrap; gap: 10px;">
    {"".join(images_html)}
    </div>
    """)
    return


@app.cell
def _(mo):
    mo.md("""
    ## Step 3: Prepare Model and Processor

    We load a pretrained Vision Transformer and its image processor.
    The processor handles resizing and normalization to match the model's training.
    """)
    return


@app.cell
def _(labels, model_name, num_labels):
    from transformers import AutoImageProcessor, AutoModelForImageClassification

    print(f"Loading model: {model_name}...")

    # Load image processor
    image_processor = AutoImageProcessor.from_pretrained(model_name)
    print(f"Image size: {image_processor.size}")

    # Load model with the correct number of labels
    label2id = {label: i for i, label in enumerate(labels)} if labels else None
    id2label = {i: label for i, label in enumerate(labels)} if labels else None

    model = AutoModelForImageClassification.from_pretrained(
        model_name,
        num_labels=num_labels,
        label2id=label2id,
        id2label=id2label,
        ignore_mismatched_sizes=True,  # Classification head will be different
    )
    print(f"Model loaded with {num_labels} output classes")
    return id2label, image_processor, model


@app.cell
def _(mo):
    mo.md("""
    ## Step 4: Preprocess Data

    Apply the image processor to convert images into tensors suitable for the model.
    """)
    return


@app.cell
def _(dataset, image_processor):
    def preprocess(examples):
        """Apply the image processor to a batch of images."""
        images = [img.convert("RGB") for img in examples["image"]]
        inputs = image_processor(images, return_tensors="pt")
        inputs["label"] = examples["label"]
        return inputs

    print("Preprocessing dataset...")
    processed_dataset = dataset.with_transform(preprocess)
    print("Preprocessing complete (transforms applied lazily)")
    return (processed_dataset,)


@app.cell
def _(mo):
    mo.md("""
    ## Step 5: Training

    We use the Hugging Face Trainer for a clean training loop with built-in logging.
    """)
    return


@app.cell
def _(
    batch_size,
    learning_rate,
    model,
    num_epochs,
    output_repo,
    processed_dataset,
):
    import evaluate
    import numpy as np
    from transformers import Trainer, TrainingArguments

    # Load accuracy metric
    accuracy_metric = evaluate.load("accuracy")

    def compute_metrics(eval_pred):
        predictions, references = eval_pred
        predictions = np.argmax(predictions, axis=1)
        return accuracy_metric.compute(predictions=predictions, references=references)

    # Training arguments
    training_args = TrainingArguments(
        output_dir="./image-classifier-output",
        num_train_epochs=num_epochs,
        per_device_train_batch_size=batch_size,
        per_device_eval_batch_size=batch_size,
        learning_rate=learning_rate,
        eval_strategy="epoch",
        save_strategy="epoch",
        logging_steps=10,
        load_best_model_at_end=True,
        metric_for_best_model="accuracy",
        push_to_hub=bool(output_repo),
        hub_model_id=output_repo if output_repo else None,
        remove_unused_columns=False,  # Keep image column for transforms
        report_to="none",  # Disable wandb/tensorboard for simplicity
    )

    # Create trainer
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=processed_dataset["train"],
        eval_dataset=processed_dataset["test"],
        compute_metrics=compute_metrics,
    )

    print(f"Starting training for {num_epochs} epochs...")
    return (trainer,)


@app.cell
def _(trainer):
    # Run training
    train_result = trainer.train()
    print("\nTraining complete!")
    print(f"  Total steps: {train_result.global_step}")
    print(f"  Training loss: {train_result.training_loss:.4f}")
    return


@app.cell
def _(mo):
    mo.md("""
    ## Step 6: Evaluation

    Let's see how well our model performs on the test set.
    """)
    return


@app.cell
def _(trainer):
    # Evaluate on the test set
    eval_results = trainer.evaluate()
    print("\nEvaluation Results:")
    print(f"  Accuracy: {eval_results['eval_accuracy']:.2%}")
    print(f"  Loss: {eval_results['eval_loss']:.4f}")
    return


@app.cell
def _(dataset, id2label, image_processor, mo, model):
    import base64
    from io import BytesIO

    import torch

    # Show some predictions (notebook mode)
    model.eval()
    test_samples = dataset["test"].shuffle(seed=42).select(range(4))

    prediction_html = []
    for sample in test_samples:
        img = sample["image"].convert("RGB")
        inputs = image_processor(img, return_tensors="pt")

        with torch.no_grad():
            outputs = model(**inputs)
        pred_idx = outputs.logits.argmax(-1).item()

        true_label = id2label[sample["label"]] if id2label else sample["label"]
        pred_label = id2label[pred_idx] if id2label else pred_idx
        is_correct = pred_idx == sample["label"]

        # Convert image for display
        img_copy = img.copy()
        img_copy.thumbnail((120, 120))
        buffered = BytesIO()
        img_copy.save(buffered, format="PNG")
        img_b64 = base64.b64encode(buffered.getvalue()).decode()

        border_color = "#4ade80" if is_correct else "#f87171"
        prediction_html.append(
            f"""
            <div style="text-align: center; margin: 5px; padding: 10px; border: 2px solid {border_color}; border-radius: 8px;">
                <img src="data:image/png;base64,{img_b64}" style="border-radius: 4px;"/>
                <br/><small>True: <b>{true_label}</b></small>
                <br/><small>Pred: <b>{pred_label}</b></small>
            </div>
            """
        )

    mo.md(f"""
    ### Sample Predictions
    <div style="display: flex; flex-wrap: wrap; gap: 10px;">
    {"".join(prediction_html)}
    </div>
    <small>Green border = correct, red border = wrong</small>
    """)
    return


@app.cell
def _(mo):
    mo.md("""
    ## Step 7: Push to Hub

    If you specified `--output-repo`, the model will be pushed to the Hugging Face Hub.
    """)
    return


@app.cell
def _(args, output_repo, trainer):
    if output_repo and args.push_to_hub:
        print(f"Pushing model to: https://huggingface.co/{output_repo}")
        trainer.push_to_hub()
        print("Model pushed successfully!")
    elif output_repo:
        print("Model saved locally. To push to Hub, add the --push-to-hub flag.")
        print("  Or run: trainer.push_to_hub()")
    else:
        print("No output repo specified. Model saved locally to ./image-classifier-output")
        print("To push to Hub, run with: --output-repo your-username/model-name --push-to-hub")
    return


@app.cell
def _(mo):
    mo.md("""
    ## Next Steps

    ### Try different datasets
    - `food101` - 101 food categories (75k train images)
    - `cifar10` - 10 classes of objects (50k train images)
    - `oxford_flowers102` - 102 flower species
    - `fashion_mnist` - Clothing items (grayscale)

    ### Try different models
    - `microsoft/resnet-50` - Classic CNN architecture
    - `facebook/deit-base-patch16-224` - Data-efficient ViT
    - `google/vit-large-patch16-224` - Larger ViT (needs more VRAM)

    ### Scale up with HF Jobs

    ```bash
    # Train on food101 with more epochs
    hf jobs uv run --flavor l4x1 --secrets HF_TOKEN \\
        https://huggingface.co/datasets/uv-scripts/marimo/raw/main/train-image-classifier.py \\
        -- --dataset food101 --epochs 10 --batch-size 32 \\
        --output-repo your-username/food101-vit --push-to-hub
    ```

    **More UV scripts**: [huggingface.co/uv-scripts](https://huggingface.co/uv-scripts)
    """)
    return


if __name__ == "__main__":
    app.run()
````
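
The `compute_metrics` hook in Step 5 reduces each row of logits to a class prediction via argmax before scoring. An equivalent numpy-only sketch of that reduction (no `evaluate` metric download; function name and the toy logits are illustrative):

```python
import numpy as np


def accuracy_from_logits(logits, references):
    """Mirror of the script's compute_metrics: argmax over logits, then accuracy."""
    predictions = np.argmax(logits, axis=1)
    return {"accuracy": float((predictions == np.asarray(references)).mean())}


logits = np.array([
    [2.0, 0.1, 0.3],  # -> class 0
    [0.2, 1.5, 0.1],  # -> class 1
    [0.4, 0.3, 3.0],  # -> class 2
    [1.0, 0.9, 0.8],  # -> class 0, but reference is 1
])
print(accuracy_from_logits(logits, [0, 1, 2, 1]))  # {'accuracy': 0.75}
```

This is why the Trainer's `metric_for_best_model="accuracy"` setting can pick the best checkpoint: the hook returns a plain dict keyed by metric name.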