NickMystic committed (verified)
Commit 9005665 · 1 Parent(s): e039070

Upload folder using huggingface_hub

alexnet_places365.pth_mlx.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:587f2f379063fb722563b86d9e7fea2321119b571c6bff7e09e309abf6dbf0b4
+ size 117002764
resnet50_places365.pth_mlx.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7e4496e460a4cbec41e02f169c7be9c0e3cebe28036ac917105ba386471c47b
+ size 48691562
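The two `.pth_mlx.npz` files above are LFS pointers to converted weight archives. Since MLX writes `.npz` in the standard NumPy archive format, such a file can be inspected with NumPy alone. A minimal sketch — the array name and shape below are illustrative stand-ins, not the actual keys in these checkpoints:

```python
import numpy as np

def summarize_npz(path):
    """Print each array's name, shape, and dtype in an .npz archive."""
    with np.load(path) as archive:
        for name in archive.files:
            arr = archive[name]
            print(f"{name}: shape={arr.shape}, dtype={arr.dtype}")

# Demo with a tiny stand-in archive; for the real files you would call e.g.
# summarize_npz("resnet50_places365.pth_mlx.npz")
np.savez("demo_weights.npz", **{"conv1.weight": np.zeros((64, 3, 7, 7), dtype=np.float32)})
summarize_npz("demo_weights.npz")
```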
train_dream.py ADDED
@@ -0,0 +1,30 @@
+ # TODO: Implement Fine-Tuning Logic
+
+ """
+ DeepDream Training / Fine-Tuning Script (Placeholder)
+
+ Goal:
+ Allow users to fine-tune these base models (VGG, GoogLeNet, etc.) on their own datasets
+ to create custom Dream styles.
+
+ Steps to Implement:
+ 1. Load Dataset: Use `torchvision.datasets.ImageFolder` or a custom loader for user images.
+ 2. Load Model: Use our MLX models (need to add `train()` mode with dropout/grad support if missing,
+    or simpler: use PyTorch for training -> export to MLX).
+    *Easier path:* Train in PyTorch using standard scripts, then use `export_*.py` to bring it here.
+ 3. Training Loop: Standard classification training or style-transfer fine-tuning.
+ 4. Export: Save the fine-tuned weights to `.pth`, then run the export script.
+
+ Usage:
+     python train_dream.py --data /path/to/images --epochs 10 --model vgg16
+ """
+
+ import argparse
+
+ def main():
+     print("--- DeepDream-MLX Training Stub ---")
+     print("Feature coming soon.")
+     print("Current Workflow: Train in PyTorch -> Use export_*.py -> Dream in MLX")
+
+ if __name__ == "__main__":
+     main()
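The docstring above already fixes the intended command line (`--data`, `--epochs`, `--model`), even though the stub does not parse it yet. A minimal sketch of what that parser could look like, using only the stdlib `argparse` the stub imports — the defaults and help strings are assumptions, not part of the repo:

```python
import argparse

def build_parser():
    """Parser matching the documented usage: --data, --epochs, --model."""
    parser = argparse.ArgumentParser(description="DeepDream-MLX fine-tuning (stub)")
    parser.add_argument("--data", required=True, help="Path to an ImageFolder-style dataset")
    parser.add_argument("--epochs", type=int, default=10, help="Number of training epochs")
    parser.add_argument("--model", default="vgg16", help="Base model to fine-tune, e.g. vgg16")
    return parser

# Parse the exact invocation from the docstring (paths here are placeholders):
args = build_parser().parse_args(["--data", "/path/to/images", "--epochs", "10", "--model", "vgg16"])
print(args.model, args.epochs, args.data)
```

Wiring this into `main()` would let the stub validate arguments today, so the interface stays stable once the real training loop lands.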