# CIFAR-10 Fast Training Benchmark (VGG-style)
This model was trained as a performance showcase for Epochly, a zero-config cloud GPU infrastructure.
## Performance Results
- Training Time: 10.19 seconds (3 epochs)
- Hardware: NVIDIA Blackwell GB10 (128GB Unified Memory)
- Setup Time: ~10 seconds (Cold Start)
- Framework: PyTorch 2.5+
## Model Architecture
A custom SimpleVGG implementation with:
- 3 Convolutional blocks (64, 128, and 256 filters)
- ReLU activations and MaxPool layers
- Fully connected classifier with Dropout
- Optimized for Blackwell's FP4/FP8 tensor cores
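The architecture described above can be sketched as follows. This is a minimal illustration, not the exact script from this repository: the class name `SimpleVGG` comes from the description, but the number of convolutions per block, the hidden size of the classifier, and the dropout rate are assumptions.

```python
# Sketch of a SimpleVGG-style CIFAR-10 model as described above.
# Layer counts, hidden size (512), and dropout rate (0.5) are assumptions.
import torch
import torch.nn as nn

class SimpleVGG(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Block 1: 3 -> 64 filters
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            # Block 2: 64 -> 128 filters
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
            # Block 3: 128 -> 256 filters
            nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),  # 8x8 -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 4 * 4, 512), nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Sanity check: a CIFAR-10-sized batch yields one row of logits per image.
model = SimpleVGG()
out = model(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 10])
```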
## How to Reproduce
The training script, `AI training.py`, is included in the files of this repository.
To run this exact benchmark in under 10 seconds without configuring CUDA or Docker, upload the script to https://www.epochly.co/.
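For orientation, a per-epoch timing harness in the spirit of the logs below can be sketched like this. This is not the repository's script: the stand-in linear model, batch size, learning rate, and synthetic data (used so the sketch runs without downloading CIFAR-10) are all assumptions.

```python
# Minimal per-epoch timing harness (illustrative sketch, not the actual
# benchmark script). The model, data, and hyperparameters are stand-ins.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in model; the real benchmark uses a VGG-style network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# Synthetic CIFAR-10-shaped batches so the sketch needs no download.
data = [(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
        for _ in range(8)]

total_start = time.perf_counter()
for epoch in range(3):
    epoch_start = time.perf_counter()
    for x, y in data:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    print(f"=> Epoch {epoch + 1} completed in "
          f"{time.perf_counter() - epoch_start:.2f} seconds")
print(f"Total time taken: {time.perf_counter() - total_start:.2f} seconds")
```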
## Training Logs
```
AI TRAINING SPEED BENCHMARK (Local CPU vs Epochly GPU)
[*] Target device detected: CUDA
    - GPU Model: NVIDIA GB10
[*] Initializing Deep Neural Network (VGG-style)...
[*] Starting training for 3 epochs...
=> Epoch 1 completed in 3.64 seconds
=> Epoch 2 completed in 3.29 seconds
=> Epoch 3 completed in 3.27 seconds
TRAINING COMPLETE!
Total time taken: 10.19 seconds
TIMELINE COMPARISON:
Wow! Only 10.19s on Epochly GPU!
```