CIFAR-10 Fast Training Benchmark (VGG-style)

This model was trained as a performance showcase for Epochly, a zero-config cloud GPU infrastructure.

πŸš€ Performance Results

  • Training Time: 10.19 seconds (3 epochs)
  • Hardware: NVIDIA Blackwell GB10 (128GB Unified Memory)
  • Setup Time: ~10 seconds (Cold Start)
  • Framework: PyTorch 2.5+

πŸ› οΈ Model Architecture

A custom SimpleVGG implementation with:

  • 3 Convolutional blocks (64, 128, and 256 filters)
  • ReLU activations and MaxPool layers
  • Fully connected classifier with Dropout
  • Optimized for Blackwell's FP4/FP8 tensor cores

πŸ“¦ How to reproduce

The training script, AI training.py, is included in the files of this repository.

To run this exact benchmark in under 10 seconds without configuring CUDA or Docker, upload the script to: πŸ‘‰ https://www.epochly.co/
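For reference, the per-epoch timing pattern seen in the logs below can be sketched like this. This is a minimal, self-contained illustration using a toy model and synthetic data (so it runs without downloading CIFAR-10), not the repository's actual script; note the torch.cuda.synchronize() call, which is needed for honest GPU timings because CUDA kernels launch asynchronously.

```python
# Minimal epoch-timing sketch (assumed pattern, not the original script).
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy stand-in for the VGG-style model; synthetic CIFAR-10-shaped batch.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(64, 3, 32, 32, device=device)
y = torch.randint(0, 10, (64,), device=device)

total = 0.0
for epoch in range(3):
    start = time.perf_counter()
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    if device == "cuda":
        # Wait for queued GPU work so the measured time is real.
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    total += elapsed
    print(f"    => Epoch {epoch + 1} completed in {elapsed:.2f} seconds")

print(f"Total time taken: {total:.2f} seconds")
```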

πŸ“ Training Logs

πŸš€ AI TRAINING SPEED BENCHMARK (Local CPU vs Epochly GPU)
[*] Target device detected: CUDA
    - GPU Model: NVIDIA GB10
[*] Initializing Deep Neural Network (VGG-style)...
[*] Starting training for 3 epochs...
    => Epoch 1 completed in 3.64 seconds
    => Epoch 2 completed in 3.29 seconds
    => Epoch 3 completed in 3.27 seconds
πŸŽ‰ TRAINING COMPLETE!
Total time taken: 10.19 seconds
πŸ’‘ TIMELINE COMPARISON:
Wow! Only 10.19s on Epochly GPU! πŸ”₯