Cassandre7829 committed
Commit 003d2d6 · 1 Parent(s): 1feb835

Challenge dataset

Files changed (4)
  1. README.md +50 -3
  2. data/train.csv +3 -0
  3. data/val.csv +0 -0
  4. data_utils.py +90 -0
README.md CHANGED
@@ -1,3 +1,50 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ ---
+
+ # MNIST dataset used during the Perceval Quest challenge
+
+ This repository hosts a partial MNIST dataset used during the Perceval Quest as part of the
+ Hybrid AI Quantum Challenge. The dataset is stored under `data/` and split into
+ `train.csv` and `val.csv`.
+
+ This dataset is a subset of the original MNIST dataset, which can be found [here](https://web.archive.org/web/20200430193701/http://yann.lecun.com/exdb/mnist/) and was introduced in [LeCun et al., 1998a].
+
+ The Perceval Quest challenge ran from November 2024 to March 2025. More than 64 teams participated in its first phase, and 12 of them were selected as finalists.
+
+ ## Dataset structure
+
+ - `data/train.csv`
+ - `data/val.csv`
+
+ Each CSV contains two columns:
+
+ - `image`: a stringified list of 784 floats (a flattened 28x28 grayscale image)
+ - `label`: the digit class (0-9)
+
+ ## Load the dataset from `data/`
+
+ ### Option 1: pandas
+
+ ```python
+ import pandas as pd
+
+ train_df = pd.read_csv("./data/train.csv")
+ val_df = pd.read_csv("./data/val.csv")
+ ```
+
+ ### Option 2: PyTorch Dataset (provided)
+
+ ```python
+ from data_utils import MNIST_partial
+
+ train_set = MNIST_partial(data="./data", split="train")
+ val_set = MNIST_partial(data="./data", split="val")
+ ```
+
+ ## References
+ - Dataset: [LeCun et al., 1998a] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. "Gradient-based learning applied to document recognition." Proceedings of the IEEE, 86(11):2278-2324, November 1998.
+ - Paper: C. Notton, V. Apostolou, A. Senellart, et al. "Establishing Baselines for Photonic Quantum Machine Learning: Insights from an Open, Collaborative Initiative." arXiv preprint arXiv:2510.25839, 2025.
+ - Repository: https://github.com/Quandela/HybridAIQuantum-Challenge/tree/main
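Since each `image` cell is a stringified Python list, parsing reduces to `ast.literal_eval` per row. A minimal sketch of that step, using a made-up two-row CSV in the documented layout (not the real data, whose rows hold 784 floats each):

```python
import ast
import io

import pandas as pd

# Hypothetical two-row CSV mimicking the documented layout
# (real rows hold 784 floats; 4 are used here for brevity).
csv_text = (
    'image,label\n'
    '"[0.0, 0.1, 0.2, 0.3]",7\n'
    '"[1.0, 0.9, 0.8, 0.7]",3\n'
)

df = pd.read_csv(io.StringIO(csv_text))

# Each cell of the `image` column is a stringified Python list,
# so ast.literal_eval turns it back into a list of floats.
images = [ast.literal_eval(s) for s in df["image"]]
labels = df["label"].tolist()

print(len(images[0]), labels)  # first image length and all labels
```

On the real files, the same `ast.literal_eval` call recovers the 784 floats per row, ready to be reshaped to 28x28.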
data/train.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a0e126569c49d8c7130f5695d33a7d67f83c19ba55458233aabb6141ef306c0
+ size 29292986
data/val.csv ADDED
The diff for this file is too large to render. See raw diff
 
data_utils.py ADDED
@@ -0,0 +1,90 @@
+ import ast
+ import os
+
+ import matplotlib.pyplot as plt
+ import pandas as pd
+ import torch
+ from torch.utils.data import Dataset
+
+ ################
+ ## DATA UTILS ##
+ ################
+
+ # Load the train/val dataset for the challenge from the CSV files.
+ class MNIST_partial(Dataset):
+     def __init__(self, data='./data', transform=None, split='train'):
+         """
+         Args:
+             data: path to the dataset folder containing train.csv and val.csv
+             transform (callable, optional): optional transform applied to a
+                 sample (e.g., data augmentation or normalization)
+             split: 'train' or 'val', selecting which CSV file to load
+         """
+         self.data_dir = data
+         self.transform = transform
+
+         if split == 'train':
+             filename = os.path.join(self.data_dir, 'train.csv')
+         elif split == 'val':
+             filename = os.path.join(self.data_dir, 'val.csv')
+         else:
+             raise ValueError(f"split must be 'train' or 'val', got {split!r}")
+
+         self.df = pd.read_csv(filename)
+
+     def __len__(self):
+         return len(self.df)
+
+     def __getitem__(self, idx):
+         img = self.df['image'].iloc[idx]
+         label = self.df['label'].iloc[idx]
+         # parse the stringified list of 784 floats, e.g. "[0.0, 0.5, ...]"
+         img_float = [float(el) for el in ast.literal_eval(img)]
+         # reshape the flat vector into a 1x28x28 image tensor
+         img_square = torch.unflatten(torch.tensor(img_float), 0, (1, 28, 28))
+         if self.transform is not None:
+             img_square = self.transform(img_square)
+         return img_square, label
+
+
+ ####################
+ ## TRAINING UTILS ##
+ ####################
+
+ # Plot the training curves (accuracy and loss) and save them to 'training_curves.png'.
+ def plot_training_metrics(train_acc, val_acc, train_loss, val_loss):
+     fig, axes = plt.subplots(1, 2, figsize=(15, 5))
+     X = list(range(len(train_acc)))
+     names = [str(i + 1) for i in range(len(train_acc))]
+     axes[0].plot(X, train_acc, label='training')
+     axes[0].plot(X, val_acc, label='validation')
+     axes[0].set_xlabel("Epochs")
+     axes[0].set_ylabel("Accuracy")
+     axes[0].set_title("Training and validation accuracies")
+     axes[0].grid(visible=True)
+     axes[0].legend()
+     axes[1].plot(X, train_loss, label='training')
+     axes[1].plot(X, val_loss, label='validation')
+     axes[1].set_xlabel("Epochs")
+     axes[1].set_ylabel("Loss")
+     axes[1].set_title("Training and validation losses")
+     axes[1].grid(visible=True)
+     axes[1].legend()
+     axes[0].set_xticks(ticks=X, labels=names)
+     axes[1].set_xticks(ticks=X, labels=names)
+     fig.savefig("training_curves.png")
+
+
+ # Compute the batch accuracy of the model from its raw outputs.
+ def accuracy(outputs, labels):
+     _, preds = torch.max(outputs, dim=1)
+     return torch.tensor(torch.sum(preds == labels).item() / len(preds))
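
The `accuracy` helper takes a batch of raw model outputs (logits), argmaxes over the class dimension, and compares against the labels. A self-contained sketch of the same logic, with made-up logits (not outputs of any real model from the challenge):

```python
import torch

# Assumed logits for a batch of 4 samples over 10 digit classes:
# the argmax of each row is the predicted class.
outputs = torch.zeros(4, 10)
outputs[0, 3] = 1.0  # predicted 3
outputs[1, 7] = 1.0  # predicted 7
outputs[2, 1] = 1.0  # predicted 1
outputs[3, 0] = 1.0  # predicted 0
labels = torch.tensor([3, 7, 2, 0])  # one mismatch (sample 2)

# Same steps as the accuracy() helper above.
_, preds = torch.max(outputs, dim=1)
acc = torch.sum(preds == labels).item() / len(preds)
print(acc)  # 3 of 4 correct
```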