LeTienDat committed e6d39e4 (verified) · Parent(s): 464c76d

Update README.md

Files changed (1): README.md (+109, -3)
---
license: cc-by-4.0
task_categories:
- image-classification
language:
- en
---

# CIFAR-10 Feature Representations (BTL3)

This dataset contains pre-extracted **feature embeddings** from the CIFAR-10 dataset, produced with several pretrained image-classification models. The goal is to enable fast experimentation, classifier prototyping, and model comparison **without having to train or run forward passes through large models in Colab**.

---

## Dataset Source

The original CIFAR-10 dataset is available here:
https://www.cs.toronto.edu/~kriz/cifar.html

This dataset **does not contain the raw images**, only the derived representation vectors.

---

## Models Used for Feature Extraction

| Model           | Input Size | Library / Weights                                     | Feature Representation         | Output Dim |
| --------------- | ---------- | ----------------------------------------------------- | ------------------------------ | ---------- |
| ResNet-50       | 224×224    | torchvision (`ResNet50_Weights.IMAGENET1K_V1`)        | Global average pooled          | **2048**   |
| VGG-16          | 224×224    | torchvision (`VGG16_Weights.IMAGENET1K_V1`)           | FC6 layer output               | **4096**   |
| EfficientNet-B0 | 224×224    | torchvision (`EfficientNet_B0_Weights.IMAGENET1K_V1`) | Global average pooled          | **1280**   |
| ViT-Base/16     | 224×224    | timm (`vit_base_patch16_224`)                         | CLS token embedding            | **768**    |
| Swin-Base       | 224×224    | timm (`swin_base_patch4_window7_224`)                 | Global mean pooled final stage | **1024**   |

Each model produces a different feature dimensionality, determined by its architecture.

---

## File Format

All feature data is stored in **compressed `.npz`** format:

```
model_name/
    train_features.npz
    test_features.npz
```

Each `.npz` file contains:
- **`features`** → feature vectors of shape `(N, D)`
- **`labels`** → corresponding class labels of shape `(N,)`

Example: with ResNet-50, the training split has features of shape `(50000, 2048)`.

---

## Loading the Features in Python

```python
from huggingface_hub import hf_hub_download
import numpy as np

def load_features(model_name, split="train"):
    file_path = hf_hub_download(
        repo_id="LeTienDat/BTL3_CIFAR-10",
        repo_type="dataset",  # this is a dataset repository, not a model
        filename=f"{model_name}/{split}_features.npz",
    )
    data = np.load(file_path)
    return data["features"], data["labels"]

# Example usage
X_train, y_train = load_features("resnet50", "train")
X_test, y_test = load_features("resnet50", "test")

print(X_train.shape, y_train.shape)
```

---
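Once loaded, the arrays drop straight into any off-the-shelf classifier. As a minimal end-to-end sketch, here is a nearest-centroid classifier in plain NumPy, with random stand-ins in place of the downloaded features (so the predictions themselves are meaningless):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins shaped like the real features: (N, D) float32, (N,) labels
X_train = rng.normal(size=(500, 2048)).astype(np.float32)
y_train = rng.integers(0, 10, size=500)
X_test = rng.normal(size=(50, 2048)).astype(np.float32)

# Nearest-centroid: one mean feature vector per class, predict the closest one
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in range(10)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
y_pred = dists.argmin(axis=1)

print(y_pred.shape)  # (50,)
```

On the real embeddings, even simple linear models (e.g. logistic regression on the 2048-dim ResNet-50 features) are a common baseline.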

## License

This dataset is released under the **CC BY 4.0** license. You are free to:

* Use
* Modify
* Share
* Publish results

as long as you **credit this dataset repository**.

The original CIFAR-10 dataset is MIT licensed.

---

## Citation

If you use these features, please cite:

```
@misc{BTL3_CIFAR10_Features,
  author       = {Le Tien Dat},
  title        = {BTL3 CIFAR-10 Feature Dataset},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/LeTienDat/BTL3_CIFAR-10}}
}
```