parquet-converter committed
Commit cd9ab52 · 1 Parent(s): 35d54d5

Update parquet files
.gitattributes DELETED
@@ -1,39 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- ffhq-dataset-v1.json filter=lfs diff=lfs merge=lfs -text
LICENSE.txt DELETED
@@ -1,30 +0,0 @@
- Flickr-Faces-HQ (FFHQ) is a high-quality image dataset of human faces,
- originally created as a benchmark for generative adversarial networks (GAN):
-
- A Style-Based Generator Architecture for Generative Adversarial Networks
- Tero Karras (NVIDIA), Samuli Laine (NVIDIA), Timo Aila (NVIDIA)
- http://stylegan.xyz/paper
-
- The individual images were published in Flickr by their respective authors
- under either Creative Commons BY 2.0, Creative Commons BY-NC 2.0,
- Public Domain Mark 1.0, Public Domain CC0 1.0, or U.S. Government Works
- license. All of these licenses allow free use, redistribution, and adaptation
- for non-commercial purposes. However, some of them require giving appropriate
- credit to the original author, as well as indicating any changes that were
- made to the images. The license and original author of each image are
- indicated in the metadata.
-
- https://creativecommons.org/licenses/by/2.0/
- https://creativecommons.org/licenses/by-nc/2.0/
- https://creativecommons.org/publicdomain/mark/1.0/
- https://creativecommons.org/publicdomain/zero/1.0/
- http://www.usa.gov/copyright.shtml
-
- The dataset itself (including JSON metadata, download script, and
- documentation) is made available under Creative Commons BY-NC-SA 4.0 license
- by NVIDIA Corporation. You can use, redistribute, and adapt it for
- non-commercial purposes, as long as you (a) give appropriate credit by
- citing our paper, (b) indicate any changes that you've made, and
- (c) distribute any derivative works under the same license.
-
- https://creativecommons.org/licenses/by-nc-sa/4.0/
README.md DELETED
@@ -1,163 +0,0 @@
- FFHQ: 70,000 PNG images
- Link: https://pan.baidu.com/s/1XDfTKWOhtwAAQQJ0KBU4RQ
- Extraction code: bowj
-
-
- ## Flickr-Faces-HQ Dataset (FFHQ)
- ![Python 3.6](https://img.shields.io/badge/python-3.6-green.svg?style=plastic)
- ![License CC](https://img.shields.io/badge/license-CC-green.svg?style=plastic)
- ![Format PNG](https://img.shields.io/badge/format-PNG-green.svg?style=plastic)
- ![Resolution 1024×1024](https://img.shields.io/badge/resolution-1024×1024-green.svg?style=plastic)
- ![Images 70000](https://img.shields.io/badge/images-70,000-green.svg?style=plastic)
-
- ![Teaser image](./ffhq-teaser.png)
-
- Flickr-Faces-HQ (FFHQ) is a high-quality image dataset of human faces, originally created as a benchmark for generative adversarial networks (GAN):
-
- > **A Style-Based Generator Architecture for Generative Adversarial Networks**<br>
- > Tero Karras (NVIDIA), Samuli Laine (NVIDIA), Timo Aila (NVIDIA)<br>
- > http://stylegan.xyz/paper
-
- The dataset consists of 70,000 high-quality PNG images at 1024&times;1024 resolution and contains considerable variation in terms of age, ethnicity and image background. It also has good coverage of accessories such as eyeglasses, sunglasses, hats, etc. The images were crawled from [Flickr](https://www.flickr.com/), thus inheriting all the biases of that website, and automatically aligned and cropped using [dlib](http://dlib.net/). Only images under permissive licenses were collected. Various automatic filters were used to prune the set, and finally [Amazon Mechanical Turk](https://www.mturk.com/) was used to remove the occasional statues, paintings, or photos of photos.
-
- For business inquiries, please contact [researchinquiries@nvidia.com](mailto:researchinquiries@nvidia.com)
-
- For press and other inquiries, please contact Hector Marinez at [hmarinez@nvidia.com](mailto:hmarinez@nvidia.com)
-
- ## Licenses
-
- The individual images were published in Flickr by their respective authors under either [Creative Commons BY 2.0](https://creativecommons.org/licenses/by/2.0/), [Creative Commons BY-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/), [Public Domain Mark 1.0](https://creativecommons.org/publicdomain/mark/1.0/), [Public Domain CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/), or [U.S. Government Works](http://www.usa.gov/copyright.shtml) license. All of these licenses allow **free use, redistribution, and adaptation for non-commercial purposes**. However, some of them require giving **appropriate credit** to the original author, as well as **indicating any changes** that were made to the images. The license and original author of each image are indicated in the metadata.
-
- * [https://creativecommons.org/licenses/by/2.0/](https://creativecommons.org/licenses/by/2.0/)
- * [https://creativecommons.org/licenses/by-nc/2.0/](https://creativecommons.org/licenses/by-nc/2.0/)
- * [https://creativecommons.org/publicdomain/mark/1.0/](https://creativecommons.org/publicdomain/mark/1.0/)
- * [https://creativecommons.org/publicdomain/zero/1.0/](https://creativecommons.org/publicdomain/zero/1.0/)
- * [http://www.usa.gov/copyright.shtml](http://www.usa.gov/copyright.shtml)
-
- The dataset itself (including JSON metadata, download script, and documentation) is made available under [Creative Commons BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license by NVIDIA Corporation. You can **use, redistribute, and adapt it for non-commercial purposes**, as long as you (a) give appropriate credit by **citing our paper**, (b) **indicate any changes** that you've made, and (c) distribute any derivative works **under the same license**.
-
- * [https://creativecommons.org/licenses/by-nc-sa/4.0/](https://creativecommons.org/licenses/by-nc-sa/4.0/)
-
- ## Overview
-
- All data is hosted on Google Drive:
-
- | Path | Size | Files | Format | Description
- | :--- | :--: | ----: | :----: | :----------
- | [ffhq-dataset](https://drive.google.com/open?id=1u2xu7bSrWxrbUxk-dT-UvEJq8IjdmNTP) | 2.56 TB | 210,014 | | Main folder
- | &boxvr;&nbsp; [ffhq-dataset-v1.json](https://drive.google.com/open?id=1IB0BFbN_eRZx9UkJqLHSgJiQhqX-PrI6) | 254 MB | 1 | JSON | Metadata including copyright info, URLs, etc.
- | &boxvr;&nbsp; [images1024x1024](https://drive.google.com/open?id=1u3Hbfn3Q6jsTlte3BY85CGwId77H-OOu) | 89.1 GB | 70,000 | PNG | Aligned and cropped images at 1024&times;1024
- | &boxvr;&nbsp; [thumbnails128x128](https://drive.google.com/open?id=1uJkWCpLUM-BnXW3H_IgVMdfENeNDFNmC) | 1.95 GB | 70,000 | PNG | Thumbnails at 128&times;128
- | &boxvr;&nbsp; [in-the-wild-images](https://drive.google.com/open?id=1YyuocbwILsHAjTusSUG-_zL343jlVBhf) | 955 GB | 70,000 | PNG | Original images from Flickr
- | &boxvr;&nbsp; [tfrecords](https://drive.google.com/open?id=1LTBpJ0W_WLjqza3zdayligS8Dh1V1gA6) | 273 GB | 9 | tfrecords | Multi-resolution data for [StyleGAN](http://stylegan.xyz/code) and [ProGAN](https://github.com/tkarras/progressive_growing_of_gans)
- | &boxur;&nbsp; [zips](https://drive.google.com/open?id=1WocxvZ4GEZ1DI8dOz30aSj2zT6pkATYS) | 1.28 TB | 4 | ZIP | Contents of each folder as a ZIP archive.
-
- High-level statistics:
-
- ![Pie charts](./ffhq-piecharts.png)
-
- For use cases that require separate training and validation sets, we have appointed the first 60,000 images to be used for training and the remaining 10,000 for validation. In the [StyleGAN paper](http://stylegan.xyz/paper), however, we used all 70,000 images for training.
-
- We have explicitly made sure that there are no duplicate images in the dataset itself. However, please note that the `in-the-wild` folder may contain multiple copies of the same image in cases where we extracted several different faces from the same image.
-
- ## Download script
-
- You can either grab the data directly from Google Drive or use the provided [download script](./download_ffhq.py). The script makes things considerably easier by automatically downloading all the requested files, verifying their checksums, retrying each file several times on error, and employing multiple concurrent connections to maximize bandwidth.
-
- ```
- > python download_ffhq.py -h
- usage: download_ffhq.py [-h] [-j] [-s] [-i] [-t] [-w] [-r] [-a]
-                         [--num_threads NUM] [--status_delay SEC]
-                         [--timing_window LEN] [--chunk_size KB]
-                         [--num_attempts NUM]
-
- Download Flickr-Face-HQ (FFHQ) dataset to current working directory.
-
- optional arguments:
-   -h, --help          show this help message and exit
-   -j, --json          download metadata as JSON (254 MB)
-   -s, --stats         print statistics about the dataset
-   -i, --images        download 1024x1024 images as PNG (89.1 GB)
-   -t, --thumbs        download 128x128 thumbnails as PNG (1.95 GB)
-   -w, --wilds         download in-the-wild images as PNG (955 GB)
-   -r, --tfrecords     download multi-resolution TFRecords (273 GB)
-   -a, --align         recreate 1024x1024 images from in-the-wild images
-   --num_threads NUM   number of concurrent download threads (default: 32)
-   --status_delay SEC  time between download status prints (default: 0.2)
-   --timing_window LEN samples for estimating download eta (default: 50)
-   --chunk_size KB     chunk size for each download thread (default: 128)
-   --num_attempts NUM  number of download attempts per file (default: 10)
- ```
-
- ```
- > python ..\download_ffhq.py --json --images
- Downloading JSON metadata...
- \ 100.00% done  1/1 files  0.25/0.25 GB   43.21 MB/s  ETA: done
- Parsing JSON metadata...
- Downloading 70000 files...
- | 100.00% done  70000/70000 files  89.19 GB/89.19 GB  59.87 MB/s  ETA: done
- ```
-
- The script also serves as a reference implementation of the automated scheme that we used to align and crop the images. Once you have downloaded the in-the-wild images with `python download_ffhq.py --wilds`, you can run `python download_ffhq.py --align` to reproduce exact replicas of the aligned 1024&times;1024 images using the facial landmark locations included in the metadata.
-
- ## Metadata
-
- The `ffhq-dataset-v1.json` file contains the following information for each image in a machine-readable format:
-
- ```
- {
-   "0": {                                                  # Image index
-     "category": "training",                               # Training or validation
-     "metadata": {                                         # Info about the original Flickr photo:
-       "photo_url": "https://www.flickr.com/photos/...",   # - Flickr URL
-       "photo_title": "DSCF0899.JPG",                      # - File name
-       "author": "Jeremy Frumkin",                         # - Author
-       "country": "",                                      # - Country where the photo was taken
-       "license": "Attribution-NonCommercial License",     # - License name
-       "license_url": "https://creativecommons.org/...",   # - License detail URL
-       "date_uploaded": "2007-08-16",                      # - Date when the photo was uploaded to Flickr
-       "date_crawled": "2018-10-10"                        # - Date when the photo was crawled from Flickr
-     },
-     "image": {                                            # Info about the aligned 1024x1024 image:
-       "file_url": "https://drive.google.com/...",         # - Google Drive URL
-       "file_path": "images1024x1024/00000.png",           # - Google Drive path
-       "file_size": 1488194,                               # - Size of the PNG file in bytes
-       "file_md5": "ddeaeea6ce59569643715759d537fd1b",     # - MD5 checksum of the PNG file
-       "pixel_size": [1024, 1024],                         # - Image dimensions
-       "pixel_md5": "47238b44dfb87644460cbdcc4607e289",    # - MD5 checksum of the raw pixel data
-       "face_landmarks": [...]                             # - 68 face landmarks reported by dlib
-     },
-     "thumbnail": {                                        # Info about the 128x128 thumbnail:
-       "file_url": "https://drive.google.com/...",         # - Google Drive URL
-       "file_path": "thumbnails128x128/00000.png",         # - Google Drive path
-       "file_size": 29050,                                 # - Size of the PNG file in bytes
-       "file_md5": "bd3e40b2ba20f76b55dc282907b89cd1",     # - MD5 checksum of the PNG file
-       "pixel_size": [128, 128],                           # - Image dimensions
-       "pixel_md5": "38d7e93eb9a796d0e65f8c64de8ba161"     # - MD5 checksum of the raw pixel data
-     },
-     "in_the_wild": {                                      # Info about the in-the-wild image:
-       "file_url": "https://drive.google.com/...",         # - Google Drive URL
-       "file_path": "in-the-wild-images/00000.png",        # - Google Drive path
-       "file_size": 3991569,                               # - Size of the PNG file in bytes
-       "file_md5": "1dc0287e73e485efb0516a80ce9d42b4",     # - MD5 checksum of the PNG file
-       "pixel_size": [2016, 1512],                         # - Image dimensions
-       "pixel_md5": "86b3470c42e33235d76b979161fb2327",    # - MD5 checksum of the raw pixel data
-       "face_rect": [667, 410, 1438, 1181],                # - Axis-aligned rectangle of the face region
-       "face_landmarks": [...],                            # - 68 face landmarks reported by dlib
-       "face_quad": [...]                                  # - Aligned quad of the face region
-     }
-   },
-   ...
- }
- ```
-
- ## Acknowledgements
-
- We thank Jaakko Lehtinen, David Luebke, and Tuomas Kynk&auml;&auml;nniemi for in-depth discussions and helpful comments; Janne Hellsten, Tero Kuosmanen, and Pekka J&auml;nis for compute infrastructure and help with the code release.
-
- We also thank Vahid Kazemi and Josephine Sullivan for their work on automatic face detection and alignment that enabled us to collect the data in the first place:
-
- > **One Millisecond Face Alignment with an Ensemble of Regression Trees**<br>
- > Vahid Kazemi, Josephine Sullivan<br>
- > Proc. CVPR 2014<br>
- > https://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Kazemi_One_Millisecond_Face_2014_CVPR_paper.pdf
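
The metadata schema shown in the deleted README lends itself to simple checksum and split verification. A minimal sketch, using only the field names documented above; the inline record is a hypothetical stand-in for one entry of `ffhq-dataset-v1.json`, not real dataset content:

```python
import hashlib
import json

# Hypothetical single-record excerpt mirroring the README's schema.
sample = json.loads("""
{
  "0": {
    "category": "training",
    "image": {
      "file_path": "images1024x1024/00000.png",
      "file_size": 1488194,
      "file_md5": "ddeaeea6ce59569643715759d537fd1b"
    }
  }
}
""")

def category_for(index: int) -> str:
    # Per the README: first 60,000 images are training, the rest validation.
    return "training" if index < 60000 else "validation"

def md5_matches(payload: bytes, expected_hex: str) -> bool:
    # Compare a downloaded file's bytes against the metadata's file_md5.
    return hashlib.md5(payload).hexdigest() == expected_hex

# Each record's stated category should agree with its index.
for idx, record in sample.items():
    assert record["category"] == category_for(int(idx))
```

The same loop, run over the full metadata file and the downloaded PNGs, reproduces the checksum verification the download script performs internally.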
ffhq-dataset-v1.json → student--FFHQ/text-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a3d1e8a76c82a4affca83b61132479ef5d1f22795d8a13477e5fd7c8d85123dd
- size 266533842
+ oid sha256:45fa2769f06c3421ab6cd7795051e96c6478aca85f3376a50c437ee1cde5a104
+ size 142177973