Update README.md

Added the number of images per split and filtering examples.

README.md CHANGED
@@ -75,10 +75,10 @@ Our dataset is formatted in a Parquet data frame of the following structure:
 `label`: Fake/Real label. (1: Fake, 0: Real)
 
 ## Data splits
-`Systematic
-`Manual
-`Commercial
-`PublicEval
+`Systematic` (1,919,493 images): Systematically downloaded subset of the data (downloaded from Hugging Face via an automatic pipeline) \
+`Manual` (774,023 images): Manually downloaded subset of the data \
+`Commercial` (14,918 images): Commercial models subset \
+`PublicEval` (51,836 images): Evaluation set in which generated images are paired with COCO or FFHQ images for license-compliant redistribution. Note that these are not the "source" datasets used to sample the generated images.
 
 ## Usage examples
 
@@ -104,7 +104,8 @@ for i, data in enumerate(commfor_train):
 *Note:*
 - Downloading and indexing the data can take some time, but only the first time. **Downloading may use up to 2.2 TB** (1.1 TB of data + 1.1 TB of re-indexed `arrow` files).
 - It is possible to randomly access data by passing an index (e.g., `commfor_train[10]`, `commfor_train[247]`).
-- It may be wise to set `cache_dir` to some other directory if space in your home directory is limited. By default, data is downloaded to `~/.cache/huggingface/datasets`.
+- It may be wise to set `cache_dir` to some other directory if space in your home directory is limited. By default, data is downloaded to `~/.cache/huggingface/datasets`.
+- Not all images have a `prompt`, either because the generator does not take text prompts (e.g., unconditional or class-conditional models) or because of an error. If you need a specific portion of the data, you can use the `.filter()` method (e.g., to keep only samples with prompts: `commfor_train.filter(lambda x: x['prompt'] != "", num_proc=8)`).
 
 It is also possible to use streaming for some use cases (e.g., downloading only a certain subset or a small portion of the data).
 ```python