jespark committed (verified)
Commit f0b06f7 · 1 Parent(s): dc6fc6c

Updated Community Forensics Eval set descriptions.

Files changed (1): README.md (+6 -3)
@@ -84,7 +84,7 @@ Our dataset is formatted in a Parquet data frame of the following structure:
 `Systematic` (1,919,493 images): Systematically downloaded subset of the data (data downloaded from Hugging Face via automatic pipeline) \
 `Manual` (774,023 images): Manually downloaded subset of the data \
 `Commercial` (14,918 images): Commercial models subset \
-`PublicEval` (51,836 images): Evaluation set where generated images are paired with COCO or FFHQ for license-compliant redistribution. Note that these are not the "source" datasets used to sample the generated images
+`PublicEval` (51,836 images): Evaluation set where generated images are paired with COCO or FFHQ for license-compliant redistribution. Note that these are not the "source" datasets used to sample the generated images (now superseded by the [CommunityForensics-Eval](https://huggingface.co/datasets/OwensLab/CommunityForensics-Eval) evaluation set, which is paired with the accurate source datasets).
 
 ## Usage examples
 
@@ -96,7 +96,8 @@ import io
 
 # default training set
 commfor_train = ds.load_dataset("OwensLab/CommunityForensics", split="Systematic+Manual", cache_dir="~/.cache/huggingface/datasets")
-commfor_eval = ds.load_dataset("OwensLab/CommunityForensics", split="PublicEval", cache_dir="~/.cache/huggingface/datasets")
+commfor_eval = ds.load_dataset("OwensLab/CommunityForensics-Eval", split="CompEval", cache_dir="~/.cache/huggingface/datasets")
+
 
 # optionally shuffle the dataset
 commfor_train = commfor_train.shuffle(seed=123, writer_batch_size=3000)
@@ -123,7 +124,7 @@ import io
 commfor_sys_stream = ds.load_dataset("OwensLab/CommunityForensics", split='Systematic', streaming=True)
 
 # streaming only the evaluation set
-commfor_eval_stream = ds.load_dataset("OwensLab/CommunityForensics", split='PublicEval', streaming=True)
+commfor_eval_stream = ds.load_dataset("OwensLab/CommunityForensics-Eval", split='CompEval', streaming=True)
 
 # optionally shuffle the streaming dataset
 commfor_sys_stream = commfor_sys_stream.shuffle(seed=123, buffer_size=3000)
@@ -146,6 +147,8 @@ To accurately reproduce our training settings, it is necessary to download all d
 We understand that this may be inconvenient for simple prototyping,
 and thus we also release [Community Forensics-Small](https://huggingface.co/datasets/OwensLab/CommunityForensics-Small) dataset, which is paired with real datasets that have redistributable licenses and contains roughly 11% of the base dataset.
 
+For evaluation, please check our [Community Forensics-Eval](https://huggingface.co/datasets/OwensLab/CommunityForensics-Eval) set.
+
 ### Real data composition for training
 When training our [classifiers](https://huggingface.co/OwensLab/commfor-model-384), we used the following real data composition:
 ```
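The change above moves the evaluation set from the `PublicEval` split of the base dataset to the separate `CommunityForensics-Eval` repository (split `CompEval`). A minimal migration sketch for downstream code, assuming the repo and split names shown in the diff; `EVAL_MIGRATION`, `resolve_eval_source`, and `load_eval` are hypothetical helpers, not part of the dataset card:

```python
# Hypothetical mapping from the old (repo, split) pair to the superseding
# CommunityForensics-Eval repository, per the README change above.
EVAL_MIGRATION = {
    ("OwensLab/CommunityForensics", "PublicEval"):
        ("OwensLab/CommunityForensics-Eval", "CompEval"),
}

def resolve_eval_source(repo: str, split: str) -> tuple[str, str]:
    """Return the (repo, split) pair to load, following the supersession above;
    pairs not in the mapping are returned unchanged."""
    return EVAL_MIGRATION.get((repo, split), (repo, split))

def load_eval(streaming: bool = False):
    # Import inside the function so the pure helper above stays usable
    # even where the `datasets` library is not installed.
    import datasets as ds
    repo, split = resolve_eval_source("OwensLab/CommunityForensics", "PublicEval")
    return ds.load_dataset(repo, split=split, streaming=streaming,
                           cache_dir="~/.cache/huggingface/datasets")
```

`load_eval(streaming=True)` then mirrors the streaming snippet in the diff while keeping old call sites pointed at the new evaluation set.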