Shuang Wu committed: update README (README.md changed)
This dataset consists of two compressed CSV files used in the MOSTLY AI Prize competition.

### Data Format

You can load them directly using `pandas`:

```python
import pandas as pd

flat_df = pd.read_csv('data/flat/train/flat-training.csv')
sequential_df = pd.read_csv('data/sequential/train/sequential-training.csv')
```
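Because the files are distributed as compressed CSVs, no extra arguments are needed: `pandas.read_csv` infers the compression from the file extension (its `compression` parameter defaults to `'infer'`). A small self-contained demonstration, using a made-up file name rather than the real training files:

```python
import gzip
import pandas as pd

# Write a tiny gzip-compressed CSV as a stand-in for the real training files.
with gzip.open("demo.csv.gz", "wt") as f:
    f.write("a,b\n1,x\n2,y\n")

# read_csv detects the .gz extension and decompresses transparently.
df = pd.read_csv("demo.csv.gz")
print(df.shape)  # (2, 2)
```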

Or using Hugging Face's `datasets`:

```python
from datasets import load_dataset

flat_dataset = load_dataset("mostlyai/mostlyaiprize", "flat", split="train")
sequential_dataset = load_dataset("mostlyai/mostlyaiprize", "sequential", split="train")
```

## Dataset Schema

The schema of each dataset can be retrieved as follows:

```python
# pandas
print(flat_df.dtypes)
print(sequential_df.dtypes)

# HF datasets
print(flat_dataset.features)
print(sequential_dataset.features)
```
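Preserving this schema in your synthetic output is part of the task. A quick pandas-only way to confirm two tables share the same schema (toy frames here stand in for the original and synthetic data):

```python
import pandas as pd

original = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
synthetic = pd.DataFrame({"a": [7, 8], "b": ["p", "q"]})

# dtypes is a Series indexed by column name; .equals compares
# both the index (column names/order) and the values (dtypes).
print(original.dtypes.equals(synthetic.dtypes))  # True
```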

### Column Description

Note: Detailed column descriptions are intentionally not provided as part of the competition challenge. The task is to generate synthetic data that preserves the statistical properties of the original data without needing to understand the semantic meaning of each column.

### Notes on Holdout Data

The competition evaluates submissions against a hidden holdout set that:

- Has the same size as the training data
- Does not overlap with the training data
- Comes from the same source
- Has the same structure and statistical properties

Your synthetic data generation approach should generalize well to this unseen data.
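Properties like these can be checked mechanically on data you control. As an illustration (not the organizers' code), one way to verify that two tables have no overlapping rows, using toy frames in place of the real training and holdout splits:

```python
import pandas as pd

# Toy stand-ins for the training and holdout tables.
train = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
holdout = pd.DataFrame({"a": [4, 5, 6], "b": ["u", "v", "w"]})

# An outer merge on all shared columns with indicator=True labels each
# row by its origin; any row marked 'both' appears in the two tables.
merged = train.merge(holdout, how="outer", indicator=True)
overlap = merged[merged["_merge"] == "both"]

print(len(overlap))  # 0 -> the tables are disjoint
print(len(train) == len(holdout))  # True -> same size
```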

## Evaluation

- CSV submissions are parsed using `pandas.read_csv()` and checked for expected structure & size
- Evaluated using the [Synthetic Data Quality Assurance](https://github.com/mostly-ai/mostlyai-qa) toolkit
- Compared against the released training set and a hidden holdout set (same size, non-overlapping, from the same source)
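The exact validation script is not published; the following sketch only illustrates the kind of structure-and-size check the first bullet describes, with a hypothetical `check_submission` helper and toy reference/submission data:

```python
import io
import pandas as pd

def check_submission(submission_csv: str, reference: pd.DataFrame) -> list[str]:
    """Return a list of problems; an empty list means the basic checks pass."""
    sub = pd.read_csv(io.StringIO(submission_csv))
    problems = []
    if list(sub.columns) != list(reference.columns):
        problems.append("column names or order differ from the reference")
    if len(sub) != len(reference):
        problems.append(f"expected {len(reference)} rows, got {len(sub)}")
    return problems

reference = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
ok = check_submission("a,b\n9,u\n8,v\n", reference)    # matching structure & size
bad = check_submission("a,c\n9,u\n", reference)        # wrong column, wrong size

print(ok)        # []
print(len(bad))  # 2
```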

## Citation

If you use this dataset in your research, please cite: