**HuggingFace Data Studio**: The parquet files were written without a page index, which prevents the HuggingFace Data Studio from serving random row previews without loading entire row groups. This does not affect programmatic consumption (PyArrow, pandas, DuckDB, etc.) — only the web-based Data Studio preview. A future re-serialization with `write_page_index=True` and smaller row-group sizes would resolve this.
## Usage
The output parquet files can be consumed by any parquet-compatible tool. For the Daisy training framework, feed them to the shard builder:
```bash
python data/build_smollm_fineweb_edu_dedup_shards.py \
    --build fineweb-edu-dedup \
    --local-parquet-dir data/fineweb-shuffled \
    --fineweb-num-shards 2200
```
## License