AnnaWegmann committed
Commit c80cde6 · Parent(s): 447b260

Add Thresholding Arrow files, keep CSVs and conversion script
.gitignore ADDED
@@ -0,0 +1 @@
+.DS_Store
README.md CHANGED
@@ -14,11 +14,11 @@ configs:
 - config_name: Thresholding
   data_files:
   - split: train
-    path: "thresholding/train.csv"
+    path: "thresholding/train/data-00000-of-00001.arrow"
   - split: validation
-    path: "thresholding/validation.csv"
+    path: "thresholding/validation/data-00000-of-00001.arrow"
   - split: test
-    path: "thresholding/test.csv"
+    path: "thresholding/test/data-00000-of-00001.arrow"
 ---
 
 AV task used in "Tokenization is Sensitive to Language Variation" paper, Arxiv [link](https://arxiv.org/abs/2502.15343).
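For reference, after this change the Thresholding entry in the README's YAML front matter points at the Arrow files (a sketch of the resulting block; the surrounding `configs:` list is assumed otherwise unchanged):

```yaml
configs:
- config_name: Thresholding
  data_files:
  - split: train
    path: "thresholding/train/data-00000-of-00001.arrow"
  - split: validation
    path: "thresholding/validation/data-00000-of-00001.arrow"
  - split: test
    path: "thresholding/test/data-00000-of-00001.arrow"
```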
convert_csv_to_arrow.py ADDED
@@ -0,0 +1,41 @@
+"""
+Convert the Thresholding CSV files to Arrow format,
+downloading the real files from HuggingFace (bypassing LFS pointers).
+"""
+from huggingface_hub import hf_hub_download
+import pyarrow as pa
+import pyarrow.csv as pcsv
+from pathlib import Path
+
+REPO = "AnnaWegmann/AV"
+SPLITS = {
+    "train": "thresholding/train.csv",
+    "validation": "thresholding/validation.csv",
+    "test": "thresholding/test.csv",
+}
+
+for split, csv_repo_path in SPLITS.items():
+    print(f"\n--- {split} ---")
+
+    # Download real CSV from HuggingFace
+    local_csv = hf_hub_download(REPO, csv_repo_path, repo_type="dataset")
+    print(f"  Downloaded: {local_csv}")
+
+    # Read CSV into Arrow table (texts contain newlines)
+    parse_opts = pcsv.ParseOptions(newlines_in_values=True)
+    table = pcsv.read_csv(local_csv, parse_options=parse_opts)
+    print(f"  Rows: {table.num_rows}, Cols: {table.column_names}")
+
+    # Write as Arrow IPC streaming format (same as the working Contrastive_Learning files)
+    out_dir = Path("thresholding") / split
+    out_dir.mkdir(parents=True, exist_ok=True)
+    out_path = out_dir / "data-00000-of-00001.arrow"
+
+    with open(out_path, "wb") as f:
+        writer = pa.ipc.new_stream(f, table.schema)
+        writer.write_table(table)
+        writer.close()
+
+    print(f"  Wrote: {out_path} ({out_path.stat().st_size:,} bytes)")
+
+print("\nDone! Now delete the old CSV files, update README.md, and push.")
thresholding/test/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3d14169d7016741481deb4b706f98786a5641312f39652260c15b01c339bcaa9
+size 24113072
thresholding/train/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22247c9616b84895b005d3b956aaa5a73866f6562b86adb39e9346504f8a677b
+size 45387352
thresholding/validation/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:58930c2eff92bd3373d502bc0f567cb3cdc75d616404de9bad79289b7fce4454
+size 12386760