di2ox3 committed
Commit f6ab675 · verified · 1 Parent(s): 2d91b02

Upload README.md with huggingface_hub

Files changed (1): README.md (+21 -0)
README.md CHANGED
@@ -32,6 +32,7 @@ Long-context tokenized corpus for benchmarking LLM prefill computation with Qwen
  | `data/documents.parquet` | English documents with token IDs and char offsets | ~100-500 |
  | `data/tasks.parquet` | QA, translation, and retrieval tasks | ~1K-5K |
  | `data/translations.parquet` | French translations of OPUS-Books English documents | ~100-500 |
+ | `data/aligned_chunks.parquet` | EN/FR aligned chunk pairs packed to ~1k source tokens | ~1K-5K |
 
  ### `documents.parquet` Schema
 
@@ -67,6 +68,26 @@ Long-context tokenized corpus for benchmarking LLM prefill computation with Qwen
  | `target_token_ids` | list\<int32\> | Tokenized translation |
  | `target_char_offsets` | list\<int32\> | Char offsets for translation tokens |
 
+ ### `aligned_chunks.parquet` Schema
+
+ | Column | Type | Description |
+ |--------|------|-------------|
+ | `chunk_id` | string | Unique chunk ID (`doc_id` + chunk index) |
+ | `doc_id` | string | References OPUS English document |
+ | `chunk_idx` | int32 | Chunk index within document |
+ | `segment_start_idx` | int32 | Start aligned segment index (inclusive) |
+ | `segment_end_idx` | int32 | End aligned segment index (exclusive) |
+ | `src_lang` | string | Always `"en"` |
+ | `tgt_lang` | string | Always `"fr"` |
+ | `src_text` | large_string | English chunk text |
+ | `tgt_text` | large_string | French chunk text |
+ | `src_char_start` / `src_char_end` | int32 | Character span in source document |
+ | `tgt_char_start` / `tgt_char_end` | int32 | Character span in translation document |
+ | `src_tok_start` / `src_tok_end` | int32 | Token span in source token IDs |
+ | `tgt_tok_start` / `tgt_tok_end` | int32 | Token span in target token IDs |
+ | `src_token_count` | int32 | Source tokens in chunk (target ~1000) |
+ | `tgt_token_count` | int32 | Target tokens in chunk |
+
  ## Sources
 
  | Source | Purpose | Target Tokens |
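The span columns added for `aligned_chunks.parquet` follow a common convention: character spans slice the parent document's text, token spans slice its token-ID list, and `*_token_count` equals `*_tok_end - *_tok_start`. A minimal sketch of how a consumer might use them — the column names come from the schema above, but the document text, token IDs, and all offset values here are invented for illustration:

```python
# Illustrative use of the aligned_chunks.parquet span columns.
# Column names match the schema; every value below is made up.
from dataclasses import dataclass


@dataclass
class AlignedChunk:
    chunk_id: str
    doc_id: str
    chunk_idx: int
    segment_start_idx: int  # inclusive
    segment_end_idx: int    # exclusive
    src_text: str
    src_char_start: int
    src_char_end: int
    src_tok_start: int
    src_tok_end: int
    src_token_count: int


# A toy "source document" and hypothetical token IDs for it.
doc_text = "The cat sat. The dog ran."
doc_token_ids = [101, 102, 103, 104, 105, 106, 107, 108]

chunk = AlignedChunk(
    chunk_id="opus_0001-0",   # doc_id + chunk index
    doc_id="opus_0001",
    chunk_idx=0,
    segment_start_idx=0,
    segment_end_idx=1,
    src_text="The cat sat.",
    src_char_start=0,
    src_char_end=12,
    src_tok_start=0,
    src_tok_end=4,
    src_token_count=4,
)

# Char spans slice the parent document's text ...
assert doc_text[chunk.src_char_start:chunk.src_char_end] == chunk.src_text

# ... and token spans slice its token IDs; the count is redundant but checkable.
chunk_tokens = doc_token_ids[chunk.src_tok_start:chunk.src_tok_end]
assert len(chunk_tokens) == chunk.src_token_count
print(chunk_tokens)  # → [101, 102, 103, 104]
```

The same pattern applies to the `tgt_*` columns against the translation document; the `segment_*_idx` pair uses the usual inclusive-start / exclusive-end convention, so adjacent chunks tile the document without overlap.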