# SkyGraph-1.5B
SkyGraph-1.5B is a large-scale Chinese text corpus packaged in HDF5 format with graph-structured linguistic annotations. Each example corresponds to one sentence and includes tokenizer input ids, word-level token ids, dependency parsing targets, AMR-style semantic graph targets, and references to external token embedding shards.
This release is intended for research on graph-aware language modeling, syntax-semantics representation learning, graph decoding, and Chinese text modeling.
Release note: the dataset files are stored as HDF5 shards. Hugging Face can host these files, but the Dataset Viewer/Data Studio may not provide a native table preview for the full corpus unless a Parquet or JSONL sample is added separately.
## Dataset Summary

| Item | Value |
|---|---|
| Dataset name | SkyGraph-1.5B |
| Format | HDF5 (`.h5`) |
| Language | Chinese (zh) |
| Number of examples | 36,721,440 sentences |
| Number of shards | 74 |
| Disk size | Approximately 57 GB |
| Graph annotations | Dependency graph and AMR graph |
| Vocabulary files | `vocabs/*.json` |
| Conversion statistics | `conversion_stats.json` |

The HDF5 shards are named:

```
skypile_00000.h5
...
skypile_00073.h5
```
## Repository Structure

```
.
├── README.md
├── conversion_stats.json
├── skypile_00000.h5
├── ...
├── skypile_00073.h5
└── vocabs/
    ├── all_vocabs.json
    ├── tok_vocab.json
    ├── ref_path_vocab.json
    ├── dep_kind_vocab.json
    ├── dep_edge_vocab.json
    ├── dep_special_node_vocab.json
    ├── amr_kind_vocab.json
    ├── amr_edge_vocab.json
    ├── amr_special_node_vocab.json
    ├── amr_special_node_label_class.json
    └── dropped_unknown_noisy_label_counts.json
```
## Dataset Creation
The conversion pipeline processed 48,470,913 candidate sentences and wrote 36,721,440 valid examples. Invalid or noisy samples were skipped during conversion.
Key conversion statistics:

| Field | Count |
|---|---|
| `files_total` | 10 |
| `documents_total` | 94,670 |
| `sentences_total` | 48,470,913 |
| `sentences_written` | 36,721,440 |
| `h5_shards_written` | 74 |
| `skipped_invalid_amr` | 7,838,272 |
| `skipped_invalid_dep` | 79,479 |
| `skipped_unknown_noisy_samples` | 3,823,302 |
| `skipped_missing_ref` | 8,420 |
AMR special node class counts:

| Class | Count |
|---|---|
| `amr_ontology` | 114,093,292 |
| `propbank_frame` | 3,348,853 |
| `punctuation` | 857,927 |
| `date_like` | 8 |
| `unknown_noisy` | 0 |

For the full conversion report, see `conversion_stats.json`.
## Data Schema

Every `skypile_*.h5` shard uses the same schema. Variable-length fields are stored as a flat array plus an offsets array. The i-th sequence in a pair of `*_flat` and `*_offsets` datasets is:

```
flat[offsets[i]:offsets[i + 1]]
```
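As a minimal illustration of this layout (synthetic values, not taken from the shards):

```python
import numpy as np

# Synthetic flat/offsets pair: 3 sequences of lengths 2, 3, 1.
flat = np.array([10, 11, 20, 21, 22, 30])
offsets = np.array([0, 2, 5, 6])

def get_sequence(flat, offsets, i):
    """Return the i-th variable-length sequence from a flat/offsets pair."""
    return flat[offsets[i]:offsets[i + 1]]

print(get_sequence(flat, offsets, 0))  # [10 11]
print(get_sequence(flat, offsets, 1))  # [20 21 22]
```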
### meta

| Field | Type | Description |
|---|---|---|
| `num_sentences` | scalar int64 | Number of examples in the shard. |
| `tokenizer_path` | scalar bytes | Tokenizer path used to generate `token/tokens_flat`. |
| `embedding_base_path` | scalar bytes | Base path for the external token embedding shards used by `graph_output_mode="embedding"`. |
### token

Pre-tokenized model input ids.

| Field | Type | Description |
|---|---|---|
| `tokens_flat` | uint16 | Concatenated tokenizer token ids for all examples. |
| `token_offsets` | int64 | Per-example offsets into `tokens_flat`. |
### tok

Word-level token ids used to align dependency nodes, AMR nodes, and token embeddings.

| Field | Type | Description |
|---|---|---|
| `tok_ids_flat` | uint32 | Concatenated word-level token ids. The ids map to `vocabs/tok_vocab.json`. |
| `tok_offsets` | int64 | Per-example offsets into `tok_ids_flat`. |
### embedding_ref

References to external packed token embedding shards.

| Field | Type | Description |
|---|---|---|
| `ref_path_id` | uint32 | Relative embedding shard path id. The ids map to `vocabs/ref_path_vocab.json`. |
| `ref_index` | int64 | Example index inside the referenced embedding shard. |

The referenced embedding path is resolved as:

```
Path(meta/embedding_base_path) / ref_path_vocab[ref_path_id]
```

If you only use token-class graph targets (`graph_output_mode="toks"`), these external embedding shards are not required.

The external token embedding shards are not included in this Hugging Face release because they are too large to upload. If you need `graph_output_mode="embedding"`, you can regenerate compatible embeddings with https://github.com/yuanhuang0825/cilin-simcse.git
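A minimal sketch of this path resolution, assuming `ref_path_vocab.json` maps relative path strings to integer ids as described above (the shard names and base directory below are hypothetical, for illustration only):

```python
from pathlib import Path

def resolve_embedding_path(base_path, ref_path_vocab, ref_path_id):
    """Resolve the embedding shard path for one example.

    ref_path_vocab maps relative shard path -> id (the layout of
    vocabs/ref_path_vocab.json), so we invert it before lookup.
    """
    id_to_path = {v: k for k, v in ref_path_vocab.items()}
    return Path(base_path) / id_to_path[ref_path_id]

# Hypothetical vocabulary entries for illustration:
vocab = {"shard_000.npy": 0, "shard_001.npy": 1}
print(resolve_embedding_path("/data/embeddings", vocab, 1))
```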
### dep

Dependency graphs use one node per token and encode edges through a parent array.

| Field | Type | Description |
|---|---|---|
| `node_offsets` | int64 | Per-example dependency node offsets. |
| `kind_tgt_flat` | uint16 | Node kind target. 0 = null, 1 = non-special, 2 = special:`<punct>`. |
| `embed_tgt_flat` | int16 | In-sentence token embedding index for each node. Special nodes may be -1. |
| `parent_flat` | int16 | Parent node index for each node. Root or missing parent is -1. |
| `parent_edge_label_flat` | uint16 | Dependency edge label id from parent to child. The ids map to `vocabs/dep_edge_vocab.json`. |

When loaded by the training dataset class, the parent array is converted to a dense edge target:

```
edge_tgt[parent, child] = parent_edge_label
```

Positions without edges use `no_edge_id`, which is 0 by default.
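The densification above can be sketched in NumPy with a toy parent array (synthetic values, not real shard data):

```python
import numpy as np

def dep_dense_edge_target(parent, parent_edge_label, no_edge_id=0):
    """Convert a parent array to a dense [N, N] edge-label matrix."""
    n = len(parent)
    edge_tgt = np.full((n, n), no_edge_id, dtype=np.int64)
    for child, p in enumerate(parent):
        if p >= 0:  # -1 marks the root or a missing parent
            edge_tgt[p, child] = parent_edge_label[child]
    return edge_tgt

# Toy 3-node tree: node 1 is the root and governs nodes 0 and 2.
parent = np.array([1, -1, 1], dtype=np.int16)
labels = np.array([5, 0, 7], dtype=np.uint16)
print(dep_dense_edge_target(parent, labels))
```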
### amr

AMR graphs use an edge list and store the alignment between AMR nodes and in-sentence token indices.

| Field | Type | Description |
|---|---|---|
| `node_offsets` | int64 | Per-example AMR node offsets. |
| `kind_tgt_flat` | uint16 | AMR node kind target. 0 = null, 1 = non-special, >= 2 means special node. |
| `edge_offsets` | int64 | Per-example AMR edge offsets. |
| `src_flat` | int16 | AMR edge source node index. |
| `dst_flat` | int16 | AMR edge destination node index. |
| `edge_label_flat` | uint16 | AMR edge label id. The ids map to `vocabs/amr_edge_vocab.json`. |
| `amr_node_embed_offsets` | int64 | Per-example offsets for AMR nodes with token alignment. |
| `amr_node_embed_node_idx_flat` | int16 | AMR node indices with token alignment. |
| `amr_node_embed_tok_offsets` | int64 | Offsets into aligned token indices for each aligned AMR node. |
| `amr_node_embed_tok_idx_flat` | int16 | In-sentence token indices aligned to each AMR node. |

When loaded by the training dataset class, the AMR edge list is converted to a dense edge target:

```
edge_tgt[src, dst] = edge_label
```

In `graph_output_mode="embedding"`, non-special AMR node embeddings are computed from aligned token embeddings. In `graph_output_mode="toks"`, non-special nodes are remapped to token class targets.
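The AMR edge-list densification can be sketched with NumPy fancy indexing (toy values, not real shard data):

```python
import numpy as np

def amr_dense_edge_target(src, dst, edge_label, num_nodes, no_edge_id=0):
    """Scatter an AMR edge list into a dense [N, N] edge-label matrix."""
    edge_tgt = np.full((num_nodes, num_nodes), no_edge_id, dtype=np.int64)
    edge_tgt[src, dst] = edge_label  # vectorized scatter over the edge list
    return edge_tgt

# Toy graph: node 0 has two outgoing edges, to nodes 1 and 2.
src = np.array([0, 0], dtype=np.int16)
dst = np.array([1, 2], dtype=np.int16)
label = np.array([3, 9], dtype=np.uint16)
print(amr_dense_edge_target(src, dst, label, num_nodes=3))
```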
## Vocabulary Files

| File | Size / Entries | Description |
|---|---|---|
| `tok_vocab.json` | 5,307,229 | Word-level token to `{id, count}` mapping. |
| `ref_path_vocab.json` | 2,960 | Embedding shard relative path to id mapping. |
| `dep_kind_vocab.json` | 3 | Dependency node kinds: null=0, non-special=1, special:`<punct>`=2. |
| `dep_edge_vocab.json` | 46 | Dependency edge label to id mapping. `<no_edge>`=0. |
| `dep_special_node_vocab.json` | 1 | Dependency special node labels. Currently only `<punct>`. |
| `amr_kind_vocab.json` | 6,636 | AMR node kinds: null=0, non-special=1, special:* >= 2. |
| `amr_edge_vocab.json` | 122 | AMR edge label to id mapping. `<no_edge>`=0. |
| `amr_special_node_vocab.json` | 6,634 | AMR special node label to special id mapping. |
| `amr_special_node_label_class.json` | 50,410 | AMR special node label classes such as `amr_ontology`, `propbank_frame`, `punctuation`, and `unknown_noisy`. |
| `dropped_unknown_noisy_label_counts.json` | 43,776 | Noisy AMR labels dropped during conversion and their counts. |
| `all_vocabs.json` | 8 groups | Combined vocabulary bundle containing the graph vocabularies, `tok_vocab`, and `ref_path_vocab`. |

Note that `tok_vocab.json` is a word-level token vocabulary, not the BPE vocabulary of an encoder tokenizer.
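A small sketch of inverting such a vocabulary into an id-to-token map, assuming each entry has the shape `{"id": ..., "count": ...}` as the table above describes (the sample entries below are hypothetical):

```python
import json

def load_id_to_token(vocab):
    """Invert a tok_vocab-style mapping (token -> {"id", "count"}) to id -> token."""
    return {entry["id"]: token for token, entry in vocab.items()}

# In practice: vocab = json.load(open("vocabs/tok_vocab.json", encoding="utf-8"))
# Hypothetical entries for illustration:
vocab = {"天空": {"id": 0, "count": 12}, "图": {"id": 1, "count": 7}}
id_to_token = load_id_to_token(vocab)
print(id_to_token[0])  # 天空
```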
## Quick Start

### Read One Example With h5py

```python
import h5py

h5_path = "skypile_00000.h5"

with h5py.File(h5_path, "r") as h5f:
    i = 0

    token_start = h5f["token"]["token_offsets"][i]
    token_end = h5f["token"]["token_offsets"][i + 1]
    input_ids = h5f["token"]["tokens_flat"][token_start:token_end]

    dep_start = h5f["dep"]["node_offsets"][i]
    dep_end = h5f["dep"]["node_offsets"][i + 1]
    dep_kind = h5f["dep"]["kind_tgt_flat"][dep_start:dep_end]
    dep_parent = h5f["dep"]["parent_flat"][dep_start:dep_end]

    amr_node_start = h5f["amr"]["node_offsets"][i]
    amr_node_end = h5f["amr"]["node_offsets"][i + 1]
    amr_kind = h5f["amr"]["kind_tgt_flat"][amr_node_start:amr_node_end]

    amr_edge_start = h5f["amr"]["edge_offsets"][i]
    amr_edge_end = h5f["amr"]["edge_offsets"][i + 1]
    amr_src = h5f["amr"]["src_flat"][amr_edge_start:amr_edge_end]
    amr_dst = h5f["amr"]["dst_flat"][amr_edge_start:amr_edge_end]
    amr_edge_label = h5f["amr"]["edge_label_flat"][amr_edge_start:amr_edge_end]

    print("input_ids:", input_ids.shape)
    print("dep:", dep_kind.shape, dep_parent.shape)
    print("amr:", amr_kind.shape, amr_src.shape, amr_dst.shape, amr_edge_label.shape)
```
### Build a PyTorch Dataset Loader

The following minimal example shows how to wrap one HDF5 shard as a PyTorch dataset without relying on project-specific code.

```python
import h5py
import torch
from torch.utils.data import DataLoader


class SkyGraphH5Shard(torch.utils.data.Dataset):
    def __init__(self, h5_path):
        self.h5_path = h5_path
        self.h5f = None

    def _file(self):
        # Open lazily so each DataLoader worker creates its own handle
        # instead of sharing one across forked processes.
        if self.h5f is None:
            self.h5f = h5py.File(self.h5_path, "r")
        return self.h5f

    def __len__(self):
        h5f = self._file()
        return int(h5f["meta"]["num_sentences"][()])

    def __getitem__(self, i):
        h5f = self._file()
        token_start = h5f["token"]["token_offsets"][i]
        token_end = h5f["token"]["token_offsets"][i + 1]
        dep_start = h5f["dep"]["node_offsets"][i]
        dep_end = h5f["dep"]["node_offsets"][i + 1]
        amr_node_start = h5f["amr"]["node_offsets"][i]
        amr_node_end = h5f["amr"]["node_offsets"][i + 1]
        amr_edge_start = h5f["amr"]["edge_offsets"][i]
        amr_edge_end = h5f["amr"]["edge_offsets"][i + 1]
        return {
            "input_ids": torch.tensor(
                h5f["token"]["tokens_flat"][token_start:token_end],
                dtype=torch.long,
            ),
            "dep_kind": torch.tensor(
                h5f["dep"]["kind_tgt_flat"][dep_start:dep_end],
                dtype=torch.long,
            ),
            "dep_parent": torch.tensor(
                h5f["dep"]["parent_flat"][dep_start:dep_end],
                dtype=torch.long,
            ),
            "amr_kind": torch.tensor(
                h5f["amr"]["kind_tgt_flat"][amr_node_start:amr_node_end],
                dtype=torch.long,
            ),
            "amr_src": torch.tensor(
                h5f["amr"]["src_flat"][amr_edge_start:amr_edge_end],
                dtype=torch.long,
            ),
            "amr_dst": torch.tensor(
                h5f["amr"]["dst_flat"][amr_edge_start:amr_edge_end],
                dtype=torch.long,
            ),
            "amr_edge_label": torch.tensor(
                h5f["amr"]["edge_label_flat"][amr_edge_start:amr_edge_end],
                dtype=torch.long,
            ),
        }


dataset = SkyGraphH5Shard("skypile_00000.h5")
loader = DataLoader(dataset, batch_size=1, shuffle=True)
batch = next(iter(loader))
print(batch["input_ids"].shape)
```
After collation, graph and token tensors are typically padded to batch shapes such as `[B, L_max]`, `[B, N_dep_max]`, `[B, N_dep_max, N_dep_max]`, `[B, N_amr_max]`, and `[B, N_amr_max, N_amr_max]`.
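One way to produce such padded batches is a custom `collate_fn`. The sketch below pads only `input_ids` and `dep_kind` from the `SkyGraphH5Shard` example; the other fields can be padded the same way, and a pad id of 0 is an assumption, not a dataset guarantee:

```python
import torch

def pad_collate(batch, pad_id=0):
    """Pad a list of variable-length examples into batch tensors.

    Pads "input_ids" to [B, L_max] and "dep_kind" to [B, N_dep_max].
    """
    def pad_stack(key, fill):
        length = max(x[key].numel() for x in batch)
        out = torch.full((len(batch), length), fill, dtype=torch.long)
        for b, x in enumerate(batch):
            out[b, : x[key].numel()] = x[key]
        return out

    return {
        "input_ids": pad_stack("input_ids", pad_id),
        "dep_kind": pad_stack("dep_kind", 0),  # 0 = null node kind
    }

# Toy batch of two examples with different lengths:
batch = pad_collate([
    {"input_ids": torch.tensor([1, 2, 3]), "dep_kind": torch.tensor([1])},
    {"input_ids": torch.tensor([4]), "dep_kind": torch.tensor([1, 2])},
])
print(batch["input_ids"].shape)  # torch.Size([2, 3])
```

With a real shard this would be passed as `DataLoader(dataset, batch_size=8, collate_fn=pad_collate)`.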
## Graph Target Modes

The dataset schema supports two common graph target modes.

### `graph_output_mode="toks"`

- Does not read the external embedding shards.
- Remaps non-special graph nodes to token class targets.
- Uses `graph_output_tok_top_n` to keep the top-N most frequent word-level token classes.
- Maps tokens outside the top-N list to an extra unknown token kind.
- Returns a placeholder `embed_tgt`; `embed_mask` is always `False`.
### `graph_output_mode="embedding"`

- Uses external token embedding shards for non-special nodes.
- Computes embedding loss only where `embed_mask=True`.
- Requires local access to the path stored in `meta/embedding_base_path`.
- The external embedding shards are not uploaded with this dataset because of their size.
- Compatible embeddings can be regenerated with https://github.com/yuanhuang0825/cilin-simcse.git.
- Supports optional embedding transforms such as mean centering, PCA component removal, and L2 normalization.
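The optional transforms in the last point can be sketched in NumPy as follows; the function name, parameters, and defaults are illustrative, not the pipeline's actual configuration:

```python
import numpy as np

def transform_embeddings(emb, n_pca_remove=1):
    """Mean-center, remove top principal components, then L2-normalize.

    A sketch of mean centering, PCA component removal, and L2
    normalization applied to an [N, d] embedding matrix.
    """
    emb = emb - emb.mean(axis=0, keepdims=True)      # mean centering
    if n_pca_remove > 0:
        # Top principal directions via SVD of the centered matrix.
        _, _, vt = np.linalg.svd(emb, full_matrices=False)
        top = vt[:n_pca_remove]                      # [k, d]
        emb = emb - (emb @ top.T) @ top              # project them out
    norms = np.linalg.norm(emb, axis=1, keepdims=True)
    return emb / np.clip(norms, 1e-12, None)         # L2 normalization

rng = np.random.default_rng(0)
vecs = transform_embeddings(rng.normal(size=(100, 16)))
print(vecs.shape)  # (100, 16)
```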
## Known Limitations

- The files are HDF5 shards, so the Hugging Face Dataset Viewer may not provide full interactive preview or SQL support for the corpus.
- `meta/embedding_base_path` stores the original absolute path used during conversion. If you move the dataset or release embedding shards separately, update your loading code accordingly.
- External token embedding shards are not included in this release because they are too large to upload. They are not required for `graph_output_mode="toks"`, but they are required for `graph_output_mode="embedding"`. Use https://github.com/yuanhuang0825/cilin-simcse.git to regenerate them when needed.
- `filter_by_max_node=True` scans node offsets during dataset initialization and can be slow on first load.
- `tok_vocab.json` and `all_vocabs.json` are large. Load the smaller graph-specific vocab files when you do not need the full token vocabulary.
## Intended Uses
This dataset is suitable for:
- Chinese language modeling with syntax and semantic graph supervision.
- Dependency parsing and AMR graph representation experiments.
- Graph decoder training.
- Research on syntax-semantics alignment and graph-aware text representation learning.
## Out-of-Scope Uses
This dataset should not be used as a factual knowledge source without verification. It may contain artifacts inherited from the source corpus and from automatic graph conversion. Users should evaluate quality, bias, and legal suitability before applying it in downstream systems.
## License
SkyGraph-1.5B is released under the Apache License 2.0. See the LICENSE file for the full license text.
Please verify that redistribution of the source text, AMR annotations, dependency annotations, and any third-party lexical resources used by the pipeline is permitted under this release before publishing public mirrors.
## Citation and Acknowledgements
If you use SkyGraph-1.5B, please cite this dataset release and the upstream resources used to build it.
The source text is derived from Skywork/SkyPile-150B. SkyPile-150B is associated with the Skywork technical report:
```bibtex
@dataset{skygraph_1_5b,
  title        = {SkyGraph-1.5B: A Chinese Corpus with AMR and Dependency Graph Annotations},
  author       = {BoYuan Huang},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/HuangBoYuan/SkyGraph-1.5B}},
}
```
```bibtex
@misc{wei2023skywork,
  title         = {Skywork: A More Open Bilingual Foundation Model},
  author        = {Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei L{\"u} and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
  year          = {2023},
  eprint        = {2310.19341},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  doi           = {10.48550/arXiv.2310.19341},
  url           = {https://arxiv.org/abs/2310.19341}
}
```
Dependency and AMR annotations were generated with HanLP. Please also cite HanLP when using the graph annotations:
```bibtex
@inproceedings{he-choi-2021-stem,
  title     = {The Stem Cell Hypothesis: Dilemma behind Multi-Task Learning with Transformer Encoders},
  author    = {He, Han and Choi, Jinho D.},
  booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
  year      = {2021},
  pages     = {5555--5577},
  publisher = {Association for Computational Linguistics},
  doi       = {10.18653/v1/2021.emnlp-main.451},
  url       = {https://aclanthology.org/2021.emnlp-main.451}
}
```
We thank the Skywork team for releasing SkyPile-150B and the HanLP authors and maintainers for providing the dependency parsing and AMR tooling used in the annotation pipeline.
## Contact
For questions about the dataset, conversion pipeline, or graph schema, please open a discussion on the Hugging Face dataset repository or contact the maintainers listed in the repository profile.