TVPL / dedup / Readme.md

[Update 18/03/2024: an evaluation file computed at a lower threshold has been added to the eval folder.]

  1. merged_corpus/filtered_corpus.parquet: merged corpus with contexts from SFT-Law, TVPL-structured, and Zalo-corpus, clustered and then filtered. The filtering step prefers candidates from SFT-Law and longer contexts. merged/reindexed_corpus.parquet contains the clustered but unfiltered version. Short contexts are removed from both files.

  2. data_remapped/{file_name}: data files taken from the other repositories, with 2 fields added: oid (the item's unique ID across the 3 merged datasets, int) and __cluster__ (cluster ID, int: the index of the cluster assigned to the item based on the text column). The file format is parquet.

EXCEPTION: in sft_tvpl, contexts are stored as lists. Hence __context_cluster__ and contextoid are the list equivalents of __cluster__ and oid.
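A small sketch of how a consumer might iterate over these fields uniformly, flattening the list-valued columns used by sft_tvpl. This is an illustration only; the helper name and the plain-row handling are assumptions, while the field names (oid, __cluster__, contextoid, __context_cluster__) come from the description above.

```python
def iter_context_clusters(row, is_sft_tvpl=False):
    """Yield (oid, cluster_id) pairs for one data row.

    For sft_tvpl rows, contextoid and __context_cluster__ are parallel
    lists (one entry per context), so they are zipped and flattened.
    For all other files, oid and __cluster__ are scalars.
    """
    if is_sft_tvpl:
        for o, c in zip(row["contextoid"], row["__context_cluster__"]):
            yield o, c
    else:
        yield row["oid"], row["__cluster__"]
```

For example, a plain row yields a single pair, while an sft_tvpl row yields one pair per context in its list.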

Please refer to the corresponding repositories for documentation on the data.
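The cluster-level filtering described for filtered_corpus.parquet (item 1) can be sketched roughly as below. This is a minimal illustration, not the actual filtering code: the "source" field and the exact tie-breaking are assumptions; only __cluster__ comes from the files themselves.

```python
from collections import defaultdict

def pick_representatives(items):
    """Keep one representative per cluster, preferring SFT-Law
    candidates and, within the same source preference, longer texts.

    items: list of dicts with "text", "source", and "__cluster__" keys
    (assumed field names for this sketch).
    """
    clusters = defaultdict(list)
    for it in items:
        clusters[it["__cluster__"]].append(it)
    kept = []
    for members in clusters.values():
        # False sorts before True, so SFT-Law items come first;
        # -len(text) puts longer contexts first within each group.
        members.sort(key=lambda it: (it["source"] != "SFT-Law", -len(it["text"])))
        kept.append(members[0])
    return kept
```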

  3. tvpl_sft_resplit: tvpl_sft reindexed by question and divided into train and test sets (duplicate candidates within a cluster are put in either the test set or the train set, never both). The test set has a minimum size of 10,000. __context_cluster__ is a mapping of contexts to the corpus.
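The cluster-disjoint split described above can be sketched as follows. This is a hypothetical re-implementation, not the code used to build tvpl_sft_resplit: whole clusters are assigned to the test set until it reaches the minimum size, which guarantees that duplicates sharing a cluster never straddle the two splits.

```python
import random
from collections import defaultdict

def cluster_disjoint_split(rows, min_test_size=10_000, seed=0):
    """Split rows so every cluster lands entirely in train or test.

    Clusters are shuffled, then assigned whole to the test set until
    it holds at least min_test_size rows; the rest go to train.
    rows: dicts carrying a "__cluster__" key.
    """
    clusters = defaultdict(list)
    for r in rows:
        clusters[r["__cluster__"]].append(r)
    cluster_ids = list(clusters)
    random.Random(seed).shuffle(cluster_ids)
    train, test = [], []
    for cid in cluster_ids:
        target = test if len(test) < min_test_size else train
        target.extend(clusters[cid])
    return train, test
```

Because assignment happens per cluster rather than per row, the test set may slightly exceed min_test_size, which matches the "MIN size" phrasing above.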