# Dataset Card for Nexdata/Chinese-English_Parallel_Corpus_Data

## Description

This dataset is a sample of the full Chinese-English_Parallel_Corpus_Data, a paid dataset containing 3,060,000 sets of parallel text. It is stored in txt files and covers domains such as travel, medicine, daily life, and TV plays. Data cleaning, desensitization, and quality inspection have been carried out. It can serve as a base corpus for text-processing tasks and for machine translation.
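Since the corpus ships as plain txt files, a minimal loading sketch may be useful. The one-pair-per-line, tab-separated layout assumed below is not documented in this card; adjust the delimiter and parsing to match the downloaded files.

```python
# Minimal sketch for reading the parallel corpus. ASSUMPTION: each line
# holds a Chinese sentence and its English translation separated by a tab;
# the actual file layout may differ, so adjust the delimiter as needed.

def parse_pairs(lines, delimiter="\t"):
    """Yield (chinese, english) tuples, skipping malformed lines."""
    for line in lines:
        parts = line.rstrip("\n").split(delimiter)
        if len(parts) == 2 and all(parts):
            yield parts[0], parts[1]

# Example with in-memory lines; for real files, use
# open("corpus.txt", encoding="utf-8") instead of the list.
sample = ["你好，世界\tHello, world", "malformed line"]
pairs = list(parse_pairs(sample))
```

Malformed lines are skipped rather than raising, which keeps bulk ingestion simple; log them instead if you need to audit data quality.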

For more details, and to download the rest of the dataset (paid), please refer to the link: https://www.nexdata.ai/datasets/nlu/147?source=Huggingface

# Specifications

## Storage format