Vu Anh committed
Commit 984d9c5 · 1 parent: 6ffef05

Convert dataset to new format with data files and YAML metadata


- Remove deprecated Python script
- Add data/train.txt and data/test.txt
- Update README.md with dataset card and YAML configuration

Files changed (4)
  1. README.md +41 -0
  2. UTS2017_Bank.py +0 -59
  3. data/test.txt +2 -0
  4. data/train.txt +2 -0
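With the loading script gone, the new layout is just newline-delimited text plus README metadata. A minimal stdlib sketch (file contents taken from this commit; the `read_split` helper is illustrative, not part of the repo) of reading one split into the same `{"text": ...}` records the old builder produced:

```python
from pathlib import Path

def read_split(path):
    # One example per non-empty line, matching the single `text`
    # feature declared in the README's YAML metadata.
    return [
        {"text": line}
        for line in Path(path).read_text(encoding="utf-8").splitlines()
        if line
    ]

# Recreate data/train.txt from this commit for the demo.
Path("data").mkdir(exist_ok=True)
Path("data/train.txt").write_text(
    "Ngân hàng Nhà nước Việt Nam\nTài khoản tiết kiệm lãi suất cao\n",
    encoding="utf-8",
)

train = read_split("data/train.txt")
```

With the `datasets` library installed, `load_dataset("text", data_files={"train": "data/train.txt", "test": "data/test.txt"})` should yield equivalent records without any custom code.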
README.md CHANGED
@@ -1,3 +1,44 @@
  ---
  license: apache-2.0
+ dataset_info:
+   features:
+   - name: text
+     dtype: string
+   splits:
+   - name: train
+     path: data/train.txt
+   - name: test
+     path: data/test.txt
  ---
+
+ # UTS2017_Bank Dataset
+
+ This dataset contains Vietnamese banking-related text samples for NLP tasks.
+
+ ## Dataset Structure
+
+ The dataset contains text samples split into:
+ - **train**: Training set with banking-related Vietnamese text
+ - **test**: Test set with banking-related Vietnamese text
+
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("undertheseanlp/UTS2017_Bank")
+ ```
+
+ ## Features
+
+ - `text`: Vietnamese text string related to banking and finance
+
+ ## Citation
+
+ ```bibtex
+ @dataset{uts2017_bank,
+   title={UTS2017_Bank},
+   author={UnderTheSea NLP},
+   year={2017}
+ }
+ ```
UTS2017_Bank.py DELETED
@@ -1,59 +0,0 @@
- import datasets
-
- _DESCRIPTION = """\
- UTS2017_Bank
- """
-
- _CITATION = """\
- """
-
-
- class UTS2017Bank(datasets.GeneratorBasedBuilder):
-     """UTS Word Tokenize datasets"""
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "text": datasets.Value("string"),
-                 }
-             ),
-             supervised_keys=None,
-             homepage=None,
-             citation=_CITATION
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-
-         # Sample texts for train and test
-         train_texts = [
-             "Ngân hàng Nhà nước Việt Nam",
-             "Tài khoản tiết kiệm lãi suất cao"
-         ]
-
-         test_texts = [
-             "Chuyển khoản nhanh 24/7",
-             "Vay tín chấp không thế chấp"
-         ]
-
-         splits = [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={"texts": train_texts}
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={"texts": test_texts}
-             )
-         ]
-         return splits
-
-     def _generate_examples(self, texts):
-         for guid, text in enumerate(texts):
-             item = {
-                 "text": text
-             }
-             yield guid, item
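The deleted builder's `_generate_examples` amounts to enumerating the input strings; a standalone sketch of that same logic (names and sample texts taken from the deleted script):

```python
def generate_examples(texts):
    # Mirrors the deleted UTS2017Bank._generate_examples:
    # yields (guid, {"text": ...}) pairs, one per input string.
    for guid, text in enumerate(texts):
        yield guid, {"text": text}

train_texts = [
    "Ngân hàng Nhà nước Việt Nam",
    "Tài khoản tiết kiệm lãi suất cao",
]
examples = list(generate_examples(train_texts))
```

The guid is just the zero-based position in the split, so the new plain-text files preserve example identity as long as line order is unchanged.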
data/test.txt ADDED
@@ -0,0 +1,2 @@
+ Chuyển khoản nhanh 24/7
+ Vay tín chấp không thế chấp
data/train.txt ADDED
@@ -0,0 +1,2 @@
+ Ngân hàng Nhà nước Việt Nam
+ Tài khoản tiết kiệm lãi suất cao