Yunseo-Lab committed on
Commit 8705a21 · Parent(s): e975d0b

Upload dataset with dataset card

README.md ADDED
# Dataset Card for Custom Text Dataset

## Dataset Name
Custom Text Dataset for Text Classification (Palestinian Authority and International Criminal Court)

## Overview
This custom dataset contains text passages paired with labels that summarize the key information in each passage. It was created to classify and extract significant details from text about geopolitical events, such as the Palestinian Authority's accession to the International Criminal Court (ICC). The dataset is intended for training models on summarization, text classification, and related natural language processing tasks.

- **Text Domain**: News, Geopolitics, International Relations
- **Task Type**: Text Classification, Summarization
- **Language**: English

## Composition
- **Training Data**:
  - Sentence: Text passages describing events.
  - Labels: Summaries or key information extracted from the text.
- **Test Data**:
  - A sample of articles and highlights drawn from a larger dataset (e.g., the raw dataset's test split).
  - 100 sentences paired with corresponding highlights (summaries).
## Collection Process
The text passages were manually selected from news articles, with a focus on international legal and political events. Sentences related to the Palestinian Authority's accession to the ICC were curated, and each label is a short summary highlighting the key aspects of its passage.

- **Source**: News article text (e.g., CNN)
- **Labeling**: Summarized by domain experts or curated manually to match the intent of the dataset.

## Preprocessing
Before using this dataset for training, the following preprocessing steps are suggested:
- **Tokenization**: Tokenize the sentences into words or subword units (depending on the model).
- **Cleaning**: Remove unnecessary characters or artifacts, such as quotation marks, extra spaces, or newline characters.
- **Normalization**: Convert text to lowercase and standardize punctuation.
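The cleaning and normalization steps above can be sketched in plain Python (tokenization is best left to the target model's tokenizer). The specific regex rules here are illustrative assumptions, not part of the dataset:

```python
import re

def preprocess(text: str) -> str:
    """Apply the suggested cleaning and normalization steps to one passage."""
    text = text.replace("\n", " ")                 # drop newline characters
    text = re.sub(r"[\"\u201c\u201d]", "", text)   # strip straight and curly quotation marks
    text = re.sub(r"\s+", " ", text).strip()       # collapse extra whitespace
    return text.lower()                            # normalize case

print(preprocess('The  "Palestinian Authority"\nofficially became a member.'))
# the palestinian authority officially became a member.
```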
## How to Use
```python
# Example: fine-tuning a text classification model with Hugging Face's Trainer API.
# Assumes `custom_train_data` and `custom_test_data` are this dataset's splits,
# already loaded as Hugging Face Dataset objects and tokenized for the model.
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Any sequence-classification checkpoint works; bert-base-uncased is just an example.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

train_data = custom_train_data
test_data = custom_test_data

training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
)

# Initialize the Trainer with the model, data, and training configuration
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_data,
    eval_dataset=test_data,
)

# Start training
trainer.train()
```
## Evaluation
The model can be evaluated with standard metrics:
- **Accuracy**: Compare predicted summaries or classifications to the labeled text.
- **F1-Score**: The harmonic mean of precision and recall, especially useful for imbalanced datasets.
- **BLEU/ROUGE**: For summarization, compare the generated summaries to the reference labels.
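As a concrete illustration of the classification metrics, here is a minimal accuracy and F1 computation in plain Python. The labels are invented binary examples; in practice, libraries such as scikit-learn or `evaluate` provide these metrics (ROUGE and BLEU included):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the reference labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical binary labels (e.g. "passage concerns the ICC accession" vs. not).
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]
print(accuracy(y_true, y_pred))            # 0.6
print(round(f1_score(y_true, y_pred), 3))  # 0.667
```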
## Limitations
- **Small Sample Size**: The dataset is relatively small and may not generalize well to other news topics or geopolitical events.
- **Narrow Focus**: The dataset centers on a single geopolitical event and does not cover other topics extensively.
- **Subjectivity in Labels**: The labels are summaries and may be subjective, depending on the labeler's interpretation of the event.

## Ethical Considerations
- **Bias**: The dataset may reflect biases inherent in the original news sources, especially on sensitive political topics.
- **Data Sensitivity**: Because this dataset covers real-world geopolitical events, take care when using it for tasks that may influence public opinion or decision-making.
- **Privacy**: The dataset does not contain personal data, so privacy concerns are minimal.

This dataset is suitable for text classification and summarization tasks on news articles about international relations and law.
test/dataset_dict.json ADDED
{"splits": ["test"]}
test/test/data-00000-of-00001.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:1e6aa13a3e10a33624931f6c220c9618528323886bd7b7ac334af681b8dc0646
size 346576
test/test/dataset_info.json ADDED
{
  "citation": "",
  "description": "",
  "features": {
    "sentence": {
      "feature": {
        "dtype": "string",
        "_type": "Value"
      },
      "_type": "Sequence"
    },
    "labels": {
      "feature": {
        "dtype": "string",
        "_type": "Value"
      },
      "_type": "Sequence"
    }
  },
  "homepage": "",
  "license": ""
}
test/test/state.json ADDED
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "a966e5e39a3a551f",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}
train/dataset_dict.json ADDED
{"splits": ["train"]}
train/train/data-00000-of-00001.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c3b84a293ed7afd9641f578c760558feab774e12174775ffef3bd6d130873903
size 1400
train/train/dataset_info.json ADDED
{
  "citation": "",
  "description": "",
  "features": {
    "sentence": {
      "dtype": "string",
      "_type": "Value"
    },
    "labels": {
      "dtype": "string",
      "_type": "Value"
    }
  },
  "homepage": "",
  "license": ""
}
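Note that the two splits record different schemas: the train split stores each column as a plain string (`Value`), while the test split wraps each column in a `Sequence` of strings. A quick stdlib check over the `features` blocks copied from the two dataset_info.json files above:

```python
import json

# "features" blocks as recorded in the two dataset_info.json files.
train_features = json.loads("""
{"sentence": {"dtype": "string", "_type": "Value"},
 "labels":   {"dtype": "string", "_type": "Value"}}
""")
test_features = json.loads("""
{"sentence": {"feature": {"dtype": "string", "_type": "Value"}, "_type": "Sequence"},
 "labels":   {"feature": {"dtype": "string", "_type": "Value"}, "_type": "Sequence"}}
""")

# Train examples hold one string per column; test examples hold a list of strings.
for column in ("sentence", "labels"):
    print(column, train_features[column]["_type"], test_features[column]["_type"])
# sentence Value Sequence
# labels Value Sequence
```

Code that iterates over both splits should account for this (e.g. by flattening or indexing the test-split lists).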
train/train/state.json ADDED
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "a1df46296853828f",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}