# Dataset Card for Custom Text Dataset

## Dataset Name

Custom Text Dataset for Text Classification (Palestinian Authority and International Criminal Court)

## Overview

This custom dataset contains text passages and corresponding labels that summarize key information from the provided sentences. The dataset was created to classify and extract significant details from text related to geopolitical events, such as the Palestinian Authority’s accession to the International Criminal Court (ICC). It is intended for training models on summarization, text classification, and related natural language processing tasks.

- **Text Domain**: News, Geopolitics, International Relations
- **Task Type**: Text Classification, Summarization
- **Language**: English

## Composition

- **Training Data**:
  - Sentences: text passages describing events.
  - Labels: summaries or key information extracted from the text.
- **Test Data**:
  - A sample of articles and highlights taken from a larger dataset (e.g., the raw dataset's test split).
  - 100 sentences paired with corresponding highlights (summaries).

## Collection Process

The text passages were manually selected from news articles, focusing on international legal and political events. Sentences related to the Palestinian Authority's accession to the ICC were curated, and the labels are short summaries highlighting key aspects of each text.

- **Source**: News article text (e.g., CNN)
- **Labeling**: Summarized by domain experts or curated manually to match the intent of the dataset.

## Preprocessing

Before using this dataset for training, the following preprocessing steps are suggested:

- **Tokenization**: Tokenize the sentences into words or subword units (depending on the model).
- **Cleaning**: Remove unnecessary characters or artifacts, such as quotation marks, extra spaces, or newline characters.
- **Normalization**: Convert text to lowercase and standardize punctuation.
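The cleaning and normalization steps above can be sketched with the standard library alone; the `preprocess` helper below is illustrative, not part of any dataset tooling:

```python
import re

def preprocess(text: str) -> str:
    """Apply the suggested cleaning and normalization steps to one sentence."""
    text = text.replace('"', '').replace('\n', ' ')  # drop quotation marks and newlines
    text = re.sub(r'\s+', ' ', text).strip()         # collapse extra spaces
    return text.lower()                              # lowercase

sentence = '  The Palestinian Authority   officially became\nthe 123rd member of the ICC. '
clean = preprocess(sentence)
print(clean)  # → the palestinian authority officially became the 123rd member of the icc.

# Simple word-level tokenization; subword tokenization would instead use the
# tokenizer that ships with the chosen model checkpoint.
tokens = clean.split()
```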
## How to Use

The example below fine-tunes a text classification model with Hugging Face's `Trainer` API. It assumes the custom train and test splits have already been loaded as Hugging Face `Dataset` objects named `custom_train_data` and `custom_test_data`.

```python
# Example: fine-tuning a text classification model with the Trainer API
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Assumes the dataset is already loaded as Hugging Face Dataset objects
train_data = custom_train_data
test_data = custom_test_data

# Any sequence classification checkpoint works here; BERT is just an example
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
)

# Initialize the Trainer with the model, arguments, and data splits
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_data,
    eval_dataset=test_data,
)

# Start training
trainer.train()
```

## Evaluation

The model can be evaluated with standard text classification and summarization metrics:

- **Accuracy**: Compare predicted labels or classifications to the reference labels.
- **F1-Score**: The harmonic mean of precision and recall, especially useful for imbalanced datasets.
- **BLEU/ROUGE**: For summarization, compare the generated summaries to the reference labels.

## Limitations

- **Small Sample Size**: The dataset is relatively small and may not generalize well to other news topics or geopolitical events.
- **Narrow Focus**: The dataset centers on a single geopolitical event and does not cover other topics extensively.
- **Subjectivity in Labels**: Labels are summaries and may be subjective, depending on the labeler's interpretation of the event.

## Ethical Considerations

- **Bias**: The dataset may reflect inherent biases from the original news sources, especially on sensitive political topics.
- **Data Sensitivity**: Since this dataset deals with real-world geopolitical events, care should be taken when using it for tasks that may influence public opinion or decision-making.
- **Privacy**: The dataset does not contain personal data, so privacy concerns are minimal.
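The accuracy and F1 metrics listed under Evaluation need no extra dependencies; a minimal sketch (function names are illustrative, and F1 is shown for the binary case):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the reference labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 0, 1, 1]
y_pred = [1, 0, 0, 1]
print(accuracy(y_true, y_pred))          # → 0.75
print(round(f1_score(y_true, y_pred), 2))  # → 0.8
```

For ROUGE scores on the summarization labels, an off-the-shelf implementation (e.g., the `rouge-score` package) is the more practical choice.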
This dataset is suitable for text classification and summarization tasks related to news articles on international relations and law.