---
language:
  - ar
license: apache-2.0
task_categories:
  - summarization
  - text-generation
pretty_name: Financial Reports Extractive Summarization Evaluation Dataset
tags:
  - finance
  - summarization
  - extractive
  - evaluation
  - benchmark
  - arabic
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: id
      dtype: string
    - name: prompt
      dtype: string
    - name: full_text
      dtype: string
    - name: answer
      dtype: string
    - name: report_type
      dtype: string
    - name: file_name
      dtype: string
    - name: split
      dtype: string
    - name: text_length
      dtype: int64
    - name: summary_length
      dtype: int64
    - name: compression_ratio
      dtype: float64
  splits:
    - name: test
      num_bytes: 571273
      num_examples: 80
  download_size: 249355
  dataset_size: 571273
---

# Financial Reports Extractive Summarization Evaluation Dataset

Evaluation splits (validation and test) for benchmarking extractive summarization of Arabic financial reports.

## Dataset Structure

- Format: Simple prompt-answer pairs
- Size: 80 examples in total (per the dataset metadata), hosted as a single `test` split; the `split` field marks each example as `'validation'` or `'test'`
- Language: Arabic
- Domain: Financial reports and market news

## Fields

- `id`: Unique identifier
- `prompt`: The summarization prompt
- `full_text`: Complete financial report
- `answer`: Ground-truth extractive summary
- `report_type`: Type of report
- `file_name`: Original source file
- `split`: `'validation'` or `'test'`
- `text_length`: Length of the full text
- `summary_length`: Length of the summary
- `compression_ratio`: Compression ratio of the summary relative to the full text
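
A quick way to inspect these fields on a loaded example (the field names follow the schema above; index 0 is arbitrary):

```python
from datasets import load_dataset

ds = load_dataset(
    "SahmBenchmark/financial-reports-extractive-summarization_eval",
    split="test",
)

example = ds[0]
for field in ("id", "report_type", "file_name", "split",
              "text_length", "summary_length", "compression_ratio"):
    print(f"{field}: {example[field]}")
print(example["prompt"][:200])  # preview the start of the prompt
```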

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("SahmBenchmark/financial-reports-extractive-summarization_eval")

# Only a single `test` split is hosted; per the schema, the `split`
# column marks each example as 'validation' or 'test', so the two
# portions are recovered by filtering.
data = dataset["test"]
val_data = data.filter(lambda ex: ex["split"] == "validation")
test_data = data.filter(lambda ex: ex["split"] == "test")

# For evaluation (`model` and `calculate_rouge` are placeholders for
# your own summarizer and scoring function)
for example in test_data:
    model_output = model.generate(example["prompt"])
    ground_truth = example["answer"]

    # Calculate ROUGE scores
    rouge_score = calculate_rouge(model_output, ground_truth)
```
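
The `calculate_rouge` placeholder above can be backed by the `rouge-score` package. Its default tokenizer keeps only ASCII alphanumerics, which silently discards Arabic text, so a custom tokenizer is needed; the whitespace tokenizer below is a minimal sketch, not an official recommendation, and assumes a `rouge-score` version that accepts a `tokenizer` argument:

```python
from rouge_score import rouge_scorer


class WhitespaceTokenizer:
    """Split on whitespace so Arabic tokens survive tokenization."""

    def tokenize(self, text):
        return text.split()


scorer = rouge_scorer.RougeScorer(
    ["rouge1", "rouge2", "rougeL"],
    tokenizer=WhitespaceTokenizer(),
)


def calculate_rouge(prediction: str, reference: str) -> dict:
    """Return ROUGE F1 scores for one prediction/reference pair."""
    scores = scorer.score(target=reference, prediction=prediction)
    return {name: s.fmeasure for name, s in scores.items()}
```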

## Evaluation Metrics

- ROUGE-1, ROUGE-2, ROUGE-L
- Compression ratio accuracy
- Extractive accuracy (summary sentences drawn verbatim from the original; see the sketch below)
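
The card does not specify how extractive accuracy is computed; a plausible minimal sketch is the fraction of summary sentences that occur verbatim in the source text (splitting on common sentence terminators, including the Arabic question mark, is an assumption here, not the dataset's official method):

```python
import re


def extractive_accuracy(summary: str, full_text: str) -> float:
    """Fraction of summary sentences found verbatim in the source text."""
    # Split on '.', '!', '?', and the Arabic question mark (U+061F).
    sentences = [s.strip() for s in re.split(r"[.!?\u061F]", summary) if s.strip()]
    if not sentences:
        return 0.0
    return sum(1 for s in sentences if s in full_text) / len(sentences)
```

A score of 1.0 means every summary sentence was copied from the report, as a purely extractive summarizer should produce.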

For training data, see: `SahmBenchmark/financial-reports-extractive-summarization_train`