ami-refined / README.md
eeoonn's picture
Update README.md
0544541 verified
metadata
dataset_info:
  features:
    - name: meeting_id
      dtype: string
    - name: sampling_rate
      dtype: int64
    - name: summary
      dtype: string
    - name: transcript
      dtype: string
    - name: duration_sec
      dtype: float64
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
  splits:
    - name: train
      num_bytes: 6527142721
      num_examples: 115
    - name: validation
      num_bytes: 842101467
      num_examples: 15
    - name: test
      num_bytes: 655024365
      num_examples: 12
  download_size: 7558908182
  dataset_size: 8024268553
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*

๐ŸŽ™๏ธ AMI-Refined: High-Fidelity Meeting Summarization Dataset

This repository contains a refined version of the AMI Meeting Corpus, specifically re-engineered for Long-context Abstractive Speech Summarization. Unlike existing fragmented ASR datasets, this version restores the continuous discourse flow and ensures strict alignment between audio and human-annotated summaries.

๐Ÿ› ๏ธ Data Processing & Engineering (How we matched it)

To bridge the gap between fragmented ASR chunks and long-form summarization, we implemented a rigorous preprocessing pipeline:

1. Temporal Discourse Restoration

The original Hugging Face AMI dataset (e.g., edinburghcstr/ami) provides audio in short, shuffled segments. We restored the original meeting structure by:

  • Meeting-level Grouping: Grouping 100k+ utterances by their unique meeting_id.
  • Time-sequential Sorting: Sorting segments within each meeting based on the exact begin_time metadata to reconstruct the chronological conversation flow.
  • Physical Audio Reconstruction: Concatenating validated audio arrays using numpy and exporting them as single, high-quality WAV files (16kHz) to prevent any frame decoding errors found in streaming versions.

2. Multi-stage Validation & Cleaning

We ensured 100% data integrity through a strict filtering process:

  • Audio Integrity Check: Every audio chunk was pre-decoded to detect and exclude corrupted frames or empty arrays (RuntimeError prevention).
  • Textual Ground-Truth Alignment: We matched each reassembled audio with Gold-Standard Manual Annotations (XML-based transcripts and abstractive summaries) from the AMI native metadata.
  • Scenario-only Selection: We filtered for meetings that have verified human-written summaries (mostly ES and TS series), ensuring that the model is trained on professional-grade labels rather than noisy or synthetic ones.

3. Native Hugging Face Integration

The dataset is structured to be compatible with modern deep learning pipelines:

  • WAV-JSON Mapping: Audio is stored as physical WAV files and indexed via JSON to ensure persistent paths.
  • Hugging Face datasets Feature: The final DatasetDict uses the datasets.Audio feature, allowing for automatic resampling and seamless loading with map() functions.

๐Ÿ“‚ Dataset Structure & Usage

Data Fields

  • meeting_id: Unique identifier for each meeting (e.g., ES2002a).
  • audio: Audio feature containing the decoded array and sampling rate (16kHz).
  • summary: Human-annotated abstractive summary (Ground Truth).
  • transcript: Complete meeting transcript for context.
  • duration_sec: Total duration of the meeting audio.

How to Load

from datasets import load_dataset

# Load the refined AMI dataset
dataset = load_dataset("eeoonn/ami-refined", use_auth_token=True)

# Audio is ready to use with librosa or transformers
example = dataset['train'][0]
print(example['summary'])

๐Ÿ›ก๏ธ Reliability for Research (Defense against Reviewers)

When comparing this dataset to others used in recent research (like SQuBa):

  1. No Synthetic Bias: All labels are 100% human-annotated, avoiding the "synthetic noise" issue in LLM-generated labels.
  2. Verified Alignment: By sorting by begin_time and checking for corrupted frames, we guarantee that the audio signal and the transcript are perfectly synchronized.
  3. Long-form Context: Our re-assembly provides a real-world long-context challenge (average duration: ~30 min), which is far more rigorous than evaluating on short audio clips.