---
license: gpl-3.0
task_categories:
  - text-classification
  - automatic-speech-recognition
language:
  - en
multilinguality:
  - monolingual
source_datasets:
  - original
tags:
  - auctions
  - art
  - live-auctions
size_categories:
  - 100K<n<1M
configs:
  - config_name: aligned_modalities
    data_files: aligned_modalities_sp0.5_cxl10.csv
  - config_name: chant_transcripts
    data_files: auctioneer_chant_transcripts.csv
  - config_name: clerk_commands
    data_files: clerk_commands.csv
  - config_name: gavel_strikes
    data_files: gavel_strikes.csv
---

# Chant2Action

The Chant2Action dataset is a multimodal corpus derived from real-world, high-stakes online auctions. It combines audio-visual recordings of auctioneers with the digital "ground truth" logs of the actions taken by the auction clerk. The dataset is designed to facilitate research in Spoken Language Understanding (SLU), Event Extraction (EE), and multimodal learning in noisy, real-time environments.

## Abstract

The role of the auction clerk in live online auctions—translating the rapid, unstructured speech of an auctioneer into discrete digital commands—is a critical bottleneck that restricts the scalability and efficiency of modern auction houses. This problem is particularly compelling because it sits at the intersection of high-stakes financial transactions and complex spoken language processing, where a single error in interpreting the "auctioneer's chant" can have significant legal and economic consequences. To address this technical challenge, this thesis presents an end-to-end automated pipeline that integrates a novel gavel strike detector, speaker diarisation, and a cascaded classification architecture to extract structured instructions from audio-visual streams. The primary contribution of this work is the creation of a first-of-its-kind multimodal dataset of live auctions and the demonstration that, by treating clerking as a supervised classification problem on irregular time series, it is feasible to automate this niche, high-pressure task using contemporary machine learning pipelines.

## Dataset Structure

The data is organized into five distinct subsets, ranging from raw recordings to pre-processed, aligned training samples.

### 1. Audio-Visual Recordings

Located in `recordings/*`. This subset contains the raw footage of the auctions: approximately 86 hours across 34 individual files (~40 GB in total).

The recordings appear in two formats:

- Camera Recordings (`*.flv`): Raw webcam feeds (640×480 resolution, fixed 25 fps) capturing the auctioneer on the rostrum.
- Screen Recordings (`*.mp4`): Captures of the client-facing browser window (1800×1080 resolution, variable frame rate, ~28.89 fps on average). These include the video feed alongside UI elements (e.g., current price updates), which provide visual context for modality alignment.

Both formats use the H.264 video codec and share the same audio properties: stereo audio sampled at 48 kHz and encoded with AAC.

### 2. Clerk Command Logs

Located in `clerk_commands.parquet`. This subset provides the target labels for instruction extraction: the history of commands issued by the clerk via the auction platform's console. These logs serve as the ground truth for which action was taken at a given wall-clock time.

Key metadata includes:

- `timestamp`: The authoritative server-receipt time used for alignment.
- `command`: The type of action taken (see Target Class Labels below).
- `from_clerk`: A boolean flag distinguishing commands triggered manually by the clerk from automated backend responses.
- `value`: The monetary amount (for bids).
- `paddleNumber`: The identifier for the winning bidder (for sold lots).
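To build training targets from these logs, one typically keeps only the manually issued commands and orders them by server time. A minimal plain-Python sketch (in practice the Parquet file would be loaded with pandas or pyarrow first; the sample rows below are invented for illustration):

```python
# Invented sample rows mirroring the clerk_commands schema described above.
rows = [
    {"timestamp": 1000.0, "command": "openLot", "from_clerk": True,
     "value": None, "paddleNumber": None},
    {"timestamp": 1004.5, "command": "placeBid", "from_clerk": True,
     "value": 250.0, "paddleNumber": None},
    # Automated backend echo of the bid, not a clerk action:
    {"timestamp": 1004.6, "command": "placeBid", "from_clerk": False,
     "value": 250.0, "paddleNumber": None},
    {"timestamp": 1012.0, "command": "resolveSoldLot", "from_clerk": True,
     "value": 250.0, "paddleNumber": "A17"},
]

def manual_commands(rows):
    """Keep only commands triggered manually by the clerk, in time order."""
    manual = [r for r in rows if r["from_clerk"]]
    return sorted(manual, key=lambda r: r["timestamp"])

labels = [r["command"] for r in manual_commands(rows)]
print(labels)  # ['openLot', 'placeBid', 'resolveSoldLot']
```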

### 3. Transcriptions

Located in `chant_transcripts.parquet`. This subset contains time-aligned text transcriptions of the auctioneer's speech, generated with the `whisper-large-v3` model. Each row represents a single token (word) with the following attributes:

- Timestamps: Precise start and end offsets relative to the recording start.
- Confidence: The model's confidence score for the token.
- Speaker ID: Diarisation labels identifying unique auctioneers across different recordings (derived from embedding clustering).
- Hallucination flag (`is_anomaly`): A boolean indicator marking segments with high repetition rates, often caused by ASR failure during silence.
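When assembling text features, tokens flagged as hallucinations are usually dropped, optionally together with low-confidence words. A minimal sketch with invented rows; the confidence threshold is an arbitrary illustrative choice, not a dataset parameter:

```python
# Invented token rows mirroring the chant_transcripts schema described above.
tokens = [
    {"start": 3.10, "end": 3.30, "word": "fifty", "confidence": 0.94,
     "speaker": "SPK_0", "is_anomaly": False},
    {"start": 3.35, "end": 3.60, "word": "thank", "confidence": 0.91,
     "speaker": "SPK_0", "is_anomaly": False},
    {"start": 3.60, "end": 3.80, "word": "you", "confidence": 0.90,
     "speaker": "SPK_0", "is_anomaly": False},
    # Repetitive ASR hallucination emitted during silence:
    {"start": 9.00, "end": 9.20, "word": "you", "confidence": 0.20,
     "speaker": "SPK_0", "is_anomaly": True},
]

def clean_text(tokens, min_conf=0.5):
    """Join tokens into a string, dropping hallucinated and
    low-confidence words (min_conf is an arbitrary threshold)."""
    kept = [t["word"] for t in tokens
            if not t["is_anomaly"] and t["confidence"] >= min_conf]
    return " ".join(kept)

print(clean_text(tokens))  # fifty thank you
```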

### 4. Gavel Strikes

Located in `gavel_strikes.parquet`. This subset contains the timestamp offsets of detected gavel strikes. The strikes were identified using spectral feature analysis (RMS energy, spectral bandwidth, and onset strength) and serve as high-fidelity temporal anchors for aligning the audio stream with the clerk logs.
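The detector combines several spectral features; as an illustration of the energy-based component only, here is a plain-Python sketch that flags frames whose RMS energy jumps far above the background level. Frame length and threshold ratio are illustrative choices, not the values used to build the dataset:

```python
import math

def frame_rms(signal, frame_len):
    """RMS energy of consecutive non-overlapping frames."""
    return [math.sqrt(sum(x * x for x in signal[i:i + frame_len]) / frame_len)
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

def detect_strikes(signal, sr, frame_len=480, ratio=4.0):
    """Return candidate strike times (seconds) where frame RMS energy
    exceeds `ratio` times the mean level. Parameters are illustrative."""
    rms = frame_rms(signal, frame_len)
    background = sum(rms) / len(rms)
    return [i * frame_len / sr
            for i, e in enumerate(rms) if e > ratio * background]

# Synthetic 48 kHz signal: near-silence with one loud transient at 0.5 s.
sr = 48_000
signal = [0.001] * sr
for i in range(sr // 2, sr // 2 + 480):
    signal[i] = 0.9

strikes = detect_strikes(signal, sr)
print(strikes)  # [0.5]
```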

### 5. Aligned Multimodal Samples

Located in `aligned_modalities.parquet`. This is the processed, "ready-to-train" subset: it consolidates the audio, text, and log modalities into fixed time windows using a Continuous Sliding Window (CSW) strategy.

- Sampling period: 0.5 seconds.
- Window size: 10 seconds (look-back period).
- Content: Each sample includes the feature vector (transcribed text, speaker ID, gavel presence) and the target label.
- `NO_ACTION` samples: The subset explicitly includes samples representing periods of inactivity (silence or chatter), allowing models to learn to distinguish between active commands and background noise.
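The CSW strategy can be sketched as a labelling loop: every 0.5 s a sample is emitted whose features span the preceding 10 s and whose label is the command (if any) that fell inside the current stride step. A simplified illustration with invented events; real samples also carry transcript, speaker, and gavel features:

```python
def make_samples(events, duration, stride=0.5, lookback=10.0):
    """Continuous Sliding Window labelling sketch.

    events: list of (timestamp, command) clerk actions.
    Emits one sample per stride step; the label is the last command
    inside the step, else NO_ACTION, and the window covers the
    preceding `lookback` seconds of features."""
    samples = []
    t = 0.0
    while t <= duration:
        hits = [c for ts, c in events if t - stride < ts <= t]
        samples.append({
            "window": (max(0.0, t - lookback), t),
            "label": hits[-1] if hits else "NO_ACTION",
        })
        t = round(t + stride, 3)  # avoid float drift in the grid
    return samples

events = [(1.2, "openLot"), (3.9, "placeBid")]
samples = make_samples(events, duration=4.0)
print([s["label"] for s in samples])
```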

## Target Class Labels

The dataset focuses on commands that determine the progression of an auction lot. The target classes are:

- `openLot`: Initiates bidding for a specific item.
- `placeBid`: Registers a new highest bid.
- `fairWarning`: Signals the lot is about to close (e.g., "Going once...").
- `passLot`: Closes the lot without a sale (unsold).
- `resolveSoldLot`: Closes the lot as sold.
- `sellLot`: Administrative confirmation of sale details.
- `sellAndOpen`: A composite label representing a rapid transition where the clerk confirms a sale and immediately opens the next lot.
- `NO_ACTION`: The null class representing the absence of a command.
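For downstream code it can be convenient to pin these labels to an enumeration. The values below are taken verbatim from the list above; the grouping of lot-closing commands is only one possible interpretation, not part of the dataset:

```python
from enum import Enum

class ClerkCommand(str, Enum):
    """Target classes from the Chant2Action label set."""
    OPEN_LOT = "openLot"
    PLACE_BID = "placeBid"
    FAIR_WARNING = "fairWarning"
    PASS_LOT = "passLot"
    RESOLVE_SOLD_LOT = "resolveSoldLot"
    SELL_LOT = "sellLot"
    SELL_AND_OPEN = "sellAndOpen"
    NO_ACTION = "NO_ACTION"

# One interpretation of "commands that end the current lot":
LOT_CLOSING = {ClerkCommand.PASS_LOT, ClerkCommand.RESOLVE_SOLD_LOT,
               ClerkCommand.SELL_AND_OPEN}

print(ClerkCommand("placeBid").name)  # PLACE_BID
```

The `str` mixin lets the enum members compare equal to the raw label strings stored in the Parquet files.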

## Note on Alignment

The raw logs (UTC timestamps) and the audio-visual recordings (relative time) were synchronized using a correlation-based alignment strategy that maximizes the temporal overlap between acoustic gavel-strike events and digital `resolveSoldLot`/`passLot` commands. The alignment was further validated visually using subtitle overlays to ensure high temporal fidelity.
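The correlation-based idea can be illustrated as a grid search over candidate clock offsets that minimizes the gap between each shifted gavel strike and its nearest lot-closing command. Search range, step size, and the synthetic data are all illustrative, not the parameters used for the dataset:

```python
def best_offset(gavel_times, command_times, search=(-30.0, 30.0), step=0.1):
    """Grid-search the clock offset that best aligns gavel strikes with
    lot-closing commands, by minimising the total distance from each
    shifted strike to its nearest command. All parameters are illustrative."""
    def cost(offset):
        return sum(min(abs(g + offset - c) for c in command_times)
                   for g in gavel_times)
    n_steps = int((search[1] - search[0]) / step) + 1
    offsets = [round(search[0] + i * step, 3) for i in range(n_steps)]
    return min(offsets, key=cost)

# Synthetic check: the command clock lags the recording clock by 12.3 s.
gavels = [10.0, 55.0, 120.0]
commands = [g + 12.3 for g in gavels]
print(best_offset(gavels, commands))  # 12.3
```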


Developed at TU Berlin as part of the *High-Stakes Automation: Design and Evaluation of Instruction Extraction Strategies for Online Auctions* project.

Research conducted in collaboration with Snoofa Ltd and Bellmans.