
WearVox: An Egocentric Multichannel Voice Assistant Benchmark for Wearables

Paper: WearVox: An Egocentric Multichannel Voice Assistant Benchmark for Wearables
Authors: Zhaojiang Lin*, Yong Xu*, Kai Sun*, Jing Zheng, Yin Huang, Surya Appini, Krish Narang, Renjie Tao, Ishan Kapil Jain, Siddhant Arora, Ruizhi Li, Yiteng Huang, Kaushik Patnaik, Wenfang Xu, Suwon Shon, Yue Liu, Ahmed Aly, Anuj Kumar, Florian Metze, Luna Dong
Affiliations: Meta Reality Labs, Meta


πŸ“ Dataset Summary


WearVox is the first benchmark specifically designed to evaluate voice assistants in realistic wearable scenarios using devices like AI glasses.

  • 3,842 multi-channel, egocentric audio recordings collected via AI glasses
  • 5 diverse task types:
    • Search-Grounded QA (547)
    • Closed-Book QA (588)
    • Side-Talk Rejection (1,082, including 500 queries duplicated from Tool Calling)
    • Tool Calling (1,125)
    • Speech Translation (1,000)
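As a quick sanity check, the per-task counts sum to 4,342 items; removing the 500 side-talk queries duplicated from tool calling leaves the 3,842 unique recordings stated above:

```python
# Per-task item counts from the dataset card.
counts = {
    "search_grounded_qa": 547,
    "closed_book_qa": 588,
    "side_talk_rejection": 1082,
    "tool_calling": 1125,
    "speech_translation": 1000,
}
total_items = sum(counts.values())            # 4342 items across the 5 tasks
duplicated = 500                              # side-talk queries reused from tool calling
unique_recordings = total_items - duplicated
print(total_items, unique_recordings)         # 4342 3842
```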

Each recording is accompanied by rich audio metadata, enabling nuanced analysis of model performance under real-world constraints. Benchmarking results show that leading real-time Speech LLMs achieve accuracies ranging from 29% to 59%, with substantial performance degradation on noisy outdoor audio.


📊 Dataset Structure

Each example in the dataset contains:

  • audio_query: The beamformed single-channel egocentric audio query
  • audio_query_mc: The multi-channel egocentric audio query
  • gt_transcript: The ground-truth query transcript
  • ground_truth: The ground-truth answer
  • task: One of the five task types described above
  • text_prompt: The task instruction for the LLM
  • audio_metadata: Metadata describing the audio recording
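For orientation, a record shaped like the fields above might look as follows. This is purely illustrative: the values are made up, and the audio fields are decoded arrays or file paths once loaded, not plain strings.

```python
# Illustrative only: a record with the fields listed above.
# All values are hypothetical; they are not taken from the dataset.
example = {
    "audio_query": "path/to/beamformed_mono.wav",     # beamformed single-channel audio
    "audio_query_mc": "path/to/multichannel.wav",     # multi-channel audio
    "gt_transcript": "what's the weather like today", # ground-truth query transcript
    "ground_truth": "sunny",                          # ground-truth answer
    "task": "grounding",                              # one of the 5 task types
    "text_prompt": "Answer the question briefly.",    # task instruction for the LLM
    "audio_metadata": {"environment": "outdoor"},     # recording metadata (illustrative keys)
}
assert set(example) == {
    "audio_query", "audio_query_mc", "gt_transcript",
    "ground_truth", "task", "text_prompt", "audio_metadata",
}
```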

Integrating Search Results

The released dataset does not, by default, include the search results needed for Search-Grounded QA. They can be obtained by joining the dataset with the CRAG dataset on the id field. The script below builds the search-augmented input (data_public_rag.json); this is the approach used by the baseline models in the WearVox paper.

import json

# Load CRAG search results from the 10 dev shards, keyed by interaction_id.
rag = {}
ragmeta = {}
for i in range(10):
    with open(f"crag_task_3_dev_v4/crag_task_3_dev_v4_{i}.jsonl", "r") as f:
        for line in f:
            data = json.loads(line)
            results = data["search_results"]
            # Drop the full page content; keep only name, timestamp, and snippet.
            for result in results:
                del result["page_result"]
            rag[data["interaction_id"]] = results
            ragmeta[data["interaction_id"]] = {"query_time": data["query_time"]}

# Attach the search results to every search-grounded QA example.
with open("data_public.json", "r", encoding="utf8") as f:
    data = json.load(f)
for example in data:
    if example["task"] != "grounding":
        continue
    iid = example["id"]
    # Format each usable search result as a <DOC> block, skipping results
    # with a missing page name, missing timestamp, or empty snippet.
    references = [
        "<DOC>\npage_name: " + r["page_name"]
        + "\npage_last_modified: " + r["page_last_modified"]
        + "\npage_snippet: " + r["page_snippet"] + "\n</DOC>"
        for r in rag[iid]
        if r["page_name"] is not None
        and r["page_last_modified"] is not None
        and r["page_snippet"].strip() != ""
    ]
    query_time = ragmeta[iid]["query_time"]
    example["text_prompt"] = (
        f"You are given an audio question, which was asked at {query_time}. "
        "Your task is to answer the question in as few words as possible."
    )
    if references:
        example["text_prompt"] += (
            " You are also provided with the references below, "
            "which may or may not help answer the question.\n"
            "### References\n" + "\n".join(references)
        )

with open("data_public_rag.json", "w", encoding="utf8") as f:
    json.dump(data, f, indent=1, ensure_ascii=False)
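To make the reference format concrete, here is a tiny self-contained sketch that builds one <DOC> block the same way the script above does; the search-result values are made up for illustration:

```python
# A made-up CRAG-style search result (page_result already removed).
result = {
    "page_name": "Example Page",
    "page_last_modified": "2024-01-01",
    "page_snippet": "An example snippet.",
}
doc = (
    "<DOC>\npage_name: " + result["page_name"]
    + "\npage_last_modified: " + result["page_last_modified"]
    + "\npage_snippet: " + result["page_snippet"] + "\n</DOC>"
)
print(doc)
```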