---
license: other
language:
  - en
task_categories:
  - visual-question-answering
  - automatic-speech-recognition
  - text-generation
tags:
  - evaluation
  - benchmark
  - multimodal
  - edge-inference
  - on-device
  - litert-lm
  - image
  - audio
  - text
  - multi-turn
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test/metadata.jsonl
pretty_name: EdgeMMEval
---

# EdgeMMEval

Minimal multimodal evaluation dataset for on-device inference testing. Covers functional correctness, accuracy, latency stress, and memory pressure across image, audio, text, multi-turn, combination, structured output, and tool-calling cases.

## Dataset summary

The test split is defined in `data/test/metadata.jsonl` (200 rows). Each row has a `test_id` (for example `IMG-001`, `STO-020`) and a `modality`.

| Modality | Samples | Focus |
|---|---|---|
| Image | 34 | VQA, OCR, description, classification, resolution / loop stress |
| Audio | 27 | Transcription, spoken QA, translation, noise and edge cases |
| Text | 41 | QA, translation, summarization, reasoning-style prompts |
| Multi-turn | 24 | Context retention, KV-cache stress |
| Combination | 28 | Cross-modal alignment |
| Structured output | 32 | JSON schema, regex, grammar-style constraints (`constraint_type`) |
| Tool call | 14 | Correct tool name, arguments, or valid refusal text |
| **Total** | **200** | |

## Usage

```python
from datasets import load_dataset

ds = load_dataset("CortexSwarm/EdgeMMEval", split="test")
print(ds[0])
```
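The per-modality counts in the table above can be recomputed from the loaded split. A minimal sketch, assuming each row exposes the `modality` field described in the dataset summary (the stand-in rows below are illustrative, not real dataset entries):

```python
from collections import Counter

def modality_counts(rows):
    """Tally samples per modality from an iterable of row dicts."""
    return Counter(row["modality"] for row in rows)

# Stand-in rows; with the real split, pass `ds` directly.
rows = [{"modality": "image"}, {"modality": "audio"}, {"modality": "image"}]
print(modality_counts(rows))  # Counter({'image': 2, 'audio': 1})
```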

## Scoring

Each sample includes a `reference` field and usually `reference_variants` for automatic scoring. The scorer lives in this repo at `scripts/score.py`.

### Pipeline

1. Run your model on each `test_id` and collect the model's output string.
2. Write a JSON object to `results.json` at the repo root: keys are `test_id` values, values are the raw model outputs (strings).
3. Run `python scripts/score.py`. It reads `data/test/metadata.jsonl` and `results.json`, then writes `report.json`.
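Steps 1 and 2 above can be sketched as follows; `run_model` here is a hypothetical stand-in for your on-device inference call, not part of this repo:

```python
import json

def run_model(sample):
    # Hypothetical placeholder: replace with your actual inference call.
    return "stub output for " + sample["test_id"]

samples = [{"test_id": "IMG-001"}, {"test_id": "STO-020"}]

# Keys are test_id values, values are the raw output strings.
results = {s["test_id"]: run_model(s) for s in samples}

with open("results.json", "w") as f:
    json.dump(results, f, indent=2)
```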

### Metrics and pass rules (see constants at the top of `score.py`)

- Most tasks: BLEU-1 vs `reference_variants`; pass if score ≥ 0.5.
- Summarization-style tasks (`task` in `summarization` / `summarize`): ROUGE-L; pass if ≥ 0.4.
- Structured output: format check (JSON Schema, regex, or grammar-style heuristic) plus content BLEU; pass if the format is valid and BLEU ≥ 0.3.
- Tool call: compares the expected tool/args or a valid text-only refusal; separate logic in `score_tool_call`.
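As a rough illustration of the default rule, a minimal BLEU-1 (clipped unigram precision with a brevity penalty, taking the best score over the reference variants) might look like this; the authoritative thresholds and implementation live in `scripts/score.py`:

```python
import math
from collections import Counter

def bleu1(candidate: str, references: list[str]) -> float:
    """Best clipped unigram precision (with brevity penalty) over references."""
    cand = candidate.lower().split()
    if not cand:
        return 0.0
    cand_counts = Counter(cand)
    best = 0.0
    for ref in references:
        ref_counts = Counter(ref.lower().split())
        # Clip each candidate unigram by its count in the reference.
        overlap = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
        precision = overlap / len(cand)
        # Penalize candidates shorter than the reference.
        ref_len = sum(ref_counts.values())
        bp = 1.0 if len(cand) >= ref_len else math.exp(1 - ref_len / len(cand))
        best = max(best, bp * precision)
    return best

PASS_THRESHOLD = 0.5  # "pass if score >= 0.5" for most tasks

score = bleu1("a red square", ["a red square on white", "red box"])
print(score >= PASS_THRESHOLD)  # True
```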

### `report.json` shape

- `summary`: `total_scored`, `total_passed`, `overall_avg`, `pass_rate_pct`, `verdict` (`✓ INFERENCE WORKING` if pass rate ≥ 80%, else `✗ ISSUES DETECTED`), and `skipped` (test IDs with no entry in `results.json`).
- `by_modality`: average score and pass counts per modality (empty if nothing was scored).
- `samples`: per-test rows with scores, metrics, and pass/fail.

If `total_scored` is 0, every test ID was skipped; typically `results.json` is missing or does not map test IDs to output strings. Fix the results file and re-run the scorer.
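The verdict rule can be mirrored in a few lines; a sketch assuming the `summary` field names listed above:

```python
def verdict(summary: dict) -> str:
    """Reproduce the pass-rate rule: >= 80% passes, 0 scored means a setup problem."""
    if summary["total_scored"] == 0:
        return "no results scored; check results.json"
    return "INFERENCE WORKING" if summary["pass_rate_pct"] >= 80 else "ISSUES DETECTED"

print(verdict({"total_scored": 200, "total_passed": 170, "pass_rate_pct": 85.0}))
# INFERENCE WORKING
```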

## License

The majority of this dataset is CC BY 4.0. A small subset of image files comes from Unsplash and is governed by the Unsplash License instead.

### CC BY 4.0 (metadata, text tasks, original media, tooling, and most images)

The following are licensed under Creative Commons Attribution 4.0 (CC BY 4.0):

- `data/test/metadata.jsonl` (prompts, references, labels, structure).
- All text, multi-turn, combination, structured-output, and tool-call samples.
- Audio clips produced from project `samples/` recordings (see `scripts/collect_all_audio.sh`).
- Images generated in-repo by `scripts/collect_all_images.py` using PIL: synthetic shapes, UI mockups, charts, QR patterns, blank canvases, sequential frames, and all derived images (blur, crop, rotation, collage, meme, watermark, overexposed) whose source is a PIL-generated file rather than an Unsplash photo. This covers IMG-001–IMG-005, IMG-007–IMG-009, IMG-015–IMG-017, IMG-021–IMG-024, and IMG-026–IMG-033.
- IMG-020 (built from `samples/v2/real-receipt.webp`, author-provided).
- Scripts and scorer logic (`scripts/`, `upload.py`).

Reuse requires attribution to EdgeMMEval and a link to this dataset or source repository.

### Unsplash License (specific image files)

The following files under `data/test/images/` are photographs downloaded from Unsplash (URLs in `scripts/collect_all_images.py`) and remain under the Unsplash License:

- **Direct downloads:** `IMG-006.jpg` (4K mountain; if the download failed and the script used its generated fallback, that copy is CC BY 4.0 instead), `IMG-010.jpg`, `IMG-011.jpg`, `IMG-012.jpg`, `IMG-013.jpg`, `IMG-014.jpg`, `IMG-025.jpg`, `IMG-034.jpg`.
- **Derivatives of those photos:** `IMG-018.jpg` (180° rotation of `IMG-010`), `IMG-019.jpg` (JPEG-compressed from `IMG-010`).

The Unsplash License permits free use and modification, but you may not sell unmodified copies or build a competing image service from the content; see the full license text for details.

### Summary

| Part | License |
|---|---|
| Metadata, text tasks, scripts, audio, PIL-generated images, receipt | CC BY 4.0 |
| Unsplash photos and the two derivatives listed above | Unsplash License |