---
pretty_name: Social Vision and Language Dataset (SVLD)
license: other
language:
  - en
tags:
  - multimodal
  - vision-language
  - social-media
  - image
  - video
  - text
  - comment-trees
  - popularity-prediction
  - arxiv:2006.08335
  - datasets
task_categories:
  - image-text-to-text
  - image-to-text
  - video-text-to-text
  - image-classification
  - text-classification
  - visual-question-answering
  - tabular-regression
size_categories:
  - 1M<n<10M
---

# Social Vision and Language Dataset (SVLD)

- **Original Paper:** *A Dataset and Benchmarks for Multimedia Social Analysis* (2020)
- **Authors:** Bofan Xue, David Chan, John Canny
- **Institution:** University of California, Berkeley


## 📌 Overview

The Social Vision and Language Dataset (SVLD) is a large-scale multimodal social media dataset designed to support research in:

- Vision–language modeling
- Multimodal fusion
- Social signal prediction
- Comment-tree modeling
- Temporal social dynamics
- Content popularity prediction

SVLD combines images, videos, text, social engagement signals, and full comment trees within the same context, enabling joint modeling across modalities in realistic, in-the-wild social media settings.


## 📦 Current Release (S3 Shard Edition)

The dataset is currently distributed as **1,961 daily shards**, each corresponding approximately to one day of collected data.

### ⚠ Important Notice

Due to long-term storage issues and partial data corruption:

- This release may not contain the full original dataset
- Some days, posts, media files, or metadata may be missing
- The total dataset size may vary

Researchers are strongly encouraged to:

- Recompute dataset statistics locally
- Avoid assuming the counts reported in the original publication
- Design pipelines that tolerate partial or missing data
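A corruption-tolerant loading loop can be sketched as follows. This is a minimal illustration that assumes JSON-lines shards; the actual SVLD shard layout and record schema may differ, so the `id` and `title` keys here are hypothetical.

```python
import io
import json

def iter_shard_records(shard_bytes):
    """Yield parsed records from one JSON-lines shard, skipping damaged lines.

    Assumes one JSON object per line; the real SVLD shard format may differ.
    """
    for raw in io.BytesIO(shard_bytes):
        try:
            yield json.loads(raw)
        except (json.JSONDecodeError, UnicodeDecodeError):
            # Tolerate partial corruption instead of aborting the whole shard.
            continue

# Example: two valid records surrounding one corrupt line.
shard = b'{"id": 1, "title": "cat pic"}\nnot-json\n{"id": 2, "title": "dog vid"}\n'
records = list(iter_shard_records(shard))
```

The key design point is that a single damaged line (or missing media reference) is skipped rather than allowed to fail the pipeline, which matches the "tolerate partial or missing data" guidance above.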

## 🧩 Dataset Structure

### Each Post May Contain

- One or more images
- One or more videos
- Optional per-media descriptions
- A natural-language title
- User-provided tags
- Social signals (upvotes, downvotes, favorites, views)
- A timestamp
- A full comment forest

### Each Comment May Contain

- Text
- Images
- GIFs or videos
- Recursive replies (tree structure)
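Because replies nest recursively, comment forests are naturally handled with recursive traversal. A minimal sketch, assuming a simplified schema in which each comment carries only text and a list of replies (real SVLD comments also include media and metadata fields):

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    # Field names are illustrative; actual SVLD keys may differ.
    text: str
    replies: list = field(default_factory=list)

def count_comments(forest):
    """Count every comment in a forest of recursive reply trees."""
    return sum(1 + count_comments(c.replies) for c in forest)

# A tiny forest: one thread with two levels of replies, plus one flat comment.
forest = [
    Comment("top-level", replies=[Comment("reply", replies=[Comment("nested")])]),
    Comment("another top-level"),
]
total = count_comments(forest)
```

The same pattern (recurse over `replies`) extends to flattening trees, computing depths, or extracting per-comment media.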

## 🎯 Modalities

SVLD supports research across:

- Images (posts and comments)
- Videos (posts and comments)
- Text (titles, descriptions, comments)
- Social metrics (votes, favorites, views)
- Tags (user-generated)
- Tree structure (comment forests)
- Temporal data (timestamps)

## 🔬 Research Directions

SVLD enables work in:

- Multimodal fusion architectures
- Image/video + language modeling
- Popularity and engagement prediction
- Social dynamics modeling
- Tag and metadata prediction
- Comment-tree reasoning
- Temporal distribution analysis
- Multimodal retrieval
- Content moderation research

## ⚙ Data Quality Notes

- Some media files may be unavailable
- Some shards may be incomplete
- Social metrics reflect a snapshot at scrape time
- Engagement distributions are heavily long-tailed
- Content reflects real-world social media (unfiltered, in-the-wild)
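For popularity or engagement regression, the long-tailed count distributions usually motivate a log transform of raw signals before modeling. A small sketch, using illustrative key names (`upvotes`, `views`) that may not match the actual SVLD schema:

```python
import math

def log_engagement(record):
    """Compress long-tailed raw counts with log1p before regression.

    log1p maps 0 to 0.0 and tames extreme viral outliers; the key names
    below are illustrative stand-ins for SVLD-style social signals.
    """
    return {k: math.log1p(record.get(k, 0)) for k in ("upvotes", "views")}

post = {"upvotes": 999, "views": 0}
feats = log_engagement(post)
```

Using `log1p` (rather than `log`) keeps zero-engagement posts well-defined, which matters given how many posts in a long-tailed distribution receive no engagement at all.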

## 📖 Citation

If you use SVLD, please cite:

> Xue, B., Chan, D., & Canny, J. (2020). *A Dataset and Benchmarks for Multimedia Social Analysis.* arXiv:2006.08335.


## 📜 License & Usage

This dataset is intended for academic research use only. Users are responsible for complying with platform terms of service and ethical research standards.