Daydream Scope

Scope is an open-source tool for running real-time, interactive generative AI video pipelines. It supports multiple autoregressive video diffusion models with WebRTC streaming, Spout integration for live VJ workflows, and a composable pipeline architecture.

Highlights

  • Multiple model backends – StreamDiffusionV2, LongLive, Krea Realtime, RewardForcing, MemFlow, and more via plugins
  • VACE integration – Use reference images and control videos (depth, scribble, optical flow, etc.) to guide generation in real time
  • Spout / NDI – Pipe video in and out of tools like Resolume Arena, TouchDesigner, OBS
  • LoRA support – Customize styles and concepts on the fly
  • Composable pipelines – Chain video diffusion with depth mapping, frame interpolation, and other processors
  • WebRTC streaming API – Low-latency output to any browser
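The composable-pipeline idea above can be sketched in plain Python. This is a minimal illustration of the pattern, chaining stages so each one feeds the next; the class, stage names, and frame representation are assumptions for the sketch, not Scope's actual API.

```python
# Minimal sketch of a composable frame pipeline: each stage is a callable
# that transforms a frame, and Pipeline runs them in order.
# All names here are illustrative, not Scope's real interfaces.

from typing import Callable, Dict, List

Frame = Dict[str, object]  # stand-in for a real frame (tensor + metadata)


class Pipeline:
    """Chains frame processors; the output of each stage feeds the next."""

    def __init__(self, stages: List[Callable[[Frame], Frame]]):
        self.stages = stages

    def process(self, frame: Frame) -> Frame:
        for stage in self.stages:
            frame = stage(frame)
        return frame


def depth_map(frame: Frame) -> Frame:
    # Placeholder: a real stage would run a depth-estimation model.
    return {**frame, "depth": "estimated"}


def diffuse(frame: Frame) -> Frame:
    # Placeholder: a real stage would run the video diffusion model,
    # conditioned on the depth map produced upstream.
    return {**frame, "generated": True}


def interpolate(frame: Frame) -> Frame:
    # Placeholder: a real stage would insert interpolated frames.
    return {**frame, "interpolated": True}


pipeline = Pipeline([depth_map, diffuse, interpolate])
result = pipeline.process({"pixels": "raw-frame"})
print(sorted(result.keys()))
```

In a real deployment the stages would wrap model inference and I/O (e.g. a Spout source feeding the first stage and a WebRTC sink consuming the last), but the chaining contract stays the same.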

Demos

Resolume Arena as live input into Scope via Spout:

Real-time depth-conditioned generation:

Citation

The VACE integration in Scope is described in:

@article{fosdick2026adapting,
  title={Adapting VACE for Real-Time Autoregressive Video Diffusion},
  author={Fosdick, Ryan},
  journal={arXiv preprint arXiv:2602.14381},
  year={2026}
}