EurekaTian committed · verified
Commit 936754e · 1 Parent(s): 789f831

Update README.md

Files changed (1): README.md (+84, −3)
README.md CHANGED
@@ -1,3 +1,84 @@
- ---
- license: apache-2.0
- ---
---
license: apache-2.0
library_name: transformers
tags:
- multimodal
- video-understanding
- audio-understanding
- streaming
- real-time
- omni-modal
pipeline_tag: video-text-to-text
---

# ROMA: Real-time Omni-Multimodal Assistant with Interactive Streaming Understanding

<div align="center">
<img src="[INSERT LINK TO FIGURE 2 (ARCHITECTURE) HERE]" width="800"/>
<p>Figure: ROMA processes streaming inputs as aligned multimodal units, using a 'Speak Head' to decide when to respond.</p>
</div>

## Model Summary

**ROMA** is a Real-time Omni-Multimodal Assistant designed for unified streaming audio-video understanding. Unlike traditional video LLMs, which answer only after receiving a query, ROMA integrates both **Reactive** (Question Answering) and **Proactive** (Event-Driven Alert, Real-Time Narration) capabilities within a single framework.

ROMA introduces a "Speak Head" mechanism that decouples response timing from content generation, allowing the model to autonomously decide *when* to speak based on the continuous audio-visual stream.

- **Paper:** [ROMA: Real-time Omni-Multimodal Assistant with Interactive Streaming Understanding](https://arxiv.org/abs/250x.xxxxx)
- **Project Page:** [Link](https://eureka-maggie.github.io/ROMA_show/)
- **Repository:** [INSERT GITHUB LINK]
- **Developed by:** Institute of Computing Technology, CAS; UCAS; Tsinghua University

## Key Capabilities

ROMA excels in three main interaction modes (illustrated in the sketch after this list):

1. **Event-Driven Alert (Proactive):** Monitors the stream and notifies the user the moment a specified event occurs (e.g., "Notify me when a bird pops out").
2. **Real-Time Narration (Proactive):** Continuously describes the evolving video and audio context without user prompts.
3. **Reactive QA:** Answers questions about past context, handling synchronized audio and video inputs.
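
The sketch below is a hypothetical illustration of how these modes might be expressed as per-task instructions; the strings and the instruction-per-mode interface are assumptions for illustration, not the repo's actual API.

```python
# Hypothetical instructions for ROMA's three interaction modes. The strings
# and the per-task instruction format are illustrative assumptions; the
# official repo defines the real prompt/interface.
tasks = {
    "event_driven_alert": "Notify me when a bird pops out.",     # proactive, fires once per event
    "real_time_narration": "Describe the scene as it unfolds.",  # proactive, continuous output
    "reactive_qa": "What did the speaker say just now?",         # reactive, answers on demand
}
```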

## Model Architecture

ROMA processes continuous inputs as synchronized **Multimodal Units**: 1-second intervals that align dense audio with the corresponding video frames.
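
To make the unit structure concrete, here is a minimal sketch of how such 1-second units could be packed, assuming a 16 kHz audio sample rate and a fixed per-second frame count; both values are illustrative, not taken from the paper.

```python
from dataclasses import dataclass
import numpy as np

AUDIO_SR = 16_000   # audio samples per second (assumed for this sketch)
FRAMES_PER_SEC = 2  # video frames sampled per second (assumed for this sketch)

@dataclass
class MultimodalUnit:
    start_time: float         # stream timestamp of this unit, in seconds
    audio: np.ndarray         # shape (AUDIO_SR,): dense audio for this second
    frames: list[np.ndarray]  # FRAMES_PER_SEC frames, each (H, W, 3)

def make_units(audio: np.ndarray, frames: list[np.ndarray]) -> list[MultimodalUnit]:
    """Slice a decoded clip into aligned 1-second multimodal units."""
    n_sec = len(audio) // AUDIO_SR
    return [
        MultimodalUnit(
            start_time=float(t),
            audio=audio[t * AUDIO_SR : (t + 1) * AUDIO_SR],
            frames=frames[t * FRAMES_PER_SEC : (t + 1) * FRAMES_PER_SEC],
        )
        for t in range(n_sec)
    ]
```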

Key architectural innovations include:
- **Chunked TMROPE:** Ensures consistent temporal position encoding across streaming chunks.
- **Speak Head:** A lightweight module, parallel to the LM head, that predicts a binary probability for triggering a response, resolving the task conflict between listening and speaking (see the sketch below).
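
A minimal PyTorch sketch of the gating idea follows: a small binary head over the LM's hidden state whose probability decides whether to decode for the current unit. The layer shape, input pooling, and 0.5 threshold are all assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class SpeakHead(nn.Module):
    """Minimal sketch of a speak/silent gate parallel to the LM head.

    The real module's size, inputs, and threshold are defined by the
    paper/repo; this only illustrates the gating idea.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, 1)  # binary logit: speak vs. stay silent

    def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
        # hidden_state: (batch, hidden_size) summary of the latest unit
        return torch.sigmoid(self.proj(hidden_state)).squeeze(-1)

# Gating loop (illustrative): decode only when the head fires.
head = SpeakHead(hidden_size=4096)  # hidden size assumed
h = torch.randn(1, 4096)            # stand-in for the LM's last hidden state
if head(h).item() > 0.5:            # threshold assumed
    pass  # trigger normal LM-head decoding for this unit
```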

## Performance

ROMA achieves state-of-the-art performance on proactive benchmarks while remaining competitive in reactive settings.

| Benchmark Type | Task | ROMA Performance | State-of-the-Art? |
| :--- | :--- | :--- | :--- |
| **Proactive** | Event-Driven Alert (QVHighlights) | **53.7 mAP** | ✅ Yes |
| **Proactive** | Real-Time Narration (YouCook2) | **35.21 F1** | ✅ Yes |
| **Reactive** | Omni-Source Understanding (StreamingBench) | **Top rank** | ✅ Yes |
| **Reactive** | Full-Modality QA (Video-MME w/ Audio) | **33.30 accuracy** | ✅ Yes |

## Quick Start

```python
# NOTE: This is a pseudo-code sketch; `load_video_audio_stream` and
# `streaming_inference` are illustrative placeholders. Please refer to the
# official GitHub repo for the exact processor and inference loop.

from transformers import AutoModel, AutoTokenizer

# trust_remote_code is assumed here, since ROMA ships a custom architecture.
model = AutoModel.from_pretrained("Your-HF-Username/ROMA", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Your-HF-Username/ROMA", trust_remote_code=True)

# Input: streaming chunks of video and audio, packed into 1-second units.
# The Speak Head decides per unit whether the model should emit text.
stream = load_video_audio_stream("example_video.mp4")  # placeholder helper

history_cache = None  # KV cache carried across units

for multimodal_unit in stream:
    # Process one 1-second audio-visual unit against the cached context.
    response, history_cache = model.streaming_inference(
        multimodal_unit,
        past_key_values=history_cache,
    )

    # `response` is non-empty only when the Speak Head triggers.
    if response:
        print(f"ROMA says: {response}")
```