---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: input
    list:
    - name: type
      dtype: string
    - name: content
      dtype: string
  - name: output
    struct:
    - name: veo3
      list: string
    - name: framepack
      list: string
    - name: framepack_seleted_video
      dtype: string
    - name: hunyuan
      list: string
    - name: hunyuan_seleted_video
      dtype: string
    - name: wan2.2-14b
      list: string
    - name: wan2.2-14b_seleted_video
      dtype: string
    - name: wan2.2-5b
      list: string
    - name: wan2.2-5b_seleted_video
      dtype: string
  splits:
  - name: train
    num_bytes: 98746
    num_examples: 99
  download_size: 36034
  dataset_size: 98746
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Visual-Intelligence

## πŸ”— Links

- [πŸ’Ύ Github Repo](https://github.com/Entroplay/Visual-Intelligence)
- [πŸ€— HF Dataset](https://huggingface.co/datasets/Entroplay/Visual-Intelligence)
- [πŸ“‘ Blog](https://entroplay.ai/research/video-intelligence)

## πŸ“– Dataset Introduction 

### Dataset Schema

- **id**: Unique sample identifier.
- **input**: Ordered list describing the input context.
  - **type**: Either "image" or "text".
  - **content**: For "image", a relative path to the first-frame image. For "text", the prompt text.
- **output**: Generated candidates and final selections by model.
  - **veo3**: Relative paths to videos generated by the VEO3 pipeline.
  - **framepack**: Relative paths to videos generated by FramePack across multiple runs.
  - **hunyuan**: Relative paths to videos generated by Hunyuan across multiple runs.
  - **wan2.2-5b**: Relative paths to videos generated by Wan-2.2-5B across multiple runs.
  - **wan2.2-14b**: Relative paths to videos generated by Wan-2.2-14B across multiple runs.
  - **framepack_seleted_video**: Selected best video among FramePack candidates.
  - **hunyuan_seleted_video**: Selected best video among Hunyuan candidates.
  - **wan2.2-5b_seleted_video**: Selected best video among Wan 2.2 5B candidates.
  - **wan2.2-14b_seleted_video**: Selected best video among Wan 2.2 14B candidates.
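Once loaded, these fields can be accessed directly. Below is a minimal sketch using the πŸ€— `datasets` library; the repository id and the `train` split come from this card, and the nested field names match the schema exactly (including the `_seleted_video` spelling used in the data):

```python
# Minimal sketch: load the dataset and inspect one sample's fields.
from datasets import load_dataset

ds = load_dataset("Entroplay/Visual-Intelligence", split="train")

sample = ds[0]
print(sample["id"])

# "input" is an ordered list of {"type", "content"} parts.
for part in sample["input"]:
    print(part["type"], "->", part["content"])

# "output" groups candidate videos and the selected one per model.
out = sample["output"]
print("FramePack candidates:", out["framepack"])
print("Selected FramePack video:", out["framepack_seleted_video"])
```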

### Data Format

```json
{
  "id": 1,
  "input": [
    { "type": "image", "content": "thumbnails/mp4/keypoint_localization.jpg" },
    { "type": "text",  "content": "Add a bright blue dot at the tip of the branch on which the macaw is sitting. ..." }
  ],
  "output": {
    "veo3": ["videos/mp4/keypoint_localization.mp4"],
    "framepack": [
      "videos/1_framepack_1.mp4",
      "videos/1_framepack_2.mp4"
    ],
    "hunyuan": [
      "videos/1_hunyuan_1.mp4",
      "videos/1_hunyuan_2.mp4"
    ],
    "wan2.2-5b": [
      "videos/1_wan2.2-5b_1.mp4",
      "videos/1_wan2.2-5b_2.mp4"
    ],
    "wan2.2-14b": [
      "videos/1_wan2.2-14b_1.mp4",
      "videos/1_wan2.2-14b_2.mp4"
    ],
    "framepack_seleted_video": "videos/1_framepack_1.mp4",
    "hunyuan_seleted_video": "videos/1_hunyuan_1.mp4",
    "wan2.2-5b_seleted_video": "videos/1_wan2.2-5b_1.mp4",
    "wan2.2-14b_seleted_video": "videos/1_wan2.2-14b_1.mp4"
  }
}
```
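All video references in a sample are relative paths, so evaluation code must resolve them against a local copy of the dataset repository. Below is a minimal sketch for collecting each open-source model's selected video per sample; the `DATA_ROOT` location and the `selected_videos` helper are illustrative assumptions, not part of the dataset:

```python
# Minimal sketch: map each model to the path of its selected video.
from pathlib import Path

DATA_ROOT = Path("Visual-Intelligence")  # assumed local clone of the dataset repo
MODELS = ["framepack", "hunyuan", "wan2.2-5b", "wan2.2-14b"]

def selected_videos(sample: dict) -> dict:
    """Resolve the selected video path for every open-source model."""
    out = sample["output"]
    # Field names follow the schema, e.g. "framepack_seleted_video".
    return {m: DATA_ROOT / out[f"{m}_seleted_video"] for m in MODELS}
```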

## πŸš€ About the Project

Google's Veo 3 shows remarkable promise in visual intelligence, demonstrating strong visual commonsense and reasoning in video generation. We aim to build a fully open-source evaluation suite that measures current progress in video generative intelligence across multiple dimensions, covering several state-of-the-art proprietary and open-source models.