atMrMattV committed on
Commit 95a7287 · verified · 1 Parent(s): ba8114e

Update README.md

Files changed (1)
  1. README.md +147 -155
README.md CHANGED
@@ -10,165 +10,157 @@ base_model:
  - Qwen/Qwen3-TTS-12Hz-1.7B-VoiceDesign
  - Qwen/Qwen3-TTS-12Hz-1.7B-Base
  pipeline_tag: text-to-image
-
  ---
 
  <p align="center">
- <img
  src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/YJHpzH436J828nNymCNk7.png"
  width="600" />
- </p>
-
- <p align="center">
- <strong>Full creative production suite that runs entirely on your GPU.</strong><br/>
- No cloud. No API keys. No subscriptions. Every model runs on-device.
- </p>
-
- <p align="center">
- <a href="https://www.notion.so/Visione-Technical-Documentation-3194a74185bb8015b154e234606497e2">Documentation</a>
- </p>
-
- ---
-
- ## Why Visione
-
- Most AI creative tools are fragmented: one app for image gen, another for video, another for audio, each with its own cloud dependency and pricing tier. Visione puts the entire pipeline — from concept to final export — inside a single desktop application that runs on a consumer NVIDIA GPU (16GB VRAM).
-
- You own your hardware, your models, and your outputs. Nothing is transmitted externally. Ever.
-
- <table align="center"><tr>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/X0pIezsKwIRl-Guw3k58A.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/X0pIezsKwIRl-Guw3k58A.png" width="300" /></a></td>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/euOPxXTNWxjmRl-C88uU2.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/euOPxXTNWxjmRl-C88uU2.png" width="300" /></a></td>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/lW_zGi1O8HblIoamV0RLr.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/lW_zGi1O8HblIoamV0RLr.png" width="300" /></a></td>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/qKWonqa8ZQvl3CTdD0Pje.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/qKWonqa8ZQvl3CTdD0Pje.png" width="300" /></a></td>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/IjNbVVpnLepr9NI8cdxA3.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/IjNbVVpnLepr9NI8cdxA3.png" width="300" /></a></td>
- </tr></table>
-
- ---
-
- ## What You Can Do
-
- | Component | Description |
- |---|---|
- | **Imagine** | Text-to-image generation with 90+ style LoRAs across 3 model tiers (Z-Image Turbo, Klein 9B). Character `@mentions` for consistent subjects. |
- | **Animate** | Image-to-video and text-to-video via LTX 2.3. 5 workflow modes: standard I2V/T2V, Best 3-stage, first-last-frame, audio-conditioned. |
- | **Retouch** | Full image editor — inpainting, upscaling, reframing, face swap (InsightFace + FaceFusion), background removal, LUT color grading, optical realism effects, multi-reference compositing, and smart selection (SAM). |
- | **Retexture** | Apply any of 90+ preset styles to existing images via LoRA, or transfer the style of a reference image using depth-conditioned generation. |
- | **Enhance** | SeedVR2 video enhancement (3B/7B models), Real-ESRGAN upscaling, and RIFE frame interpolation. |
- | **Storyboard** | 12-stage AI filmmaking pipeline: concept development with multi-agent LLM collaboration, character library, shot-by-shot generation, and ZIP export. |
- | **Sound Studio** | ACE-Step music generation, Qwen3-TTS voiceover (28 preset voices + clone + design), and HunyuanVideo-Foley for video-to-audio. |
- | **Characters** | Persistent character library with full-body 5-shot reference generation for visual consistency across shots and components. |
- | **Styles** | Browse and install LoRAs from CivitAI directly inside the app. Manage custom styles with per-preset strength tuning. |
- | **Gallery** | Unified asset browser across all components with metadata, output modal, and send-to integration for cross-component workflows. |
-
- <table align="center"><tr>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/No1ABmspTrCWqpvsukafQ.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/No1ABmspTrCWqpvsukafQ.png" width="300" /></a></td>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/mXVAiuj8Vpik0a_UNREIU.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/mXVAiuj8Vpik0a_UNREIU.png" width="300" /></a></td>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/Gmzmavqm9antFHYsbl4Ka.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/Gmzmavqm9antFHYsbl4Ka.png" width="300" /></a></td>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/BbYSmMGcXZENjW-LBiIUz.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/BbYSmMGcXZENjW-LBiIUz.png" width="300" /></a></td>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/jcy5-_cKf0oa_Utf3ZXbK.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/jcy5-_cKf0oa_Utf3ZXbK.png" width="300" /></a></td>
- </tr></table>
-
- ---
-
- ## Key Features
-
- - **90+ style presets** — LoRA-based styles spanning cinematic, illustration, animation, photography, design, and artist-specific looks. Browse and install more from CivitAI directly inside the app.
- - **Character consistency** — Generate a persistent character once, then reference them by name across Imagine, Retouch, and Storyboard with `@mentions`.
- - **Smart VRAM management** — Models load and unload sequentially to fit within 16GB. One active model at a time, no manual memory management needed.
- - **Multilingual UI** — English, Italian, Spanish, French, German. (COMING SOON)
- - **Local LLM + VLM** — Qwen3.5-4B handles prompt enhancement, image captioning, and storyboard agents. Falls back to Llama 3.2 3B on CPU if needed. No external API calls.
- - **Optical realism** — Client-side film emulation: grain, halation, vignette, pro-mist, chromatic aberration, highlight roll-off, color temperature and tint.
-
- ---
-
- ## Architecture
-
- Visione is a local client-server desktop app. The React frontend talks to a FastAPI backend over localhost; real-time progress streams via SSE. Heavy inference runs in-process (diffusers/PyTorch) or through a headless ComfyUI subprocess for video pipelines. The Tauri 2 shell wraps it as a native window and manages the backend lifecycle.
-
- Models are shared across components wherever possible — the same image generation backbone serves Imagine, Retouch, Retexture, and Storyboard. All assets, models, and outputs stay on local storage.
-
- **Stack:** Python 3.12 + FastAPI + SSE / React 18 + TypeScript + Zustand / Tauri 2 / ComfyUI headless / PyTorch 2.7 + CUDA
-
- <table align="center"><tr>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/7_CDVBV6B08IosFIkr5jq.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/7_CDVBV6B08IosFIkr5jq.png" width="300" /></a></td>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/fRhZcUYtK_TE8uIlXyPH-.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/fRhZcUYtK_TE8uIlXyPH-.png" width="300" /></a></td>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/B1J7kJuRPiPY12-Wja0jW.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/B1J7kJuRPiPY12-Wja0jW.png" width="300" /></a></td>
- <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/MXtHgy7hlq9YZVaQED_WA.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/MXtHgy7hlq9YZVaQED_WA.png" width="300" /></a></td>
- </tr></table>
-
- ---
-
- ## Hardware Requirements
-
- | | Minimum | Recommended |
- |---|---|---|
- | **GPU** | NVIDIA 12GB VRAM (RTX 3060) | NVIDIA 16GB VRAM (RTX 4080) |
- | **RAM** | 16GB | 32GB |
- | **Storage** | ~50GB (core models) | ~210GB (all models) |
- | **OS** | Windows 10/11 | Windows 11 |
-
- ---
-
- ## License
-
- MIT
-
- ---
  license: mit
  tags:
  - art
 
+ license: mit
+ tags:
+ - art
+ - agent
+ - image-generation
+ - video-generation
+ - text-to-image
+ - text-to-video
+ - style-transfer
+ - image-editing
+ - tts
+ - local-inference
  ---
+
  <p align="center">
+ <img
  src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/YJHpzH436J828nNymCNk7.png"
  width="600" />
+ </p>
+
+ <p align="center">
+ <strong>Full creative production suite that runs entirely on your GPU.</strong><br/>
+ No cloud. No API keys. No subscriptions. Every model runs on-device.
+ </p>
+
+ <p align="center">
+ <a href="https://www.notion.so/Visione-Technical-Documentation-3194a74185bb8015b154e234606497e2">Documentation</a>
+ &nbsp;&nbsp;|&nbsp;&nbsp;
+ <a href="https://github.com/atMattV/visione">GitHub</a>
+ </p>
+
+ ---
+
+ ## Why Visione
+
+ Most AI creative tools are fragmented: one app for image gen, another for video, another for audio, each with its own cloud dependency and pricing tier. Visione puts the entire pipeline — from concept to final export — inside a single desktop application that runs on a consumer NVIDIA GPU (16GB VRAM).
+
+ You own your hardware, your models, and your outputs. Nothing is transmitted externally. Ever.
+
+ <table align="center"><tr>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/X0pIezsKwIRl-Guw3k58A.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/X0pIezsKwIRl-Guw3k58A.png" width="300" /></a></td>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/euOPxXTNWxjmRl-C88uU2.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/euOPxXTNWxjmRl-C88uU2.png" width="300" /></a></td>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/lW_zGi1O8HblIoamV0RLr.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/lW_zGi1O8HblIoamV0RLr.png" width="300" /></a></td>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/qKWonqa8ZQvl3CTdD0Pje.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/qKWonqa8ZQvl3CTdD0Pje.png" width="300" /></a></td>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/IjNbVVpnLepr9NI8cdxA3.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/IjNbVVpnLepr9NI8cdxA3.png" width="300" /></a></td>
+ </tr></table>
+
+ ---
+
+ ## What You Can Do
+
+ | Component | Description |
+ |---|---|
+ | **Imagine** | Text-to-image generation with 90+ style LoRAs across 3 model tiers (Z-Image Turbo, Klein 9B). Character `@mentions` for consistent subjects. |
+ | **Animate** | Image-to-video and text-to-video via LTX 2.3. 5 workflow modes: standard I2V/T2V, Best 3-stage, first-last-frame, audio-conditioned. |
+ | **Retouch** | Full image editor — inpainting, upscaling, reframing, face swap (InsightFace + FaceFusion), background removal, LUT color grading, optical realism effects, multi-reference compositing, and smart selection (SAM). |
+ | **Retexture** | Apply any of 90+ preset styles to existing images via LoRA, or transfer the style of a reference image using depth-conditioned generation. |
+ | **Enhance** | SeedVR2 video enhancement (3B/7B models), Real-ESRGAN upscaling, and RIFE frame interpolation. |
+ | **Storyboard** | 12-stage AI filmmaking pipeline: concept development with multi-agent LLM collaboration, character library, shot-by-shot generation, and ZIP export. |
+ | **Sound Studio** | ACE-Step music generation, Qwen3-TTS voiceover (28 preset voices + clone + design), and HunyuanVideo-Foley for video-to-audio. |
+ | **Characters** | Persistent character library with full-body 5-shot reference generation for visual consistency across shots and components. |
+ | **Styles** | Browse and install LoRAs from CivitAI directly inside the app. Manage custom styles with per-preset strength tuning. |
+ | **Gallery** | Unified asset browser across all components with metadata, output modal, and send-to integration for cross-component workflows. |
+
+ <table align="center"><tr>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/No1ABmspTrCWqpvsukafQ.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/No1ABmspTrCWqpvsukafQ.png" width="300" /></a></td>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/mXVAiuj8Vpik0a_UNREIU.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/mXVAiuj8Vpik0a_UNREIU.png" width="300" /></a></td>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/Gmzmavqm9antFHYsbl4Ka.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/Gmzmavqm9antFHYsbl4Ka.png" width="300" /></a></td>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/BbYSmMGcXZENjW-LBiIUz.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/BbYSmMGcXZENjW-LBiIUz.png" width="300" /></a></td>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/jcy5-_cKf0oa_Utf3ZXbK.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/jcy5-_cKf0oa_Utf3ZXbK.png" width="300" /></a></td>
+ </tr></table>
+
+ ---
+
+ ## Key Features
+
+ - **90+ style presets** — LoRA-based styles spanning cinematic, illustration, animation, photography, design, and artist-specific looks. Browse and install more from CivitAI directly inside the app.
+ - **Character consistency** — Generate a persistent character once, then reference them by name across Imagine, Retouch, and Storyboard with `@mentions`.
+ - **Smart VRAM management** — Models load and unload sequentially to fit within 16GB. One active model at a time, no manual memory management needed.
+ - **Multilingual UI** — English, Italian, Spanish, French, German. (COMING SOON)
+ - **Local LLM + VLM** — Qwen3.5-4B handles prompt enhancement, image captioning, and storyboard agents. Falls back to Llama 3.2 3B on CPU if needed. No external API calls.
+ - **Image Edit** — Client-side film emulation: grain, halation, vignette, pro-mist, chromatic aberration, highlight roll-off, color temperature and tint.
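The one-active-model policy behind Smart VRAM management can be sketched in a few lines. This is an illustrative sketch only: the `ModelManager` class and the loader callables are assumptions for demonstration, not Visione's actual API.

```python
# Hypothetical sketch of the one-active-model policy: acquiring a model first
# unloads whichever model currently holds the GPU. ModelManager and the loader
# callables are illustrative names, not Visione's actual API.
from typing import Callable, Optional

class ModelManager:
    def __init__(self) -> None:
        self.active: Optional[str] = None
        self._model: Optional[object] = None
        self.log: list[str] = []  # records load/unload order for inspection

    def acquire(self, name: str, loader: Callable[[], object]) -> object:
        if self.active == name:          # already resident: reuse, no VRAM churn
            self.log.append(f"reuse:{name}")
            return self._model
        if self.active is not None:      # evict the current occupant first
            self.log.append(f"unload:{self.active}")
            self._model = None
        self.log.append(f"load:{name}")
        self._model = loader()
        self.active = name
        return self._model

mgr = ModelManager()
mgr.acquire("imagine-backbone", lambda: "weights-A")
mgr.acquire("ltx-video", lambda: "weights-B")  # the backbone is unloaded first
print(mgr.log)
# → ['load:imagine-backbone', 'unload:imagine-backbone', 'load:ltx-video']
```

The point of the design is that callers never free memory themselves; requesting a model is what triggers eviction of the previous one.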
+
+ ---
+
+ ## Architecture
+
+ Visione is a local client-server desktop app. The React frontend talks to a FastAPI backend over localhost; real-time progress streams via SSE. Heavy inference runs in-process (diffusers/PyTorch) or through a headless ComfyUI subprocess for video pipelines. The Tauri 2 shell wraps it as a native window and manages the backend lifecycle.
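The SSE progress channel mentioned above comes down to a simple wire format. The stdlib-only sketch below shows the framing; the event names and payload fields are assumptions for illustration, not Visione's actual schema.

```python
# Illustrative sketch of Server-Sent Events framing for progress updates.
# Event names and the JSON payload shape are assumptions, not Visione's schema.
import json
from typing import Iterator

def sse_frame(event: str, data: dict) -> str:
    """Encode one SSE message: 'event:' line, 'data:' line, blank-line terminator."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

def progress_stream(total_steps: int) -> Iterator[str]:
    """Yield one SSE frame per step, then a terminal 'done' event."""
    for step in range(1, total_steps + 1):
        yield sse_frame("progress", {"step": step, "total": total_steps})
    yield sse_frame("done", {"ok": True})

# Print the raw frames as a client would receive them over text/event-stream.
for frame in progress_stream(2):
    print(frame, end="")
```

In FastAPI such a generator is typically wrapped in a `StreamingResponse` with `media_type="text/event-stream"`, which is presumably how a backend like this would serve it.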
+
+ Models are shared across components wherever possible — the same image generation backbone serves Imagine, Retouch, Retexture, and Storyboard. All assets, models, and outputs stay on local storage.
+
+ **Stack:** Python 3.12 + FastAPI + SSE / React 18 + TypeScript + Zustand / Tauri 2 / ComfyUI headless / PyTorch 2.7 + CUDA
+
+ <table align="center"><tr>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/7_CDVBV6B08IosFIkr5jq.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/7_CDVBV6B08IosFIkr5jq.png" width="300" /></a></td>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/fRhZcUYtK_TE8uIlXyPH-.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/fRhZcUYtK_TE8uIlXyPH-.png" width="300" /></a></td>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/B1J7kJuRPiPY12-Wja0jW.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/B1J7kJuRPiPY12-Wja0jW.png" width="300" /></a></td>
+ <td><a href="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/MXtHgy7hlq9YZVaQED_WA.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/695017bb0c3fc8b9c78497e9/MXtHgy7hlq9YZVaQED_WA.png" width="300" /></a></td>
+ </tr></table>
+
+ ---
+
+ ## Hardware Requirements
+
+ | | Minimum | Recommended |
+ |---|---|---|
+ | **GPU** | NVIDIA 12GB VRAM (RTX 3060) | NVIDIA 16GB VRAM (RTX 4080) |
+ | **RAM** | 16GB | 32GB |
+ | **Storage** | ~50GB (core models) | ~210GB (all models) |
+ | **OS** | Windows 10/11 | Windows 11 |
+
+ ---
+
+ ## License
+
+ MIT
+
+ ---
+
+ ## FAQ
+
+ **Will this OOM my PC?**
+ There is a chance. I built in as many safeguards and as much memory management as I could (78 OOMs in three weeks do something to a man), but it can still happen. Everything has been stress-tested on my hardware, though that is no guarantee on yours.
+
+ **What are the minimum specs?**
+ Visione requires an NVIDIA GPU with at least 12GB of VRAM (RTX 3060 or equivalent), 16GB of RAM, and roughly 50GB of free storage for the core models; Windows 10 or 11 is supported. For the full model set (the experience it was designed and tested around), the recommended setup is an NVIDIA GPU with 16GB of VRAM (RTX 4080), 32GB of RAM, and around 210GB of storage. Settings includes a model manager where you can see what is installed, download what you need, and get prompted automatically if a feature requires a model you don't have yet.
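An installed-model check of the kind the model manager performs could look roughly like this. The directory layout and model names below are purely hypothetical, chosen for illustration; Visione's actual storage scheme may differ.

```python
# Hypothetical sketch of an installed-model check, assuming each model lives
# in a named subfolder under one root directory. Layout and names are
# illustrative, not Visione's actual storage scheme.
import tempfile
from pathlib import Path

def missing_models(root: Path, required: list[str]) -> list[str]:
    """Return the required model names that have no non-empty folder under root."""
    missing = []
    for name in required:
        folder = root / name
        if not folder.is_dir() or not any(folder.iterdir()):
            missing.append(name)
    return missing

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "z-image-turbo").mkdir()
    (root / "z-image-turbo" / "weights.safetensors").touch()
    print(missing_models(root, ["z-image-turbo", "ltx-video"]))  # → ['ltx-video']
```

A check like this is what lets the app prompt for a download only when a feature actually needs a model that isn't on disk yet.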
+
+ **Does it run on Mac or Linux?**
+ Not at this time. Visione is built on a CUDA stack, and the models it runs (video generation, video enhancement, audio foley, and more) have no meaningful support outside NVIDIA hardware, so it requires a Windows machine with an NVIDIA GPU. Nothing is ruled out for future versions, but it isn't on the current roadmap.
+
+ **Is this safe?**
+ The application runs completely offline once you have downloaded the models; you only need an internet connection to browse new styles. Furthermore, nothing in the installer is encrypted: you can fully unpack it and check for yourself. I am just delivering a final package to remove the hassle of CLI installations.
+
+ **Why is it free then?**
+ Because from the moment I wrote the first line of the design document, this was conceived as a free application. If you find it useful, and without any obligation, there is a Ko-fi link hidden at the bottom of the settings.
+
+ **But... I really wanna see the code now, are you publishing it?**
+ Yes. As soon as Visione reaches 1.0 in the coming weeks, the whole codebase will be released on a dedicated GitHub repository (which already exists).
+
+ **Why do I need to use those specific models?**
+ While I understand you might be keen to experiment with your own models (and BYOM is a feature I'm considering), at this point everything, including all the testing, has been built around those specific models.
+
+ **Are you planning to include "X" in v1.0?**
+ Here is, in no particular order and without commitment, what didn't make the cut for this initial release but has already been scoped and defined: multilingual support, Characters in Animate, Elements for Imagine and Animate, a video editor, Session Mode, and a few other things besides. One I'm particularly looking forward to: automatic hardware detection that identifies your GPU tier and tailors settings accordingly, so Visione adapts to your machine rather than the other way around.
+
+ **Feedback or suggestions?**
+ Please do. Feel free to reach out directly here or via any of the other channels linked. Your feedback is much appreciated.
+
+ ---