Datasets:

Schema:
- attachment_count: int64 (0–10)
- attachments: list (lengths 0–10)
- author: dict
- channel: dict
- created_at: string (lengths 25–29)
- edited_at: string (lengths 0–32)
- id: string (length 20)
- reaction_count: int64 (0–300)
- reactor_count: int64 (0–139)
- reference_id: string (lengths 0–20)
- text: string (lengths 1–3.99k)
- thread_id: string (777 classes)

Values:

attachment_count | attachments | author | channel | created_at | edited_at | id | reaction_count | reactor_count | reference_id | text | thread_id
|---|---|---|---|---|---|---|---|---|---|---|---|
0 | [] | {
"id": "user_1d8b4399d305a90d"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_e332c67952aed251",
"name": "chatter",
"nsfw": false
} | 2026-05-14T16:25:35.778+00:00 | msg_7a7e434ce93be2ba | 0 | 0 | or just let you use shit you want to use | |||
0 | [] | {
"id": "user_50affbb1a78d67fc"
} | {
"category_id": "cat_fc38e2b5e8e8c690",
"id": "chan_7715bc9059812582",
"name": "wan_chatter",
"nsfw": false
} | 2026-05-14T16:24:49.93+00:00 | msg_4290b2109f771aef | 0 | 0 | but I don't know if it then can work as a LoRA :/ | |||
0 | [] | {
"id": "user_069cf87189f06e40"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_7013e987cc573aef",
"name": "comfyui",
"nsfw": false
} | 2026-05-14T16:24:18.971+00:00 | msg_85539dcee63a62c4 | 0 | 0 | msg_fd3aac6cf1563323 | this is kind of a complicated story haha | ||
0 | [] | {
"id": "user_50affbb1a78d67fc"
} | {
"category_id": "cat_fc38e2b5e8e8c690",
"id": "chan_7715bc9059812582",
"name": "wan_chatter",
"nsfw": false
} | 2026-05-14T16:23:06.382+00:00 | msg_6975f2fa65350351 | 0 | 0 | that's certainly interesting | |||
0 | [] | {
"id": "user_a46f472a0a097b25"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_709e2d556c3e5a9f",
"name": "ltx_training",
"nsfw": false
} | 2026-05-14T16:23:06.365+00:00 | msg_e560e34c431af049 | 0 | 0 | on 3090 with images only no blockswap needed, with videos i had to go 20 | |||
1 | [
{
"content_type": "",
"filename": "ComfyUI_00365_.mp4",
"size": 0
}
] | {
"id": "user_50affbb1a78d67fc"
} | {
"category_id": "cat_fc38e2b5e8e8c690",
"id": "chan_7715bc9059812582",
"name": "wan_chatter",
"nsfw": false
} | 2026-05-14T16:22:48.892+00:00 | msg_ff4016356c104235 | 0 | 0 | hmm with the meanflow applied | |||
0 | [] | {
"id": "user_df9ef7ec7fbcddc5"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T16:09:47.482+00:00 | msg_c9790d05b461b512 | 0 | 0 | msg_34c89b2cc28bdadc | I did that too with LTX audio, but this one is much better as it was trained for audio and control, I didn't get any artifacts with it, at least for my use case | ||
0 | [] | {
"id": "user_9c3213e25d1556b5"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T16:09:35.983+00:00 | msg_6c1b379fcab36bf7 | 0 | 0 | mtg, but be back soon | |||
0 | [] | {
"id": "user_50affbb1a78d67fc"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T16:08:05.831+00:00 | msg_878187c1c35aaf56 | 1 | 1 | but ffn being peak means it works anyway | |||
0 | [] | {
"id": "user_9c3213e25d1556b5"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T16:06:02.062+00:00 | msg_ee7eedc365a7de30 | 0 | 0 | <@228118453062467585> so with dynamic VRAM disabled on a 4090 (--disable-dynamic-vram --disable-async-offload --reserve-vram 0) ... sage handled 254 masked self-attn calls cleanly via fp8_cuda++ on my fork, then the render OOM'd in the ffn linear projection during stage-2 refine, not in attention. your mask-shape work ... | |||
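An OOM landing in the FFN linear projection rather than in attention is plausible on memory grounds: the FFN expansion materializes an intermediate activation of size batch × tokens × ffn_dim all at once, which for a typical 4x expansion dwarfs the attention output. A rough back-of-envelope sketch (the dimensions below are illustrative, not LTX's real config):

```python
def ffn_activation_bytes(batch: int, tokens: int, ffn_dim: int,
                         bytes_per_elem: int = 2) -> int:
    """Memory for the intermediate activation of one FFN linear
    projection (the whole tensor must exist at once, unlike
    attention, which can be tiled by kernels like sage/flash)."""
    return batch * tokens * ffn_dim * bytes_per_elem

# Illustrative numbers: 1 sample, ~30k video tokens, a 4x-expanded
# hidden size of 4096 -> 16384, fp16 activations.
mem = ffn_activation_bytes(1, 30_000, 16_384)
print(f"{mem / 2**30:.1f} GiB")  # ~0.9 GiB for a single projection
```

With autograd saving several such tensors per block for the backward-free refine pass alone, a few of these per stage is enough to tip a 24 GB card that attention kernels handled fine.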
0 | [] | {
"id": "user_9c3213e25d1556b5"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T16:04:40.947+00:00 | msg_ed4de106bca71872 | 0 | 0 | using this workflow i adapted from rune https://github.com/fblissjr/ComfyUI-AudioLoopHelper/blob/main/example_workflows/benchmark_workflows/fml2v_var_d_audio_input.json
with the params i shared above (minus fast) | |||
0 | [] | {
"id": "user_9c3213e25d1556b5"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T16:04:08.954+00:00 | msg_d75cc534017876e7 | 0 | 0 | msg_ba65ff8a0cbe6380 | ok i OOMd on this now on my 4090. | ||
0 | [] | {
"id": "user_9c3213e25d1556b5"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T16:01:18.742+00:00 | msg_43413661cdc8ad21 | 0 | 0 | i'm slowly chipping away at making this all much faster. just from yesterday, some performance data with my sage fork (now with mask supported cuda kernels finally) and nodes - on my 4090
cuda kernels for ada with mask support in super alpha: https://github.com/fblissjr/SageAttention-ada (but it works!)
not really aud... | |||
0 | [] | {
"id": "user_2164a729ca1981ae"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T15:59:44.89+00:00 | msg_16dfbb039735c0cc | 5 | 5 | For the record: This community is hands-down amazing. I'm continually blown away by the amount of care and effort on display any given day here. Seriously. You guys rock. | |||
0 | [] | {
"id": "user_9c3213e25d1556b5"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T15:56:34.197+00:00 | 2026-05-14T15:57:03.359614+00:00 | msg_987699b5335f5e0c | 0 | 0 | eval metrics defined, 'what good is' defined, and tons of data. before i was dumping a giant 1-2GB exec log with every single torch call. which is also useful but not needed all the time. | ||
0 | [] | {
"id": "user_9c3213e25d1556b5"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T15:56:19.114+00:00 | msg_eb1ec67581640562 | 0 | 0 | the backend part is even better cuz now i can keep running my sage-fork, my audio loop and benchmark nodes, and my workflow automation, all in a claude loop | |||
0 | [] | {
"id": "user_9c3213e25d1556b5"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T15:55:46.372+00:00 | msg_c05862ed299ffbbf | 0 | 0 | msg_5db1fa1b83787b53 | i love this thing | ||
0 | [] | {
"id": "user_36c6a323f442d20a"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_03972a533e4d9c2b",
"name": "ltx_resources",
"nsfw": false
} | 2026-05-14T15:51:21.611+00:00 | msg_ce15f292a94cbdf0 | 0 | 0 | msg_f7321aacf3d9e165 | I swear releasing a node is just as hard as making one 😂
it's always the little things I overlook | thread_f4a8ed2881583476 | |
0 | [] | {
"id": "user_75e5c4680abf35e3"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_03972a533e4d9c2b",
"name": "ltx_resources",
"nsfw": false
} | 2026-05-14T15:48:37.046+00:00 | msg_a00131798b326ec7 | 0 | 0 | working great now | thread_f4a8ed2881583476 | ||
0 | [] | {
"id": "user_75e5c4680abf35e3"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_03972a533e4d9c2b",
"name": "ltx_resources",
"nsfw": false
} | 2026-05-14T15:48:12.113+00:00 | msg_f7321aacf3d9e165 | 0 | 0 | msg_0fd6bb22db780ee5 | Had to troubleshoot this cause I ran through the start w/o checking everything - good update | thread_f4a8ed2881583476 | |
0 | [] | {
"id": "user_069cf87189f06e40"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_7013e987cc573aef",
"name": "comfyui",
"nsfw": false
} | 2026-05-14T15:46:53.042+00:00 | msg_e37ffb54c1c8f3ef | 0 | 0 | it's a low bar to say, does it produce output? i mean you can fine tune a little bit on random images too and you'll still get good output | |||
0 | [] | {
"id": "user_069cf87189f06e40"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_7013e987cc573aef",
"name": "comfyui",
"nsfw": false
} | 2026-05-14T15:46:31.447+00:00 | msg_2d6689895366a4bd | 0 | 0 | msg_1f1894d97dddfd9d | the story with whether or not these make any sense is kind of complicated | ||
1 | [
{
"content_type": "",
"filename": "image.png",
"size": 0
}
] | {
"id": "user_cac87c552dcfce16"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T15:46:18.649+00:00 | msg_5db1fa1b83787b53 | 2 | 2 | <@228118453062467585> really liking this | |||
0 | [] | {
"id": "user_36c6a323f442d20a"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_03972a533e4d9c2b",
"name": "ltx_resources",
"nsfw": false
} | 2026-05-14T15:40:46.675+00:00 | msg_0fd6bb22db780ee5 | 1 | 1 | I just realized in the example workflows I didn't set the clip loader type to ltxv. I just updated the workflows and they are now available here https://github.com/WhatDreamsCost/WhatDreamsCost-ComfyUI/tree/main/example_workflows | thread_f4a8ed2881583476 | ||
0 | [] | {
"id": "user_9c3213e25d1556b5"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T15:39:35.644+00:00 | msg_f4549791920c9a8b | 0 | 0 | msg_ba65ff8a0cbe6380 | thanks. good info. | ||
0 | [] | {
"id": "user_50affbb1a78d67fc"
} | {
"category_id": "cat_fc38e2b5e8e8c690",
"id": "chan_7715bc9059812582",
"name": "wan_chatter",
"nsfw": false
} | 2026-05-14T15:36:52.911+00:00 | msg_1be534dd9be9847a | 0 | 0 | well it aims to be better at that | |||
1 | [
{
"content_type": "video/mp4",
"filename": "path_animator1_00002.mp4",
"size": 861147
}
] | {
"id": "user_9519ac8d8e858a7a"
} | {
"category_id": "cat_fc38e2b5e8e8c690",
"id": "chan_a4812cb1296256df",
"name": "wan_resources",
"nsfw": false
} | 2026-05-14T15:29:55.207+00:00 | 2026-05-14T15:30:32.655417+00:00 | msg_1b3f600e6241d412 | 0 | 0 | Yep, got it to work on my 12Gb card. I guess more vram means longer clips. Are we still limited to the 81-frame wan limit? | thread_e696776b1d067898 | |
0 | [] | {
"id": "user_50affbb1a78d67fc"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T15:22:55.478+00:00 | msg_58200e8e4acb15ea | 0 | 0 | the newer quants have per layer setup where we set which layer is allowed to run with fp8 matmuls, and those work without any flags | |||
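The per-layer setup described in the message above can be sketched as a lookup that decides, layer by layer, whether an fp8 matmul is allowed. All names below are illustrative; the real quantized checkpoints carry their own metadata format:

```python
# Sketch of per-layer fp8 gating: the quantized checkpoint ships a
# per-layer flag saying which layers may run their matmuls in fp8,
# so no global startup flag is needed. Layer names are hypothetical.

# Per-layer allowlist assumed to come with the quantized weights.
FP8_ALLOWED = {
    "blocks.0.attn.qkv": True,
    "blocks.0.ffn.proj": False,   # precision-sensitive layer stays bf16
    "blocks.1.attn.qkv": True,
}

def matmul_dtype(layer_name: str, default: str = "bf16") -> str:
    """Pick the compute dtype for a layer's matmul; anything not
    explicitly allowlisted falls back to the default precision."""
    return "fp8" if FP8_ALLOWED.get(layer_name, False) else default
```

The point of the design is that the blanket `--fast`-style toggles become unnecessary: the checkpoint itself records where fp8 is safe.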
0 | [] | {
"id": "user_50affbb1a78d67fc"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T15:21:55.062+00:00 | msg_ba65ff8a0cbe6380 | 1 | 1 | msg_3dc9c80d35f76103 | don't use blanket --fast, it has sub options | ||
0 | [] | {
"id": "user_61001cbc5943d3de"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_e332c67952aed251",
"name": "chatter",
"nsfw": false
} | 2026-05-14T15:21:36.736+00:00 | msg_e50b688503d354de | 0 | 0 | Or perfect prompt adherence. | |||
0 | [] | {
"id": "user_61001cbc5943d3de"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_e332c67952aed251",
"name": "chatter",
"nsfw": false
} | 2026-05-14T15:21:02.463+00:00 | msg_363abfe5765ceacc | 0 | 0 | I’ll give Google some grace for now. Visuals aren’t everything. Perhaps this model will do something really amazing with direction and control and Seedance can’t. | |||
0 | [] | {
"id": "user_9c3213e25d1556b5"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T15:20:08.794+00:00 | msg_3dc9c80d35f76103 | 0 | 0 | msg_3aa35e55a4055950 | do these startup flags for comfy effectively disable the dynamic vram so i can OOM test some more? i wanna see how much the cuda sage kernels i wrote impact but i dont feel like im able to get good baseline results with comfy's memory management.
```
- --cuda-malloc --fast --supports-fp8-compute --mmap-torch-files (...
``` | ||
1 | [
{
"content_type": "",
"filename": "EP101_BTPv2.mov",
"size": 0
}
] | {
"id": "user_a249d3dc48a9628c"
} | {
"category_id": "cat_fc38e2b5e8e8c690",
"id": "chan_83e5824b511edb7b",
"name": "wan_gens",
"nsfw": false
} | 2026-05-14T15:17:05.709+00:00 | msg_f60d676853528727 | 0 | 0 | Process video | |||
0 | [] | {
"id": "user_a249d3dc48a9628c"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T15:14:10.123+00:00 | msg_cac2bea3b3d8263f | 0 | 0 | I'm finally going to dive into LTX, does anyone have a simple WF they can share? (or should I start with the templates comfy provides) Someone did gimme one waaay back but tbh it was a bit over my head lol, and maybe they're all like that? Anyways, thanks! | |||
0 | [] | {
"id": "user_ab54fa7de5ee6fc2"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_709e2d556c3e5a9f",
"name": "ltx_training",
"nsfw": false
} | 2026-05-14T15:11:36.515+00:00 | msg_ece8ca35fb816655 | 0 | 0 | yeah and a bit of blockswap | |||
0 | [] | {
"id": "user_ab54fa7de5ee6fc2"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_709e2d556c3e5a9f",
"name": "ltx_training",
"nsfw": false
} | 2026-05-14T15:11:19.689+00:00 | msg_9f52a61883d7ea17 | 1 | 1 | msg_4f58ec8bb400bf98 | just use musubi fork with fp8 enabled | ||
0 | [] | {
"id": "user_4b99bcc6cfd0e7e8"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T15:08:15.522+00:00 | msg_93fffdb823a5437b | 0 | 0 | Hey, nice. I've been trying to figure out if this is the "Singularity" you are using.. <https://huggingface.co/WarmBloodAban/Singularity_LTX-2.3_OmniCine_Preview0.1> | |||
0 | [] | {
"id": "user_2af94129162fd928"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_03972a533e4d9c2b",
"name": "ltx_resources",
"nsfw": false
} | 2026-05-14T15:01:14.232+00:00 | msg_7559a46ce54c4ef8 | 0 | 0 | msg_88ca8a0fa8a1aa9e | suuuper nice! Apart from the double turn there, it's pretty good! | thread_18f32f7338040cc6 | |
0 | [] | {
"id": "user_a249d3dc48a9628c"
} | {
"category_id": "cat_fc38e2b5e8e8c690",
"id": "chan_7715bc9059812582",
"name": "wan_chatter",
"nsfw": false
} | 2026-05-14T15:01:00.086+00:00 | msg_a7a5d7c0fa004ff6 | 0 | 0 | Anyflow is just for steps? So if you're using a lightning lora does it really matter? | |||
0 | [] | {
"id": "user_98106c4fb3aa3f0f"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_7013e987cc573aef",
"name": "comfyui",
"nsfw": false
} | 2026-05-14T15:00:47.297+00:00 | msg_fd3aac6cf1563323 | 0 | 0 | does flash-attention require cuda, or can I compile a Mac wheel? | |||
0 | [] | {
"id": "user_36c6a323f442d20a"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_03972a533e4d9c2b",
"name": "ltx_resources",
"nsfw": false
} | 2026-05-14T15:00:17.94+00:00 | msg_d41997d46a7e239a | 0 | 0 | msg_faeaa75980c603ed | Thanks! | thread_f4a8ed2881583476 | |
0 | [] | {
"id": "user_75e5c4680abf35e3"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T14:56:27.992+00:00 | msg_8ca68a0bad7d45e1 | 1 | 1 | msg_f1bab87e1742918c | This looks really killer from the YT clip - appreciate it | ||
0 | [] | {
"id": "user_75e5c4680abf35e3"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_575ca31a8c98a1a9",
"name": "vibecoding",
"nsfw": false
} | 2026-05-14T14:50:16.986+00:00 | msg_3de38dab5e9f0cb7 | 0 | 0 | "Belt and suspenders" | |||
0 | [] | {
"id": "user_75e5c4680abf35e3"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_575ca31a8c98a1a9",
"name": "vibecoding",
"nsfw": false
} | 2026-05-14T14:50:06.242+00:00 | msg_518baa89cdef2e4f | 0 | 0 | msg_8f7f022de93d82b1 | Yes! that too for sure.... | ||
0 | [] | {
"id": "user_75e5c4680abf35e3"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_03972a533e4d9c2b",
"name": "ltx_resources",
"nsfw": false
} | 2026-05-14T14:48:39.965+00:00 | msg_faeaa75980c603ed | 0 | 0 | Just caught your teaser on Youtube (came up on subs list) - looks bad-ass, man! Gonna grab and play. Thx! | thread_f4a8ed2881583476 | ||
0 | [] | {
"id": "user_36c6a323f442d20a"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T14:47:50.16+00:00 | 2026-05-14T15:04:39.124192+00:00 | msg_f1bab87e1742918c | 9 | 9 | Just released a new node, LTX Director - An All-In-One Timeline Editor inside of ComfyUI. I2V, T2V, FLFF, Prompt Relay, Custom Audio, and more!
https://www.youtube.com/watch?v=fZgtkRcu4_k
https://discord.com/channels/1076117621407223829/1484614256602386663 | ||
0 | [] | {
"id": "user_36c6a323f442d20a"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_03972a533e4d9c2b",
"name": "ltx_resources",
"nsfw": false
} | 2026-05-14T14:39:38.583+00:00 | msg_af9a0150e7a45cbe | 0 | 0 | New node LTX Director - An All-In-One Timeline Editor. I2V, T2V, FLFF, Prompt Relay, Custom Audio, and more!
**Main Features:**
- **Fully Functional Timeline Editor:** Add image, text, and audio segments to control exactly what happens and when. Easily trim, cut, and edit segments with a (hopefully) intuitive interfa... | thread_f4a8ed2881583476 | ||
1 | [
{
"content_type": "",
"filename": "firefox_pxJyBvlerz.mp4",
"size": 0
}
] | {
"id": "user_50affbb1a78d67fc"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_2d455c88aac027b2",
"name": "hidream",
"nsfw": false
} | 2026-05-14T14:39:09.336+00:00 | msg_294cc46f0a3bb892 | 2 | 2 | for edit tasks it can be helpful to actually use image2image to assist | |||
1 | [
{
"content_type": "",
"filename": "message.txt",
"size": 0
}
] | {
"id": "user_8e18b7129773a94e"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_445f31c7b88f94ff",
"name": "flux",
"nsfw": false
} | 2026-05-14T14:33:59.557+00:00 | msg_5bc6dc6dc8d62856 | 1 | 1 | msg_d5627c04e18de0ab | had this before and fixed it with claude's help , updated to latest code and now it is still there ; using gguf | ||
2 | [
{
"content_type": "",
"filename": "LTX_2.3-LQ_00366.mp4",
"size": 0
},
{
"content_type": "",
"filename": "LTX_2.3-LQ_00365.mp4",
"size": 0
}
] | {
"id": "user_2a8a0f3271d3fbdf"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T14:29:29.619+00:00 | msg_14655c1d42e96591 | 1 | 1 | ok, here is my long-suffering example.
left - nothing. right - omninft, singularity, vbvr | |||
0 | [] | {
"id": "user_f61c92bca5103146"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_445f31c7b88f94ff",
"name": "flux",
"nsfw": false
} | 2026-05-14T14:27:03.492+00:00 | msg_dca75fed0d0a7382 | 0 | 0 | yeah that is what I was trying to avoid ahhahaa | |||
0 | [] | {
"id": "user_98106c4fb3aa3f0f"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_445f31c7b88f94ff",
"name": "flux",
"nsfw": false
} | 2026-05-14T14:24:06.041+00:00 | msg_511d9e652cabed48 | 0 | 0 | you have to wade through the craziness | |||
0 | [] | {
"id": "user_98106c4fb3aa3f0f"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_445f31c7b88f94ff",
"name": "flux",
"nsfw": false
} | 2026-05-14T14:23:44.734+00:00 | msg_6bec12be5e9377d5 | 0 | 0 | civitai ? | |||
0 | [] | {
"id": "user_f61c92bca5103146"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_445f31c7b88f94ff",
"name": "flux",
"nsfw": false
} | 2026-05-14T14:23:15.743+00:00 | msg_0cc9714c6c57d12f | 0 | 0 | is there a place with Flux2klein loras, especially the ones that help on quality, consistency etc? | |||
0 | [] | {
"id": "user_1d8b4399d305a90d"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_7013e987cc573aef",
"name": "comfyui",
"nsfw": false
} | 2026-05-14T14:21:08.777+00:00 | msg_4492bc5dccee4124 | 0 | 0 | would be cool to also see a base model trained with the tech to compare, wonder if it's cause it's trained as latents, then like retrained into a pixel model that it works decent? | |||
0 | [] | {
"id": "user_1d8b4399d305a90d"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_7013e987cc573aef",
"name": "comfyui",
"nsfw": false
} | 2026-05-14T14:20:24.193+00:00 | msg_5182254f8fccbfff | 0 | 0 | so latent model to pixel space model ? | |||
0 | [] | {
"id": "user_1d8b4399d305a90d"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_7013e987cc573aef",
"name": "comfyui",
"nsfw": false
} | 2026-05-14T14:20:16.111+00:00 | msg_9c8d8c72bbe05ce9 | 0 | 0 | hmm | |||
0 | [] | {
"id": "user_1d8b4399d305a90d"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_e332c67952aed251",
"name": "chatter",
"nsfw": false
} | 2026-05-14T14:19:33.752+00:00 | msg_dcc6f61ccf7091cb | 0 | 0 | besides fast/distilled versions of course | |||
0 | [] | {
"id": "user_1d8b4399d305a90d"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_e332c67952aed251",
"name": "chatter",
"nsfw": false
} | 2026-05-14T14:19:12.924+00:00 | msg_d3b8c37176c11bfe | 0 | 0 | i really dont like the whole multiple video model route haha | |||
0 | [] | {
"id": "user_1d8b4399d305a90d"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_e332c67952aed251",
"name": "chatter",
"nsfw": false
} | 2026-05-14T14:18:47.147+00:00 | msg_b7531fc4cfbd19e9 | 0 | 0 | not sure | |||
0 | [] | {
"id": "user_cc46bd273299ef24"
} | {
"category_id": "cat_826def62d4079d0d",
"id": "chan_c91c85545d6c435b",
"name": "voice",
"nsfw": false
} | 2026-05-14T14:13:32.156+00:00 | msg_fe0d2b58e95ae5be | 0 | 0 | i decided to play with eleven labs a little bit... and i dunno is it me but it sucks | |||
0 | [] | {
"id": "user_d54e845b7ece5115"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_e332c67952aed251",
"name": "chatter",
"nsfw": false
} | 2026-05-14T14:13:24.112+00:00 | msg_f0ff49035b9b379b | 0 | 0 | msg_9e955d28daef04c3 | Hopefully veo4 and this google omni thats circulating are different models... | ||
0 | [] | {
"id": "user_8ec7e86ba42c4b23"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_709e2d556c3e5a9f",
"name": "ltx_training",
"nsfw": false
} | 2026-05-14T14:12:50.741+00:00 | msg_79f1a1472e318177 | 1 | 1 | msg_4f58ec8bb400bf98 | Umm guess we are talking about LTX2.3... if its not fitting on 24 GB VRAM decrease the rank/dim, lower res and make big blockswap... that's what people do in that case. | ||
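The blockswap trick referenced in the messages above (keep only a few transformer blocks resident in VRAM, stream the rest from system RAM) can be sketched as a simple residency loop. Device strings stand in for real tensor moves; actual trainers such as the musubi-style forks overlap the copies asynchronously:

```python
# Minimal block-swap sketch: only `resident` blocks live on the GPU
# at once; each block is "copied" in before its forward pass and the
# oldest one is evicted to CPU when the budget is exceeded.

class Block:
    def __init__(self, idx: int):
        self.idx = idx
        self.device = "cpu"   # weights start in system RAM

def forward_with_blockswap(blocks: list, resident: int) -> list:
    """Run blocks in order, never keeping more than `resident` on GPU."""
    on_gpu: list = []
    trace = []
    for blk in blocks:
        if len(on_gpu) >= resident:        # over budget: evict oldest
            evicted = on_gpu.pop(0)
            evicted.device = "cpu"
        blk.device = "cuda"                # move this block's weights in
        on_gpu.append(blk)
        trace.append((blk.idx, blk.device))  # block computes while on GPU
    return trace

blocks = [Block(i) for i in range(6)]
trace = forward_with_blockswap(blocks, resident=2)
```

Raising `resident` trades VRAM for fewer transfers, which is why the advice pairs a big blockswap with lower rank/dim and resolution on 24 GB cards.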
3 | [
{
"content_type": "",
"filename": "ComfyUI_temp_vnxah_00001_.png",
"size": 0
},
{
"content_type": "",
"filename": "2026-05-15-000142_hyprshot.png",
"size": 0
},
{
"content_type": "",
"filename": "image.png",
"size": 0
}
] | {
"id": "user_12867c0032c4d0b8"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_445f31c7b88f94ff",
"name": "flux",
"nsfw": false
} | 2026-05-14T14:08:28.398+00:00 | 2026-05-14T14:13:42.277118+00:00 | msg_d5627c04e18de0ab | 0 | 0 | msg_844a45da4d954915 | i updated some things in the repo <a:scrajj:1136141358919336006>
there was an issue with the patch being reapplied on changes - wot turbo model are you trying it with??
i think the authors are working on proper comfyui nodes so dont take my janky version as how it do be <:3_Pepe_Heart:976754832016822272> | |
0 | [] | {
"id": "user_8fae2a7681c7eebb"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_f60d3b5374f53b4a",
"name": "z-image",
"nsfw": false
} | 2026-05-14T13:56:31.14+00:00 | msg_beb31b31c991e13e | 3 | 3 | https://tenor.com/view/cute-cat-kitty-heart-love-gif-25053494 | |||
0 | [] | {
"id": "user_93915dc90a6e7734"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T13:52:52.557+00:00 | msg_49337aeb639ec5e1 | 1 | 1 | someone went ahead and harvested the audio transformer to allow for zero shot cloning and emotion https://github.com/ScenemaAI/scenema-audio | |||
1 | [
{
"content_type": "",
"filename": "20251004_1531_01k6rj3v8kf738h51s7r1cb2sb.mp4",
"size": 0
}
] | {
"id": "user_0cac156f4db6d588"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T13:52:23.713+00:00 | msg_59f04655e6d96b0b | 0 | 0 | msg_409cb0783e77f0f7 | Turns out they just needed Jesus (you reminded me of this silly sora gen from awhile back) | ||
0 | [] | {
"id": "user_1d8b4399d305a90d"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_7eb82d9117fc224b",
"name": "seedance",
"nsfw": false
} | 2026-05-14T13:49:26.467+00:00 | msg_9ef7571e1ba1c5a3 | 0 | 0 | doesnt happyhorse allow any ppl in ref ? | |||
0 | [] | {
"id": "user_1d8b4399d305a90d"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_7eb82d9117fc224b",
"name": "seedance",
"nsfw": false
} | 2026-05-14T13:49:18.648+00:00 | msg_de3404bf592fff93 | 0 | 0 | it makes no fking sense | |||
0 | [] | {
"id": "user_1d8b4399d305a90d"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_7eb82d9117fc224b",
"name": "seedance",
"nsfw": false
} | 2026-05-14T13:49:15.491+00:00 | msg_2edb554267d7bed1 | 0 | 0 | why do some places with seedance 2 still block random ppl | |||
0 | [] | {
"id": "user_41ec6d125ef4c297"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_f60d3b5374f53b4a",
"name": "z-image",
"nsfw": false
} | 2026-05-14T13:49:11.24+00:00 | msg_d6739b7098b6bbc7 | 1 | 1 | msg_046806944230fb7e | + 1, <@1040850604525879356> has done incredible work and anything that can be done to support them is v. worthwhile | ||
0 | [] | {
"id": "user_1d8b4399d305a90d"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_7eb82d9117fc224b",
"name": "seedance",
"nsfw": false
} | 2026-05-14T13:49:04.426+00:00 | msg_76b9124563f92770 | 0 | 0 | sigh | |||
0 | [] | {
"id": "user_0cac156f4db6d588"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_ee2ade1e5d114d55",
"name": "ltx_chatter",
"nsfw": false
} | 2026-05-14T13:43:41.887+00:00 | msg_34c89b2cc28bdadc | 0 | 0 | msg_c0e84655ade3a3ea | Yeah, I messed with using just the audio model in LTX, it's kinda fun, but not super-controllable and has lots of artifacts/issues (I think if I remember correctly the audio is worse because it needs the video for full conditioning). Kinda slowish compared to other audio models, but neat. | ||
0 | [] | {
"id": "user_8e18b7129773a94e"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_445f31c7b88f94ff",
"name": "flux",
"nsfw": false
} | 2026-05-14T13:40:08.525+00:00 | msg_844a45da4d954915 | 0 | 0 | so it "works" with turbo just something is wrong in the settings ... | |||
1 | [
{
"content_type": "",
"filename": "AsymFLUX2_00004_.png",
"size": 0
}
] | {
"id": "user_8e18b7129773a94e"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_445f31c7b88f94ff",
"name": "flux",
"nsfw": false
} | 2026-05-14T13:39:47.648+00:00 | msg_33b552e764019531 | 1 | 1 | interesting does the same thing with full model with cfg4 | |||
0 | [] | {
"id": "user_a249d3dc48a9628c"
} | {
"category_id": "cat_9090f344a731ef92",
"id": "chan_7013e987cc573aef",
"name": "comfyui",
"nsfw": false
} | 2026-05-14T13:38:14.461+00:00 | msg_d33e6d5237d64524 | 0 | 0 | msg_86aa8b4eff0fedab | Thank you! Gonna stick with my settings then, no need to update | ||
0 | [] | {
"id": "user_afea84ba291dedf6"
} | {
"category_id": "cat_cca0800520ca07ed",
"id": "chan_709e2d556c3e5a9f",
"name": "ltx_training",
"nsfw": false
} | 2026-05-14T13:37:55.837+00:00 | msg_4f58ec8bb400bf98 | 0 | 0 | msg_a54de8927d698bc1 | thanks man, nothing at the moment actually. I was speaking more historically. I just use vast now since I've never figured out the right settings to make training work locally. I have a 3090 so 24G vram and 96G ram. most of the time I dont mind because I dont want to tie up my PC ,but there are times it'd be nice t... | ||
1 | [
{
"content_type": "",
"filename": "AsymFLUX2_00002_.png",
"size": 0
}
] | {
"id": "user_8e18b7129773a94e"
} | {
"category_id": "cat_9b2ecef9db9a9238",
"id": "chan_445f31c7b88f94ff",
"name": "flux",
"nsfw": false
} | 2026-05-14T13:22:58.327+00:00 | 2026-05-14T13:36:42.43312+00:00 | msg_bb8d688b14face5f | 0 | 0 | produces this with turbo model (4 step, cfg1), probably not compatible with it, I don't have time to try a full 38 step gen, one step takes around 20 seconds ... (for me, ofc most nvidia would be faster) | ||
0 | [] | {
"id": "user_1e5efd70684008d2"
} | {
"category_id": "cat_0983d4583f32e593",
"id": "chan_ecd16d71359a66a5",
"name": "nsfw",
"nsfw": true
} | 2026-05-14T13:22:12.723+00:00 | msg_3c596e3fa7e2371e | 0 | 0 | jailbroke it somehow | |||
0 | [] | {
"id": "user_1e5efd70684008d2"
} | {
"category_id": "cat_0983d4583f32e593",
"id": "chan_ecd16d71359a66a5",
"name": "nsfw",
"nsfw": true
} | 2026-05-14T13:21:56.944+00:00 | msg_8ac5150871822b84 | 0 | 0 | msg_1a3c621b8529e26a | interesting. a little while ago, some dude made a website called KrawAI, and it had completely unrestricted Edit and Image Gen, but he shut it down suddenly and went dark. | ||
5 | [
{
"content_type": "",
"filename": "3F3F3F3F3F3F283F3F3F3F29.webp",
"size": 0
},
{
"content_type": "",
"filename": "61.png",
"size": 0
},
{
"content_type": "",
"filename": "62.png",
"size": 0
},
{
"content_type": "",
"filename": "63.png",
"size": 0
},
{... | {
"id": "user_8fa256c6cc28cc00"
} | {
"category_id": "cat_0983d4583f32e593",
"id": "chan_ecd16d71359a66a5",
"name": "nsfw",
"nsfw": true
} | 2026-05-14T13:18:53.632+00:00 | msg_1a3c621b8529e26a | 0 | 0 | Image edit. But to remove mature filter, you need to buy a subscription. For video you dont need it though | |||
0 | [] | {
"id": "user_1e5efd70684008d2"
} | {
"category_id": "cat_0983d4583f32e593",
"id": "chan_ecd16d71359a66a5",
"name": "nsfw",
"nsfw": true
} | 2026-05-14T13:16:18.327+00:00 | msg_3fd90c84c955198a | 0 | 0 | gotcha | |||
0 | [] | {
"id": "user_8fa256c6cc28cc00"
} | {
"category_id": "cat_0983d4583f32e593",
"id": "chan_ecd16d71359a66a5",
"name": "nsfw",
"nsfw": true
} | 2026-05-14T13:15:42.198+00:00 | msg_8d5e337327d575e3 | 0 | 0 | only teenage and somewhat yonger | |||
0 | [] | {
"id": "user_1e5efd70684008d2"
} | {
"category_id": "cat_0983d4583f32e593",
"id": "chan_ecd16d71359a66a5",
"name": "nsfw",
"nsfw": true
} | 2026-05-14T13:15:02.678+00:00 | msg_951ce579de7110e7 | 0 | 0 | msg_2413d90c0f89c556 | and i2v, any rejections there or is it prompt based that freaks out the filter? | ||
0 | [] | {
"id": "user_1e5efd70684008d2"
} | {
"category_id": "cat_0983d4583f32e593",
[Dataset viewer preview truncated. Each row is one archived Discord message; a representative record looks like:

{
  "attachment_count": 0,
  "attachments": [],
  "author": {"id": "user_8e18b7129773a94e"},
  "channel": {
    "category_id": "cat_9090f344a731ef92",
    "id": "chan_7013e987cc573aef",
    "name": "comfyui",
    "nsfw": false
  },
  "created_at": "2026-05-14T13:13:38.688+00:00",
  "edited_at": "",
  "id": "msg_d7b79f0e95d19988",
  "reaction_count": 0,
  "reactor_count": 0,
  "reference_id": "",
  "text": "did eventually load -again fixed it with claude 🙂 - but it seems it is not compatible with turbo models",
  "thread_id": ""
}
]
Discord Archive
This is an archive of messages from the Banodoco Discord community, where technical and artistic practitioners have been discussing open source AI art for the past three years.
The archive captures a long-running community record of people learning, training, evaluating, and using open source AI art models in practice. It contains discussion around model releases, workflows, tooling, troubleshooting, creative experiments, training details, and the many small technical and artistic nuances that are hard to recover from model cards or formal documentation alone.
Banodoco is an open source AI art community with many talented contributors. This dataset is intended as a research and search resource for understanding how people have explored and unlocked open source AI art models over the past few years.
Use With Agents
If you want to query this archive from a coding agent instead of downloading the full dataset, use the Hivemind repo:
https://github.com/banodoco/hivemind
Hivemind packages the Banodoco Discord message feed as an agent skill with query patterns, channel guidance, and examples for finding community knowledge.
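If you prefer to work with the exported records directly rather than through the Hivemind skill, a minimal local search over the message records might look like the sketch below. It assumes records carry the `channel` and `text` fields shown in the dataset schema; the function name and sample data are illustrative, not part of the dataset tooling.

```python
def search_archive(messages, channel=None, keyword=None):
    """Filter archive records by channel name and a case-insensitive
    keyword match on the message text. `messages` is any iterable of
    dicts shaped like the dataset's rows."""
    results = []
    for msg in messages:
        if channel and msg["channel"]["name"] != channel:
            continue
        if keyword and keyword.lower() not in msg["text"].lower():
            continue
        results.append(msg)
    return results

# Toy records shaped like the dataset rows (fields trimmed for brevity).
sample = [
    {"channel": {"name": "comfyui"}, "text": "fixed it with claude"},
    {"channel": {"name": "vibecoding"}, "text": "nah it's a foot gun"},
]
hits = search_archive(sample, channel="comfyui", keyword="claude")
```

For the full export you would stream the dataset (for example via the Hugging Face datasets library) instead of holding a million messages in a plain list.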
Privacy Filter
Messages are excluded when the author has members.allow_content_sharing = false. Unset sharing preferences are treated as allowed, matching the production bot's existing sharing behavior. Bot messages and deleted messages are excluded by default.
Discord user, message, channel, thread, category, and reference IDs are hashed by default. Raw Discord IDs are included only when the exporter is run with --include-raw-ids.
Attachment URLs are not included by default. Attachment records keep only basic metadata such as filename, content type, and size.
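The filtering and hashing rules above can be sketched as follows. The card does not document the exact hashing scheme, so the SHA-256 truncation (the exported tokens look like a prefix plus 16 hex characters) and the function names here are assumptions, not the exporter's actual implementation.

```python
import hashlib

def hash_discord_id(raw_id: str, prefix: str, salt: str = "") -> str:
    """Map a raw Discord snowflake to an opaque prefixed token.
    ASSUMPTION: truncated SHA-256; the real exporter's scheme may differ."""
    digest = hashlib.sha256((salt + raw_id).encode("utf-8")).hexdigest()
    return f"{prefix}_{digest[:16]}"

def passes_privacy_filter(msg: dict, allow_sharing: dict) -> bool:
    """Apply the documented rules: drop messages whose author explicitly
    set allow_content_sharing = false (unset counts as allowed), and drop
    bot or deleted messages."""
    author_id = msg["author"]["id"]
    if allow_sharing.get(author_id) is False:  # explicit opt-out only
        return False
    if msg.get("is_bot") or msg.get("is_deleted"):
        return False
    return True
```

A salted hash would prevent trivially re-deriving tokens from known Discord IDs; whether the exporter salts is not stated on this card.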
Export Stats
- Exported messages: 1080008
- Skipped opted-out author messages: 8311
- Skipped bot messages: 8440
- Skipped empty messages: 68743
- Opted-out authors: 2
- Bot authors: 19
- Guild filter: 1076117621407223829
- Start date: none
- End date: none