Red Boxes, LOL
In the two new workflows, V2 and Instant Action, I can't resolve the "missing" model in the Distillation Break-up section. The Lora Loader Advanced node is looking for the new ceil72 lora, which is in my lora directory, but I keep getting the error, so I have four red boxes. Is there somewhere else in the workflow where I can select the lora? I don't see a relevant subgraph anywhere. Sorry to be such an idiot, and thanks for the help!
Try reselecting it. I currently have a better workflow that loads them all through one lora-name node, and I reset my file structure to something more normal; as they come with mine, the ltx2.3 loras live in loras/23/distiiled. All of these will be antiquated and moved when I upload v3 soon.
Please tell me how to fix this error, which occurs in the workflow 10Eros_10SNodes_9-16Vertical_TiledSampler.json:
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
    query : shape=(1, 15810, 32, 128) (torch.bfloat16)
    key : shape=(1, 15810, 32, 128) (torch.bfloat16)
    value : shape=(1, 15810, 32, 128) (torch.bfloat16)
    attn_bias : <class 'torch.Tensor'>
    p : 0.0
`fa3F@0.0.0` is not supported because:
    requires device with capability <= (9, 0) but your GPU has capability (12, 0) (too new)
    attn_bias type is <class 'torch.Tensor'>
    operator wasn't built - see `python -m xformers.info` for more info
`fa2F@2.8.3` is not supported because:
    attn_bias type is <class 'torch.Tensor'>
`cutlassF-pt` is not supported because:
    requires device with capability <= (9, 0) but your GPU has capability (12, 0) (too new)
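For what it's worth, the "too new" rejection in the trace is just a tuple comparison on the GPU's compute capability. A minimal sketch of that check, with the (9, 0) ceiling taken from the error text and (12, 0) being what a Blackwell card such as the RTX 5090 reports:

```python
def kernel_supports(capability: tuple) -> bool:
    """Mirror the check in the traceback: xformers' prebuilt fa3F and
    cutlassF-pt kernels require compute capability <= (9, 0)."""
    return capability <= (9, 0)

# Hopper-era cards report (9, 0) and pass; Blackwell reports (12, 0) and
# is rejected as "too new", which is exactly the error above.
print(kernel_supports((9, 0)))   # True
print(kernel_supports((12, 0)))  # False
```

On a live install you can read your card's actual capability with `torch.cuda.get_device_capability(0)`.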
Use the --use-pytorch-cross-attention launch argument. I've used that personally, since Blackwell still hasn't really been configured for xformers yet, and this was made and tested only with PyTorch attention on a 5090. The node was also just updated with some patches, but that probably won't fix this one; it's an xformers bug you'd probably hit on other stuff too.
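In case it helps, the flag goes on the ComfyUI launch command. The exact entry point depends on your install (the main.py path here is an assumption; portable builds launch through a .bat file, where the flag can be appended to the python line instead):

```shell
# Start ComfyUI with PyTorch's built-in attention instead of xformers.
# Adjust the path to main.py for your own install location.
python main.py --use-pytorch-cross-attention
```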
Thank you, I'll try it.
Another question, about the workflow 10Eros_10SNodes_InstantAction_I2V.json:
what settings should I use so that the reference image doesn't appear at the end of the video? No matter what prompt I write, it always ends on the reference image, and the first frames start without it, as if it were generating from the end.
Yeah, it's a weird interaction between the latent anchor doing frame_0 and the guide turning it into an end frame. That's something to explore later; just don't use that WF, try v3 for most uses.