Distilled GGUF available for those with low VRAM

#17
by realrebelai - opened

smthem provided a GGUF for the distilled model. I put together a workflow for it if anyone's interested. It's a different node setup than what this repo requires and takes a little work, but runs PRETTY MUCH just as well! Not sure if I'm stepping on toes by posting this here, but the link to my workflow is below so you can grab the files and run it; download and installation instructions are in the description:

https://civitai.red/models/2606616/rebels-sulphur-2-ltx-23-nsfw-model-gguf

Hope this helps some struggling VRAM cards! <3

I'm trying to package this model for MLX, for the same reasons (VRAM poor). Wondering if you could share how you built yours, since the README on this repo is basically a one-liner. No shame thrown; people on here are purely in it for the love of the game.

I'm just struggling with which LoRAs in this repo matter, and for which use cases. I planned on packaging a dev model and a distilled model, but I want to bake in the LoRAs because I also want the result quantized.
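For what it's worth, the "baking in" step boils down to folding each adapter's low-rank update into the base weight before quantizing, so the merged tensor can be quantized as one piece. A minimal numpy sketch of that idea (the function name, shapes, and `scale` factor here are illustrative, not this repo's actual LoRA format):

```python
import numpy as np

def merge_lora(W, A, B, scale=1.0):
    """Fold a low-rank LoRA update into a base weight matrix.

    W: (out, in) base weight; B: (out, r) and A: (r, in) adapter factors.
    The merged weight behaves the same as applying the adapter at
    inference time, so it can then be quantized as a single tensor.
    """
    return W + scale * (B @ A)

# toy example: rank-2 adapter on a 4x4 weight
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)).astype(np.float32)
A = rng.standard_normal((2, 4)).astype(np.float32)
B = rng.standard_normal((4, 2)).astype(np.float32)

W_merged = merge_lora(W, A, B, scale=0.8)

# merged forward pass matches base + adapter applied separately
x = rng.standard_normal((4,)).astype(np.float32)
y_merged = W_merged @ x
y_split = W @ x + 0.8 * (B @ (A @ x))
assert np.allclose(y_merged, y_split, atol=1e-5)
```

The catch is that once merged and quantized, the adapters can't be toggled off, which is presumably why you'd ship separate dev and distilled packages.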

Nice work on yours!

> I'm trying to package this model for MLX, for the same reasons (VRAM poor). Wondering if you could share how you built yours, since the README on this repo is basically a one-liner. No shame thrown; people on here are purely in it for the love of the game.

I must regretfully but respectfully admit this is not entirely my work, just the workflow itself. The guts behind it are smthem's; he's a contributor on the site. I'll link his nodes and model files so you can take a look. It seems like every time I touch his nodes with Claude they break, so I just leave it to him. I'm not sure how he runs his nodes on the inside. Definitely give him a follow, because he stays up to date on the less viral models. I use a lot of his work in my YouTube content.

github nodes:
https://github.com/smthemex/ComfyUI_LTX2_SM

HF model files:
https://huggingface.co/smthem/LTX-2.3-test-gguf/tree/main

No shame there, credit where it's due, to all involved.

ComfyUI on Mac is pretty "meh" at present, but I hope it improves over time. The few times I tried it, I had no luck. It's partly what sent me down the rabbit hole of creating MLX versions of this model, because we don't all have RTX 4090s, and MLX + quantization = heaven for generation on macOS.
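For anyone curious what the quantization half of that equation buys you: 4-bit formats store each weight group as small integers plus a per-group scale, cutting memory roughly 4x versus fp16. A rough numpy sketch of symmetric per-group quantization (real GGUF/MLX formats add zero-points, bit-packing, and different group sizes; this is just the core idea):

```python
import numpy as np

def quantize_groups(w, bits=4, group_size=8):
    """Symmetric per-group quantization: each group of weights shares
    one scale, and values are rounded to small signed integers."""
    flat = w.reshape(-1, group_size)
    qmax = 2 ** (bits - 1) - 1                      # 7 for signed 4-bit
    scales = np.abs(flat).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                       # avoid divide-by-zero
    q = np.clip(np.round(flat / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize_groups(q, scales, shape):
    """Recover an approximate float tensor from ints + per-group scales."""
    return (q.astype(np.float32) * scales).reshape(shape)

w = np.random.default_rng(1).standard_normal((4, 8)).astype(np.float32)
q, s = quantize_groups(w)
w_hat = dequantize_groups(q, s, w.shape)

# rounding error is bounded by half a quantization step per group
assert np.max(np.abs(w - w_hat)) <= s.max() / 2 + 1e-6
```

The per-group scale is why quality holds up surprisingly well at 4 bits: outliers only inflate the scale of their own small group instead of the whole tensor.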


The model has been removed from the Civitai Red site.

No it wasn't; it's probably a bug. I just checked my page and it's still there :)

(screenshot attached)
