What's the Point?

#1
by qpqpqpqpqpqp - opened

If it's a quant of Kaoru8/T5XXL-Unchained, it's useless, because Kaoru8 did nothing but modify T5's tokenizer. If it's AbstractPhil's fine-tune, it may be something.
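A quick way to see exactly what was changed is to diff the two tokenizers. This is just a sketch: the repo ids are assumptions, and it only works if the Unchained repo actually ships a tokenizer config that `AutoTokenizer` can load.

```python
from transformers import AutoTokenizer

# Base Google T5-XXL tokenizer vs. the modified one.
# Repo ids are assumptions; point them at whatever copies you have locally.
base = AutoTokenizer.from_pretrained("google/t5-v1_1-xxl")
mod = AutoTokenizer.from_pretrained("Kaoru8/T5XXL-Unchained")

base_vocab = set(base.get_vocab())
mod_vocab = set(mod.get_vocab())

print("base vocab size:    ", len(base_vocab))
print("modified vocab size:", len(mod_vocab))
# Show a sample of the tokens that only exist in the modified tokenizer.
print("added tokens (sample):", sorted(mod_vocab - base_vocab)[:50])
```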

Found that it works well for NSFW content: I tried the added words with both the base and the unchained encoder, and there is a difference. I've used it as-is for the last three months and thought I'd make a quant for myself.
Maybe I'll also make a version of https://huggingface.co/AbstractPhil/t5xxl-unchained to try out.
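If I do, pulling the file from the Hub would be something like this sketch (the filename is an assumption, check the repo's file list first):

```python
from huggingface_hub import hf_hub_download

# Filename is an assumption -- verify it against the repo before running.
path = hf_hub_download(
    repo_id="AbstractPhil/t5xxl-unchained",
    filename="t5xxl-unchained-f16.safetensors",
)
print("downloaded to:", path)
```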

https://huggingface.co/AbstractPhil/SD35-SIM-V1/blob/main/T5xxl_Unchained-step5750.safetensors
It is t5xxl-unchained-f16.safetensors merged with their trained LoRA for SD 3.5.
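If you'd rather rebuild that file from the base encoder plus the LoRA yourself, a rough merge sketch looks like this. The filenames, key prefixes (`.lora_down.weight` / `.lora_up.weight`) and the merge strength are all assumptions; real SD 3.5 LoRAs may use a different layout and per-module alpha values.

```python
import torch
from safetensors.torch import load_file, save_file

base = load_file("t5xxl-unchained-f16.safetensors")
lora = load_file("sd35_t5_lora.safetensors")  # hypothetical filename

alpha = 1.0  # assumed merge strength

# Assumed key layout: "<module>.lora_down.weight" / "<module>.lora_up.weight"
# mapping onto "<module>.weight" in the base checkpoint.
for key in list(lora):
    if not key.endswith(".lora_down.weight"):
        continue
    module = key[: -len(".lora_down.weight")]
    down = lora[key].float()
    up = lora[module + ".lora_up.weight"].float()
    target = module + ".weight"
    if target in base:
        merged = base[target].float() + alpha * (up @ down)
        base[target] = merged.to(base[target].dtype)

save_file(base, "t5xxl-unchained-f16-merged.safetensors")
```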

That's for SD 3.5, and I don't use SD 3.5, so for me it's pretty useless. The AbstractPhil unchained file has the same hash as Kaoru8/T5XXL-Unchained.
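Easy enough to verify locally by hashing both downloads (the file paths below are just placeholders for wherever you saved them):

```python
import hashlib

def sha256(path, chunk=1 << 20):
    # Stream the file in 1 MiB chunks so multi-GB checkpoints fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

print(sha256("kaoru8_t5xxl-unchained-f16.safetensors"))      # Kaoru8 file, local path assumed
print(sha256("abstractphil_t5xxl-unchained.safetensors"))    # AbstractPhil file, local path assumed
```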

So, as I understand it, with the shunt node I could apply it to, say, SDXL and get a 2048-token context window!?
That would be freaking awesome if it works. Do you have some workflows that work? Once I know the exact model, I'll do a GGUF in whatever quants are possible.
