Instructions to use Freepik/F-Lite with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Freepik/F-Lite with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Freepik/F-Lite", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Trim the text encoder weights (discussion #2, opened by ttj)
The encoder config says 24 layers, but if I understand correctly you can set it to 17 and trim the later layers.
You are totally right. This could be done to save some memory and compute, but the highest cost here is the generation: T5 inference runs just once at the beginning, whereas the DiT model runs once per denoising step.
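The trimming described above can be sketched as slicing the encoder's block list and updating its config. This is a minimal, hedged example on a small randomly initialised `T5EncoderModel` built locally (so it runs without downloading weights); for F-Lite you would apply the same slicing to the pipeline's loaded text encoder. The specific config sizes below are illustrative, not the model's real dimensions.

```python
import torch
from transformers import T5Config, T5EncoderModel

# Small stand-in encoder with 24 blocks, mirroring the layer count
# discussed above (dimensions here are illustrative only).
config = T5Config(
    vocab_size=128, d_model=64, d_kv=16, d_ff=128,
    num_heads=4, num_layers=24,
)
encoder = T5EncoderModel(config)

# Keep only the first 17 transformer blocks and drop the rest.
keep = 17
encoder.encoder.block = encoder.encoder.block[:keep]
encoder.config.num_layers = keep

# The truncated encoder still runs end to end; its output is now the
# hidden state after block 17 instead of block 24.
input_ids = torch.randint(0, config.vocab_size, (1, 8))
hidden = encoder(input_ids=input_ids).last_hidden_state
print(len(encoder.encoder.block), hidden.shape)
```

On a loaded pipeline, the equivalent would be slicing `pipe.text_encoder.encoder.block` the same way before the first prompt is encoded; as noted above, the saving is modest because the text encoder runs only once per generation.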