Ege Yavuzcan
EgeYavuzcan
AI & ML interests
Generative Modeling, Inference Optimization, Model Quantization-Optimization
Recent Activity
- liked a dataset 10 days ago: Rapidata/human-style-preferences-images
- updated a collection 24 days ago: Model Distillation
- updated a collection 24 days ago: Image Editing
Organizations
None yet
Personalized - ID Preserving Image Generation
- InfiniteYou: Flexible Photo Recrafting While Preserving Your Identity
  Paper • 2503.16418 • Published • 36
- LoRAShop: Training-Free Multi-Concept Image Generation and Editing with Rectified Flow Transformers
  Paper • 2505.23758 • Published • 22
- FluxSpace: Disentangled Semantic Editing in Rectified Flow Transformers
  Paper • 2412.09611 • Published • 11
- Flux Already Knows -- Activating Subject-Driven Image Generation without Training
  Paper • 2504.11478 • Published
Model Distillation
- Tongyi-MAI/Z-Image-Turbo
  Text-to-Image • Updated • 538k • 3.97k
- SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation
  Paper • 2503.09641 • Published • 42
- SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers
  Paper • 2410.10629 • Published • 12
- Efficient Distillation of Classifier-Free Guidance using Adapters
  Paper • 2503.07274 • Published • 4
Model Quantization
Image Editing
- Tongyi-MAI/Z-Image-Turbo
  Text-to-Image • Updated • 538k • 3.97k
- ByteDance-Seed/BAGEL-7B-MoT
  Any-to-Any • 15B • Updated • 602 • 1.18k
- FireFlow: Fast Inversion of Rectified Flow for Image Semantic Editing
  Paper • 2412.07517 • Published • 11
- FlowEdit: Inversion-Free Text-Based Editing Using Pre-Trained Flow Models
  Paper • 2412.08629 • Published • 13
Flow Matching Concept
Model Optimization
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 15
- FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning
  Paper • 2307.08691 • Published • 9
- FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision
  Paper • 2407.08608 • Published • 1
- 1.58-bit FLUX
  Paper • 2412.18653 • Published • 86
Diffusion Models Inference Acceleration
Articles I've been working on
Efficient Video Generation