Add library name and link to code

#1 opened by nielsr (HF Staff)
Files changed (1):
  1. README.md (+5 −3)
```diff
@@ -1,9 +1,10 @@
 ---
-license: mit
 base_model:
 - black-forest-labs/FLUX.1-dev
 - stabilityai/stable-diffusion-3.5-medium
+license: mit
 pipeline_tag: text-to-image
+library_name: diffusers
 ---
 
 <div align="center">
@@ -46,8 +47,9 @@ pipeline_tag: text-to-image
 <p align="center">
 <a href="https://arxiv.org/abs/2506.07986">Paper</a> |
 <a href="https://vchitect.github.io/TACA/">Project Page</a> |
-<a href="https://huggingface.co/ldiex/TACA/tree/main">LoRA Weights</a>
+<a href="https://huggingface.co/ldiex/TACA/tree/main">LoRA Weights</a> |
+<a href="https://github.com/Vchitect/TACA">Code</a>
 </p>
 
 # About
 We propose **TACA**, a parameter-efficient method that dynamically rebalances cross-modal attention in multimodal diffusion transformers to improve text-image alignment.
```
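The added `library_name: diffusers` tag tells the Hub which library this repo is meant to be loaded with. A minimal usage sketch of what that implies (untested; the LoRA repo id `ldiex/TACA` comes from the links above, but the exact pipeline class and whether `load_lora_weights` needs a `weight_name=` argument are assumptions — check the repo's files):

```python
def load_taca_flux(base_model: str = "black-forest-labs/FLUX.1-dev"):
    """Sketch: load the FLUX.1-dev base pipeline and apply the TACA LoRA.

    Downloads several GB of weights on first call; diffusers and torch are
    imported lazily so the sketch can be read without them installed.
    """
    import torch
    from diffusers import FluxPipeline  # pip install diffusers

    pipe = FluxPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
    # Assumption: if the LoRA repo holds more than one weights file,
    # pass weight_name="..." to pick the right one.
    pipe.load_lora_weights("ldiex/TACA")
    return pipe
```

The same LoRA is listed against `stabilityai/stable-diffusion-3.5-medium` in `base_model`, so an analogous call with the SD3 pipeline should also apply, per the front matter.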