Add task category and update license metadata
#1
by nielsr HF Staff - opened

README.md CHANGED
@@ -1,9 +1,11 @@
 ---
-license: mit
-datasets:
-- internlm/EndoCoT-Data
 language:
 - en
 base_model:
 - Qwen/Qwen-Image-Edit-2511
 ---
@@ -25,36 +27,21 @@ base_model:
 <img src="fig/teaser.jpg" alt="Teaser" width="100%" style="border-radius: 10px; box-shadow: 0 6px 20px rgba(0,0,0,0.2);">
 </p>

 # EndoCoT: Scaling Endogenous Chain-of-Thought Reasoning in Diffusion Models

-- [x] Open source the training code
-- [ ] Open source the training data
-- [x] Open source the main task ckpt
-- [ ] Open source the edit model ckpt
-- [ ] Refactor the codebase for better usability and maintainability
-
-## 📰News
-
-- 🚀 [2026/3/12] We have released the EndoCoT [repository](https://github.com/InternLM/EndoCoT) and [ckpts](https://huggingface.co/internlm/EndoCoT).
-
-## 🌟Highlight
-
-
-
-- EndoCoT
-
-## ⚡Quick Start

 ### Setup environment

@@ -63,141 +50,51 @@ git clone https://github.com/InternLM/EndoCoT
 cd EndoCoT
 conda create -n EndoCoT python=3.10
 conda activate EndoCoT
-# Please install the version of torch compatible with your machine.
 pip install -r requirements.txt
-# Please install the version of vLLM compatible with your machine.
 ```

-### Inference
-
-1. Download the ckpt:
-
--   You may find our pretrained weights at: [**EndoCoT**](https://huggingface.co/InternLM/EndoCoT)
-
-    --output_dir ./outputs/sudoku_results
-```

-##

-- You may find our training data at: [**EndoCoT dataset**](https://huggingface.co/datasets/InternLM/EndoCoT)
-
-> Since the metadata uses relative paths, please ensure the dataset files are placed in the same directory as `metadata.csv`.
-
-2. Train your model
-
-```bash
-cd DiffSynth-Studio
-bash add/Maze/stage1.sh
-python change_ckpt_prefix.py --src /path/to/the/Maze/save/dir/Maze_stage1
-bash add/Maze/stage2.sh
-python change_ckpt_prefix.py --src /path/to/the/Maze/save/dir/Maze_stage2
-```
-
-### How to change the latent reasoning steps?
-
-> **Note on Customization:** Since the current implementation is a minimal prototype, the latent reasoning steps can only be adjusted manually in `DiffSynth-Studio/diffsynth/pipelines/qwen_image.py`:
->
-> - **Line 442:** Modify `infer_steps`.
-> - **Line 471:** Modify `training_steps`.
->
-> **We plan to optimize this in future releases.**
-
-```python
-def encode_prompt_edit(self, pipe: QwenImagePipeline, prompt, edit_image, is_final, gt_prompt=None, idx=None):
-    drop_idx = 64
-    if isinstance(prompt[0], str):
-        template = "<|im_start|>system\nDescribe the key features of the input image (color, shape, size, texture, objects, background), then explain how the user's text instruction should alter or modify the image. Generate a new image that meets the user's requirements while maintaining consistency with the original input where appropriate.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>{}<|im_end|>\n<|im_start|>assistant\n"
-        txt = template.format(prompt[0])
-        model_inputs = pipe.processor(text=txt, images=edit_image, padding=True, return_tensors="pt").to(pipe.device)
-        embedding_layers = pipe.text_encoder.model.language_model.get_input_embeddings()
-        with torch.no_grad():
-            inputs_embeds = embedding_layers(model_inputs.input_ids)
-        self.attention_mask = model_inputs.attention_mask
-        self.pixel_values = model_inputs.pixel_values
-        self.image_grid_thw = model_inputs.image_grid_thw
-    else:
-        inputs_embeds = prompt[0]
-
-    # Test-time path.
-    if is_final is None or idx is not None:
-        print("Running inference, or stage-2 training.")
-        if idx is not None:
-            iter_times = idx - 2
-        else:
-            # Number of latent reasoning steps at inference time.
-            iter_times = 50
-
-        with torch.no_grad():
-            inputs_embeds = self.manual_generate_eval(
-                pipe,
-                inputs_embeds=inputs_embeds,
-                max_new_tokens=iter_times,
-            ).detach()
-
-        # Only update the last 2 tokens.
-        if idx is not None:
-            inputs_embeds = self.manual_generate_eval(
-                pipe,
-                inputs_embeds=inputs_embeds,
-                max_new_tokens=2,
-            )
-
-        generated_embeds = inputs_embeds
-
-    ... ...
-
-    # Training path.
-    if is_final is not None and idx is None:
-        try:
-            generated_embeds, _ = self.manual_generate(
-                pipe,
-                inputs_embeds=inputs_embeds,
-                is_final=is_final,
-                # Number of latent reasoning steps per training iteration.
-                max_new_tokens=2,
-            )
-        except Exception as e:
-            print(f"Error!: {type(e).__name__} - {e}")
-            print(inputs_embeds.shape)
-            raise
-
-    try:
-        return split_hidden_states, generated_embeds, eos_loss
-    except NameError:
-        print("[WARNING] Prompt was not updated correctly for inference.")
-        return split_hidden_states
-```

 ## 📖 Citation

 ```
-
 ```

 ## ⚖️ License

-
 ---
 language:
 - en
+license: cc-by-nc-4.0
+task_categories:
+- image-to-image
+datasets:
+- internlm/EndoCoT-Data
 base_model:
 - Qwen/Qwen-Image-Edit-2511
 ---
 <img src="fig/teaser.jpg" alt="Teaser" width="100%" style="border-radius: 10px; box-shadow: 0 6px 20px rgba(0,0,0,0.2);">
 </p>

 # EndoCoT: Scaling Endogenous Chain-of-Thought Reasoning in Diffusion Models

+This repository contains the training data for **EndoCoT**, a novel framework that activates the reasoning potential of Multimodal Large Language Models (MLLMs) within diffusion frameworks through an iterative thought guidance module.

+- **Paper:** [EndoCoT: Scaling Endogenous Chain-of-Thought Reasoning in Diffusion Models](https://arxiv.org/abs/2603.12252)
+- **Project Page:** [https://internlm.github.io/EndoCoT/](https://internlm.github.io/EndoCoT/)
+- **Repository:** [https://github.com/InternLM/EndoCoT](https://github.com/InternLM/EndoCoT)

+## 🌟 Highlights

+- **EndoCoT** is a reasoning paradigm for diffusion models that enables step-by-step inference.
+- It outperforms conventional training methods on complex tasks such as Maze, TSP, VSP, and Sudoku.
+- It provides transparent, intermediate reasoning trajectories.

+## ⚡ Quick Start

 ### Setup environment

 ```bash
 git clone https://github.com/InternLM/EndoCoT
 cd EndoCoT
 conda create -n EndoCoT python=3.10
 conda activate EndoCoT
 pip install -r requirements.txt
 ```
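
> The comments removed in this diff note that the `torch` and `vLLM` builds must match your machine. A minimal sketch of what that step can look like, assuming a CUDA 12.1 host (the index URL is an assumption; pick the one matching your CUDA version from pytorch.org):

```bash
# Hypothetical example: install a CUDA 12.1 PyTorch build before the
# remaining requirements; swap cu121 for your local CUDA version.
pip install torch --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
```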

+### Sample Usage (Inference)

+To test a single case using the codebase:

+```bash
+cd test
+python test.py \
+    --task Sudoku \
+    --model_root /path/to/merged_ckpts \
+    --lora_path /path/to/your_lora_weight.safetensors \
+    --input_image ./data/sudoku_sample.png \
+    --output_dir ./outputs/sudoku_results
+```
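
> To run the released checkpoint outside `test.py`, a minimal sketch using the `diffusers` Qwen-Image-Edit pipeline is below. This is not the official EndoCoT entry point: the pipeline class targets the original `Qwen/Qwen-Image-Edit` release rather than `Qwen-Image-Edit-2511`, and the LoRA path and prompt are placeholders.

```python
# Hedged sketch: load the base Qwen-Image-Edit model with diffusers and
# attach an EndoCoT LoRA checkpoint. Paths and the prompt are assumptions.
import torch
from PIL import Image
from diffusers import QwenImageEditPipeline  # requires a recent diffusers release

pipe = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("/path/to/your_lora_weight.safetensors")

image = Image.open("./data/sudoku_sample.png").convert("RGB")
result = pipe(
    image=image,
    prompt="Fill in the missing digits of the Sudoku grid.",
    num_inference_steps=50,
).images[0]
result.save("./outputs/sudoku_results/output.png")
```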

+### Training

+1. Download the datasets & `metadata.csv` and ensure they are placed in the same directory.
+2. Run the training scripts:

+```bash
+cd DiffSynth-Studio
+bash add/Maze/stage1.sh
+python change_ckpt_prefix.py --src /path/to/the/Maze/save/dir/Maze_stage1
+bash add/Maze/stage2.sh
+python change_ckpt_prefix.py --src /path/to/the/Maze/save/dir/Maze_stage2
+```
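
> `change_ckpt_prefix.py` is not documented in this diff; as a hedged illustration only, a prefix rewrite over a checkpoint's state dict typically looks like the sketch below (the `module.` prefix and file names are assumptions, not the script's actual behavior):

```python
# Hypothetical sketch of renaming state-dict key prefixes between stages.
from safetensors.torch import load_file, save_file

src = "/path/to/the/Maze/save/dir/Maze_stage1/model.safetensors"
state = load_file(src)
renamed = {k.removeprefix("module."): v for k, v in state.items()}
save_file(renamed, src.replace(".safetensors", "_renamed.safetensors"))
```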

+## 📰 News

+- 🚀 [2026/3/12] We have released the EndoCoT [repository](https://github.com/InternLM/EndoCoT) and [ckpts](https://huggingface.co/internlm/EndoCoT).

 ## 📖 Citation

 ```
+@article{dai2026endocot,
+  title={EndoCoT: Scaling Endogenous Chain-of-Thought Reasoning in Diffusion Models},
+  author={Dai, Xuanlang and Zhou, Yujie and Xing, Long and Bu, Jiazi and Wei, Xilin and Liu, Yuhong and Zhang, Beichen and Chen, Kai and Zang, Yuhang},
+  journal={arXiv preprint arXiv:2603.12252},
+  year={2026}
+}
 ```

 ## ⚖️ License

+The code in the associated repository is licensed under the **MIT License**. The dataset is licensed under the **CC BY-NC 4.0 License**.
|