* Apr 25, 2025: We release the inference code and model weights of Step1X-Edit. [Inference code](https://github.com/stepfun-ai/Step1X-Edit)
* Apr 25, 2025: We have made our technical report available as open source. [Read](https://arxiv.org/abs/2504.17761)

## Image Edit Demos
<div align="center">
<img width="720" alt="demo" src="assets/image_edit_demo.gif">
</div>

## Benchmark
We release [GEdit-Bench](https://huggingface.co/datasets/stepfun-ai/GEdit-Bench), a new benchmark grounded in real-world usage. Carefully curated to reflect actual user editing needs across a wide range of editing scenarios, it enables more authentic and comprehensive evaluation of image editing models.
The evaluation process and related code can be found in [GEdit-Bench/EVAL.md](GEdit-Bench/EVAL.md). Partial results of the benchmark are shown below:
<div align="center">
<img width="1080" alt="results" src="assets/eval_res_en.png">
</div>
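
Since GEdit-Bench is hosted as a standard dataset repo on the Hugging Face Hub, its files can be fetched over the Hub's `resolve/<revision>` URL scheme. The helper below is a minimal stdlib-only sketch of that scheme; the `README.md` filename in the example is a placeholder for illustration, not a file confirmed to exist in the repo:

```python
# Minimal sketch: build a direct download URL for a file in the
# GEdit-Bench dataset repo on the Hugging Face Hub. Dataset files are
# served under https://huggingface.co/datasets/<repo_id>/resolve/<revision>/<path>.
REPO_ID = "stepfun-ai/GEdit-Bench"

def hub_dataset_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Return the direct ("resolve") URL for one file in a dataset repo."""
    return f"https://huggingface.co/datasets/{repo_id}/resolve/{revision}/{filename}"

print(hub_dataset_url(REPO_ID, "README.md"))
```

In practice, `huggingface_hub.snapshot_download(repo_id=REPO_ID, repo_type="dataset")` or `datasets.load_dataset(...)` would be the usual route to pull the whole benchmark; the plain URL form is handy for spot-checking individual files.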