Ryann829 nielsr HF Staff committed on
Commit ad2f241 · verified · 1 Parent(s): bbfd692

Update pipeline tag and add library name (#1)

- Update pipeline tag and add library name (8dd48b95567372a0571e0dc1d2bba4bc47b607c1)

Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +392 -6

README.md CHANGED
@@ -1,14 +1,15 @@
  ---
- license: apache-2.0
  datasets:
  - Ryann829/Scone-S2I-57K
  language:
  - en
- base_model:
- - ByteDance-Seed/BAGEL-7B-MoT
- pipeline_tag: image-text-to-image
  tags:
  - subject-driven
  ---

  <p align="center">
@@ -24,10 +25,39 @@ tags:
  <a href="https://huggingface.co/Ryann829/Scone"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=Model&color=green"></a>
  <a href="https://huggingface.co/datasets/Ryann829/Scone-S2I-57K"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=Data&color=yellow"></a>
  <a href="https://huggingface.co/datasets/Ryann829/SconeEval"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=Benchmark&color=yellow"></a>

  # 🔧 Environment setup
@@ -41,6 +71,76 @@ pip install flash_attn==2.5.8 --no-build-isolation
  ```

  # 🔍 Inference and Evaluation

  ## Scone model preparation
@@ -491,8 +591,285 @@ bash scripts/inference_single_case.sh
  > - To mitigate randomness, we perform 3 rounds of sampling at 1024x1024 resolution, scoring 3 times per round, yielding 9 group results. The final score is the average of these results.

- # 🚰 Citation
  If you find Scone helpful, please consider giving the repo a star ⭐.

  If you find this project useful for your research, please consider citing our paper:
@@ -506,4 +883,13 @@ If you find this project useful for your research, please consider citing our paper:
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.12675},
  }
- ```
  ---
+ base_model:
+ - ByteDance-Seed/BAGEL-7B-MoT
  datasets:
  - Ryann829/Scone-S2I-57K
  language:
  - en
+ license: apache-2.0
+ pipeline_tag: text-to-image
  tags:
  - subject-driven
+ library_name: transformers
  ---

  <p align="center">

  <a href="https://huggingface.co/Ryann829/Scone"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=Model&color=green"></a>
  <a href="https://huggingface.co/datasets/Ryann829/Scone-S2I-57K"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=Data&color=yellow"></a>
  <a href="https://huggingface.co/datasets/Ryann829/SconeEval"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=Benchmark&color=yellow"></a>
+ </p>
+
+ ><p align="center">
+ > <span style="color:#137cf3; font-family: Gill Sans;">
+ > Yuran Wang<sup>1,2</sup><strong>*</strong> Bohan Zeng<sup>1,2</sup><strong>*</strong> Chengzhuo Tong<sup>1,2</sup> Wenxuan Liu<sup>1</sup> Yang Shi<sup>1,2</sup><br>Xiaochen Ma<sup>1</sup> Hao Liang<sup>1</sup> Yuanxing Zhang<sup>2</sup> Wentao Zhang<sup>1</sup><strong>†</strong>
+ > </span>
+ > <br>
+ > <span><sup>1</sup>Peking University <sup>2</sup>Kling Team, Kuaishou Technology</span>
+ > <br>
+ > <span><strong>*</strong> Equal contribution, <strong>†</strong> Corresponding author</span>
+ ></p>
+
+ # 📢 News
+
+ - 2025.12.16: The [paper](https://arxiv.org/abs/2512.12675), [training code](https://github.com/Ryann-Ran/Scone?tab=readme-ov-file#-train), [inference and evaluation code](https://github.com/Ryann-Ran/Scone?tab=readme-ov-file#-inference-and-evaluation), [model weights](https://huggingface.co/Ryann829/Scone), [training data](https://huggingface.co/datasets/Ryann829/Scone-S2I-57K), and [SconeEval benchmark](https://huggingface.co/datasets/Ryann829/SconeEval) are now released.
+
+ # 📖 Introduction
+
+ Subject-driven image generation has recently gained significant attention, with the focus evolving from single-subject to multi-subject generation that incorporates more input images. Existing methods can process two or more input images and combine subjects according to instructions, showing potential for more complex composition tasks.
+
+ However, existing works primarily focus on expanding subject combinations while neglecting the ability to distinguish target subjects in complex contexts. As shown in Figure 1(a), although current models can combine multiple subjects, they may fail to identify and generate the correct target subject when a reference image contains multiple candidates, leading to problems such as subject omissions (none of the candidate subjects appear) or errors (misidentification of the target subject).
+ Real-world images often involve interference and intricate details, which further limit practical performance.
+ Thus, we emphasize examining the input subjects themselves, focusing on the model’s ability to ***distinguish the target subject within complex contexts and leverage this information for generation***.
+
+ <figure style="text-align: center; border: none; margin: auto;">
+ <img src="assets/problem.png" width="512" alt="The distinction problem and challenges."/>
+ <figcaption><b>Figure 1. The distinction problem and challenges.</b></figcaption>
+ </figure>
+
+ * We propose the **Scone** (**S**ubject-driven **co**mposition and distinctio**n** **e**nhancement) model, which supports multi-subject composition and excels at subject distinction in complex contexts. Experiments show that Scone ranks first among open-source models on the OmniContext benchmark.
+ * We introduce the **understanding bridge strategy**, which transforms the understanding expert into a semantic bridge: early multimodal alignment and attention-based semantic filtering guide the generation expert, enhancing subject distinction and semantic fidelity without adding extra parameters.
+ * We develop **SconeEval**, a challenging benchmark with three difficulty levels, to evaluate subject-driven image generation from both composition and distinction perspectives.

  # 🔧 Environment setup

  ```
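+ As a quick, optional sanity check (our suggestion, not part of the original setup), confirm that the pinned flash_attn build imports cleanly:
+
+ ```bash
+ # Optional: verify flash-attn is importable and print its version.
+ python -c "import flash_attn; print(flash_attn.__version__)"
+ ```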
 
+ # 🔥 Train
+
+ ## Data and base model preparation
+
+ 1. Download our **22K refined single-candidate data** and **35K multi-candidate data** from [Scone-S2I-57K](https://huggingface.co/datasets/Ryann829/Scone-S2I-57K). The 70K base single-candidate data are sampled from open-source datasets such as [X2I](https://huggingface.co/datasets/yzwang/X2I-subject-driven), [MUSAR-Gen](https://huggingface.co/datasets/guozinan/MUSAR-Gen), [UNO-1M](https://huggingface.co/datasets/bytedance-research/UNO-1M), and [Echo-4o-Image](https://huggingface.co/datasets/Yejy53/Echo-4o-Image). Please refer to the dataset links for more details.
+
+ ```bash
+ cd Scone
+ # pip install -U huggingface_hub
+ hf download Ryann829/Scone-S2I-57K --repo-type=dataset --local-dir ./datasets/Scone-S2I-57K
+ ```
+
+ 2. Organize the data hierarchy as follows:
+
+ ```
+ Scone-S2I-57K
+ ├── parquet_data
+ │   ├── scone_single_candidate_base/
+ │   ├── scone_single_candidate_refined/
+ │   └── scone_multi_candidate/
+ └── parquet_info
+     ├── scone_single_candidate_base.json
+     ├── scone_single_candidate_refined.json
+     └── scone_multi_candidate.json
+ ```
+
+ 3. Replace each `your_data_path` placeholder with your **actual absolute path** (a scripted sketch follows this list) in:
+
+ * Parquet information files: `./datasets/Scone-S2I-57K/parquet_info/*.json`
+
+ * Dataset information file: `./data/dataset_info.py`
+
+ 4. Download the checkpoint of our base model [BAGEL](https://github.com/ByteDance-Seed/Bagel) from [HuggingFace](https://huggingface.co/ByteDance-Seed/BAGEL-7B-MoT):
+
+ ```bash
+ cd Scone
+ # pip install -U huggingface_hub
+ hf download ByteDance-Seed/BAGEL-7B-MoT --local-dir ./ckpts/BAGEL-7B-MoT
+ ```
+
+ > - **Note**: To avoid out-of-memory (OOM) issues, we disable the EMA update strategy originally used in BAGEL. All our training runs are conducted on 8 NVIDIA A800 GPUs.
+ > - The use of the semantic mask in the understanding bridge strategy is controlled by the training argument `--use_semantic_mask`.
+
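+ A minimal scripted version of step 3, assuming GNU sed and that the placeholder string `your_data_path` appears verbatim in those files. The data root below is illustrative; adjust it, and run the substitutions separately if the two files expect different roots:
+
+ ```bash
+ # Hypothetical helper for step 3: point every placeholder at your data root.
+ DATA_ROOT=/abs/path/to/Scone/datasets/Scone-S2I-57K   # adjust to your machine
+ sed -i "s|your_data_path|${DATA_ROOT}|g" ./datasets/Scone-S2I-57K/parquet_info/*.json
+ sed -i "s|your_data_path|${DATA_ROOT}|g" ./data/dataset_info.py
+ ```
+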
+ ## Stage I: Composition training
+
+ For Step 1, use the base single-candidate data for 1 epoch (~30 hours):
+
+ ```bash
+ bash scripts/train_stage1_step1.sh # 🔥 Und., Gen.
+ ```
+
+ For Step 2, use the refined single-candidate data for 1 epoch (~15 hours) and replace `model_path` in the script with your Step 1 checkpoint:
+
+ ```bash
+ bash scripts/train_stage1_step2.sh # 🔥 Und., Gen.
+ ```
+
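+ The checkpoint hand-off between steps can also be scripted. This is a hedged sketch, assuming the stage script stores the path in a `model_path=` assignment (the checkpoint location below is hypothetical); the same pattern applies to the Stage II steps that follow:
+
+ ```bash
+ # Hedged sketch: point train_stage1_step2.sh at the Step 1 checkpoint, then run it.
+ STEP1_CKPT=/abs/path/to/results/stage1_step1/checkpoint   # hypothetical path
+ sed -i "s|^model_path=.*|model_path=${STEP1_CKPT}|" scripts/train_stage1_step2.sh
+ bash scripts/train_stage1_step2.sh # 🔥 Und., Gen.
+ ```
+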
+ ## Stage II: Distinction training with the understanding bridge strategy
+
+ For Step 1, use the refined single-candidate data and multi-candidate data for 1k steps (~5 hours) and replace `model_path` in the script with your Stage I Step 2 checkpoint:
+
+ ```bash
+ bash scripts/train_stage2_step1.sh # 🔥 Und. ❄️ Gen.
+ ```
+
+ For Step 2, use the refined single-candidate data and multi-candidate data for 1k steps (~5 hours) and replace `model_path` in the script with your Stage II Step 1 checkpoint:
+
+ ```bash
+ bash scripts/train_stage2_step2.sh # 🔥 Und., Gen.
+ ```
+
  # 🔍 Inference and Evaluation

  ## Scone model preparation

  > - To mitigate randomness, we perform 3 rounds of sampling at 1024x1024 resolution, scoring 3 times per round, yielding 9 group results. The final score is the average of these results.

+ ## SconeEval benchmark
+ <p align="center">
+ <img src="assets/logo_sconeeval.png" alt="SconeEval Benchmark" width="400"/>
+ </p>
+
+ To evaluate a model’s ability to distinguish and generate the referred subject in complex visual contexts, we introduce a new benchmark, **SconeEval**. It contains 409 test cases spanning character, object, and scene combinations as well as subject distinction, with 19 case types (Figure 2(a)) and 6 subtasks (Figure 2(b)), providing a comprehensive evaluation of a model’s ability to distinguish and utilize subject features.
+
+ Unlike traditional benchmarks that emphasize visual fidelity or text alignment, SconeEval focuses on cross-modal reasoning over complex contexts involving reference images and instructions, which requires deciding *which subject* to generate when multiple candidates appear within or across images.
+
+ SconeEval includes three progressively challenging tasks, as shown in Figure 2(c): composition, distinction, and distinction & composition. In the composition task, each reference image contains one subject, and one or more reference images drive single- or multi-subject generation. In the distinction task, each reference image contains multiple subjects, and the model generates one target subject. The distinction & composition task integrates both settings: each reference image contains multiple subjects, and multiple images are used for multi-subject generation. Tasks involving distinction include cross-category and intra-category cases, indicating whether the candidate subjects in a reference image belong to the same category.
+
+ <figure style="text-align: center; border: none; margin: auto;">
+ <img src="assets/sconeeval.png" width="1024" alt="Overview of our SconeEval benchmark."/>
+ <figcaption><b>Figure 2. Overview of our SconeEval benchmark.</b></figcaption>
+ </figure>
+
+ ### 📊 Leaderboard
+
+ <table border="1" style="border-collapse: collapse; width: 100%;">
+ <thead>
+ <tr>
+ <th rowspan="3">Method</th>
+ <th colspan="2">Composition ↑</th>
+ <th colspan="4">Distinction ↑</th>
+ <th colspan="4">Distinction & Composition ↑</th>
+ <th colspan="3">Average ↑</th>
+ </tr>
+ <tr>
+ <th>Single</th> <th>Multi</th>
+ <th colspan="2">Cross</th> <th colspan="2">Intra</th>
+ <th colspan="2">Cross</th> <th colspan="2">Intra</th>
+ <th rowspan="2">COM</th> <th rowspan="2">DIS</th> <th rowspan="2">Overall</th>
+ </tr>
+ <tr>
+ <th>COM</th> <th>COM</th>
+ <th>COM</th> <th>DIS</th> <th>COM</th> <th>DIS</th>
+ <th>COM</th> <th>DIS</th> <th>COM</th> <th>DIS</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td colspan="14" style="background-color: #ffefe6; text-align: center; font-weight: bold; font-style: italic;">Closed-Source Model</td>
+ </tr>
+ <tr>
+ <td>Gemini-2.5-Flash-Image</td>
+ <td>8.87</td> <td>7.94</td>
+ <td>9.12</td> <td><strong>9.15</strong></td> <td>9.00</td> <td>8.50</td>
+ <td>8.27</td> <td><strong>8.87</strong></td> <td>8.17</td> <td>8.85</td>
+ <td>8.56</td> <td>8.84</td> <td>8.70</td>
+ </tr>
+ <tr>
+ <td>GPT-4o<sup>*</sup></td>
+ <td><strong>8.92</strong></td> <td><strong>8.51</strong></td>
+ <td><strong>9.18</strong></td> <td>8.55</td> <td><strong>9.45</strong></td> <td><strong>9.01</strong></td>
+ <td><strong>8.83</strong></td> <td>8.49</td> <td><strong>8.99</strong></td> <td><strong>9.56</strong></td>
+ <td><strong>8.98</strong></td> <td><strong>8.90</strong></td> <td><strong>8.94</strong></td>
+ </tr>
+ <tr>
+ <td colspan="14" style="background-color: #e0eef9; text-align: center; font-weight: bold; font-style: italic;">Generation Model</td>
+ </tr>
+ <tr>
+ <td>FLUX.1 Kontext [dev]</td>
+ <td>7.92</td> <td>-</td>
+ <td>7.93</td> <td>8.45</td> <td>6.20</td> <td>6.11</td>
+ <td>-</td> <td>-</td> <td>-</td> <td>-</td>
+ <td>-</td> <td>-</td> <td>-</td>
+ </tr>
+ <tr>
+ <td>USO</td>
+ <td>8.03</td> <td>5.19</td>
+ <td>7.96</td> <td>8.50</td> <td>7.14</td> <td>6.51</td>
+ <td>5.10</td> <td>6.25</td> <td>5.07</td> <td>5.57</td>
+ <td>6.41</td> <td>6.71</td> <td>6.56</td>
+ </tr>
+ <tr>
+ <td>UNO</td>
+ <td>7.53</td> <td>5.38</td>
+ <td>7.27</td> <td>7.90</td> <td>6.76</td> <td>6.53</td>
+ <td>5.27</td> <td>7.02</td> <td>5.61</td> <td>6.27</td>
+ <td>6.31</td> <td>6.93</td> <td>6.62</td>
+ </tr>
+ <tr>
+ <td>UniWorld-V2<br>(Edit-R1-Qwen-Image-Edit-2509)</td>
+ <td>8.41</td> <td><strong>7.16</strong></td>
+ <td>8.63</td> <td>8.24</td> <td><strong>7.44</strong></td> <td>6.77</td>
+ <td>7.52</td> <td>8.03</td> <td><strong>7.70</strong></td> <td><strong>7.24</strong></td>
+ <td><strong>7.81</strong></td> <td>7.57</td> <td>7.69</td>
+ </tr>
+ <tr>
+ <td>Qwen-Image-Edit-2509</td>
+ <td><strong>8.54</strong></td> <td>6.85</td>
+ <td><strong>8.85</strong></td> <td><strong>8.57</strong></td> <td>7.32</td> <td><strong>6.86</strong></td>
+ <td><strong>7.53</strong></td> <td><strong>8.13</strong></td> <td>7.49</td> <td>7.02</td>
+ <td>7.76</td> <td><strong>7.65</strong></td> <td><strong>7.70</strong></td>
+ </tr>
+ <tr>
+ <td colspan="14" style="background-color: #E6E6FA; text-align: center; font-weight: bold; font-style: italic;">Unified Model</td>
+ </tr>
+ <tr>
+ <td>BAGEL</td>
+ <td>7.14</td> <td>5.55</td>
+ <td>7.49</td> <td>7.95</td> <td>6.93</td> <td>6.21</td>
+ <td>6.44</td> <td>7.38</td> <td>6.87</td> <td>7.27</td>
+ <td>6.74</td> <td>7.20</td> <td>6.97</td>
+ </tr>
+ <tr>
+ <td>OmniGen2</td>
+ <td>8.00</td> <td>6.59</td>
+ <td>8.31</td> <td>8.99</td> <td>6.99</td> <td>6.80</td>
+ <td>7.28</td> <td>8.30</td> <td>7.14</td> <td>7.13</td>
+ <td>7.39</td> <td>7.81</td> <td>7.60</td>
+ </tr>
+ <tr>
+ <td>Echo-4o</td>
+ <td><strong>8.58</strong></td> <td><strong>7.73</strong></td>
+ <td>8.36</td> <td>8.33</td> <td>7.74</td> <td>7.18</td>
+ <td>7.87</td> <td>8.72</td> <td>8.01</td> <td>8.33</td>
+ <td>8.05</td> <td>8.14</td> <td>8.09</td>
+ </tr>
+ <tr>
+ <td><strong>Scone (Ours)</strong></td>
+ <td>8.52</td> <td>7.40</td>
+ <td><strong>8.98</strong></td> <td><strong>9.73</strong></td> <td><strong>7.97</strong></td> <td><strong>7.74</strong></td>
+ <td><strong>8.20</strong></td> <td><strong>9.25</strong></td> <td><strong>8.21</strong></td> <td><strong>8.44</strong></td>
+ <td><strong>8.21</strong></td> <td><strong>8.79</strong></td> <td><strong>8.50</strong></td>
+ </tr>
+ </tbody>
+ </table>
+
+ > - *: GPT-4o responded to 365~370 of the 409 total test cases due to OpenAI safety restrictions.
+ > - To mitigate randomness, we perform 3 rounds of sampling at 1024x1024 resolution, scoring 3 times per round, yielding 9 group results. The final score is the average of these results.
+
+ ### Inference
+
+ Download the data:
+
+ ```bash
+ # pip install -U huggingface_hub
+ hf download Ryann829/SconeEval --repo-type=dataset --local-dir ../SconeEval
+ ```
+
+ Run the script:
+
+ ```bash
+ bash scripts/inference_sconeeval.sh
+ ```
+
+ ### Evaluation
+
+ Use GPT-4.1 to evaluate the quality of the generated images and calculate the final score. Ensure your API key is configured before running the script; see the example after the command below.
+
+ ```bash
+ bash eval/s2i/sconeeval/eval.sh
+ ```
+
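+ For example (the exact variable the evaluation script reads is an assumption on our part; check `eval/s2i/sconeeval/eval.sh` for the expected configuration):
+
+ ```bash
+ # Assumed configuration: an OpenAI-style key exported before running the eval.
+ export OPENAI_API_KEY="sk-..."   # replace with your key
+ bash eval/s2i/sconeeval/eval.sh
+ ```
+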
+ # 🚀 Updates
+
+ - [x] Release paper
+ - [x] Release training code
+ - [x] Release inference and evaluation code
+ - [x] Release model weights
+ - [x] Release training data
+ - [x] Release SconeEval benchmark
+
+ # 🚰 Citation
  If you find Scone helpful, please consider giving the repo a star ⭐.

  If you find this project useful for your research, please consider citing our paper:

  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.12675},
  }
+ ```
+
+ # 💪 Acknowledgements
+
+ This project builds upon the following repositories:
+
+ * [BAGEL](https://github.com/ByteDance-Seed/Bagel)
+ * [OmniContext](https://github.com/VectorSpaceLab/OmniGen2)
+
+ Special thanks to these original projects and open-source datasets for their valuable contributions.