sabato-nocera committed on
Commit d4a0fdb · verified · 1 Parent(s): 37f1c45

Dear TTPlanet,
We are a group of researchers investigating the usefulness of sharing AIBOMs (Artificial Intelligence Bills of Materials) to document AI models and to improve transparency in AI model supply chains. AIBOMs are machine-readable, structured inventories of the components (such as datasets and models) used in the development of AI-powered systems.

We would like to emphasize that we have no financial or competing interests related to AIBOMs. Our sole interest is to advance the collective understanding of AIBOMs within both academia and industry. As part of this effort, we are contributing to randomly selected open and popular models on Hugging Face (like yours) and are happy to offer support to you and the maintainers of your model if needed.

Based on your model card (and some configuration information available on Hugging Face), we generated the AIBOM according to the CycloneDX (v1.6) standard (see https://cyclonedx.org/docs/1.6/json/). The AIBOM is generated as a JSON file by the following open-source supporting tool: https://github.com/MSR4SBOM/ALOHA (technical details are available in the research paper: https://github.com/MSR4SBOM/ALOHA/blob/main/ALOHA.pdf). The tool is freely available online and can be downloaded and used at your convenience. We are also happy to assist you directly if you need help generating or reviewing an AIBOM for your model.

The JSON file in this pull request is your AIBOM (see https://github.com/MSR4SBOM/ALOHA/blob/main/documentation.json for details on its structure). The submitted AIBOM matches the current model information, and it can easily be regenerated as the model evolves, using the aforementioned AIBOM generation tool.
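For readers unfamiliar with the format, here is a minimal sketch (our illustration, not part of the ALOHA tool) of how a downstream consumer might read the core CycloneDX fields of such an AIBOM. The dictionary reproduces a subset of the JSON submitted in this pull request; in practice you would load the full file with `json.load()`.

```python
# Hypothetical consumer-side sketch: the field names below mirror the
# CycloneDX v1.6 AIBOM added in this pull request.
aibom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "version": 1,
    "metadata": {
        "component": {
            "type": "machine-learning-model",
            "name": "TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic",
        }
    },
}


def summarize(bom: dict) -> str:
    """Return a one-line summary of the model component documented by a BOM."""
    component = bom["metadata"]["component"]
    return f'{component["type"]}: {component["name"]} (CycloneDX {bom["specVersion"]})'


print(summarize(aibom))
```

Because the documented component lives under `metadata.component`, tooling can locate the model's name, type, and external references without parsing the free-text model card.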

We understand that initiatives like ours may raise questions, especially in open communities like Hugging Face. We would therefore like to reiterate that our only interest is to enhance the body of knowledge on AIBOMs and to make their adoption easy and low-friction for maintainers of AI models and developers of AI-powered systems.

We are opening this pull request containing an AIBOM of your AI model, and we hope it will be considered. We would also like to hear your opinion on the usefulness (or not) of AIBOMs through a 3-minute anonymous survey: https://forms.gle/WGffSQD5dLoWttEe7.

Thanks in advance, and regards,
Riccardo D’Avino, Fatima Ahmed, Sabato Nocera, Simone Romano, Giuseppe Scanniello (University of Salerno, Italy),
Massimiliano Di Penta (University of Sannio, Italy),
The MSR4SBOM team

TTPlanet_TTPLanet_SDXL_Controlnet_Tile_Realistic.json ADDED
@@ -0,0 +1,56 @@
+ {
+ "bomFormat": "CycloneDX",
+ "specVersion": "1.6",
+ "serialNumber": "urn:uuid:69c5a1a4-8a57-4933-b2bf-63466c356e71",
+ "version": 1,
+ "metadata": {
+ "timestamp": "2025-06-05T09:39:22.860541+00:00",
+ "component": {
+ "type": "machine-learning-model",
+ "bom-ref": "TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic-f16d3e39-3e1d-5342-b691-dd13c74a44ea",
+ "name": "TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic",
+ "externalReferences": [
+ {
+ "url": "https://huggingface.co/TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic",
+ "type": "documentation"
+ }
+ ],
+ "modelCard": {
+ "modelParameters": {
+ "task": "image-feature-extraction"
+ },
+ "properties": [
+ {
+ "name": "library_name",
+ "value": "diffusers"
+ }
+ ],
+ "consideration": {
+ "useCases": "- **Important: Tile model is not a upscale model!!! it enhance or change the detial of the original size image, remember this before you use it!**- This model will not significant change the base model style. it only adding the features to the upscaled pixel blocks....- --Just use a regular controlnet model in Webui by select as tile model and use tile_resample for Ultimate Upscale script.- --Just use load controlnet model in comfyui and apply to control net condition.- --if you try to use it in webui t2i, need proper prompt setup, otherwise it will significant modify the original image color. I don't know the reason, as I don't really use this function.- --it do perform much better with the image from the datasets. However, everything works fine for the i2i model and what is the place usually the ultimate upscale is applied!!- **--Please also notice this is a realistic training set, so no comic, animation application are promised.**- --For tile upscale, set the denoise around 0.3-0.4 to get good result.- --For controlnet strength, set to 0.9 will be better choice- --For human image fix, IPA and early stop on controlnet will provide better reslut- **--Pickup a good realistic base model is important!**![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/zPyYn2fSFmD1Q07ME0Hkg.jpeg)![image/png](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/00gDy93frzcF-WH8hh1NS.png)- **bsides the basic function, Tile can also change the picture style based on you model, please select the preprocessor as None(not resample!!!!) you can build different style from one single picture with great control!**- Just enjoy![image/png](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/RjZiSX1oBXas1y1Tjq_dW.png)-- **additional instruction to use this tile**- **Part 1\uff1aupdate for style change application instruction\uff08**cloth change and keep consistent pose**\uff09:**- 1. Open a A1111 webui.- 2. select a image you want to use for controlnet tile- 3. remember the setting is like this, make 100% preprocessor is none. and control mode is My prompt is more important.![image/png](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/RfSSfKxjpxvHSUmswTfhH.png)- 4. type in the prompts in positive and negative text box, gen the image as you wish. if you want to change the cloth, type like a woman dressed in yellow T-shirt, and change the background like in a shopping mall,- 5. Hires fix is supported!!!- 6. You will get the result as below:![image/png](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/XS-Qi-FuofnPABl5hZAoi.png)![image/png](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/KYyRUJjuxg5YKs0UFYUw0.png)- **Part2\uff1a for ultimate sd upscale application**Here is the simplified workflow just for ultimate upscale, you can modify and add pre process for your image based on the real condition. In my case, I usually make a image to image with 0.1 denoise rate for the real low quality image such as 600*400 to 1200*800 before I through it into this ultimate upscale process.Please add IPA process if you need the face likes identical, please also add IPA in the raw pre process for low quality image i2i. Remember, over resolution than downscale is always the best way to boost the quality from low resolution image.https://civitai.com/models/333060/simplified-workflow-for-ultimate-sd-upscale"
+ }
+ },
+ "authors": [
+ {
+ "name": "TTPlanet"
+ }
+ ],
+ "licenses": [
+ {
+ "license": {
+ "name": "openrail"
+ }
+ }
+ ],
+ "description": "Here's a refined version of the update notes for the Tile V2:-Introducing the new Tile V2, enhanced with a vastly improved training dataset and more extensive training steps.-The Tile V2 now automatically recognizes a wider range of objects without needing explicit prompts.-I've made significant improvements to the color offset issue. if you are still seeing the significant offset, it's normal, just adding the prompt or use a color fix node.-The control strength is more robust, allowing it to replace canny+openpose in some conditions.If you encounter the edge halo issue with t2i or i2i, particularly with i2i, ensure that the preprocessing provides the controlnet image with sufficient blurring. If the output is too sharp, it may result in a 'halo'\u2014a pronounced shape around the edges with high contrast. In such cases, apply some blur before sending it to the controlnet. If the output is too blurry, this could be due to excessive blurring during preprocessing, or the original picture may be too small.Enjoy the enhanced capabilities of Tile V2!![TBT9$5UL`53RKP`85JXIZ_H.jpg](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/yS1ax7FWZS7b5Zz1co8_b.jpeg)![Q5A0[{{0{]I~`KJFCZJ7`}4.jpg](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/HMGmYz7IiLSqfoiMgcmgU.jpeg)<!-- Provide a longer summary of what this model is. -->- This is a SDXL based controlnet Tile model, trained with huggingface diffusers sets, fit for Stable diffusion SDXL controlnet.- It is original trained for my personal realistic model project used for Ultimate upscale process to boost the picture details. with a proper workflow, it can provide a good result for high detailed, high resolution image fix.- As there is no SDXL Tile available from the most open source, I decide to share this one out.- I will share my workflow soon as I am still working on it to have better result.- **I am still working on the better workflow for super upscale as I showed in the example, trust me, it's all real!!! and Enjoy**-![image/png](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/ddFT3326ddNOWBeoFnfZl.png)![image/png](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/OETMPhSCVEKdyUvILMsyp.png)![image/png](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/lznGyTnKy91AwRmSaCxTF.png)![image/png](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/iokmuDnYy7UC47t7AoLc1.png)![image/png](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/gjNEgVlr2I2uf9hPJiivu.png)![image/png](https://cdn-uploads.huggingface.co/production/uploads/641edd91eefe94aff6de024c/wSZTq340GTG3ojx75HNyH.png)- **Developed by:** TTPlanet- **Model type:** Controlnet Tile- **Language(s) (NLP):** No language limitation",
+ "tags": [
+ "diffusers",
+ "Controlnet",
+ "Tile",
+ "stable diffustion",
+ "image-feature-extraction",
+ "license:openrail",
+ "region:us"
+ ]
+ }
+ }
+ }