Upload 3 files
Browse files
- README.md +38 -43
- gitattributes (2) +35 -0
- model_index (1).json +32 -0
README.md
CHANGED
@@ -1,51 +1,46 @@
 ---
-license:
-
-
-
-
-sdk_version: 3.43.2
-app_file: app.py
-pinned: true
-disable_embedding: true
-inference: true
 language:
 - en
-library_name: diffusers
 ---
-# D.R.E.A.M(Image generation model)
 
-
 
-
-
-D.R.E.A.M (Diffusion-based Rendering for Efficient AI Models) is a fine-tuned diffusion model designed for versatile image generation. This model consists of three variants tailored to different image generation tasks, making it a powerful tool for a wide range of applications.
-
-1. **Base Model**: The base model in D.R.E.A.M serves as a reliable workhorse, capable of generating high-quality images across various domains. It excels in tasks that demand general image generation and serves as an excellent starting point for various creative projects.
-
-2. **Photorealistic Model**: For those seeking to create photorealistic images, D.R.E.A.M offers a specialized model that is finely tuned to produce stunningly realistic visual content. This variant is perfect for applications where realism and detail are paramount.
-
-3. **Anime-Style Model**: The third variant of D.R.E.A.M is tailored specifically for generating anime-style images. While still a work in progress, this model shows promise in producing anime-themed content and is continually being improved for efficiency and quality.
-
-D.R.E.A.M is an evolving project, with regular updates aimed at enhancing its performance and effectiveness. Users can expect ongoing refinements to ensure that D.R.E.A.M remains a cutting-edge tool for image generation across diverse genres and styles. Stay tuned for updates and improvements as we continue to make D.R.E.A.M even better.
 
 ## Model Details
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
 ---
+license: creativeml-openrail-m
+tags:
+- text-to-image
+- stable-diffusion
+- art
 language:
 - en
 ---
 
+# D.r.e.a.m (Digital Rendering Engine for Artistic Melodies)
+Welcome to D.r.e.a.m!
 
+The model is currently in its training phase. This is not the final version and may contain artifacts, and it may perform poorly in some cases. The goal of this model is to create images similar to those produced by Midjourney. It is being trained on the Midjourney Normalized Dataset available on Kaggle.
 
 ## Model Details
+- **Developed by:** Cyanex1702
+- **Model Type:** Diffusion-based text-to-image generation model
+- **Language(s):** English
+- **Dataset:** [DreamScape](https://www.kaggle.com/datasets/cyanex1702/midjouney-normalized-dataset "DreamScape")
+- **Training Status:** In Progress
+
+## Model Description
+D.r.e.a.m is a model designed to generate and modify images based on text prompts. It leverages diffusion techniques to create high-quality, artistic renderings from textual descriptions, aiming to emulate the style and creativity of Midjourney.
+## Samples
+
+
+
+
+
+
+
+
+
+
+
+## Features
+- **Text-to-Image Generation:** Generate images from descriptive text prompts.
+- **Image Modification:** Modify existing images based on new text inputs.
+- **Creative Rendering:** Produce artistic and imaginative images.
+
+## Usage
+To use the model, input a text prompt in English; the model will process it and generate a corresponding image. Because the model is still in its training phase, results may vary and contain imperfections.
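Since the accompanying model_index.json declares a `StableDiffusionPipeline`, inference with the `diffusers` library might look like the sketch below. The repo id `"Cyanex1702/D.r.e.a.m"` is a placeholder assumption, not the confirmed model path; substitute the actual repository.

```python
# Hedged sketch: running D.r.e.a.m with diffusers, assuming a standard
# StableDiffusionPipeline layout (as model_index.json suggests).
def load_dream(model_id: str = "Cyanex1702/D.r.e.a.m"):
    # Imports live inside the function so the sketch has no hard
    # top-level dependency on torch/diffusers being installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
    return pipe.to("cuda")


if __name__ == "__main__":
    pipe = load_dream()
    image = pipe("a surreal dreamscape, Midjourney style").images[0]
    image.save("dream_sample.png")
```

Generation requires a CUDA-capable GPU for the `float16` settings shown; on CPU, drop `torch_dtype` and the `.to("cuda")` call.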
+
+## Contributing
+We welcome contributions from the community! If you'd like to contribute, feel free to open an issue or submit a pull request.
gitattributes (2)
ADDED
@@ -0,0 +1,35 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
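Each .gitattributes line above is a path pattern followed by whitespace-separated attributes; the `filter=lfs diff=lfs merge=lfs` attributes route matching files through Git LFS, and `-text` disables text (line-ending) normalization. A minimal parse of that format, using two sample lines copied from the file:

```python
# Minimal parse of gitattributes-style lines: pattern, then attributes.
lines = [
    "*.bin filter=lfs diff=lfs merge=lfs -text",
    "*.safetensors filter=lfs diff=lfs merge=lfs -text",
]


def parse(line: str):
    # First whitespace-separated token is the path pattern;
    # the rest are the attributes applied to matching paths.
    pattern, *attrs = line.split()
    return pattern, attrs


rules = dict(parse(line) for line in lines)
# rules["*.safetensors"] -> ["filter=lfs", "diff=lfs", "merge=lfs", "-text"]
```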
model_index (1).json
ADDED
@@ -0,0 +1,32 @@
+{
+  "_class_name": "StableDiffusionPipeline",
+  "_diffusers_version": "0.6.0",
+  "feature_extractor": [
+    "transformers",
+    "CLIPImageProcessor"
+  ],
+  "safety_checker": [
+    "stable_diffusion",
+    "StableDiffusionSafetyChecker"
+  ],
+  "scheduler": [
+    "diffusers",
+    "PNDMScheduler"
+  ],
+  "text_encoder": [
+    "transformers",
+    "CLIPTextModel"
+  ],
+  "tokenizer": [
+    "transformers",
+    "CLIPTokenizer"
+  ],
+  "unet": [
+    "diffusers",
+    "UNet2DConditionModel"
+  ],
+  "vae": [
+    "diffusers",
+    "AutoencoderKL"
+  ]
+}
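In a diffusers model_index.json, keys beginning with `_` are pipeline metadata, and every other key names a pipeline component as a `[library, class]` pair that `DiffusionPipeline.from_pretrained` instantiates from the matching subfolder. A small sketch of that structure, using an abridged copy of the file above:

```python
import json

# Abridged copy of the model_index.json above: "_"-prefixed keys are
# metadata; the rest map component names to [library, class] pairs.
MODEL_INDEX = json.loads("""
{
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.6.0",
  "scheduler": ["diffusers", "PNDMScheduler"],
  "unet": ["diffusers", "UNet2DConditionModel"],
  "vae": ["diffusers", "AutoencoderKL"]
}
""")

# Separate the loadable components from the metadata entries.
components = {
    name: tuple(spec)
    for name, spec in MODEL_INDEX.items()
    if not name.startswith("_")
}
# components["unet"] -> ("diffusers", "UNet2DConditionModel")
```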