Instructions to use TimKond/diffusion-detection with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use TimKond/diffusion-detection with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="TimKond/diffusion-detection")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("TimKond/diffusion-detection")
model = AutoModelForImageClassification.from_pretrained("TimKond/diffusion-detection")
```
- Notebooks
- Google Colab
- Kaggle
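Loading the model directly (as in the second snippet above) gives you the raw logits, from which the predicted label can be read off via `model.config.id2label`. A minimal end-to-end sketch, assuming `torch`, `Pillow`, and `requests` are installed; the image URL is the same example parrots image used in the pipeline snippet:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("TimKond/diffusion-detection")
model = AutoModelForImageClassification.from_pretrained("TimKond/diffusion-detection")

# Fetch an example image and preprocess it into model inputs
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Forward pass without gradients; take the highest-scoring class
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```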
As negatives, a subsample of 10,000 images from [imagenet-1k](https://huggingface.co/datasets/imagenet-1k) was used. A complementary set of 10,000 positive images was generated using [Realistic_Vision_V1.4](https://huggingface.co/SG161222/Realistic_Vision_V1.4).
The labels from imagenet-1k were used as prompts for image generation. [GitHub reference](https://github.com/TimKond/diffusion-detection/blob/main/data/DatasetGeneration.py)
### Training hyperparameters
The following hyperparameters were used during training: