license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Kawase Hasui Diffusion is trained on paintings by [KAWASE Hasui (川瀬巴水)](https://en.wikipedia.org/wiki/Hasui_Kawase). The model was trained on Stable Diffusion v2-1 with the DreamBooth method at a learning rate of 1.0e-6 for 2,600 steps, with a batch size of 8 (8 train or reg images), on 169 training images and 664 regularization images. This model is based on SD2.1 768/v, so if you use it in the popular Web UI, please rename 'v2-inference-v.yaml' to 'kawase-hasui-epoch-000003.yaml' (or ~_fp16.yaml) and place it in the same folder as the .safetensors file. The training prompt is "picture by lvl".
9f6c3db594d855fad924de1f649e0af1
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Examples

![Japan tourism poster](./sample1.png)

```
picture by lvl, japan tourism poster
seed: 968191097, sampler: k_euler_a, steps: 160, CFG scale: 5.5
```

![Cyberpunk Akihabara](./sample2.png)

```
picture by lvl, cyberpunk akihabara
seed: 1418478714, sampler: k_euler_a, steps: 160, CFG scale: 5.5
```

![Ruined castle](./sample3.png)

```
picture by lvl, ruined castle, fantasy, dawn
seed: 897433524, sampler: k_euler_a, steps: 160, CFG scale: 5.5
```

![Party of adventurers](./sample4.png)

```
picture by lvl, fantasy, party of adventurers, ready to fight, in front of ruined temple
seed: 1814292911, sampler: k_euler_a, steps: 160, CFG scale: 5.5
```
1020afa311ff505d10010b768024b94a
mit
[]
false
thunderdome-cover on Stable Diffusion This is the `<thunderdome-cover>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<thunderdome-cover> 0](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/26.jpeg) ![<thunderdome-cover> 1](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/0.jpeg) ![<thunderdome-cover> 2](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/31.jpeg) ![<thunderdome-cover> 3](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/8.jpeg) ![<thunderdome-cover> 4](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/3.jpeg) ![<thunderdome-cover> 5](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/5.jpeg) ![<thunderdome-cover> 6](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/22.jpeg) ![<thunderdome-cover> 7](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/32.jpeg) ![<thunderdome-cover> 8](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/29.jpeg) ![<thunderdome-cover> 9](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/6.jpeg) ![<thunderdome-cover> 10](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/30.jpeg) ![<thunderdome-cover> 
11](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/11.jpeg) ![<thunderdome-cover> 12](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/34.jpeg) ![<thunderdome-cover> 13](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/27.jpeg) ![<thunderdome-cover> 14](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/36.jpeg) ![<thunderdome-cover> 15](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/35.jpeg) ![<thunderdome-cover> 16](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/1.jpeg) ![<thunderdome-cover> 17](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/25.jpeg) ![<thunderdome-cover> 18](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/21.jpeg) ![<thunderdome-cover> 19](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/14.jpeg) ![<thunderdome-cover> 20](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/15.jpeg) ![<thunderdome-cover> 21](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/23.jpeg) ![<thunderdome-cover> 22](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/17.jpeg) ![<thunderdome-cover> 23](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/16.jpeg) ![<thunderdome-cover> 24](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/10.jpeg) ![<thunderdome-cover> 25](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/2.jpeg) ![<thunderdome-cover> 26](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/28.jpeg) ![<thunderdome-cover> 
27](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/12.jpeg) ![<thunderdome-cover> 28](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/19.jpeg) ![<thunderdome-cover> 29](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/4.jpeg) ![<thunderdome-cover> 30](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/7.jpeg) ![<thunderdome-cover> 31](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/24.jpeg) ![<thunderdome-cover> 32](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/33.jpeg) ![<thunderdome-cover> 33](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/9.jpeg) ![<thunderdome-cover> 34](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/20.jpeg) ![<thunderdome-cover> 35](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/18.jpeg) ![<thunderdome-cover> 36](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/13.jpeg)
6d8ddf4ee70ac35f56af2bb3fe2dc474
apache-2.0
['generated_from_trainer']
false
finetuned_token_2e-05_16_02_2022-14_25_47

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
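As a sanity check, the reported F1 is the harmonic mean of the reported precision and recall:

```python
# F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R)
precision = 0.3378
recall = 0.3615

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.3492, matching the reported F1
```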
9799ef831f200a399b6c4b927713a447
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
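A minimal sketch of what `lr_scheduler_type: linear` with no warmup implies: the learning rate decays linearly from its initial value to zero over the total number of training steps. The `total_steps` value below is illustrative, not taken from this run.

```python
def linear_lr(step, total_steps, base_lr=2e-05):
    """Learning rate under a linear decay schedule with no warmup."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

# Illustrative: with 1000 total steps, the rate halves at the midpoint.
print(linear_lr(0, 1000))     # 2e-05 at the start
print(linear_lr(500, 1000))   # 1e-05 halfway through
print(linear_lr(1000, 1000))  # 0.0 at the end
```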
096eda612ce226bcd44c71999495e36b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
d5ba26ad16a8957896440096c8e581e8
cc0-1.0
['pointnet', 'segmentation', '3d', 'image']
false
Point cloud segmentation with PointNet

This repo contains [an implementation of a PointNet-based model for segmenting point clouds](https://keras.io/examples/vision/pointnet_segmentation/). Full credits to [Soumik Rakshit](https://github.com/soumik12345) and [Sayak Paul](https://github.com/sayakpaul).
0002ba49c13edfe78e0162c9ace9b200
cc0-1.0
['pointnet', 'segmentation', '3d', 'image']
false
Background Information

A "point cloud" is an important type of data structure for storing geometric shape data. Due to its irregular format, it is often transformed into regular 3D voxel grids or collections of images before being used in deep learning applications, a step which makes the data unnecessarily large. The PointNet family of models solves this problem by directly consuming point clouds, respecting the permutation-invariance property of the point data. The PointNet family of models provides a simple, unified architecture for applications ranging from object classification and part segmentation to scene semantic parsing. In this example, we demonstrate the implementation of the PointNet architecture for shape segmentation.

**References**

* [PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation](https://arxiv.org/abs/1612.00593)
* [Point cloud classification with PointNet](https://keras.io/examples/vision/pointnet/)
* [Spatial Transformer Networks](https://arxiv.org/abs/1506.02025)

![preview](https://i.imgur.com/qFLNw5L.png)
![preview](http://stanford.edu/~rqi/pointnet/images/teaser.jpg)
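The permutation-invariance point can be illustrated without any deep learning machinery: a symmetric aggregation such as a per-channel max (the core trick behind PointNet's global feature) yields the same feature vector no matter how the points are ordered. A small NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))       # 128 unordered 3D points
shuffled = cloud[rng.permutation(128)]  # same cloud, different order

# A symmetric aggregation (per-channel max) is unaffected by point order.
assert np.allclose(cloud.max(axis=0), shuffled.max(axis=0))

# A non-symmetric readout (e.g. flattening) is NOT order-invariant.
assert not np.allclose(cloud.ravel(), shuffled.ravel())
```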
31a7f4fc639bb628e5cc860a85b8e9ba
cc0-1.0
['pointnet', 'segmentation', '3d', 'image']
false
Training Dataset This model was trained on the [ShapeNet dataset](https://shapenet.org/). The ShapeNet dataset is an ongoing effort to establish a richly-annotated, large-scale dataset of 3D shapes. ShapeNetCore is a subset of the full ShapeNet dataset with clean single 3D models and manually verified category and alignment annotations. It covers 55 common object categories, with about 51,300 unique 3D models. **Prediction example** ![result](https://keras.io/img/examples/vision/pointnet_segmentation/pointnet_segmentation_40_2.png)
ee2308729209d1a05cc2d4114eea5d5b
apache-2.0
['image-classification', 'timm']
false
Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 197.8
  - GMACs: 34.4
  - Activations (M): 43.1
  - Image size: 224 x 224
- **Papers:**
  - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
6129d1cd316c7926abb98976d39f2c17
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('convnext_large.fb_in1k', pretrained=True)
model = model.eval()
```
b139fd87d717fddc4fe7a6d3c73f9c0c
apache-2.0
['image-classification', 'timm']
false
```python
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))
```
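`output` above holds raw logits of shape (1, num_classes); turning them into probabilities and picking the top classes is the usual final step (with torch you would use `torch.topk(output.softmax(dim=1), k=5)`). A dependency-free sketch of softmax + top-k on toy scores:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def topk(probs, k=5):
    """Return (index, probability) pairs for the k largest probabilities."""
    return sorted(enumerate(probs), key=lambda p: p[1], reverse=True)[:k]

logits = [2.0, 1.0, 0.1, -1.0]  # toy scores standing in for model output
probs = topk(softmax(logits), k=2)
print(probs)  # index 0 first, then index 1
```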
0af26c542bad611761d0b8297f820b25
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_large.fb_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()
```
89fa75be69e2bac918eb02e93e625a49
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_large.fb_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()
```
9e500d0cf108041e8458a1321e3d2a06
apache-2.0
['image-classification', 'timm']
false
By Top-1 All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP. |model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| |[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | |[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | |[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | |[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | |[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | |[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | |[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | |[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | |[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | |[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | 
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | |[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | |[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | |[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | |[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | |[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | |[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | |[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | |[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | |[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | |[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | |[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | |[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 
|625.33 |256 | |[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | |[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | |[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | |[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | |[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | |[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | |[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | |[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | |[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | |[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | |[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | |[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | |[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | |[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 
|96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | |[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | |[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | |[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | |[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | |[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | |[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | |[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | |[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | |[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | |[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | |[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | |[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
e0c56cff89854adebc8ab1b3c13f2f20
apache-2.0
['image-classification', 'timm']
false
By Throughput (samples / sec) All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP. |model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| |[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | |[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | |[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | |[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | |[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | |[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | |[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | |[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | |[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | |[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | |[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | |[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | 
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | |[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | |[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | |[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | |[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | |[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | |[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | |[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | |[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | |[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | |[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | |[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | |[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | |[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 
|13.14 |39.48 |862.95 |256 | |[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | |[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | |[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | |[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | |[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | |[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | |[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | |[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | |[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | |[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | |[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | |[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | 
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | |[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | |[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | |[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | |[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | |[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | |[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | |[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | |[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | |[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | |[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
7bea51929ef4abbb47607f44e11aabef
apache-2.0
['image-classification', 'timm']
false
Citation

```bibtex
@article{liu2022convnet,
  author  = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title   = {A ConvNet for the 2020s},
  journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2022},
}
```

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
e38552a0abdb6080b8a68fe065043472
creativeml-openrail-m
['text-to-image']
false
hulk-style-v3

Dreambooth model trained by sztanki with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Sample pictures of: hulk (use that in your prompt)

![hulk 0](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%281%29.jpg)
![hulk 1](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%282%29.jpg)
![hulk 2](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%283%29.jpg)
![hulk 3](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%284%29.jpg)
![hulk 4](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%285%29.jpg)
![hulk 5](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%286%29.jpg)
![hulk 6](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%287%29.jpg)
![hulk 7](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%288%29.jpg)
![hulk 8](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%289%29.jpg)
![hulk 9](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%2810%29.jpg)
![hulk 10](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%2811%29.jpg)
![hulk 11](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%2812%29.jpg)
![hulk 12](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%2813%29.jpg)
![hulk 13](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%2814%29.jpg)
![hulk 14](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%2815%29.jpg)
![hulk 15](https://huggingface.co/sztanki/hulk-style-v3/resolve/main/concept_images/hulk_style_%2816%29.jpg)
40e91ee9e52869a9157ea642ef81a9f6
creativeml-openrail-m
[]
false
The token class word for this model is `rimu`; using it in your prompt will draw attention to the training data that was used and help increase the quality of the image.

License

This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

Please read the full license here
3295ebbb1f4f153dac9f9ce7a548e05f
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
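The distributed totals in the list above follow directly from the per-device sizes and the device count (no gradient accumulation is listed):

```python
train_batch_size = 12  # per device
eval_batch_size = 8    # per device
num_devices = 2

total_train_batch_size = train_batch_size * num_devices
total_eval_batch_size = eval_batch_size * num_devices

print(total_train_batch_size)  # 24, as reported
print(total_eval_batch_size)   # 16, as reported
```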
8751398e997f98b4f7879149da296068
apache-2.0
['classical chinese', 'literary chinese', 'ancient chinese', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
4976ed2025272e61ef5d106e84ba42ca
apache-2.0
['classical chinese', 'literary chinese', 'ancient chinese', 'token-classification', 'pos', 'dependency-parsing']
false
How to Use

```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-upos")
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-classical-chinese-large-upos")
```
2d5488202d4c9533cf0b886f84fc5d3e
apache-2.0
['classical chinese', 'literary chinese', 'ancient chinese', 'token-classification', 'pos', 'dependency-parsing']
false
Reference Koichi Yasuoka: [Universal Dependencies Treebank of the Four Books in Classical Chinese](http://hdl.handle.net/2433/245217), DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28.
e6b76527fb8ca1380c7a7df0ba5eb808
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3120 - Accuracy: 0.87 - F1: 0.8696
7f7a8ef8ff33c78b81a1115d2dd24fe3
apache-2.0
['generated_from_keras_callback']
false
nandysoham/Gregorian_calendar-theme-finetuned-overfinetuned This model is a fine-tuned version of [nandysoham/distilbert-base-uncased-finetuned-squad](https://huggingface.co/nandysoham/distilbert-base-uncased-finetuned-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1838 - Train End Logits Accuracy: 0.9500 - Train Start Logits Accuracy: 0.9688 - Validation Loss: 2.0017 - Validation End Logits Accuracy: 0.5238 - Validation Start Logits Accuracy: 0.4762 - Epoch: 8
2590db4c4c4237529ae88460c8077231
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32
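With `power: 1.0` and `cycle: False`, the `PolynomialDecay` above reduces to a linear ramp from 2e-05 down to 0.0 over 100 decay steps. A minimal sketch of the schedule formula (an approximation of the Keras behaviour, not the Keras code itself):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=100, end_lr=0.0, power=1.0):
    """Polynomial decay; with power=1.0 this is a straight linear ramp."""
    step = min(step, decay_steps)  # cycle=False clamps at decay_steps
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))    # 2e-05
print(polynomial_decay(50))   # 1e-05
print(polynomial_decay(100))  # 0.0
```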
b8672afbb127ce2c235899c295038a8e
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 2.2861     | 0.3688                    | 0.4062                      | 1.6038          | 0.5952                         | 0.5714                           | 0     |
| 1.2774     | 0.5938                    | 0.5938                      | 1.4240          | 0.5952                         | 0.5714                           | 1     |
| 0.8752     | 0.7000                    | 0.7375                      | 1.4402          | 0.5952                         | 0.5476                           | 2     |
| 0.5245     | 0.8250                    | 0.8438                      | 1.5027          | 0.6429                         | 0.5952                           | 3     |
| 0.4132     | 0.8313                    | 0.8938                      | 1.6252          | 0.5714                         | 0.5                              | 4     |
| 0.3140     | 0.9000                    | 0.9062                      | 1.7524          | 0.5476                         | 0.4762                           | 5     |
| 0.2534     | 0.9688                    | 0.9312                      | 1.8646          | 0.5238                         | 0.4762                           | 6     |
| 0.1999     | 0.9500                    | 0.9563                      | 1.9513          | 0.5238                         | 0.4762                           | 7     |
| 0.1838     | 0.9500                    | 0.9688                      | 2.0017          | 0.5238                         | 0.4762                           | 8     |
9435fb057c678f707d6860f176a4b99d
apache-2.0
[]
false
Adaptive Depth Transformers Implementation of the paper "How Many Layers and Why? An Analysis of the Model Depth in Transformers". In this study, we investigate the role of the multiple layers in deep transformer models. We design a variant of ALBERT that dynamically adapts the number of layers for each token of the input.
86bdbb4392c3d3c319433ce75c98a7fc
apache-2.0
[]
false
Model architecture We augment a multi-layer transformer encoder with a halting mechanism, which dynamically adjusts the number of layers for each token. We directly adapted this mechanism from Graves ([2016](https://arxiv.org/abs/1603.08983)).
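A minimal sketch of how a Graves-style halting mechanism picks a per-token depth, using made-up per-layer halting probabilities (in the model these are predicted by a learned halting unit):

```python
def act_depth(halting_probs, eps=0.01):
    """Graves-style adaptive computation time: run layers until the
    accumulated halting probability reaches 1 - eps."""
    total, layers = 0.0, 0
    for p in halting_probs:
        layers += 1
        if total + p >= 1.0 - eps:
            break  # the probability remainder is spent on this layer
        total += p
    return layers

print(act_depth([0.2, 0.3, 0.6, 0.9]))  # halts after 3 layers
print(act_depth([0.995, 0.5]))          # a confident token halts after 1
```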
1c189433e47130afe31d6545b91cd454
apache-2.0
[]
false
Model use The architecture is not yet directly included in the Transformers library. The code used for pre-training is available in the following [GitHub repository](https://github.com/AntoineSimoulin/adaptive-depth-transformers), so you should install the code implementation first:

```bash
!pip install git+https://github.com/AntoineSimoulin/adaptive-depth-transformers
```

Then you can use the model directly.

```python
from act import AlbertActConfig, AlbertActModel, TFAlbertActModel
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('asi/albert-act-base')
model = AlbertActModel.from_pretrained('asi/albert-act-base')
_ = model.eval()

inputs = tokenizer("a lump in the middle of the monkeys stirred and then fell quiet .", return_tensors="pt")
outputs = model(**inputs)
outputs.updates
```
6cd7e811220d9ba0fc3f8df5719860cc
apache-2.0
[]
false
BibTeX entry and citation info

If you use our iterative transformer model for your scientific publication or your industrial applications, please cite the following [paper](https://aclanthology.org/2021.acl-srw.23/):

```bibtex
@inproceedings{simoulin-crabbe-2021-many,
    title = "How Many Layers and Why? {A}n Analysis of the Model Depth in Transformers",
    author = "Simoulin, Antoine and Crabb{\'e}, Benoit",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-srw.23",
    doi = "10.18653/v1/2021.acl-srw.23",
    pages = "221--228",
}
```
10372ee8e8ca119491faf3438c22f6e7
apache-2.0
[]
false
References ><div id="graves-2016">Alex Graves. 2016. <a href="https://arxiv.org/abs/1603.08983" target="_blank">Adaptive computation time for recurrent neural networks.</a> CoRR, abs/1603.08983.</div>
af916eae46ca1f642288eea97b058aa4
apache-2.0
['generated_from_trainer']
false
small-vanilla-target-glue-cola-linear-probe This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6097 - Matthews Correlation: 0.0
bfab0bdc0fe00ff7cacdb9f3367d58e5
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000
9b706a2bb77263e20813d6a515e825c2
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6185        | 1.87  | 500  | 0.6137          | 0.0                  |
| 0.6093        | 3.73  | 1000 | 0.6125          | 0.0                  |
| 0.6073        | 5.6   | 1500 | 0.6100          | 0.0                  |
| 0.6052        | 7.46  | 2000 | 0.6097          | 0.0                  |
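Every checkpoint above reports a Matthews correlation of exactly 0.0, which is what a probe that collapses to predicting a single class scores on CoLA. A small sketch of the binary metric (real evaluations use `sklearn.metrics.matthews_corrcoef`):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Binary MCC; conventionally 0.0 when the denominator degenerates."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# A probe that always predicts the majority class scores exactly 0.0:
print(matthews_corrcoef([1, 0, 1, 1], [1, 1, 1, 1]))  # 0.0
```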
3d648a4eba75e1e3e5712261745b0e5f
apache-2.0
['image-classification', 'timm']
false
Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 31.4 - GMACs: 3.9 - Activations (M): 12.0 - Image size: 224 x 224 - **Original:** https://github.com/snap-research/EfficientFormer - **Papers:** - EfficientFormer: Vision Transformers at MobileNet Speed: https://arxiv.org/abs/2206.01191 - **Dataset:** ImageNet-1k
27bfda75e395d38454aaf0a691d18868
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('efficientformer_l3.snap_dist_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
94013687a0cdcde4c5191af4bbc138e3
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'efficientformer_l3.snap_dist_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```
fe2bddca1810f7422c55e511027f1f18
apache-2.0
['image-classification', 'timm']
false
Model Comparison

|model                               |top1  |top5  |param_count|img_size|
|------------------------------------|------|------|-----------|--------|
|efficientformerv2_l.snap_dist_in1k  |83.628|96.54 |26.32      |224     |
|efficientformer_l7.snap_dist_in1k   |83.368|96.534|82.23      |224     |
|efficientformer_l3.snap_dist_in1k   |82.572|96.24 |31.41      |224     |
|efficientformerv2_s2.snap_dist_in1k |82.128|95.902|12.71      |224     |
|efficientformer_l1.snap_dist_in1k   |80.496|94.984|12.29      |224     |
|efficientformerv2_s1.snap_dist_in1k |79.698|94.698|6.19       |224     |
|efficientformerv2_s0.snap_dist_in1k |76.026|92.77 |3.6        |224     |
a265cb1d7fb27cd46e23aecea8bf4c77
apache-2.0
['image-classification', 'timm']
false
Citation

```bibtex
@article{li2022efficientformer,
  title={EfficientFormer: Vision Transformers at MobileNet Speed},
  author={Li, Yanyu and Yuan, Geng and Wen, Yang and Hu, Ju and Evangelidis, Georgios and Tulyakov, Sergey and Wang, Yanzhi and Ren, Jian},
  journal={arXiv preprint arXiv:2206.01191},
  year={2022}
}
```

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
3c3a4c7c73bd17d7253e544d00066e83
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xlsum dataset. It achieves the following results on the evaluation set: - Loss: 1.2188 - Rouge1: 0.5217 - Rouge2: 0.0464 - Rougel: 0.527 - Rougelsum: 0.5215 - Gen Len: 6.7441
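The ROUGE metrics above measure n-gram overlap between generated and reference summaries. A toy sketch of ROUGE-1 F1, assuming plain whitespace tokenization (real evaluations use a ROUGE package with stemming and bootstrap aggregation):

```python
from collections import Counter

def rouge1_f(reference, candidate):
    """Toy unigram-overlap ROUGE-1 F1 (whitespace tokens, multiset overlap)."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat on the mat", "the cat sat"))  # 0.666...
```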
5179e97bf65bc9368becd474179ae725
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1
a4072e505299f31790731151571601a9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.3831        | 1.0   | 7475 | 1.2188          | 0.5217 | 0.0464 | 0.527  | 0.5215    | 6.7441  |
4707089790cf3cdfcf13a591f27772dd
apache-2.0
['automatic-speech-recognition', 'pt']
false
exp_w2v2t_pt_wavlm_s118 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
ff19fcc463e26ad9d18725757eb65d72
cc-by-4.0
[]
false
Automatic Translation Alignment of Ancient Greek Texts GRC-ALIGNMENT model is an XLM-RoBERTa-based model, fine-tuned for automatic multilingual text alignment at the word level. The model is trained on 12 million monolingual ancient Greek tokens with Masked Language Model (MLM) training objective. Further, the model is fine-tuned on 45k parallel sentences, mainly in ancient Greek-English, Greek-Latin, and Greek-Georgian.
50302e5c3cf4d8e1ee38327bf3647ef9
cc-by-4.0
[]
false
Multilingual Training Dataset

| Languages                                             | Sentences | Source                                                                            |
|:------------------------------------------------------|:---------:|:----------------------------------------------------------------------------------|
| GRC-ENG                                               | 32,500    | Perseus Digital Library (Iliad, Odyssey, Xenophon, New Testament)                 |
| GRC-LAT                                               | 8,200     | [Digital Fragmenta Historicorum Graecorum project](https://www.dfhg-project.org/) |
| GRC-KAT <br>GRC-ENG <br>GRC-LAT<br>GRC-ITA<br>GRC-POR | 4,000     | [UGARIT Translation Alignment Editor](https://ugarit.ialigner.com/)               |
631524b8f2b7d51823671c754bd533c7
cc-by-4.0
[]
false
Model Performance

| Languages | Alignment Error Rate |
|:---------:|:--------------------:|
| GRC-ENG   | 19.73% (IterMax)     |
| GRC-POR   | 23.91% (IterMax)     |
| GRC-LAT   | 10.60% (ArgMax)      |

The gold standard datasets are available on [GitHub](https://github.com/UgaritAlignment/Alignment-Gold-Standards). If you use this model, please cite our papers:

<pre>
@InProceedings{yousef-EtAl:2022:LREC,
  author    = {Yousef, Tariq and Palladino, Chiara and Shamsian, Farnoosh and d’Orange Ferreira, Anise and Ferreira dos Reis, Michel},
  title     = {An automatic model and Gold Standard for translation alignment of Ancient Greek},
  booktitle = {Proceedings of the Language Resources and Evaluation Conference},
  month     = {June},
  year      = {2022},
  address   = {Marseille, France},
  publisher = {European Language Resources Association},
  pages     = {5894--5905},
  url       = {https://aclanthology.org/2022.lrec-1.634}
}

@InProceedings{yousef-EtAl:2022:LT4HALA2022,
  author    = {Yousef, Tariq and Palladino, Chiara and Wright, David J. and Berti, Monica},
  title     = {Automatic Translation Alignment for Ancient Greek and Latin},
  booktitle = {Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages},
  month     = {June},
  year      = {2022},
  address   = {Marseille, France},
  publisher = {European Language Resources Association},
  pages     = {101--107},
  url       = {https://aclanthology.org/2022.lt4hala2022-1.14}
}
</pre>
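The Alignment Error Rate reported above compares predicted links A against sure links S and possible links P (with P a superset of S). A small sketch of the standard formula on toy link sets:

```python
def alignment_error_rate(predicted, sure, possible):
    """AER = 1 - (|A & S| + |A & P|) / (|A| + |S|), with P a superset of S."""
    a, s = set(predicted), set(sure)
    p = set(possible) | s
    return 1 - (len(a & s) + len(a & p)) / (len(a) + len(s))

sure = {(0, 0), (1, 1)}
possible = {(0, 0), (1, 1), (2, 2)}
predicted = {(0, 0), (1, 1), (2, 2), (3, 0)}
print(alignment_error_rate(predicted, sure, possible))  # 1 - 5/6, i.e. about 0.167
```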
683a5d13124bad256f87ddb08ac4c086
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8874 - Precision: 0.2534 - Recall: 0.3333 - F1: 0.2879 - Accuracy: 0.7603 - True predictions: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] - True labels: [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0, 0, 1, 2, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0, 1, 2, 2, 2, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
d408276f1159041e5334c96e5e11a58d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | True predictions | True labels | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | No log | 1.0 | 2 | 0.9937 | 0.2839 | 0.3072 | 0.2951 | 0.6712 | [0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 2] | [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0, 0, 1, 2, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0, 1, 2, 2, 2, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] | | No log | 2.0 | 4 | 0.9155 | 0.2523 | 0.3273 | 0.2850 | 0.7466 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] | [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0, 0, 1, 2, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0, 1, 2, 2, 2, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] | | No log | 3.0 | 6 | 0.8874 | 0.2534 | 0.3333 | 0.2879 | 0.7603 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] | [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0, 0, 1, 2, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0, 1, 2, 2, 2, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] |
32d9f97f5d0890689ba913ef06286e27
apache-2.0
[]
false
ALBERT XXLarge v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team.
a888aab3a4495dfecf36997b0f728e22
apache-2.0
[]
false
Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer; therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers. This is the first version of the xxlarge model. Version 2 differs from version 1 due to different dropout rates, additional training data, and longer training, and has better results in nearly all downstream tasks.
This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 4096 hidden dimension - 64 attention heads - 223M parameters
216fe8cc5aef275388190ffe32608fff
apache-2.0
[]
false
Intended uses & limitations You can use the raw model for either masked language modeling or sentence ordering prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at a model like GPT2.
8f9034eaedc4c53b588c3c84e0c374ad
apache-2.0
[]
false
How to use You can use this model directly to extract features with tf_transformers:

```python
from tf_transformers.models import AlbertModel
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v1')
model = AlbertModel.from_pretrained("albert-xxlarge-v1")

text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')

inputs_tf["input_ids"] = inputs["input_ids"]
inputs_tf["input_type_ids"] = inputs["token_type_ids"]
inputs_tf["input_mask"] = inputs["attention_mask"]
outputs_tf = model(inputs_tf)
```
8b0e17ee6bf035c5358903834ef5fe4d
apache-2.0
[]
false
Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
62625cfd4d6c7cdf3bcb54e4d209d04a
apache-2.0
[]
false
Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
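The pair format can be sketched as a simple string template (illustrative only; the real SentencePiece tokenizer works on token ids and inserts the special tokens itself):

```python
def build_input(sentence_a, sentence_b):
    """String form of an ALBERT sentence-pair input."""
    return f"[CLS] {sentence_a} [SEP] {sentence_b} [SEP]"

print(build_input("Sentence A", "Sentence B"))
# [CLS] Sentence A [SEP] Sentence B [SEP]
```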
b2c8c825d7211788749310241eacc02a
apache-2.0
[]
false
Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is.
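The 15% / 80-10-10 masking procedure can be sketched at the token level as follows (illustrative only; real implementations operate on token ids and skip special tokens):

```python
import random

def mask_tokens(tokens, vocab, mask_rate=0.15, seed=0):
    """Sketch of the BERT/ALBERT masking: 15% of positions are chosen;
    of those, 80% become [MASK], 10% a random token, 10% stay unchanged."""
    rng = random.Random(seed)
    out = list(tokens)
    n_mask = max(1, round(mask_rate * len(tokens)))
    for i in rng.sample(range(len(tokens)), n_mask):
        r = rng.random()
        if r < 0.8:
            out[i] = "[MASK]"
        elif r < 0.9:
            out[i] = rng.choice(vocab)
        # else: leave the original token in place (it is still predicted)
    return out

print(mask_tokens(["the", "cat", "sat", "on", "the", "mat"], vocab=["dog", "ran"]))
```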
61fcafa7dc74a696db0adda01d5f7e38
apache-2.0
[]
false
Evaluation results

When fine-tuned on downstream tasks, the ALBERT models achieve the following results:

|                | Average  | SQuAD1.1 | SQuAD2.0 | MNLI     | SST-2    | RACE     |
|----------------|----------|----------|----------|----------|----------|----------|
|V2              |
|ALBERT-base     |82.3      |90.2/83.2 |82.1/79.3 |84.6      |92.9      |66.8      |
|ALBERT-large    |85.7      |91.8/85.2 |84.9/81.8 |86.5      |94.9      |75.2      |
|ALBERT-xlarge   |87.9      |92.9/86.4 |87.9/84.1 |87.9      |95.4      |80.7      |
|ALBERT-xxlarge  |90.9      |94.6/89.1 |89.8/86.9 |90.6      |96.8      |86.8      |
|V1              |
|ALBERT-base     |80.1      |89.3/82.3 |80.0/77.1 |81.6      |90.3      |64.0      |
|ALBERT-large    |82.4      |90.6/83.9 |82.3/79.4 |83.5      |91.7      |68.5      |
|ALBERT-xlarge   |85.5      |92.5/86.1 |86.1/83.1 |86.4      |92.4      |74.8      |
|ALBERT-xxlarge  |91.0      |94.8/89.3 |90.2/87.4 |90.8      |96.9      |86.5      |
a76312fa5a00029260b7713c492013fd
apache-2.0
[]
false
BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
  author    = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut},
  title     = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations},
  journal   = {CoRR},
  volume    = {abs/1909.11942},
  year      = {2019},
  url       = {http://arxiv.org/abs/1909.11942},
  archivePrefix = {arXiv},
  eprint    = {1909.11942},
  timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
b2cb36ba456f0369c5a13ad9401d3af0
mit
['pytorch', 'diffusers', 'dreambooth']
false
Model Card for Dreambooth model trained on My pet Pintu's images This model is a diffusion model for generating images of my cute pet dog Pintu, trained using the DreamBooth concept. The token to use is `sks`.
e56318e3f7e5cc53fb8f911b6b772e80
mit
['pytorch', 'diffusers', 'dreambooth']
false
Usage

```python
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("Kugos/KgSelfie_lr_15e-6")
image = pipeline('a photo of sks dog').images[0]
image
```

These are the images the DreamBooth model was trained on: ![sks 0](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s1.jpeg)![sks 1](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s2.jpeg)![sks 2](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s3.jpeg)![sks 3](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s4.jpeg)![sks 4](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s5.jpeg)![sks 5](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s6.jpeg)![sks 6](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s7.jpeg)![sks 7](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s8.jpeg)![sks 8](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s9.jpeg)![sks 9](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s10.jpeg)![sks 10](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s11.jpeg)![sks 11](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s12.jpeg)![sks 12](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s13.jpeg)![sks 13](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s14.jpeg)![sks 14](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s15.jpeg)![sks 15](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s16.jpeg)![sks 16](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s17.jpeg)![sks 17](https://huggingface.co/Kugos/KgSelfie_lr_15e-6/resolve/main/concept_images/s18.jpeg)
94dbdd1c46a7ce85943113c93fb8bd22
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP
db6ebb19a4f4af4695ab52074c837c50
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2t_en_xls-r_s957 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
07e08d37f91492d7225d4861b449c363
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 128 - seed: 2 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 64 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20.0
4873c5f6a198e14744240278d9a7aa2c
cc-by-sa-4.0
['legal']
false
Legal-DistilCamemBERT * Legal-DistilCamemBERT is a [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base)-based model further pre-trained on [23,000+ statutory articles](https://huggingface.co/datasets/maastrichtlawtech/bsard) from the Belgian legislation. * We chose the following training set-up: 50k training steps (200 epochs) with batches of 32 sequences of length 512 and an initial learning rate of 5e-5. * Training was performed on one Tesla V100 GPU with 32 GB using the [code](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) provided by Hugging Face. ---
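As a rough consistency check, 50k steps over 200 epochs with batches of 32 implies about 8,000 training sequences per epoch:

```python
# Implied data volume from the pre-training set-up above.
train_steps = 50_000
epochs = 200
batch_size = 32

steps_per_epoch = train_steps // epochs             # 250
sequences_per_epoch = steps_per_epoch * batch_size  # 8000
print(steps_per_epoch, sequences_per_epoch)
```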
9ebdc0a769303a0449eddc75f22a1263
cc-by-sa-4.0
['legal']
false
Load Pretrained Model

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("maastrichtlawtech/legal-distilcamembert")
model = AutoModel.from_pretrained("maastrichtlawtech/legal-distilcamembert")
```
41b4ab14cfd829c9cf69d334e61ef8b9
cc-by-sa-4.0
['legal']
false
About Us The [Maastricht Law & Tech Lab](https://www.maastrichtuniversity.nl/about-um/faculties/law/research/law-and-tech-lab) develops algorithms, models, and systems that allow computers to process natural language texts from the legal domain. Author: [Antoine Louis](https://antoinelouis.co) on behalf of the [Maastricht Law & Tech Lab](https://www.maastrichtuniversity.nl/about-um/faculties/law/research/law-and-tech-lab).
89d0bed119c226ca639c2afcbd0a703a
apache-2.0
['generated_from_trainer']
false
Tagged_One_50v8_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v8_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.5935 - Precision: 0.0917 - Recall: 0.0054 - F1: 0.0102 - Accuracy: 0.7849
11540d51af0e83bdd9d42a9bcb07d176
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 19   | 0.7198          | 0.0       | 0.0    | 0.0    | 0.7786   |
| No log        | 2.0   | 38   | 0.6263          | 0.0727    | 0.0010 | 0.0019 | 0.7798   |
| No log        | 3.0   | 57   | 0.5935          | 0.0917    | 0.0054 | 0.0102 | 0.7849   |
bcad17fa5b717c0d9872ba121e4d4d54
mit
['generated_from_trainer']
false
finetuning-profane-model-deberta This model is a fine-tuned version of [yangheng/deberta-v3-base-absa-v1.1](https://huggingface.co/yangheng/deberta-v3-base-absa-v1.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6243 - Accuracy: 0.8322 - F1: 0.8455 - Precision: 0.8015 - Recall: 0.8946
c1df251fd5d5d5841e0455c4d9ae5038
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10
6ec7069587b6033e466a01f3d46f86be
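The `linear` scheduler above decays the learning rate from its initial value down to zero over the course of training. A minimal sketch of that schedule (the 1,000-step total here is a made-up number for illustration; the real total depends on dataset size, batch size, and epochs):

```python
def linear_lr(step, total_steps, base_lr=2e-05):
    """Linear decay from base_lr down to 0, as the `linear` scheduler
    does when no warmup steps are configured."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total_steps = 1000  # hypothetical; not from the card
print(linear_lr(0, total_steps))            # 2e-05 at the first step
print(linear_lr(total_steps, total_steps))  # 0.0 at the last step
```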
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
false
wav2vec2-common_voice-nl-demo This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - NL dataset. It achieves the following results on the evaluation set: - Loss: 0.3523 - Wer: 0.2046
779248a98e572d96d8d10aab00ccc11d
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP
0441cb7357e652390404acf8d5caf52a
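Two quantities follow directly from the hyperparameters above: the effective batch size (per-device batch × gradient accumulation steps) and the warmup-then-linear-decay learning-rate shape. A small sketch (total step count taken from the results table below; treat it as approximate):

```python
def lr_with_warmup(step, warmup_steps, total_steps, base_lr=3e-04):
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# Effective batch: 4 per device x 8 accumulation steps
print(4 * 8)                           # 32, matching total_train_batch_size
print(lr_with_warmup(0, 500, 6500))    # 0.0 at step 0
print(lr_with_warmup(500, 500, 6500))  # 0.0003 at the end of warmup
```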
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0536 | 1.12 | 500 | 0.5349 | 0.4338 | | 0.2543 | 2.24 | 1000 | 0.3859 | 0.3029 | | 0.1472 | 3.36 | 1500 | 0.3471 | 0.2818 | | 0.1088 | 4.47 | 2000 | 0.3489 | 0.2731 | | 0.0855 | 5.59 | 2500 | 0.3582 | 0.2558 | | 0.0721 | 6.71 | 3000 | 0.3457 | 0.2471 | | 0.0653 | 7.83 | 3500 | 0.3299 | 0.2357 | | 0.0527 | 8.95 | 4000 | 0.3440 | 0.2334 | | 0.0444 | 10.07 | 4500 | 0.3417 | 0.2289 | | 0.0404 | 11.19 | 5000 | 0.3691 | 0.2204 | | 0.0345 | 12.3 | 5500 | 0.3453 | 0.2102 | | 0.0288 | 13.42 | 6000 | 0.3634 | 0.2089 | | 0.027 | 14.54 | 6500 | 0.3532 | 0.2044 |
182f91127c4b2342ff1db0de52b3e060
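The Wer column above is the word error rate: word-level edit distance between hypothesis and reference, divided by the number of reference words. A dependency-free sketch of the metric (the card's scores come from the training pipeline's WER metric, not this toy function):

```python
def wer(reference, hypothesis):
    """Word error rate via word-level Levenshtein distance."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / len(r)

print(wer("dit is een test", "dit is de test"))  # 0.25 (1 substitution / 4 words)
```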
apache-2.0
['summarization', 'generated_from_trainer']
false
mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It was created by following the [Hugging Face course tutorial](https://huggingface.co/course/chapter7/5?fw=pt). It achieves the following results on the evaluation set: - Loss: 3.0173 - Rouge1: 16.7977 - Rouge2: 8.6849 - Rougel: 16.4822 - Rougelsum: 16.4975
ad5e4d3f61d3ae80096be8dac7375a49
apache-2.0
['summarization', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8
0bb29d49639e42a8ff5ce3066f506f86
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 3.4693 | 1.0 | 1209 | 3.1215 | 17.5363 | 8.3875 | 17.0229 | 16.9653 | | 3.4231 | 2.0 | 2418 | 3.0474 | 16.7927 | 8.3533 | 16.2748 | 16.2379 | | 3.271 | 3.0 | 3627 | 3.0440 | 16.7233 | 7.9129 | 16.2385 | 16.1915 | | 3.1885 | 4.0 | 4836 | 3.0264 | 16.3078 | 7.5751 | 15.844 | 15.889 | | 3.1216 | 5.0 | 6045 | 3.0277 | 17.259 | 8.7504 | 16.8293 | 16.8543 | | 3.0739 | 6.0 | 7254 | 3.0188 | 16.8374 | 8.6457 | 16.4407 | 16.4743 | | 3.0393 | 7.0 | 8463 | 3.0161 | 17.3064 | 8.7822 | 16.9423 | 16.9543 | | 3.0202 | 8.0 | 9672 | 3.0173 | 16.7977 | 8.6849 | 16.4822 | 16.4975 |
1b75b623d5d37d773fd66fdf8738d539
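ROUGE-1 measures unigram overlap between a generated summary and its reference. A simplified sketch of the idea (the scores in the table come from the course's ROUGE implementation, which also handles stemming and aggregation; this toy version only counts shared unigrams):

```python
from collections import Counter

def rouge1_f(reference, summary):
    """Simplified ROUGE-1 F1: unigram-overlap precision/recall."""
    ref, hyp = Counter(reference.split()), Counter(summary.split())
    overlap = sum((ref & hyp).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat on the mat", "the cat sat"))  # 0.666... (P=1.0, R=0.5)
```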
gpl-3.0
['generated_from_trainer']
false
IceBERT-finetuned-ner This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0783 - Precision: 0.8873 - Recall: 0.8627 - F1: 0.8748 - Accuracy: 0.9848
72ed512c34094ed909d3c203a2b95ec7
gpl-3.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0539 | 1.0 | 2904 | 0.0768 | 0.8732 | 0.8453 | 0.8590 | 0.9833 | | 0.0281 | 2.0 | 5808 | 0.0737 | 0.8781 | 0.8492 | 0.8634 | 0.9838 | | 0.0166 | 3.0 | 8712 | 0.0783 | 0.8873 | 0.8627 | 0.8748 | 0.9848 |
fac9aec5e2ae9eb8da3690212c431096
mit
['sklearn', 'skops', 'tabular-classification', 'visual emb-gam']
false
Model description This is a LogisticRegressionCV model trained on averages of patch embeddings from the Imagenette dataset. It forms the GAM component of an [Emb-GAM](https://arxiv.org/abs/2209.11799) extended to images. Patch embeddings are meant to be extracted with the [`google/vit-base-patch16-224` ViT checkpoint](https://huggingface.co/google/vit-base-patch16-224).
779a569ea32f9b1018686365a262f3ee
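The model's input features are averages of patch embeddings, i.e. one mean-pooled vector per image. A toy sketch of that pooling step (the 2-d vectors here are stand-ins; real `google/vit-base-patch16-224` outputs are 768-dimensional with 196 patches per image):

```python
def average_patch_embeddings(patches):
    """Mean-pool a list of patch embedding vectors into one feature vector."""
    n, dim = len(patches), len(patches[0])
    return [sum(p[i] for p in patches) / n for i in range(dim)]

# Toy 2-dimensional stand-ins for ViT patch embeddings
patches = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(average_patch_embeddings(patches))  # [3.0, 4.0]
```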
mit
['sklearn', 'skops', 'tabular-classification', 'visual emb-gam']
false
Hyperparameters The model is trained with below hyperparameters. <details> <summary> Click to expand </summary> | Hyperparameter | Value | |-------------------|-----------------------------------------------------------| | Cs | 10 | | class_weight | | | cv | StratifiedKFold(n_splits=5, random_state=1, shuffle=True) | | dual | False | | fit_intercept | True | | intercept_scaling | 1.0 | | l1_ratios | | | max_iter | 100 | | multi_class | auto | | n_jobs | | | penalty | l2 | | random_state | 1 | | refit | False | | scoring | | | solver | lbfgs | | tol | 0.0001 | | verbose | 0 | </details>
1c75f55c07779f9ea610032c41167f7d
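The non-default entries in the hyperparameter table above can be turned back into an estimator. A sketch (requires scikit-learn; every parameter not listed here matches the library defaults shown in the table):

```python
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import StratifiedKFold

# Non-default hyperparameters from the table; the rest are sklearn defaults.
clf = LogisticRegressionCV(
    cv=StratifiedKFold(n_splits=5, random_state=1, shuffle=True),
    random_state=1,
    refit=False,
)
params = clf.get_params()
print(params["Cs"], params["solver"], params["refit"])  # 10 lbfgs False
```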
mit
['sklearn', 'skops', 'tabular-classification', 'visual emb-gam']
false
Model plot The fitted estimator is `LogisticRegressionCV(cv=StratifiedKFold(n_splits=5, random_state=1, shuffle=True), random_state=1, refit=False)`.
ced4dfa9ef473f0058d7bde359d4836b
mit
['sklearn', 'skops', 'tabular-classification', 'visual emb-gam']
false
How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from PIL import Image from skops import hub_utils import torch from transformers import AutoFeatureExtractor, AutoModel import pickle import os
6ad561164c727bed7e8c19653dbc25b9
mit
['sklearn', 'skops', 'tabular-classification', 'visual emb-gam']
false
load embedding model device = torch.device("cuda" if torch.cuda.is_available() else "cpu") feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224") model = AutoModel.from_pretrained("google/vit-base-patch16-224").eval().to(device)
6ef358c8d3c5c64f6e878274589daede
mit
['sklearn', 'skops', 'tabular-classification', 'visual emb-gam']
false
load logistic regression os.mkdir("emb-gam-vit") hub_utils.download(repo_id="Ramos-Ramos/emb-gam-vit", dst="emb-gam-vit") with open("emb-gam-vit/model.pkl", "rb") as file: logistic_regression = pickle.load(file)
bf46ac1634a59bc8e9507b0698bb9cd7
mit
['sklearn', 'skops', 'tabular-classification', 'visual emb-gam']
false
Citation Below you can find information related to citation. **BibTeX:** ``` @article{singh2022emb, title={Emb-GAM: an Interpretable and Efficient Predictor using Pre-trained Language Models}, author={Singh, Chandan and Gao, Jianfeng}, journal={arXiv preprint arXiv:2209.11799}, year={2022} } ```
c9bf1784ed14865b70495401acb37a8e
mit
['msmarco', 't5', 'pytorch', 'tensorflow', 'pt', 'pt-br']
false
Introduction ptt5-base-msmarco-en-pt-100k-v2 is a T5-based model pretrained on the BrWaC corpus and fine-tuned on both the English and a Portuguese-translated version of the MS MARCO passage dataset. In this v2 version, the Portuguese dataset was translated with Google Translate. The model was fine-tuned for 100k steps. Further information about the dataset and the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
4ce2aa7f4b584a9e0503d0c7a43e7c31
mit
['msmarco', 't5', 'pytorch', 'tensorflow', 'pt', 'pt-br']
false
Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration model_name = 'unicamp-dl/ptt5-base-msmarco-en-pt-100k-v2' tokenizer = T5Tokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name) ```
b809c8386a7058b98e2b3059a4b94c18
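For reranking, monoT5-style models score a query–document pair serialized into a single input string. The template below is an assumption for illustration, not taken from this card; check the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository for the exact template used during fine-tuning:

```python
def rerank_input(query, document):
    """monoT5-style input template (hypothetical; verify against mMARCO)."""
    return f"Query: {query} Document: {document} Relevant:"

print(rerank_input("o que é PIB?", "PIB é o produto interno bruto."))
```

The model then generates a true/false token whose probability serves as the relevance score.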
mit
['msmarco', 't5', 'pytorch', 'tensorflow', 'pt', 'pt-br']
false
Citation If you use ptt5-base-msmarco-en-pt-100k-v2, please cite: ``` @misc{bonifacio2021mmarco, title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset}, author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira}, year={2021}, eprint={2108.13897}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
f78d00a18ef72ac4e9274cc4213ed0f5
apache-2.0
['generated_from_trainer']
false
mobilebert_add_GLUE_Experiment_mrpc This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6197 - Accuracy: 0.6838 - F1: 0.8122 - Combined Score: 0.7480
1b54ebb759e3475e3d38ebb436a14b7b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.6387 | 1.0 | 29 | 0.6245 | 0.6838 | 0.8122 | 0.7480 | | 0.6307 | 2.0 | 58 | 0.6234 | 0.6838 | 0.8122 | 0.7480 | | 0.6307 | 3.0 | 87 | 0.6233 | 0.6838 | 0.8122 | 0.7480 | | 0.6295 | 4.0 | 116 | 0.6231 | 0.6838 | 0.8122 | 0.7480 | | 0.6261 | 5.0 | 145 | 0.6197 | 0.6838 | 0.8122 | 0.7480 | | 0.6147 | 6.0 | 174 | 0.6344 | 0.6838 | 0.8122 | 0.7480 | | 0.6209 | 7.0 | 203 | 0.6398 | 0.6838 | 0.8122 | 0.7480 | | 0.6007 | 8.0 | 232 | 0.6338 | 0.6324 | 0.7517 | 0.6920 | | 0.5795 | 9.0 | 261 | 0.6377 | 0.625 | 0.7429 | 0.6839 | | 0.5712 | 10.0 | 290 | 0.6290 | 0.6814 | 0.8036 | 0.7425 |
dab3f1dd106ac788c8928a31fadade95
apache-2.0
[]
false
<p align="center"> <br> <img src="./docs/source/en/imgs/diffusers_library.jpg" width="400"/> <br> <p> <p align="center"> <a href="https://github.com/huggingface/diffusers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/datasets.svg?color=blue"> </a> <a href="https://github.com/huggingface/diffusers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/diffusers.svg"> </a> <a href="CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg"> </a> </p> 🤗 Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves as a modular toolbox for inference and training of diffusion models. More precisely, 🤗 Diffusers offers: - State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines)). Check [this overview](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/README.md
0ec8a616762a0eaa3a893e35ce35b55a
apache-2.0
[]
false
pipelines-summary) to see all supported pipelines and their corresponding official papers. - Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference (see [src/diffusers/schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers)). - Multiple types of models, such as UNet, can be used as building blocks in an end-to-end diffusion system (see [src/diffusers/models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)). - Training examples to show how to train the most popular diffusion model tasks (see [examples](https://github.com/huggingface/diffusers/tree/main/examples), *e.g.* [unconditional-image-generation](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation)).
57a3d5e32a953bba4078375e75ecd76d
apache-2.0
[]
false
For PyTorch **With `pip`** (official package) ```bash pip install --upgrade diffusers[torch] ``` **With `conda`** (maintained by the community) ```sh conda install -c conda-forge diffusers ```
0d897368d652de541d4b794d5b31010c
apache-2.0
[]
false
For Flax **With `pip`** ```bash pip install --upgrade diffusers[flax] ``` **Apple Silicon (M1/M2) support** Please, refer to [the documentation](https://huggingface.co/docs/diffusers/optimization/mps).
f2af28b501773347a9f7e739c0be46a8
apache-2.0
[]
false
Contributing We ❤️ contributions from the open-source community! If you want to contribute to this library, please check out our [Contribution guide](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md). You can look out for [issues](https://github.com/huggingface/diffusers/issues) you'd like to tackle to contribute to the library. - See [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) for general opportunities to contribute - See [New model/pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) to contribute exciting new diffusion models / diffusion pipelines - See [New scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>. We discuss the hottest trends about diffusion models, help each other with contributions, personal projects or just hang out ☕.
1caf13a86674fcbafcebcdc750d5b750
apache-2.0
[]
false
Quickstart In order to get started, we recommend taking a look at two notebooks: - The [Getting started with Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) notebook, which showcases an end-to-end example of usage for diffusion models, schedulers and pipelines. Take a look at this notebook to learn how to use the pipeline abstraction, which takes care of everything (model, scheduler, noise handling) for you, and also to understand each independent building block in the library. - The [Training a diffusers model](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook summarizes diffusion models training methods. This notebook takes a step-by-step approach to training your diffusion models on an image dataset, with explanatory graphics.
382abd4ecc225f779b23ca09cfd41055
apache-2.0
[]
false
Stable Diffusion is fully compatible with `diffusers`! Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [LAION](https://laion.ai/) and [RunwayML](https://runwayml.com/). It's trained on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 4GB VRAM. See the [model card](https://huggingface.co/CompVis/stable-diffusion) for more information.
9b9e0db4dd060f8e1e339e212b516fbc
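A back-of-the-envelope check of why the quoted parameter counts fit within 4 GB of VRAM: in fp16, the weights alone take about 2 bytes per parameter (activations, the VAE, and framework overhead add more on top of this):

```python
# Rough fp16 weight memory for the parameter counts quoted above
unet_params = 860e6
text_encoder_params = 123e6
bytes_per_param_fp16 = 2
weights_gb = (unet_params + text_encoder_params) * bytes_per_param_fp16 / 1e9
print(round(weights_gb, 2))  # ~1.97 GB for the weights alone
```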