license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
mit
['stable-diffusion', 'text-to-image']
false
Example Pictures from Rebecca_3.5k <table> <tr> <td><img src=https://i.imgur.com/h9milQd.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/3Uxe6Bi.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/FHczkJj.png width=100% height=100%/></td> </tr> </table>
55436e315c285924d5d8e211ab48c5ad
creativeml-openrail-m
[]
false
**Prompts:** The model is dreamboothed on tagged Suisei no Majo images; some prompts that work are:

1. suletta mercury
2. miorine rembran
3. gundam aerial

---

**Training details:** Trained with the [kanewallmann Dreambooth repository](https://github.com/kanewallmann/Dreambooth-Stable-Diffusion) using tags as captions.

1. Trained for 10000 steps, probably at the default learning rate lr=1e-6
2. Dataset: around 500 tagged images of Suisei no Majo plus thousands of customized reg images

---

**Problems:** Because the model is trained only on tagged images, it is more flexible but also harder to prompt. Some detailed description may be needed to get the character right, especially when trying to prompt Suletta and Miorine in the same image.

---

**Example Generations:**

![00038-2321521523-long](https://huggingface.co/alea31415/suremio-suisei-no-majo/resolve/main/00038-2321521523-long.png)
![00005-894260846-miorine](https://huggingface.co/alea31415/suremio-suisei-no-majo/resolve/main/00005-894260846-miorine.png)
![00046-2321521523-suletta](https://huggingface.co/alea31415/suremio-suisei-no-majo/resolve/main/00046-2321521523-suletta.png)
![00060-2516217770-suletta](https://huggingface.co/alea31415/suremio-suisei-no-majo/resolve/main/00060-2516217770-suletta.png)
![00176-1352431307-miorine](https://huggingface.co/alea31415/suremio-suisei-no-majo/resolve/main/00176-1352431307-miorine.png)
![00184-1661291290-long](https://huggingface.co/alea31415/suremio-suisei-no-majo/resolve/main/00184-1661291290-long.png)
![00316-2911672629-miorine](https://huggingface.co/alea31415/suremio-suisei-no-majo/resolve/main/00316-2911672629-miorine.png)
![00147-1397396354-miorine](https://huggingface.co/alea31415/suremio-suisei-no-majo/resolve/main/00147-1397396354-miorine.png)
![00400-3442904358-gundam](https://huggingface.co/alea31415/suremio-suisei-no-majo/resolve/main/00400-3442904358-gundam.png)
![00407-2385989155-gundam](https://huggingface.co/alea31415/suremio-suisei-no-majo/resolve/main/00407-2385989155-gundam.png)
d0c3d85ef2ee5bac51dffa85ef44972b
mit
['generated_from_trainer']
false
bart-large-cnn-samsum-ElectrifAi_v6

This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.4591
- Rouge1: 70.5822
- Rouge2: 55.7529
- Rougel: 63.7452
- Rougelsum: 69.9659
- Gen Len: 113.6
ab58bc855863e4fade83b6e4f6acfbc4
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 20   | 0.7010          | 63.9182 | 44.7625 | 53.1206 | 63.0249   | 102.5   |
| No log        | 2.0   | 40   | 0.5084          | 68.113  | 52.0277 | 60.5913 | 67.282    | 114.8   |
| No log        | 3.0   | 60   | 0.4591          | 70.5822 | 55.7529 | 63.7452 | 69.9659   | 113.6   |
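The Rouge1/Rouge2/Rougel columns above are n-gram-overlap F1 scores between generated and reference summaries. As a rough illustration of what the numbers mean, here is a minimal unigram ROUGE-1 F1 in plain Python; this is a simplified sketch, not the `rouge_score`/`evaluate` packages the trainer most likely used (those also apply stemming and compute ROUGE-2/L over bigrams and longest common subsequences):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 (simplified ROUGE-1)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 5 of 6 unigrams match in both directions -> F1 = 5/6
print(round(rouge1_f1("the cat sat on the mat", "the cat lay on the mat") * 100, 2))  # 83.33
```

Scores in the table are this quantity (for the real ROUGE variants) scaled to 0-100.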
4b3ea461a202534a517a3f4ac69546e5
apache-2.0
['deep-narrow']
false
T5-Efficient-SMALL-EL8-DL4 (Deep-Narrow version)

T5-Efficient-SMALL-EL8-DL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.

In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper:

> We generally recommend a DeepNarrow strategy where the model's depth is preferentially increased before considering any other forms of uniform scaling across other dimensions. This is largely due to how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, a tall base model might also generally be more efficient compared to a large model. We generally find that, regardless of size, even if absolute performance might increase as we continue to stack layers, the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to consider.

To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
fe0d61ce253e09bbfac30d94b64193a8
apache-2.0
['deep-narrow']
false
Details model architecture

This model checkpoint - **t5-efficient-small-el8-dl4** - is of model type **Small** with the following variations:
- **el** is **8**
- **dl** is **4**

It has **58.42** million parameters and thus requires *ca.* **233.69 MB** of memory in full precision (*fp32*) or **116.84 MB** of memory in half precision (*fp16* or *bf16*).

A summary of the *original* T5 model architectures can be seen here:

| Model | nl (el/dl) | ff | dm | kv | nh |
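The memory figures above follow directly from the parameter count: 4 bytes per parameter in fp32, 2 bytes in fp16/bf16. A quick sanity check (assuming 1 MB = 10^6 bytes, which is what the card's numbers imply; the helper name is illustrative):

```python
def checkpoint_memory_mb(num_params: float, bytes_per_param: int) -> float:
    """Approximate checkpoint size in MB (1 MB = 1e6 bytes)."""
    return num_params * bytes_per_param / 1e6

params = 58.42e6  # t5-efficient-small-el8-dl4 parameter count from the card

print(checkpoint_memory_mb(params, 4))  # fp32: ~233.7 MB, matching the card up to rounding
print(checkpoint_memory_mb(params, 2))  # fp16 / bf16: ~116.8 MB
```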
1035fb58e5cf2bd8daae90259d89d914
mit
['generated_from_trainer']
false
hyunwoongko-kobart-eb-finetuned-papers-meetings

This model is a fine-tuned version of [hyunwoongko/kobart](https://huggingface.co/hyunwoongko/kobart) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.3136
- Rouge1: 18.3166
- Rouge2: 8.0509
- Rougel: 18.3332
- Rougelsum: 18.3146
- Gen Len: 19.9143
149933beb82c201c6b8e5690559d8281
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.2118        | 1.0   | 7739  | 0.2951          | 18.0837 | 7.9585 | 18.0787 | 18.0784   | 19.896  |
| 0.1598        | 2.0   | 15478 | 0.2812          | 18.529  | 7.9891 | 18.5421 | 18.5271   | 19.8977 |
| 0.1289        | 3.0   | 23217 | 0.2807          | 18.0638 | 7.8086 | 18.0787 | 18.0583   | 19.9129 |
| 0.0873        | 4.0   | 30956 | 0.2923          | 18.3483 | 8.0233 | 18.3716 | 18.3696   | 19.914  |
| 0.0844        | 5.0   | 38695 | 0.3136          | 18.3166 | 8.0509 | 18.3332 | 18.3146   | 19.9143 |
da4c7026afea624ec82b20be6fd93d15
apache-2.0
['image-classification', 'timm']
false
Model card for convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384

A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION and fine-tuned on ImageNet-1k in `timm` by Ross Wightman.

Please see the related OpenCLIP model cards for more details on the pretraining:
* https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K
05a741375635a16e1c44f760290a6117
apache-2.0
['image-classification', 'timm']
false
Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 200.1
  - GMACs: 101.1
  - Activations (M): 126.7
  - Image size: 384 x 384
- **Papers:**
  - LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
  - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
  - Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- **Original:** https://github.com/mlfoundations/open_clip
- **Pretrain Dataset:** LAION-2B
- **Dataset:** ImageNet-1k
84de9ed4c17aa6ebb63dfc36c6c9bc6d
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
5e72c51ea0463bf48a8bc07ac75ac159
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    print(o.shape)
```
14555db102d34ba84f1ef8326b73ac8b
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Module
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```
c1256afe16c1a1fa48562c964338da88
apache-2.0
['image-classification', 'timm']
false
By Top-1

All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.

|model|top1|top5|img_size|param_count|gmacs|macts|samples_per_sec|batch_size|
|---|---|---|---|---|---|---|---|---|
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512|660.29|600.81|413.07|28.58|48|
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384|660.29|337.96|232.35|50.56|64|
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384|197.96|101.1|126.74|128.94|128|
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384)|87.870|98.452|384|200.13|101.11|126.74|197.92|256|
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75|98.556|384|350.2|179.2|168.99|124.85|192|
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384|88.72|45.21|84.49|209.51|256|
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384|197.77|101.1|126.74|194.66|256|
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256|200.13|44.94|56.33|438.08|256|
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26|98.248|224|197.96|34.4|43.13|376.84|256|
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224|350.2|60.98|57.5|368.01|256|
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384|88.59|45.21|84.49|366.54|256|
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74|98.022|224|88.72|15.38|28.75|624.23|256|
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224|197.77|34.4|43.13|581.43|256|
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97|384|88.59|45.21|84.49|368.14|256|
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75|224|660.29|115.0|79.07|154.72|256|
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92|384|50.22|25.58|63.37|516.19|256|
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68|256|88.59|20.09|37.55|819.86|256|
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224|88.59|15.38|28.75|1037.66|256|
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384|50.22|25.58|63.37|518.95|256|
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224|197.96|34.4|43.13|375.23|256|
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224|50.22|8.71|21.56|1474.31|256|
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384|28.59|13.14|39.48|856.76|256|
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63|384|28.64|13.14|39.48|491.32|256|
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09|224|88.72|15.38|28.75|625.33|256|
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224|50.22|8.71|21.56|1478.29|256|
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224|197.77|34.4|43.13|584.28|256|
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224|28.59|4.47|13.44|2433.7|256|
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14|384|28.59|13.14|39.48|862.95|256|
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224|28.64|4.47|13.44|1452.72|256|
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82|96.746|224|88.59|15.38|28.75|1054.0|256|
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37|96.742|384|15.62|7.22|24.61|801.72|256|
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224|50.22|8.71|21.56|1464.0|256|
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92|96.284|224|28.64|4.47|13.44|1425.62|256|
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224|28.59|4.47|13.44|2480.88|256|
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224|15.59|2.46|8.37|3926.52|256|
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224|28.59|4.47|13.44|2529.75|256|
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224|28.59|4.47|13.44|2346.26|256|
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03|96.166|224|15.62|2.46|8.37|2300.18|256|
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83|95.738|224|15.62|2.46|8.37|2321.48|256|
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224|15.65|2.65|9.38|3523.85|256|
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224|15.59|2.46|8.37|3915.58|256|
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224|9.07|1.37|6.1|3274.57|256|
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224|9.05|1.37|6.1|5686.88|256|
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224|9.06|1.43|6.5|5422.46|256|
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98|224|5.23|0.79|4.57|4264.2|256|
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86|93.83|224|5.23|0.82|4.87|6910.6|256|
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68|224|5.22|0.79|4.57|7189.92|256|
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224|3.71|0.55|3.81|4728.91|256|
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88|92.846|224|3.7|0.58|4.11|7963.16|256|
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9|224|3.7|0.55|3.81|8439.22|256|
fc7dab2c6dd893b7f063efa956690409
apache-2.0
['image-classification', 'timm']
false
By Throughput (samples / sec)

All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.

|model|top1|top5|img_size|param_count|gmacs|macts|samples_per_sec|batch_size|
|---|---|---|---|---|---|---|---|---|
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9|224|3.7|0.55|3.81|8439.22|256|
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88|92.846|224|3.7|0.58|4.11|7963.16|256|
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68|224|5.22|0.79|4.57|7189.92|256|
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86|93.83|224|5.23|0.82|4.87|6910.6|256|
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224|9.05|1.37|6.1|5686.88|256|
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224|9.06|1.43|6.5|5422.46|256|
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224|3.71|0.55|3.81|4728.91|256|
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98|224|5.23|0.79|4.57|4264.2|256|
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224|15.59|2.46|8.37|3926.52|256|
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224|15.59|2.46|8.37|3915.58|256|
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224|15.65|2.65|9.38|3523.85|256|
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224|9.07|1.37|6.1|3274.57|256|
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224|28.59|4.47|13.44|2529.75|256|
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224|28.59|4.47|13.44|2480.88|256|
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224|28.59|4.47|13.44|2433.7|256|
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224|28.59|4.47|13.44|2346.26|256|
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83|95.738|224|15.62|2.46|8.37|2321.48|256|
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03|96.166|224|15.62|2.46|8.37|2300.18|256|
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224|50.22|8.71|21.56|1478.29|256|
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224|50.22|8.71|21.56|1474.31|256|
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224|50.22|8.71|21.56|1464.0|256|
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224|28.64|4.47|13.44|1452.72|256|
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92|96.284|224|28.64|4.47|13.44|1425.62|256|
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82|96.746|224|88.59|15.38|28.75|1054.0|256|
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224|88.59|15.38|28.75|1037.66|256|
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14|384|28.59|13.14|39.48|862.95|256|
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384|28.59|13.14|39.48|856.76|256|
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68|256|88.59|20.09|37.55|819.86|256|
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37|96.742|384|15.62|7.22|24.61|801.72|256|
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09|224|88.72|15.38|28.75|625.33|256|
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74|98.022|224|88.72|15.38|28.75|624.23|256|
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224|197.77|34.4|43.13|584.28|256|
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224|197.77|34.4|43.13|581.43|256|
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384|50.22|25.58|63.37|518.95|256|
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92|384|50.22|25.58|63.37|516.19|256|
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63|384|28.64|13.14|39.48|491.32|256|
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256|200.13|44.94|56.33|438.08|256|
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26|98.248|224|197.96|34.4|43.13|376.84|256|
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224|197.96|34.4|43.13|375.23|256|
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97|384|88.59|45.21|84.49|368.14|256|
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224|350.2|60.98|57.5|368.01|256|
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384|88.59|45.21|84.49|366.54|256|
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384|88.72|45.21|84.49|209.51|256|
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384)|87.870|98.452|384|200.13|101.11|126.74|197.92|256|
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384|197.77|101.1|126.74|194.66|256|
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75|224|660.29|115.0|79.07|154.72|256|
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384|197.96|101.1|126.74|128.94|128|
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75|98.556|384|350.2|179.2|168.99|124.85|192|
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384|660.29|337.96|232.35|50.56|64|
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512|660.29|600.81|413.07|28.58|48|
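The "By Throughput" table contains the same benchmark rows as the "By Top-1" table above, just re-sorted descending on the samples_per_sec column. That re-sort can be sketched over pipe-delimited rows (toy three-row subset; the helper names `parse_rows` / `sort_by_throughput` are illustrative, not from timm):

```python
def parse_rows(table: str):
    """Parse pipe-delimited markdown rows into (model, top1, samples_per_sec) tuples."""
    rows = []
    for line in table.strip().splitlines():
        cells = [c.strip() for c in line.strip().strip('|').split('|')]
        # column order: model, top1, top5, img_size, param_count, gmacs, macts, samples_per_sec, batch_size
        rows.append((cells[0], float(cells[1]), float(cells[7])))
    return rows

def sort_by_throughput(rows):
    # descending samples/sec, i.e. the ordering of the second table
    return sorted(rows, key=lambda r: r[2], reverse=True)

# toy subset of the benchmark rows above
table = """
|convnext_tiny.fb_in1k|82.066|95.854|224|28.59|4.47|13.44|2346.26|256|
|convnextv2_huge.fcmae_ft_in22k_in1k_512|88.848|98.742|512|660.29|600.81|413.07|28.58|48|
|convnext_atto.d2_in1k|75.664|92.9|224|3.7|0.55|3.81|8439.22|256|
"""
for model, top1, sps in sort_by_throughput(parse_rows(table)):
    print(model, sps)
```

Running this prints the fastest model (convnext_atto.d2_in1k) first and the slowest (convnextv2_huge) last, mirroring the table ordering.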
24051d84314462bfd306b594ad6ba23c
mit
[]
false
Eddie on Stable Diffusion This is the `Eddie` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![Eddie 0](https://huggingface.co/sd-concepts-library/eddie/resolve/main/concept_images/1.jpeg) ![Eddie 1](https://huggingface.co/sd-concepts-library/eddie/resolve/main/concept_images/0.jpeg) ![Eddie 2](https://huggingface.co/sd-concepts-library/eddie/resolve/main/concept_images/4.jpeg) ![Eddie 3](https://huggingface.co/sd-concepts-library/eddie/resolve/main/concept_images/2.jpeg) ![Eddie 4](https://huggingface.co/sd-concepts-library/eddie/resolve/main/concept_images/3.jpeg)
eebc1a68a76003bd3c8ec32e163aaddd
apache-2.0
['generated_from_keras_callback']
false
leabum/distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 5.5824
- Train End Logits Accuracy: 0.0347
- Train Start Logits Accuracy: 0.0694
- Validation Loss: 5.8343
- Validation End Logits Accuracy: 0.0
- Validation Start Logits Accuracy: 0.0
- Epoch: 1
e6b961217bc8d176343965f2768a3369
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 5.8427     | 0.0069                    | 0.0069                      | 5.8688          | 0.0                            | 0.0                              | 0     |
| 5.5824     | 0.0347                    | 0.0694                      | 5.8343          | 0.0                            | 0.0                              | 1     |
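The end/start logits accuracies reported above are argmax-match accuracies over the predicted answer-span boundaries: for each example, the token index with the highest logit is compared against the gold start (or end) position. A minimal framework-free sketch (the helper name is illustrative):

```python
def logits_accuracy(logits_batch, labels):
    """Fraction of examples where argmax over the logits equals the gold token index.

    logits_batch: list of per-token score lists (one row per example)
    labels: gold token index per example (answer start or end position)
    """
    correct = 0
    for logits, label in zip(logits_batch, labels):
        pred = max(range(len(logits)), key=logits.__getitem__)  # argmax
        correct += int(pred == label)
    return correct / len(labels)

# two examples: first argmax lands on index 2 (correct), second on index 0 (wrong)
print(logits_accuracy([[0.1, 0.3, 2.0], [1.5, 0.2, 0.4]], [2, 1]))  # 0.5
```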
2eeb4bd030ce5a8be5ef265823ff2c9d
creativeml-openrail-m
[]
false
**Model Description**

The model was created by merging well-known public models (Waifu Diffusion, Novel AI, Anything 3.0, etc.). There is no separate trigger word; keywords commonly applied in Waifu Diffusion and Novel AI can be used.
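The card does not say how the merge was performed; a common approach for checkpoints like these is a per-parameter weighted sum of the models' state dicts (the "weighted sum" mode in popular merge tools). A minimal sketch using plain floats in place of tensors; `merge_state_dicts` and `alpha` are illustrative names, not the author's actual recipe:

```python
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Weighted-sum merge: result = (1 - alpha) * A + alpha * B, per parameter.

    Sketch with scalar "parameters"; real merges apply the same operation
    elementwise over matching torch tensors and require that both models
    share an identical architecture (same state-dict keys and shapes).
    """
    assert sd_a.keys() == sd_b.keys(), "models must share the same parameter names"
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}

# 25% of model B blended into model A
merged = merge_state_dicts({"w": 0.0, "b": 2.0}, {"w": 1.0, "b": 4.0}, alpha=0.25)
print(merged)  # {'w': 0.25, 'b': 2.5}
```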
056a3a2be5e29cb0b523d47026f8be8a
creativeml-openrail-m
[]
false
**Vox-mix Samples**

![Vox_01](./img/vox_01.jpg)

>(masterpiece, best quality, ultra-detailed, illustration, painting),
>best illumination, dynamic angle, finely detail,
>(full body shot of a High Quality Victorian Era cute girl), (oil painting),
>(Francois Boucher), alphonse mucha, (Claude Monet), Franz Xaver Winterhalter, [NORMAN ROCKWELL],
>(PERFECT FACE:1.2), (SEXY FACE:1.2), (DETAILED PUPILS:1.2), (SMIRK), (HIGH DETAIL:1.2), SHARP, glitter many particles, artgerm, ((intricate details)), ((highres)), (finely detailed),
>absurdres, soft lighting, glow, (1girl), (solo), beautiful detailed glow, (large breasts), cleavage, sideswept hair, hair bowtie, (intricate halter backless dress), gloves, (highheels:1.14), [ornate mansion's foyer, bannisters in background],
>Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, extra fingers, mutation, bad anatomy, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, (nipples on navel:1.34), (nipples on stomach:1.34), (3 or more nipples:1.34), (censored:1.22), (censor bar:1.22), (ugly:1.48), (duplicate:1.34), (morbid:1.22), (mutilated:1.22), (tranny:1.34), (trans:1.34), (trannsexual:1.34), (hermaphrodite:1.1), extra fingers, mutated hands, (poorly drawn hands:1.22), (poorly drawn face:1.22), (mutation:1.34), (deformed:1.34), (ugly:1.22), blurry, (bad anatomy:1.22),
>Seed: 3483746954, Steps: 50, CFG scale: 8

![Vox_02](./img/vox_02.jpg)

>(masterpiece, best quality, ultra-detailed, illustration, painting),
>best illumination, dynamic angle, finely detail,
>(full body shot of a High Quality Victorian Era cute girl), (oil painting),
>(Francois Boucher), alphonse mucha, (Claude Monet), Franz Xaver Winterhalter, [NORMAN ROCKWELL],
>(PERFECT FACE:1.2), (SEXY FACE:1.2), (DETAILED PUPILS:1.2), (SMIRK), (HIGH DETAIL:1.2), SHARP, glitter many particles, artgerm, ((intricate details)), ((highres)), (finely detailed),
>absurdres, soft lighting, glow, (1girl), (solo), beautiful detailed glow, (large breasts), cleavage, sideswept hair, hair bowtie, (intricate halter backless dress), gloves, (highheels:1.14), [ornate mansion's foyer, bannisters in background],
>Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, extra fingers, mutation, bad anatomy, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, (nipples on navel:1.34), (nipples on stomach:1.34), (3 or more nipples:1.34), (censored:1.22), (censor bar:1.22), (ugly:1.48), (duplicate:1.34), (morbid:1.22), (mutilated:1.22), (tranny:1.34), (trans:1.34), (trannsexual:1.34), (hermaphrodite:1.1), extra fingers, mutated hands, (poorly drawn hands:1.22), (poorly drawn face:1.22), (mutation:1.34), (deformed:1.34), (ugly:1.22), blurry, (bad anatomy:1.22),
>Seed: 4009463661, Steps: 50, Sampler: DDIM, CFG scale: 8

![Vox_03](./img/vox_03.jpg)

>(masterpiece, best quality, ultra-detailed, illustration, painting),
>best illumination, dynamic angle, finely detail,
>(full body shot of a High Quality Victorian Era cute girl), (oil painting),
>(Francois Boucher), alphonse mucha, (Claude Monet), Franz Xaver Winterhalter, [NORMAN ROCKWELL],
>(PERFECT FACE:1.2), (SEXY FACE:1.2), (DETAILED PUPILS:1.2), (SMIRK), (HIGH DETAIL:1.2), SHARP, glitter many particles, artgerm, ((intricate details)), ((highres)), (finely detailed), (wearing a sexy see-through backless dress:1.28),
>absurdres, soft lighting, glow, (1girl), (solo), beautiful detailed glow, (large breasts), cleavage, sideswept hair, hair bowtie, gloves, (highheels:1.12), [ornate mansion's foyer, bannisters in background], (NSFW:1.2)
>Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, extra fingers, mutation, bad anatomy, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, (nipples on navel:1.34), (nipples on stomach:1.34), (3 or more nipples:1.34), (censored:1.22), (censor bar:1.22), (ugly:1.48), (duplicate:1.34), (morbid:1.22), (mutilated:1.22), (tranny:1.34), (trans:1.34), (trannsexual:1.34), (hermaphrodite:1.1), extra fingers, mutated hands, (poorly drawn hands:1.22), (poorly drawn face:1.22), (mutation:1.34), (deformed:1.34), (ugly:1.22), blurry, (bad anatomy:1.22),
>Seed: 3002977200, Steps: 50, Sampler: DDIM, CFG scale: 10
44a9b69a4db2f7d0566a46fa74a272bb
creativeml-openrail-m
[]
false
**Vox-mix2 Samples**

![Vox_mix2_01](./img/vox_mix2_01.jpg)

>((masterpiece, best quality, ultra-detailed, illustration, painting), (poster illustration), trending on artstation, (4girls:1.6),
>(a High Quality Victorian Era sexy girl), (nsfw:1.2), (intricate see-through dress:1.2),
>((1girl)), long hair, (PERFECT FACE:1.2), (SEXY FACE:1.2), (DETAILED PUPILS:1.2), (SMIRK), sideswept hair, (full body:1.2),
>((1girl)), pixie cut, (sexy eyes), detailed face, detailed eyes, slight smile, (full body:1.2),
>((1girl)), shot hair, (PERFECT FACE:1.2), (DETAILED PUPILS:1.2), (SMIRK), (full body:1.2),
>((1girl)), wave hair, (bride), (beautiful face), (sexy eyes), (DETAILED PUPILS:1.2), (SMIRK), ideswept hair, (full body:1.2),
>((1girl)), bob cut, (beautiful face), (sexy eyes), detailed face, detailed eyes, (full body:1.2),
>SHARP, glitter many particles, ((intricate details)), ((highres)), (finely detailed),
>oil painting by ((Francois Boucher), (alphonse mucha:0.8), (Claude Monet), Franz Xaver Winterhalter, (NORMAN ROCKWELL:0.8)),
>
>Negative prompt: logo, title, text, caption, solo:1.5, identical outfits, split panels, white background, plain background, black background, simple background, gradient background, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, extra fingers, mutation, bad anatomy, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, (nipples on navel:1.34), (nipples on stomach:1.34), (3 or more nipples:1.34), (censored:1.22), (censor bar:1.22), (ugly:1.48), (duplicate:1.34), (morbid:1.22), (mutilated:1.22), (tranny:1.34), (trans:1.34), (trannsexual:1.34), (hermaphrodite:1.1), extra fingers, mutated hands, (poorly drawn hands:1.22), (poorly drawn face:1.22), (mutation:1.34), (deformed:1.34), (ugly:1.22), blurry, (bad anatomy:1.22), (bad proportions:1.34), (extra limbs:1.22), cloned face, (disfigured:1.34), (more than 2 nipples:1.34), extra limbs, (bad anatomy:1.1), gross proportions, (malformed limbs:1.1), (missing arms:1.22), (missing legs:1.22), (extra arms:1.34), (extra legs:1.34), mutated hands, (fused fingers:1.1), (too many fingers:1.1), (long neck:1.34), (out of frame:1.1), (more than one person in focus:1.1), (bad anatomy:1.1), (more than two arm per body:1.48), (more than two leg per body:1.48), (more than five fingers on one hand:1.48), bad detailed background, unclear architectural outline, non-linear background, over one person in focus, (over four finger:1.05), (fingers excluding thumb:1.98), fused anatomy, (bad anatomybody:1.1), (bad anatomyhand:1.1), (bad anatomyfinger:1.1), (four fingers excluding thumbfingers:1.98), (bad anatomyarms:1.1), (over two armsbody:1.1), (bad anatomyleg:1.1), (over two legsbody:1.1), (bad anatomyarm:1.1), (bad detailfinger:1.05), (bad anatomyfingers:1.1), (multifulfingers:1.1), (bad anatomyfinger:1.1), (bad anatomyfingers:1.1), (fusedfingers:1.1), (over four fingerfingers excluding thumb:1.98), (multifulhands:1.1), (multifularms:1.1), (multifullegs:1.1), ((frame))
>
>Steps: 50, Sampler: DDIM, CFG scale: 12, Seed: 1023642063, Size: 768x512, Model hash: ab05b088cd, Model: 20_Vox-mix2anu, Denoising strength: 0.53, ENSD: -1, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B

**License**

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.

The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

**[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)**
6da6c8ee49217415ec3774972ab7ff89
apache-2.0
['generated_from_trainer']
false
# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Accuracy: 0.923
- F1: 0.9230
e4b09526d7d111d8b8bf34df25875bdd
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.8643 | 1.0 | 250 | 0.3395 | 0.901 | 0.8969 |
| 0.2615 | 2.0 | 500 | 0.2251 | 0.923 | 0.9230 |
377cc1866666920cf46369578c73bd45
mit
['generated_from_trainer']
false
# xlm-roberta-base-misogyny-sexism-fr-indomain-trans

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9813
- Accuracy: 0.8708
- F1: 0.0
- Precision: 0.0
- Recall: 0.0
- Mae: 0.1292
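The combination of 0.0 F1/precision/recall with ~0.87 accuracy is the signature of a classifier that always predicts the majority (negative) class: accuracy then equals the majority-class share, and the MAE on 0/1 labels equals 1 minus accuracy. A minimal sketch with made-up labels (the real evaluation data is not shown here):

```python
# Illustrative only: synthetic labels, not the actual evaluation set.
labels = [0] * 87 + [1] * 13          # ~87% majority class, like the eval split
preds = [0] * len(labels)             # degenerate model: always predict 0

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
mae = sum(abs(p - y) for p, y in zip(preds, labels)) / len(labels)

tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(accuracy, mae, precision, recall, f1)  # 0.87 0.13 0.0 0.0 0.0
```

The positive class is never predicted, so precision, recall, and F1 all collapse to zero even though accuracy looks high.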
ca7045111b1e49fb6dc2c907796652e4
mit
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.3606 | 1.0 | 2297 | 0.8082 | 0.8710 | 0.0 | 0.0 | 0.0 | 0.1290 |
| 0.3169 | 2.0 | 4594 | 0.8868 | 0.8702 | 0.0 | 0.0 | 0.0 | 0.1298 |
| 0.2708 | 3.0 | 6891 | 0.9082 | 0.8710 | 0.0 | 0.0 | 0.0 | 0.1290 |
| 0.2337 | 4.0 | 9188 | 0.9813 | 0.8708 | 0.0 | 0.0 | 0.0 | 0.1292 |
d3aef9ef2f2495e92f980ed8d2f9eba0
mit
[]
false
# obama_self_2 on Stable Diffusion

This is the `<Obama>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<Obama> 0](https://huggingface.co/sd-concepts-library/obama-self-2/resolve/main/concept_images/1.jpg)
![<Obama> 1](https://huggingface.co/sd-concepts-library/obama-self-2/resolve/main/concept_images/2.jpg)
![<Obama> 2](https://huggingface.co/sd-concepts-library/obama-self-2/resolve/main/concept_images/3.jpg)
![<Obama> 3](https://huggingface.co/sd-concepts-library/obama-self-2/resolve/main/concept_images/0.jpg)
8ba69381026fcad0320fdc3153865cc0
apache-2.0
['translation']
false
# opus-mt-sv-en

* source languages: sv
* target languages: en
* OPUS readme: [sv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-en/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-en/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-en/opus-2020-02-26.eval.txt)
954511eea51ce49882d7224ceea181f0
apache-2.0
['generated_from_trainer']
false
# my_awesome_eli5_clm-model

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9043
dd7f4d5329a7bbb42459cc2bbe7d5e27
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| No log | 1.0 | 25 | 3.9350 |
| No log | 2.0 | 50 | 3.9107 |
| No log | 3.0 | 75 | 3.9043 |
4813839bf4eefbb617a68b3c1d4aeacd
mit
[]
false
# youtooz candy on Stable Diffusion

This is the `<youtooz-candy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<youtooz-candy> 0](https://huggingface.co/sd-concepts-library/youtooz-candy/resolve/main/concept_images/2.jpeg)
![<youtooz-candy> 1](https://huggingface.co/sd-concepts-library/youtooz-candy/resolve/main/concept_images/0.jpeg)
![<youtooz-candy> 2](https://huggingface.co/sd-concepts-library/youtooz-candy/resolve/main/concept_images/1.jpeg)
![<youtooz-candy> 3](https://huggingface.co/sd-concepts-library/youtooz-candy/resolve/main/concept_images/3.jpeg)
![<youtooz-candy> 4](https://huggingface.co/sd-concepts-library/youtooz-candy/resolve/main/concept_images/4.jpeg)
![<youtooz-candy> 5](https://huggingface.co/sd-concepts-library/youtooz-candy/resolve/main/concept_images/5.jpeg)
a77ca9d79e85ded60c493830875fe8ec
apache-2.0
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
## Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- gradient_accumulation_steps: 4
- optimizer: AdamW with betas=(0.9, 0.999), weight_decay=0.01 and epsilon=1e-08
- lr_scheduler: constant
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
df272cf1a72f539b1a94c7631f12b2ac
apache-2.0
['generated_from_trainer']
false
# distilbert-base-uncased_fold_6_ternary

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6625
- F1: 0.7588
8a6dc33aa3443c40fb14eac861ea9158
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 292 | 0.5117 | 0.7306 |
| 0.5701 | 2.0 | 584 | 0.5273 | 0.7296 |
| 0.5701 | 3.0 | 876 | 0.6037 | 0.7415 |
| 0.2468 | 4.0 | 1168 | 0.7132 | 0.7318 |
| 0.2468 | 5.0 | 1460 | 0.8980 | 0.7504 |
| 0.12 | 6.0 | 1752 | 1.0343 | 0.7369 |
| 0.0486 | 7.0 | 2044 | 1.1860 | 0.7333 |
| 0.0486 | 8.0 | 2336 | 1.3348 | 0.7437 |
| 0.019 | 9.0 | 2628 | 1.3040 | 0.7561 |
| 0.019 | 10.0 | 2920 | 1.4649 | 0.7293 |
| 0.0152 | 11.0 | 3212 | 1.4870 | 0.7431 |
| 0.0078 | 12.0 | 3504 | 1.5668 | 0.7455 |
| 0.0078 | 13.0 | 3796 | 1.5280 | 0.7378 |
| 0.0091 | 14.0 | 4088 | 1.5672 | 0.7410 |
| 0.0091 | 15.0 | 4380 | 1.5948 | 0.7491 |
| 0.0052 | 16.0 | 4672 | 1.6625 | 0.7588 |
| 0.0052 | 17.0 | 4964 | 1.6544 | 0.7411 |
| 0.0048 | 18.0 | 5256 | 1.7124 | 0.7425 |
| 0.0024 | 19.0 | 5548 | 1.7211 | 0.7477 |
| 0.0024 | 20.0 | 5840 | 1.8216 | 0.7373 |
| 0.001 | 21.0 | 6132 | 1.8325 | 0.7361 |
| 0.001 | 22.0 | 6424 | 1.8089 | 0.7498 |
| 0.0015 | 23.0 | 6716 | 1.8026 | 0.7506 |
| 0.0005 | 24.0 | 7008 | 1.8026 | 0.7464 |
| 0.0005 | 25.0 | 7300 | 1.8043 | 0.7464 |
380757bb48c353dd465db0fca7298b87
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
## Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 50
- mixed_precision_training: Native AMP
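The linear schedule with warmup listed above has a simple shape: the learning rate ramps linearly from 0 to `learning_rate` over `lr_scheduler_warmup_steps`, then decays linearly to 0. A pure-Python sketch (the helper name is ours, for illustration only; note that with only 50 training steps and 500 warmup steps, this run ends mid-warmup):

```python
# Sketch of a linear LR schedule with warmup, using the values above:
# base LR 1e-05, 500 warmup steps, 50 total training steps.
def linear_schedule_with_warmup(step, base_lr=1e-5, warmup_steps=500,
                                training_steps=50):
    if step < warmup_steps:
        # ramp linearly from 0 up to base_lr over the warmup period
        return base_lr * step / max(1, warmup_steps)
    # afterwards decay linearly to 0 at the end of training
    return base_lr * max(0.0, (training_steps - step)
                         / max(1, training_steps - warmup_steps))

print(linear_schedule_with_warmup(0))    # 0.0
print(linear_schedule_with_warmup(25))   # mid-run: base_lr * 25/500 = 5e-07
print(linear_schedule_with_warmup(50))   # end of run: base_lr * 50/500 = 1e-06
```

Because `training_steps` (50) is smaller than the warmup length (500), the effective learning rate never reaches the configured 1e-05 during this run.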
dd50778c8280013d5f8a53d35ea42bd8
apache-2.0
['generated_from_trainer']
false
# distilbert-base-uncased__subj__train-8-8

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3160
- Accuracy: 0.8735
99ee26781cfbd769cd8d86b157ece1f3
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.7187 | 1.0 | 3 | 0.6776 | 1.0 |
| 0.684 | 2.0 | 6 | 0.6608 | 1.0 |
| 0.6532 | 3.0 | 9 | 0.6364 | 1.0 |
| 0.5996 | 4.0 | 12 | 0.6119 | 1.0 |
| 0.5242 | 5.0 | 15 | 0.5806 | 1.0 |
| 0.4612 | 6.0 | 18 | 0.5320 | 1.0 |
| 0.4192 | 7.0 | 21 | 0.4714 | 1.0 |
| 0.3274 | 8.0 | 24 | 0.4071 | 1.0 |
| 0.2871 | 9.0 | 27 | 0.3378 | 1.0 |
| 0.2082 | 10.0 | 30 | 0.2822 | 1.0 |
| 0.1692 | 11.0 | 33 | 0.2271 | 1.0 |
| 0.1242 | 12.0 | 36 | 0.1793 | 1.0 |
| 0.0977 | 13.0 | 39 | 0.1417 | 1.0 |
| 0.0776 | 14.0 | 42 | 0.1117 | 1.0 |
| 0.0631 | 15.0 | 45 | 0.0894 | 1.0 |
| 0.0453 | 16.0 | 48 | 0.0733 | 1.0 |
| 0.0399 | 17.0 | 51 | 0.0617 | 1.0 |
| 0.0333 | 18.0 | 54 | 0.0528 | 1.0 |
| 0.0266 | 19.0 | 57 | 0.0454 | 1.0 |
| 0.0234 | 20.0 | 60 | 0.0393 | 1.0 |
| 0.0223 | 21.0 | 63 | 0.0345 | 1.0 |
| 0.0195 | 22.0 | 66 | 0.0309 | 1.0 |
| 0.0161 | 23.0 | 69 | 0.0281 | 1.0 |
| 0.0167 | 24.0 | 72 | 0.0260 | 1.0 |
| 0.0163 | 25.0 | 75 | 0.0242 | 1.0 |
| 0.0134 | 26.0 | 78 | 0.0227 | 1.0 |
| 0.0128 | 27.0 | 81 | 0.0214 | 1.0 |
| 0.0101 | 28.0 | 84 | 0.0204 | 1.0 |
| 0.0109 | 29.0 | 87 | 0.0194 | 1.0 |
| 0.0112 | 30.0 | 90 | 0.0186 | 1.0 |
| 0.0108 | 31.0 | 93 | 0.0179 | 1.0 |
| 0.011 | 32.0 | 96 | 0.0174 | 1.0 |
| 0.0099 | 33.0 | 99 | 0.0169 | 1.0 |
| 0.0083 | 34.0 | 102 | 0.0164 | 1.0 |
| 0.0096 | 35.0 | 105 | 0.0160 | 1.0 |
| 0.01 | 36.0 | 108 | 0.0156 | 1.0 |
| 0.0084 | 37.0 | 111 | 0.0152 | 1.0 |
| 0.0089 | 38.0 | 114 | 0.0149 | 1.0 |
| 0.0073 | 39.0 | 117 | 0.0146 | 1.0 |
| 0.0082 | 40.0 | 120 | 0.0143 | 1.0 |
| 0.008 | 41.0 | 123 | 0.0141 | 1.0 |
| 0.0093 | 42.0 | 126 | 0.0139 | 1.0 |
| 0.0078 | 43.0 | 129 | 0.0138 | 1.0 |
| 0.0086 | 44.0 | 132 | 0.0136 | 1.0 |
| 0.009 | 45.0 | 135 | 0.0135 | 1.0 |
| 0.0072 | 46.0 | 138 | 0.0134 | 1.0 |
| 0.0075 | 47.0 | 141 | 0.0133 | 1.0 |
| 0.0082 | 48.0 | 144 | 0.0133 | 1.0 |
| 0.0068 | 49.0 | 147 | 0.0132 | 1.0 |
| 0.0074 | 50.0 | 150 | 0.0132 | 1.0 |
29a21f5c6b78b4a2bbdde611ca29c152
apache-2.0
['generated_from_trainer']
false
# all-roberta-large-v1-meta-7-16-5

This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4797
- Accuracy: 0.28
11766b1414c2a26389919263d669961c
apache-2.0
['automatic-speech-recognition', 'es']
false
# exp_w2v2r_es_xls-r_age_teens-2_sixties-8_s786

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
3ecb53a290d042681bf2d54db4f918ad
mit
['generated_from_trainer']
false
# camembert-base-finetuned-paraphrase

This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the pawsx dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2708
- Accuracy: 0.9085
- F1: 0.9089
8166c519c20e41039b11700ec3be499b
mit
['generated_from_trainer']
false
## Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
b66cc38c0c9542137d92224d0e5232e7
mit
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.3918 | 1.0 | 772 | 0.3211 | 0.869 | 0.8696 |
| 0.2103 | 2.0 | 1544 | 0.2448 | 0.9075 | 0.9077 |
| 0.1622 | 3.0 | 2316 | 0.2577 | 0.9055 | 0.9059 |
| 0.1344 | 4.0 | 3088 | 0.2708 | 0.9085 | 0.9089 |
9ddaba3aeeb37cd18007c9a491d6ad54
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
# Jak's Creepy Critter Pack v2.0-768px!

Higher-resolution 768px images were used for training, with fine tuning that now allows better control of output images. Compared to v1.0, which creates messy blob monsters (which is still fun), this version allows finer control to unleash your creativity! Enjoy!

Tips:
- use "food_crit" to start your prompt
- add "3d, ceramic, octane render" to add a shiny 3D appearance
- go wild

Sample pictures of this concept using the 768px model:

![0](https://huggingface.co/plasmo/colorjizz-768px/resolve/main/sample_images/00323.jpg)
![0](https://huggingface.co/plasmo/colorjizz-768px/resolve/main/sample_images/00325.jpg)
![0](https://huggingface.co/plasmo/colorjizz-768px/resolve/main/sample_images/00326.jpg)
![0](https://huggingface.co/plasmo/colorjizz-768px/resolve/main/sample_images/00327.jpg)
![0](https://huggingface.co/plasmo/colorjizz-768px/resolve/main/sample_images/00328.jpg)
![0](https://huggingface.co/plasmo/colorjizz-768px/resolve/main/sample_images/00329.jpg)
![0](https://huggingface.co/plasmo/colorjizz-768px/resolve/main/sample_images/00330.jpg)
67d2fcc3d29572f6931c6c23a68f6e18
apache-2.0
['generated_from_trainer']
false
# bert-base-cased-finetuned-imdb

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3367
- Accuracy: 0.625
858dcd318c6ee5e8e515d8d9a759306f
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 1.687 | 1.0 | 20 | 1.4339 | 0.625 |
| 1.4117 | 2.0 | 40 | 1.3367 | 0.625 |
91ff10a66242f4cae73b698186ba9d33
apache-2.0
['generated_from_keras_callback']
false
# example_workflow_model

This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4118
- Train Sparse Categorical Accuracy: 0.8765
- Validation Loss: 0.5309
- Validation Sparse Categorical Accuracy: 0.8448
- Epoch: 1
c694764985ee9a9bcea0f6be8e190bcc
apache-2.0
['generated_from_keras_callback']
false
## Training results

| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:---:|:---:|:---:|:---:|:---:|
| 0.7242 | 0.7814 | 0.5739 | 0.8254 | 0 |
| 0.4118 | 0.8765 | 0.5309 | 0.8448 | 1 |
a07f90f43bb41f597b906aa3d985be6a
apache-2.0
['generated_from_trainer']
false
# bert-large-uncased_cls_subj

This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1860
- Accuracy: 0.9675
9f537c18bd616f0f3acf344d682de7a6
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.2427 | 1.0 | 500 | 0.1733 | 0.9585 |
| 0.1349 | 2.0 | 1000 | 0.1377 | 0.958 |
| 0.0487 | 3.0 | 1500 | 0.1701 | 0.9635 |
| 0.0184 | 4.0 | 2000 | 0.1906 | 0.9675 |
| 0.0144 | 5.0 | 2500 | 0.1860 | 0.9675 |
7fbef32b4c185f6eb1b11e67eea343cb
apache-2.0
['text-classfication', 'int8', 'Intelยฎ Neural Compressor', 'PostTrainingDynamic']
false
# Post-training dynamic quantization

This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).

The original fp32 model comes from the fine-tuned model [bart-large-mrpc](https://huggingface.co/Intel/bart-large-mrpc).
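For intuition, "dynamic" post-training quantization converts the weights of supported layers to INT8 ahead of time while quantizing activations on the fly at inference. A generic plain-PyTorch sketch on a toy stand-in model (the card's actual pipeline uses Intel Neural Compressor via optimum-intel, and the real model is bart-large-mrpc, not this one):

```python
import torch
import torch.nn as nn

# Toy stand-in model; any nn.Linear layers are eligible for dynamic INT8.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# Convert Linear weights to INT8; activations are quantized dynamically
# at inference time, so no calibration dataset is needed.
int8_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

out = int8_model(torch.randn(1, 16))
print(out.shape)  # torch.Size([1, 2])
```

The trade-off versus static quantization is slightly higher inference overhead (per-batch activation scaling) in exchange for skipping the calibration step entirely.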
ff3041288c3a140ddaa6904e5445da5d
apache-2.0
['text-classfication', 'int8', 'Intelยฎ Neural Compressor', 'PostTrainingDynamic']
false
## Load with optimum:

```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification

int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
    'Intel/bart-large-mrpc-int8-dynamic',
)
```
e91601e030d119a0b8fc09ee4f2c505b
mit
['spacy', 'token-classification']
false
| Feature | Description |
| --- | --- |
| **Name** | `it_tei2go` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.4,<3.3.0` |
| **Default Pipeline** | `ner` |
| **Components** | `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | MIT |
| **Author** | [n/a]() |
63486c449d92f747644a061d9a52def3
apache-2.0
['summarization', 'generated_from_trainer']
false
# mt5-small-test-amazon

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9515
- Rouge1: 30.3066
- Rouge2: 3.3019
- Rougel: 30.1887
- Rougelsum: 30.0314
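The ROUGE-1 scores above measure unigram overlap between generated and reference summaries. A toy re-implementation for intuition (the reported numbers come from standard ROUGE tooling, not this sketch; the helper name is ours):

```python
from collections import Counter

def rouge1(candidate: str, reference: str):
    """Unigram-overlap precision/recall/F1 between two texts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # clipped overlap: each unigram counts at most min(cand, ref) times
    overlap = sum((cand & ref).values())
    precision = overlap / max(1, sum(cand.values()))
    recall = overlap / max(1, sum(ref.values()))
    f1 = (2 * precision * recall / (precision + recall)) if overlap else 0.0
    return precision, recall, f1

p, r, f = rouge1("the cat sat", "the cat sat on the mat")
print(p, r, f)  # 1.0 0.5 0.6666666666666666
```

A short candidate that is a subset of the reference scores perfect precision but low recall, which is why summarization cards usually report the F1 variant.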
ff5d806ff67e996d7dd9cc6aa1629a20
apache-2.0
['summarization', 'generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 10.0147 | 1.0 | 1004 | 2.9904 | 7.3703 | 0.2358 | 7.3703 | 7.4292 |
| 3.4892 | 2.0 | 2008 | 2.4061 | 23.4178 | 2.4764 | 23.2901 | 23.3097 |
| 2.724 | 3.0 | 3012 | 2.1630 | 26.6706 | 2.8302 | 26.6509 | 26.5723 |
| 2.4395 | 4.0 | 4016 | 2.0815 | 26.7296 | 2.9481 | 26.6313 | 26.533 |
| 2.2881 | 5.0 | 5020 | 2.0048 | 30.1887 | 3.3019 | 30.0708 | 29.9135 |
| 2.1946 | 6.0 | 6024 | 1.9712 | 29.4811 | 2.9481 | 29.4025 | 29.3042 |
| 2.1458 | 7.0 | 7028 | 1.9545 | 29.8153 | 3.3019 | 29.717 | 29.5204 |
| 2.1069 | 8.0 | 8032 | 1.9515 | 30.3066 | 3.3019 | 30.1887 | 30.0314 |
a441d7f82a7711c03210540b997816e1
apache-2.0
['generated_from_trainer']
false
# Tagged_Uni_50v1_NER_Model_3Epochs_AUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v1_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5851
- Precision: 0.1466
- Recall: 0.0256
- F1: 0.0437
- Accuracy: 0.7941
f766c3ce35a8232733e088f2bc62b625
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 24 | 0.6704 | 0.0 | 0.0 | 0.0 | 0.7775 |
| No log | 2.0 | 48 | 0.5824 | 0.1479 | 0.0154 | 0.0279 | 0.7895 |
| No log | 3.0 | 72 | 0.5851 | 0.1466 | 0.0256 | 0.0437 | 0.7941 |
8760eeaf22901af9896ccad8e8678225
other
['vision', 'image-segmentation', 'generated_from_trainer']
false
# segformer-b0-finetuned-segments-sidewalk-2

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6306
- Mean Iou: 0.1027
- Mean Accuracy: 0.1574
- Overall Accuracy: 0.6552
- Per Category Iou: [0.0, 0.40932069741697885, 0.6666047315185674, 0.0015527279135260222, 0.000557997451181134, 0.004734463745284192, 0.0, 0.00024311836753505628, 0.0, 0.0, 0.5448608416905849, 0.005644290758731727, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4689142754019952, 0.0, 0.00039031599380590526, 0.010175747938072128, 0.0, 0.0, 0.0, 0.0008842445754996234, 0.0, 0.0, 0.6689560919488968, 0.10178439680971307, 0.7089823411348399, 0.0, 0.0, 0.0, 0.0]
- Per Category Accuracy: [nan, 0.6798160901382586, 0.8601972223213155, 0.001563543652833044, 0.0005586801134972854, 0.004789605465686377, nan, 0.00024743825184288725, 0.0, 0.0, 0.8407289173400536, 0.012641370267169317, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7574833533176979, 0.0, 0.00039110009377117975, 0.013959849889225483, 0.0, nan, 0.0, 0.0009309900323061499, 0.0, 0.0, 0.9337304207449932, 0.12865528611713883, 0.8019892660736478, 0.0, 0.0, 0.0, 0.0]
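The `nan` entries in the per-category vectors mark classes that do not occur in the ground truth; the aggregated "Mean Accuracy" / "Mean Iou" values average only over the non-`nan` categories. A minimal sketch of that aggregation, using a short made-up vector rather than the 35-entry one reported above:

```python
import numpy as np

# Illustrative per-category accuracies: nan = class absent from ground truth.
per_category = np.array([0.68, np.nan, 0.86, 0.0, np.nan, 0.75])

# nanmean averages only the 4 observed categories: (0.68+0.86+0.0+0.75)/4
mean_over_present = np.nanmean(per_category)
print(round(float(mean_over_present), 4))  # 0.5725
```

Note that zero-valued categories still drag the mean down; only `nan` (unobserved) categories are excluded, which is why the mean metrics here are much lower than the overall accuracy.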
b197c3d73cd8dc1e0d199a4f460ace58
other
['vision', 'image-segmentation', 'generated_from_trainer']
false
## Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
d6a426364496904f69a46637e886f461
other
['vision', 'image-segmentation', 'generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 2.8872 | 0.5 | 20 | 3.1018 | 0.0995 | 0.1523 | 0.6415 | [0.0, 0.3982872425364927, 0.6582689116809847, 0.0, 0.00044314555867048773, 0.019651883205738383, 0.0, 0.0006528617866575068, 0.0, 0.0, 0.4861235900758522, 0.003961411405960721, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4437814560942763, 0.0, 1.1600860783870164e-06, 0.019965880301918204, 0.0, 0.0, 0.0, 0.0074026601990928, 0.0, 0.0, 0.666238976894996, 0.13012673492067245, 0.6486315429686865, 0.0, 2.0656177918545805e-05, 0.0001944735843164534, 0.0] | [nan, 0.6263716501798601, 0.8841421548179447, 0.0, 0.00044410334445801165, 0.020659891877382746, nan, 0.0006731258604635891, 0.0, 0.0, 0.8403154629142631, 0.017886412063596133, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6324385775164868, 0.0, 1.160534402881839e-06, 0.06036834410935781, 0.0, nan, 0.0, 0.010232933175604348, 0.0, 0.0, 0.9320173945724101, 0.15828224740687694, 0.6884182010535304, 0.0, 2.3169780427714147e-05, 0.00019505205451704924, 0.0] |
| 2.6167 | 1.0 | 40 | 2.6306 | 0.1027 | 0.1574 | 0.6552 | [0.0, 0.40932069741697885, 0.6666047315185674, 0.0015527279135260222, 0.000557997451181134, 0.004734463745284192, 0.0, 0.00024311836753505628, 0.0, 0.0, 0.5448608416905849, 0.005644290758731727, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4689142754019952, 0.0, 0.00039031599380590526, 0.010175747938072128, 0.0, 0.0, 0.0, 0.0008842445754996234, 0.0, 0.0, 0.6689560919488968, 0.10178439680971307, 0.7089823411348399, 0.0, 0.0, 0.0, 0.0] | [nan, 0.6798160901382586, 0.8601972223213155, 0.001563543652833044, 0.0005586801134972854, 0.004789605465686377, nan, 0.00024743825184288725, 0.0, 0.0, 0.8407289173400536, 0.012641370267169317, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7574833533176979, 0.0, 0.00039110009377117975, 0.013959849889225483, 0.0, nan, 0.0, 0.0009309900323061499, 0.0, 0.0, 0.9337304207449932, 0.12865528611713883, 0.8019892660736478, 0.0, 0.0, 0.0, 0.0] |
d496bfce511c51310ed688c5bc5d6c94
apache-2.0
['generated_from_trainer']
false
# Vin11-P3

This model is a fine-tuned version of [HuyenNguyen/Vin9-P3](https://huggingface.co/HuyenNguyen/Vin9-P3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2151
- Wer: 11.6220
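The WER metric reported above is the word-level edit distance (substitutions, insertions, deletions) between hypothesis and reference, divided by the reference length. A toy re-implementation for intuition (the card's number comes from standard ASR evaluation tooling, not this sketch):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("toi di hoc", "toi di choi"))  # 1 substitution over 3 words ≈ 0.333
```

A WER of 11.62 in the table is expressed as a percentage, i.e. roughly one word error per nine reference words.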
de440983dddde2aa51a9786dbd967074
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 0.1595 | 0.15 | 300 | 0.2195 | 11.2807 |
| 0.1691 | 0.31 | 600 | 0.2151 | 11.6220 |
7332116a0415d4929ddf31925517dae2
mit
['generated_from_trainer']
false
# aces-roberta-base-reduced

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3101
- Precision: 0.9036
- Recall: 0.9038
- F1: 0.9029
- Accuracy: 0.9038
- F1 Who: 0.8727
- F1 What: 0.8295
- F1 Where: 0.8468
- F1 How: 0.9414
827eea1b55416d52e2cea3c3927ab7e3
mit
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | F1 Who | F1 What | F1 Where | F1 How |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.6299 | 1.0 | 48 | 0.3723 | 0.8943 | 0.8946 | 0.8939 | 0.8946 | 0.8846 | 0.8179 | 0.8559 | 0.9208 |
| 0.3067 | 2.0 | 96 | 0.3481 | 0.8911 | 0.8803 | 0.8820 | 0.8803 | 0.8649 | 0.8102 | 0.7766 | 0.9365 |
| 0.2054 | 3.0 | 144 | 0.3018 | 0.9129 | 0.9121 | 0.9117 | 0.9121 | 0.8649 | 0.8571 | 0.8720 | 0.9430 |
| 0.2196 | 4.0 | 192 | 0.3061 | 0.9108 | 0.9105 | 0.9098 | 0.9105 | 0.8649 | 0.8385 | 0.8610 | 0.9508 |
| 0.1505 | 5.0 | 240 | 0.3101 | 0.9036 | 0.9038 | 0.9029 | 0.9038 | 0.8727 | 0.8295 | 0.8468 | 0.9414 |
a748aeca3b626d41c545c7947b361e83
mit
['text-classification']
false
# Multi2ConvAI-Logistics: finetuned Bert for Croatian

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:

- domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: Croatian (hr)
- model type: finetuned Bert
beb94420bbc6ac548a027db1085a5525
mit
['text-classification']
false
Run with Huggingface Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-hr-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-hr-bert") ```
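The classifier returns intent logits that still need to be turned into a label. A minimal post-processing sketch — the label names and logit values below are hypothetical stand-ins (the real `id2label` mapping ships in the model's `config.json`):

```python
import math

def softmax(logits):
    """Convert raw classifier logits to probabilities."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical intent labels -- check the model's config.json for the real ones.
labels = ["confirm", "deny", "scan_package", "report_damage"]

# Dummy logits standing in for model(**tokenizer(text, return_tensors="pt")).logits
logits = [0.2, -1.3, 3.1, 0.4]
probs = softmax(logits)
pred = labels[max(range(len(probs)), key=probs.__getitem__)]
print(pred)  # highest-scoring intent: "scan_package"
```

The same argmax-over-softmax step applies regardless of which fine-tuned Multi2ConvAI classifier produced the logits.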
b17e6bb9518ac95b2b2dbe6662188851
cc-by-4.0
['questions and answers generation']
false
Model Card of `lmqg/mbart-large-cc25-itquad-qag` This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for the question & answer pair generation task on the [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
a1f9e61c20d704e0456c5f4866286560
cc-by-4.0
['questions and answers generation']
false
Overview - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) - **Language:** it - **Training data:** [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
9bb1cc0aec7cceab1b0f7b090244baea
cc-by-4.0
['questions and answers generation']
false
Model prediction - With `lmqg` ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="it", model="lmqg/mbart-large-cc25-itquad-qag") # model prediction question_answer_pairs = model.generate_qa("Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-itquad-qag") output = pipe("Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.") ```
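Unlike the `lmqg` wrapper, the raw `transformers` pipeline returns a single generated string that still has to be split into question–answer pairs. A hedged parsing sketch, assuming the `question: ..., answer: ...` format with `" | "` between pairs that lmqg QAG models typically emit — verify against the actual model output before relying on it:

```python
def parse_qa_pairs(text):
    """Split raw model output into (question, answer) tuples.

    Assumes 'question: ..., answer: ...' chunks joined by ' | ' -- an
    assumption about the output format, not a documented guarantee.
    """
    pairs = []
    for chunk in text.split(" | "):
        if "question: " in chunk and ", answer: " in chunk:
            q, a = chunk.split(", answer: ", 1)
            pairs.append((q.replace("question: ", "", 1).strip(), a.strip()))
    return pairs

# Hypothetical output string illustrating the assumed format
raw = ("question: Quando l'OPEC ha tardato ad adeguare i prezzi?, answer: dopo il 1971"
       " | question: Cosa riflettevano i prezzi?, answer: tale deprezzamento")
print(parse_qa_pairs(raw))
```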
97e254f6ce238c4a425aefa1fe80e199
cc-by-4.0
['questions and answers generation']
false
Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_itquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-------------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 72.96 | default | [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) | | QAAlignedF1Score (MoverScore) | 51.25 | default | [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) | | QAAlignedPrecision (BERTScore) | 74.2 | default | [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) | | QAAlignedPrecision (MoverScore) | 52.44 | default | [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) | | QAAlignedRecall (BERTScore) | 71.83 | default | [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) | | QAAlignedRecall (MoverScore) | 50.21 | default | [lmqg/qag_itquad](https://huggingface.co/datasets/lmqg/qag_itquad) |
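As a sanity check, each QAAlignedF1Score above is the harmonic mean of the matching QAAlignedPrecision and QAAlignedRecall rows; the small deviations come from rounding in the table:

```python
def harmonic_f1(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# BERTScore-based rows: precision 74.2, recall 71.83 -> reported F1 72.96
print(round(harmonic_f1(74.2, 71.83), 2))
# MoverScore-based rows: precision 52.44, recall 50.21 -> reported F1 51.25
print(round(harmonic_f1(52.44, 50.21), 2))
```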
6ef0d8fb7537c81b168282844f7cdbfb
cc-by-4.0
['questions and answers generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_itquad - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: None - model: facebook/mbart-large-cc25 - max_length: 512 - max_length_output: 256 - epoch: 14 - batch: 8 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 16 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qag/raw/main/trainer_config.json).
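With batch 8 and gradient_accumulation_steps 16, one optimizer step sees an effective batch of 8 × 16 = 128. A toy scalar sketch of the accumulation loop — real training accumulates gradient tensors across micro-batches, not a single number:

```python
accum_steps = 16   # gradient_accumulation_steps from the config above
lr = 0.0001        # lr from the config above

w = 0.0            # toy parameter; toy loss is (w - 3)^2
grad_buffer = 0.0
for micro_step in range(1, accum_steps + 1):
    grad = 2 * (w - 3)                    # gradient of the toy loss at current w
    grad_buffer += grad / accum_steps     # average gradients over micro-batches
    if micro_step % accum_steps == 0:
        w -= lr * grad_buffer             # one optimizer step per 16 micro-batches
        grad_buffer = 0.0
print(w)
```

Because the parameter is only updated after the buffer is full, the result is identical to one step on a batch 16× larger, at a fraction of the memory cost.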
d76d6557c98d57f48d7072a984ba5b71
apache-2.0
['automatic-speech-recognition', 'fi', 'finnish', 'generated_from_trainer', 'hf-asr-leaderboard']
false
Wav2vec2-large-uralic-voxpopuli-v2 for Finnish ASR This acoustic model is a fine-tuned version of [facebook/wav2vec2-large-uralic-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-large-uralic-voxpopuli-v2) for Finnish ASR. The model has been fine-tuned with 276.7 hours of Finnish transcribed speech data. Wav2Vec2 was introduced in [this paper](https://arxiv.org/abs/2006.11477) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec).
3a1ef7a2930f3a9383ff41f1c3d7c158
apache-2.0
['automatic-speech-recognition', 'fi', 'finnish', 'generated_from_trainer', 'hf-asr-leaderboard']
false
Model description [Wav2vec2-large-uralic-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-large-uralic-voxpopuli-v2) is Facebook AI's pretrained model for speech in the Uralic language family (Finnish, Estonian, Hungarian). It is pretrained on 42.5k hours of unlabeled Finnish, Estonian and Hungarian speech from the [VoxPopuli V2 dataset](https://github.com/facebookresearch/voxpopuli/) with the wav2vec 2.0 objective. This model is a fine-tuned version of the pretrained model for Finnish ASR.
7ed498843f3ed60181c305e9e0752fad
apache-2.0
['automatic-speech-recognition', 'fi', 'finnish', 'generated_from_trainer', 'hf-asr-leaderboard']
false
How to use Check the [run-finnish-asr-models.ipynb](https://huggingface.co/Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
d88d3176f0497ec7b3263201791a63bb
apache-2.0
['automatic-speech-recognition', 'fi', 'finnish', 'generated_from_trainer', 'hf-asr-leaderboard']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-04 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP The pretrained `facebook/wav2vec2-large-uralic-voxpopuli-v2` model was initialized with following hyperparameters: - attention_dropout: 0.094 - hidden_dropout: 0.047 - feat_proj_dropout: 0.04 - mask_time_prob: 0.082 - layerdrop: 0.041 - activation_dropout: 0.055 - ctc_loss_reduction: "mean"
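A sketch of the linear scheduler with the 500 warmup steps listed above; `peak_lr` matches the 1e-04 learning rate, while `total_steps=30000` is an assumed schedule length approximating this run (10 epochs), not a value stated in the config:

```python
def linear_schedule_lr(step, peak_lr=1e-04, warmup_steps=500, total_steps=30000):
    """Linear warmup from 0 to peak_lr, then linear decay back to 0.

    total_steps is an assumption for illustration; the trainer derives it
    from dataset size, batch size, and num_epochs.
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(250))    # halfway through warmup
print(linear_schedule_lr(500))    # peak learning rate
print(linear_schedule_lr(30000))  # end of training
```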
fd2e0a19ab0d78dc1a3fed28a480aa8d
apache-2.0
['automatic-speech-recognition', 'fi', 'finnish', 'generated_from_trainer', 'hf-asr-leaderboard']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.9421 | 0.17 | 500 | 0.8633 | 0.8870 | | 0.572 | 0.33 | 1000 | 0.1650 | 0.1829 | | 0.5149 | 0.5 | 1500 | 0.1416 | 0.1711 | | 0.4884 | 0.66 | 2000 | 0.1265 | 0.1605 | | 0.4729 | 0.83 | 2500 | 0.1205 | 0.1485 | | 0.4723 | 1.0 | 3000 | 0.1108 | 0.1403 | | 0.443 | 1.16 | 3500 | 0.1175 | 0.1439 | | 0.4378 | 1.33 | 4000 | 0.1083 | 0.1482 | | 0.4313 | 1.49 | 4500 | 0.1110 | 0.1398 | | 0.4182 | 1.66 | 5000 | 0.1024 | 0.1418 | | 0.3884 | 1.83 | 5500 | 0.1032 | 0.1395 | | 0.4034 | 1.99 | 6000 | 0.0985 | 0.1318 | | 0.3735 | 2.16 | 6500 | 0.1008 | 0.1355 | | 0.4174 | 2.32 | 7000 | 0.0970 | 0.1361 | | 0.3581 | 2.49 | 7500 | 0.0968 | 0.1297 | | 0.3783 | 2.66 | 8000 | 0.0881 | 0.1284 | | 0.3827 | 2.82 | 8500 | 0.0921 | 0.1352 | | 0.3651 | 2.99 | 9000 | 0.0861 | 0.1298 | | 0.3684 | 3.15 | 9500 | 0.0844 | 0.1270 | | 0.3784 | 3.32 | 10000 | 0.0870 | 0.1248 | | 0.356 | 3.48 | 10500 | 0.0828 | 0.1214 | | 0.3524 | 3.65 | 11000 | 0.0878 | 0.1218 | | 0.3879 | 3.82 | 11500 | 0.0874 | 0.1216 | | 0.3521 | 3.98 | 12000 | 0.0860 | 0.1210 | | 0.3527 | 4.15 | 12500 | 0.0818 | 0.1184 | | 0.3529 | 4.31 | 13000 | 0.0787 | 0.1185 | | 0.3114 | 4.48 | 13500 | 0.0852 | 0.1202 | | 0.3495 | 4.65 | 14000 | 0.0807 | 0.1187 | | 0.34 | 4.81 | 14500 | 0.0796 | 0.1162 | | 0.3646 | 4.98 | 15000 | 0.0782 | 0.1149 | | 0.3004 | 5.14 | 15500 | 0.0799 | 0.1142 | | 0.3167 | 5.31 | 16000 | 0.0847 | 0.1123 | | 0.3249 | 5.48 | 16500 | 0.0837 | 0.1171 | | 0.3202 | 5.64 | 17000 | 0.0749 | 0.1109 | | 0.3104 | 5.81 | 17500 | 0.0798 | 0.1093 | | 0.3039 | 5.97 | 18000 | 0.0810 | 0.1132 | | 0.3157 | 6.14 | 18500 | 0.0847 | 0.1156 | | 0.3133 | 6.31 | 19000 | 0.0833 | 0.1140 | | 0.3203 | 6.47 | 19500 | 0.0838 | 0.1113 | | 0.3178 | 6.64 | 20000 | 0.0907 | 0.1141 | | 0.3182 | 6.8 | 20500 | 0.0938 | 0.1143 | | 0.3 | 6.97 | 21000 | 0.0854 | 0.1133 | | 0.3151 | 7.14 | 21500 | 0.0859 | 0.1109 | | 
0.2963 | 7.3 | 22000 | 0.0832 | 0.1122 | | 0.3099 | 7.47 | 22500 | 0.0865 | 0.1103 | | 0.322 | 7.63 | 23000 | 0.0833 | 0.1105 | | 0.3064 | 7.8 | 23500 | 0.0865 | 0.1078 | | 0.2964 | 7.97 | 24000 | 0.0859 | 0.1096 | | 0.2869 | 8.13 | 24500 | 0.0872 | 0.1100 | | 0.315 | 8.3 | 25000 | 0.0869 | 0.1099 | | 0.3003 | 8.46 | 25500 | 0.0878 | 0.1105 | | 0.2947 | 8.63 | 26000 | 0.0884 | 0.1084 | | 0.297 | 8.8 | 26500 | 0.0891 | 0.1102 | | 0.3049 | 8.96 | 27000 | 0.0863 | 0.1081 | | 0.2957 | 9.13 | 27500 | 0.0846 | 0.1083 | | 0.2908 | 9.29 | 28000 | 0.0848 | 0.1059 | | 0.2955 | 9.46 | 28500 | 0.0846 | 0.1085 | | 0.2991 | 9.62 | 29000 | 0.0839 | 0.1081 | | 0.3112 | 9.79 | 29500 | 0.0832 | 0.1071 | | 0.29 | 9.96 | 30000 | 0.0828 | 0.1075 |
76290987857d87219eb589a4d9ecac31
apache-2.0
['automatic-speech-recognition', 'fi', 'finnish', 'generated_from_trainer', 'hf-asr-leaderboard']
false
Common Voice 7.0 testing To evaluate this model, run the `eval.py` script in this repository: ```bash python3 eval.py --model_id Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish --dataset mozilla-foundation/common_voice_7_0 --config fi --split test ``` This model (the second row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts: | | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) | |-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------| |Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.85 |13.52 |1.35 |2.44 | |Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |**9.66** |0.90 |1.66 | |Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |8.16 |17.92 |1.97 |3.36 | |Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.65 |13.11 |1.20 |2.23 | |Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**4.09** |9.73 |**0.88** |**1.65** |
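The WER reported by `eval.py` is word-level edit distance divided by the number of reference words. A self-contained sketch of that computation — real evaluations typically use a library such as `jiwer` and normalize the text first, and CER is the same computation over characters instead of words:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences."""
    dp = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev = dp[0]
        dp[0] = i
        for j in range(1, len(hyp) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                              # deletion
                        dp[j - 1] + 1,                          # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))      # substitution
            prev = cur
    return dp[-1]

def wer(reference, hypothesis):
    """Word error rate: word-level edits / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

# One deleted word out of four reference words -> WER 0.25
print(wer("minä puhun suomea vähän", "minä puhun suomea"))
```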
701197139e247db22f9b7cb97b762f57
apache-2.0
['automatic-speech-recognition', 'fi', 'finnish', 'generated_from_trainer', 'hf-asr-leaderboard']
false
Common Voice 9.0 testing To evaluate this model, run the `eval.py` script in this repository: ```bash python3 eval.py --model_id Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish --dataset mozilla-foundation/common_voice_9_0 --config fi --split test ``` This model (the second row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts: | | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) | |-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------| |Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.93 |14.08 |1.40 |2.59 | |Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |9.83 |0.92 |1.71 | |Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |7.42 |16.45 |1.79 |3.07 | |Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.35 |13.00 |1.14 |2.20 | |Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**3.72** |**8.96** |**0.80** |**1.52** |
96dd17d73ad521b2c9eb7376e3b0d630
apache-2.0
['automatic-speech-recognition', 'fi', 'finnish', 'generated_from_trainer', 'hf-asr-leaderboard']
false
FLEURS ASR testing To evaluate this model, run the `eval.py` script in this repository: ```bash python3 eval.py --model_id Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish --dataset google/fleurs --config fi_fi --split test ``` This model (the second row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts: | | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) | |-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------| |Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |13.99 |17.16 |6.07 |6.61 | |Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |12.44 |**14.63** |5.77 |6.22 | |Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |17.72 |23.30 |6.78 |7.67 | |Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |20.34 |16.67 |6.97 |6.35 | |Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**12.11** |14.89 |**5.65** |**6.06** |
3a0e757245df4ea44ed79a59058d57ae
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
Demo: How to use in ESPnet2 ```bash cd espnet git checkout 49a284e69308d81c142b89795de255b4ce290c54 pip install -e . cd egs2/talromur/tts1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/GunnarThor_talromur_d_fastspeech2 ```
8c2821066af1642bb751bd8910a12162
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
TTS config <details><summary>expand</summary> ``` config: conf/tuning/train_fastspeech2.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/d/tts_train_fastspeech2_raw_phn_none ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min - - train - loss - min keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 1.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 8 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 800 batch_size: 20 valid_batch_size: null batch_bins: 2500000 valid_batch_bins: null train_shape_file: - exp/d/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/text_shape.phn - exp/d/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/speech_shape valid_shape_file: - exp/d/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/text_shape.phn - exp/d/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending 
multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_d_phn/text - text - text - - exp/d/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/train_d_phn/durations - durations - text_int - - dump/raw/train_d_phn/wav.scp - speech - sound valid_data_path_and_name_and_type: - - dump/raw/dev_d_phn/text - text - text - - exp/d/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/dev_d_phn/durations - durations - text_int - - dump/raw/dev_d_phn/wav.scp - speech - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 1.0 scheduler: noamlr scheduler_conf: model_size: 384 warmup_steps: 4000 token_list: - <blank> - <unk> - ',' - . - r - t - n - a0 - s - I0 - D - l - Y0 - m - v - h - E1 - k - a:1 - E:1 - j - f - T - G - a1 - p - c - au:1 - i:1 - O:1 - E0 - I:1 - r_0 - I1 - t_h - k_h - Y1 - i0 - ei1 - u:1 - ou:1 - ei:1 - O1 - N - l_0 - '91' - ou0 - ai0 - n_0 - au1 - O0 - ou1 - ai:1 - ei0 - '9:1' - ai1 - i1 - c_h - '90' - au0 - x - C - p_h - u0 - 9i:1 - Y:1 - 9i1 - J - u1 - 9i0 - N_0 - m_0 - J_0 - Oi1 - Yi0 - Yi1 - Oi0 - '9:0' - au:0 - E:0 - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 22050 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/d/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/feats_stats.npz tts: fastspeech2 tts_conf: adim: 384 aheads: 2 elayers: 4 eunits: 1536 dlayers: 4 dunits: 1536 positionwise_layer_type: conv1d positionwise_conv_kernel_size: 3 duration_predictor_layers: 2 duration_predictor_chans: 256 duration_predictor_kernel_size: 3 postnet_layers: 5 postnet_filts: 5 postnet_chans: 
256 use_masking: true use_scaled_pos_enc: true encoder_normalize_before: true decoder_normalize_before: true reduction_factor: 1 init_type: xavier_uniform init_enc_alpha: 1.0 init_dec_alpha: 1.0 transformer_enc_dropout_rate: 0.2 transformer_enc_positional_dropout_rate: 0.2 transformer_enc_attn_dropout_rate: 0.2 transformer_dec_dropout_rate: 0.2 transformer_dec_positional_dropout_rate: 0.2 transformer_dec_attn_dropout_rate: 0.2 pitch_predictor_layers: 5 pitch_predictor_chans: 256 pitch_predictor_kernel_size: 5 pitch_predictor_dropout: 0.5 pitch_embed_kernel_size: 1 pitch_embed_dropout: 0.0 stop_gradient_from_pitch_predictor: true energy_predictor_layers: 2 energy_predictor_chans: 256 energy_predictor_kernel_size: 3 energy_predictor_dropout: 0.5 energy_embed_kernel_size: 1 energy_embed_dropout: 0.0 stop_gradient_from_energy_predictor: false pitch_extract: dio pitch_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 f0max: 400 f0min: 80 reduction_factor: 1 pitch_normalize: global_mvn pitch_normalize_conf: stats_file: exp/d/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/pitch_stats.npz energy_extract: energy energy_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 win_length: null reduction_factor: 1 energy_normalize: global_mvn energy_normalize_conf: stats_file: exp/d/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/energy_stats.npz required: - output_dir - token_list version: 0.10.7a1 distributed: false ``` </details>
4fa67bdfed972fcefebb1081e6d6f009
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset. It achieves the following results on the evaluation set: - Loss: 0.8170 - Accuracy: 0.9225 - F1: 0.9241
44cbe4c5ba4e233f9861f81714346716
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20
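A toy single-parameter sketch of one Adam update using the betas, epsilon, and learning rate listed above — real optimizers operate on tensors and keep per-parameter moment state:

```python
def adam_step(w, grad, m, v, t, lr=3e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update with the hyperparameters from the config above."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) EMA
    v = beta2 * v + (1 - beta2) * grad * grad    # second-moment (variance) EMA
    m_hat = m / (1 - beta1 ** t)                 # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (v_hat ** 0.5 + eps)    # scaled parameter update
    return w, m, v

w, m, v = 1.0, 0.0, 0.0
w, m, v = adam_step(w, grad=0.5, m=m, v=v, t=1)
print(w)  # first step moves by roughly lr, regardless of gradient scale
```

The bias correction is why the very first step has magnitude close to `lr` even though the moment estimates start at zero.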
d0e7733bbf196c2a98097cce5bd1c800
apache-2.0
['generated_from_trainer']
false
opus-mt-en-ro-finetuned-en-to-ro This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.2915 - Bleu: 27.9273 - Gen Len: 34.0935
50ccbad7d523d4d7b39319db38ff9c66
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.7448 | 1.0 | 38145 | 1.2915 | 27.9273 | 34.0935 |
021a37579be37aa47b83dda1060882ef
apache-2.0
['generated_from_trainer']
false
distilbert-sentiment-new This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.5872 - Accuracy: 0.7243 - Precision: 0.7192 - Recall: 0.7243 - F1: 0.7175
809706f251d91d2474f4308ffdf59b58
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 1.0 | 296 | 0.6038 | 0.6787 | 0.7049 | 0.6787 | 0.6235 | | 0.5926 | 2.0 | 592 | 0.5532 | 0.7148 | 0.7118 | 0.7148 | 0.6994 | | 0.5926 | 3.0 | 888 | 0.5480 | 0.7243 | 0.7199 | 0.7243 | 0.7144 | | 0.4946 | 4.0 | 1184 | 0.5535 | 0.7300 | 0.7255 | 0.7300 | 0.7220 | | 0.4946 | 5.0 | 1480 | 0.5858 | 0.7186 | 0.7140 | 0.7186 | 0.7146 | | 0.4267 | 6.0 | 1776 | 0.5872 | 0.7243 | 0.7192 | 0.7243 | 0.7175 |
be9156a8f0097bafc65e71a11cc8b0ae
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
About this bad ass beast of a checkpoint: I merged a few checkpoints and got something buttery and amazing. Does great with things other than people too. It can do anything really. It doesn't need crazy prompts either. Keep it simple. No need for all the artist names and trending on whatever.
4919af7f70ecccccd311227f038b98dd
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-4000-samples_en This model is a fine-tuned version of [zboxi7/finetuning-sentiment-model-3000-samples_fr](https://huggingface.co/zboxi7/finetuning-sentiment-model-3000-samples_fr) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 1.3887
57a1dfcb8bdb87e81918cb0bc82796fa
apache-2.0
['translation']
false
mul-eng * source group: Multiple languages * target group: English * OPUS readme: [mul-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md) * model: transformer * source language(s): abk acm ady afb afh_Latn afr akl_Latn aln amh ang_Latn apc ara arg arq ary arz asm ast avk_Latn awa aze_Latn bak bam_Latn bel bel_Latn ben bho bod bos_Latn bre brx brx_Latn bul bul_Latn cat ceb ces cha che chr chv cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant cor cos crh crh_Latn csb_Latn cym dan deu dsb dtp dws_Latn egl ell enm_Latn epo est eus ewe ext fao fij fin fkv_Latn fra frm_Latn frr fry fuc fuv gan gcf_Latn gil gla gle glg glv gom gos got_Goth grc_Grek grn gsw guj hat hau_Latn haw heb hif_Latn hil hin hnj_Latn hoc hoc_Latn hrv hsb hun hye iba ibo ido ido_Latn ike_Latn ile_Latn ilo ina_Latn ind isl ita izh jav jav_Java jbo jbo_Cyrl jbo_Latn jdt_Cyrl jpn kab kal kan kat kaz_Cyrl kaz_Latn kek_Latn kha khm khm_Latn kin kir_Cyrl kjh kpv krl ksh kum kur_Arab kur_Latn lad lad_Latn lao lat_Latn lav ldn_Latn lfn_Cyrl lfn_Latn lij lin lit liv_Latn lkt lld_Latn lmo ltg ltz lug lzh lzh_Hans mad mah mai mal mar max_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob_Hebr nog non_Latn nov_Latn npi nya oci ori orv_Cyrl oss ota_Arab ota_Latn pag pan_Guru pap pau pdc pes pes_Latn pes_Thaa pms pnb pol por ppl_Latn prg_Latn pus quc qya qya_Latn rap rif_Latn roh rom ron rue run rus sag sah san_Deva scn sco sgs shs_Latn shy_Latn sin sjn_Latn slv sma sme smo sna snd_Arab som spa sqi srp_Cyrl srp_Latn stq sun swe swg swh tah tam tat tat_Arab tat_Latn tel tet tgk_Cyrl tha tir tlh_Latn tly_Latn tmw_Latn toi_Latn ton tpw_Latn tso tuk tuk_Latn tur tvl tyv tzl tzl_Latn udm uig_Arab uig_Cyrl ukr umb urd uzb_Cyrl uzb_Latn vec vie vie_Hani vol_Latn vro war wln wol wuu xal xho yid yor yue yue_Hans yue_Hant zho zho_Hans zho_Hant zlm_Latn zsm_Latn zul zza * target language(s): eng * model: transformer * pre-processing: 
normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.eval.txt)
4000841094938cdda1a9c0225c47d964
apache-2.0
['translation']
false
Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-hineng.hin.eng | 8.5 | 0.341 | | newsdev2015-enfi-fineng.fin.eng | 16.8 | 0.441 | | newsdev2016-enro-roneng.ron.eng | 31.3 | 0.580 | | newsdev2016-entr-tureng.tur.eng | 16.4 | 0.422 | | newsdev2017-enlv-laveng.lav.eng | 21.3 | 0.502 | | newsdev2017-enzh-zhoeng.zho.eng | 12.7 | 0.409 | | newsdev2018-enet-esteng.est.eng | 19.8 | 0.467 | | newsdev2019-engu-gujeng.guj.eng | 13.3 | 0.385 | | newsdev2019-enlt-liteng.lit.eng | 19.9 | 0.482 | | newsdiscussdev2015-enfr-fraeng.fra.eng | 26.7 | 0.520 | | newsdiscusstest2015-enfr-fraeng.fra.eng | 29.8 | 0.541 | | newssyscomb2009-ceseng.ces.eng | 21.1 | 0.487 | | newssyscomb2009-deueng.deu.eng | 22.6 | 0.499 | | newssyscomb2009-fraeng.fra.eng | 25.8 | 0.530 | | newssyscomb2009-huneng.hun.eng | 15.1 | 0.430 | | newssyscomb2009-itaeng.ita.eng | 29.4 | 0.555 | | newssyscomb2009-spaeng.spa.eng | 26.1 | 0.534 | | news-test2008-deueng.deu.eng | 21.6 | 0.491 | | news-test2008-fraeng.fra.eng | 22.3 | 0.502 | | news-test2008-spaeng.spa.eng | 23.6 | 0.514 | | newstest2009-ceseng.ces.eng | 19.8 | 0.480 | | newstest2009-deueng.deu.eng | 20.9 | 0.487 | | newstest2009-fraeng.fra.eng | 25.0 | 0.523 | | newstest2009-huneng.hun.eng | 14.7 | 0.425 | | newstest2009-itaeng.ita.eng | 27.6 | 0.542 | | newstest2009-spaeng.spa.eng | 25.7 | 0.530 | | newstest2010-ceseng.ces.eng | 20.6 | 0.491 | | newstest2010-deueng.deu.eng | 23.4 | 0.517 | | newstest2010-fraeng.fra.eng | 26.1 | 0.537 | | newstest2010-spaeng.spa.eng | 29.1 | 0.561 | | newstest2011-ceseng.ces.eng | 21.0 | 0.489 | | newstest2011-deueng.deu.eng | 21.3 | 0.494 | | newstest2011-fraeng.fra.eng | 26.8 | 0.546 | | newstest2011-spaeng.spa.eng | 28.2 | 0.549 | | newstest2012-ceseng.ces.eng | 20.5 | 0.485 | | newstest2012-deueng.deu.eng | 22.3 | 0.503 | | newstest2012-fraeng.fra.eng | 27.5 | 0.545 | | newstest2012-ruseng.rus.eng | 26.6 | 0.532 | | newstest2012-spaeng.spa.eng | 30.3 | 0.567 | | 
newstest2013-ceseng.ces.eng | 22.5 | 0.498 | | newstest2013-deueng.deu.eng | 25.0 | 0.518 | | newstest2013-fraeng.fra.eng | 27.4 | 0.537 | | newstest2013-ruseng.rus.eng | 21.6 | 0.484 | | newstest2013-spaeng.spa.eng | 28.4 | 0.555 | | newstest2014-csen-ceseng.ces.eng | 24.0 | 0.517 | | newstest2014-deen-deueng.deu.eng | 24.1 | 0.511 | | newstest2014-fren-fraeng.fra.eng | 29.1 | 0.563 | | newstest2014-hien-hineng.hin.eng | 14.0 | 0.414 | | newstest2014-ruen-ruseng.rus.eng | 24.0 | 0.521 | | newstest2015-encs-ceseng.ces.eng | 21.9 | 0.481 | | newstest2015-ende-deueng.deu.eng | 25.5 | 0.519 | | newstest2015-enfi-fineng.fin.eng | 17.4 | 0.441 | | newstest2015-enru-ruseng.rus.eng | 22.4 | 0.494 | | newstest2016-encs-ceseng.ces.eng | 23.0 | 0.500 | | newstest2016-ende-deueng.deu.eng | 30.1 | 0.560 | | newstest2016-enfi-fineng.fin.eng | 18.5 | 0.461 | | newstest2016-enro-roneng.ron.eng | 29.6 | 0.562 | | newstest2016-enru-ruseng.rus.eng | 22.0 | 0.495 | | newstest2016-entr-tureng.tur.eng | 14.8 | 0.415 | | newstest2017-encs-ceseng.ces.eng | 20.2 | 0.475 | | newstest2017-ende-deueng.deu.eng | 26.0 | 0.523 | | newstest2017-enfi-fineng.fin.eng | 19.6 | 0.465 | | newstest2017-enlv-laveng.lav.eng | 16.2 | 0.454 | | newstest2017-enru-ruseng.rus.eng | 24.2 | 0.510 | | newstest2017-entr-tureng.tur.eng | 15.0 | 0.412 | | newstest2017-enzh-zhoeng.zho.eng | 13.7 | 0.412 | | newstest2018-encs-ceseng.ces.eng | 21.2 | 0.486 | | newstest2018-ende-deueng.deu.eng | 31.5 | 0.564 | | newstest2018-enet-esteng.est.eng | 19.7 | 0.473 | | newstest2018-enfi-fineng.fin.eng | 15.1 | 0.418 | | newstest2018-enru-ruseng.rus.eng | 21.3 | 0.490 | | newstest2018-entr-tureng.tur.eng | 15.4 | 0.421 | | newstest2018-enzh-zhoeng.zho.eng | 12.9 | 0.408 | | newstest2019-deen-deueng.deu.eng | 27.0 | 0.529 | | newstest2019-fien-fineng.fin.eng | 17.2 | 0.438 | | newstest2019-guen-gujeng.guj.eng | 9.0 | 0.342 | | newstest2019-lten-liteng.lit.eng | 22.6 | 0.512 | | newstest2019-ruen-ruseng.rus.eng | 24.1 | 0.503 | 
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2019-zhen-zhoeng.zho.eng | 13.9 | 0.427 |
| newstestB2016-enfi-fineng.fin.eng | 15.2 | 0.428 |
| newstestB2017-enfi-fineng.fin.eng | 16.8 | 0.442 |
| newstestB2017-fien-fineng.fin.eng | 16.8 | 0.442 |
| Tatoeba-test.abk-eng.abk.eng | 2.4 | 0.190 |
| Tatoeba-test.ady-eng.ady.eng | 1.1 | 0.111 |
| Tatoeba-test.afh-eng.afh.eng | 1.7 | 0.108 |
| Tatoeba-test.afr-eng.afr.eng | 53.0 | 0.672 |
| Tatoeba-test.akl-eng.akl.eng | 5.9 | 0.239 |
| Tatoeba-test.amh-eng.amh.eng | 25.6 | 0.464 |
| Tatoeba-test.ang-eng.ang.eng | 11.7 | 0.289 |
| Tatoeba-test.ara-eng.ara.eng | 26.4 | 0.443 |
| Tatoeba-test.arg-eng.arg.eng | 35.9 | 0.473 |
| Tatoeba-test.asm-eng.asm.eng | 19.8 | 0.365 |
| Tatoeba-test.ast-eng.ast.eng | 31.8 | 0.467 |
| Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.119 |
| Tatoeba-test.awa-eng.awa.eng | 9.7 | 0.271 |
| Tatoeba-test.aze-eng.aze.eng | 37.0 | 0.542 |
| Tatoeba-test.bak-eng.bak.eng | 13.9 | 0.395 |
| Tatoeba-test.bam-eng.bam.eng | 2.2 | 0.094 |
| Tatoeba-test.bel-eng.bel.eng | 36.8 | 0.549 |
| Tatoeba-test.ben-eng.ben.eng | 39.7 | 0.546 |
| Tatoeba-test.bho-eng.bho.eng | 33.6 | 0.540 |
| Tatoeba-test.bod-eng.bod.eng | 1.1 | 0.147 |
| Tatoeba-test.bre-eng.bre.eng | 14.2 | 0.303 |
| Tatoeba-test.brx-eng.brx.eng | 1.7 | 0.130 |
| Tatoeba-test.bul-eng.bul.eng | 46.0 | 0.621 |
| Tatoeba-test.cat-eng.cat.eng | 46.6 | 0.636 |
| Tatoeba-test.ceb-eng.ceb.eng | 17.4 | 0.347 |
| Tatoeba-test.ces-eng.ces.eng | 41.3 | 0.586 |
| Tatoeba-test.cha-eng.cha.eng | 7.9 | 0.232 |
| Tatoeba-test.che-eng.che.eng | 0.7 | 0.104 |
| Tatoeba-test.chm-eng.chm.eng | 7.3 | 0.261 |
| Tatoeba-test.chr-eng.chr.eng | 8.8 | 0.244 |
| Tatoeba-test.chv-eng.chv.eng | 11.0 | 0.319 |
| Tatoeba-test.cor-eng.cor.eng | 5.4 | 0.204 |
| Tatoeba-test.cos-eng.cos.eng | 58.2 | 0.643 |
| Tatoeba-test.crh-eng.crh.eng | 26.3 | 0.399 |
| Tatoeba-test.csb-eng.csb.eng | 18.8 | 0.389 |
| Tatoeba-test.cym-eng.cym.eng | 23.4 | 0.407 |
| Tatoeba-test.dan-eng.dan.eng | 50.5 | 0.659 |
| Tatoeba-test.deu-eng.deu.eng | 39.6 | 0.579 |
| Tatoeba-test.dsb-eng.dsb.eng | 24.3 | 0.449 |
| Tatoeba-test.dtp-eng.dtp.eng | 1.0 | 0.149 |
| Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.061 |
| Tatoeba-test.egl-eng.egl.eng | 7.6 | 0.236 |
| Tatoeba-test.ell-eng.ell.eng | 55.4 | 0.682 |
| Tatoeba-test.enm-eng.enm.eng | 28.0 | 0.489 |
| Tatoeba-test.epo-eng.epo.eng | 41.8 | 0.591 |
| Tatoeba-test.est-eng.est.eng | 41.5 | 0.581 |
| Tatoeba-test.eus-eng.eus.eng | 37.8 | 0.557 |
| Tatoeba-test.ewe-eng.ewe.eng | 10.7 | 0.262 |
| Tatoeba-test.ext-eng.ext.eng | 25.5 | 0.405 |
| Tatoeba-test.fao-eng.fao.eng | 28.7 | 0.469 |
| Tatoeba-test.fas-eng.fas.eng | 7.5 | 0.281 |
| Tatoeba-test.fij-eng.fij.eng | 24.2 | 0.320 |
| Tatoeba-test.fin-eng.fin.eng | 35.8 | 0.534 |
| Tatoeba-test.fkv-eng.fkv.eng | 15.5 | 0.434 |
| Tatoeba-test.fra-eng.fra.eng | 45.1 | 0.618 |
| Tatoeba-test.frm-eng.frm.eng | 29.6 | 0.427 |
| Tatoeba-test.frr-eng.frr.eng | 5.5 | 0.138 |
| Tatoeba-test.fry-eng.fry.eng | 25.3 | 0.455 |
| Tatoeba-test.ful-eng.ful.eng | 1.1 | 0.127 |
| Tatoeba-test.gcf-eng.gcf.eng | 16.0 | 0.315 |
| Tatoeba-test.gil-eng.gil.eng | 46.7 | 0.587 |
| Tatoeba-test.gla-eng.gla.eng | 20.2 | 0.358 |
| Tatoeba-test.gle-eng.gle.eng | 43.9 | 0.592 |
| Tatoeba-test.glg-eng.glg.eng | 45.1 | 0.623 |
| Tatoeba-test.glv-eng.glv.eng | 3.3 | 0.119 |
| Tatoeba-test.gos-eng.gos.eng | 20.1 | 0.364 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.041 |
| Tatoeba-test.grc-eng.grc.eng | 2.1 | 0.137 |
| Tatoeba-test.grn-eng.grn.eng | 1.7 | 0.152 |
| Tatoeba-test.gsw-eng.gsw.eng | 18.2 | 0.334 |
| Tatoeba-test.guj-eng.guj.eng | 21.7 | 0.373 |
| Tatoeba-test.hat-eng.hat.eng | 34.5 | 0.502 |
| Tatoeba-test.hau-eng.hau.eng | 10.5 | 0.295 |
| Tatoeba-test.haw-eng.haw.eng | 2.8 | 0.160 |
| Tatoeba-test.hbs-eng.hbs.eng | 46.7 | 0.623 |
| Tatoeba-test.heb-eng.heb.eng | 33.0 | 0.492 |
| Tatoeba-test.hif-eng.hif.eng | 17.0 | 0.391 |
| Tatoeba-test.hil-eng.hil.eng | 16.0 | 0.339 |
| Tatoeba-test.hin-eng.hin.eng | 36.4 | 0.533 |
| Tatoeba-test.hmn-eng.hmn.eng | 0.4 | 0.131 |
| Tatoeba-test.hoc-eng.hoc.eng | 0.7 | 0.132 |
| Tatoeba-test.hsb-eng.hsb.eng | 41.9 | 0.551 |
| Tatoeba-test.hun-eng.hun.eng | 33.2 | 0.510 |
| Tatoeba-test.hye-eng.hye.eng | 32.2 | 0.487 |
| Tatoeba-test.iba-eng.iba.eng | 9.4 | 0.278 |
| Tatoeba-test.ibo-eng.ibo.eng | 5.8 | 0.200 |
| Tatoeba-test.ido-eng.ido.eng | 31.7 | 0.503 |
| Tatoeba-test.iku-eng.iku.eng | 9.1 | 0.164 |
| Tatoeba-test.ile-eng.ile.eng | 42.2 | 0.595 |
| Tatoeba-test.ilo-eng.ilo.eng | 29.7 | 0.485 |
| Tatoeba-test.ina-eng.ina.eng | 42.1 | 0.607 |
| Tatoeba-test.isl-eng.isl.eng | 35.7 | 0.527 |
| Tatoeba-test.ita-eng.ita.eng | 54.8 | 0.686 |
| Tatoeba-test.izh-eng.izh.eng | 28.3 | 0.526 |
| Tatoeba-test.jav-eng.jav.eng | 10.0 | 0.282 |
| Tatoeba-test.jbo-eng.jbo.eng | 0.3 | 0.115 |
| Tatoeba-test.jdt-eng.jdt.eng | 5.3 | 0.140 |
| Tatoeba-test.jpn-eng.jpn.eng | 18.8 | 0.387 |
| Tatoeba-test.kab-eng.kab.eng | 3.9 | 0.205 |
| Tatoeba-test.kal-eng.kal.eng | 16.9 | 0.329 |
| Tatoeba-test.kan-eng.kan.eng | 16.2 | 0.374 |
| Tatoeba-test.kat-eng.kat.eng | 31.1 | 0.493 |
| Tatoeba-test.kaz-eng.kaz.eng | 24.5 | 0.437 |
| Tatoeba-test.kek-eng.kek.eng | 7.4 | 0.192 |
| Tatoeba-test.kha-eng.kha.eng | 1.0 | 0.154 |
| Tatoeba-test.khm-eng.khm.eng | 12.2 | 0.290 |
| Tatoeba-test.kin-eng.kin.eng | 22.5 | 0.355 |
| Tatoeba-test.kir-eng.kir.eng | 27.2 | 0.470 |
| Tatoeba-test.kjh-eng.kjh.eng | 2.1 | 0.129 |
| Tatoeba-test.kok-eng.kok.eng | 4.5 | 0.259 |
| Tatoeba-test.kom-eng.kom.eng | 1.4 | 0.099 |
| Tatoeba-test.krl-eng.krl.eng | 26.1 | 0.387 |
| Tatoeba-test.ksh-eng.ksh.eng | 5.5 | 0.256 |
| Tatoeba-test.kum-eng.kum.eng | 9.3 | 0.288 |
| Tatoeba-test.kur-eng.kur.eng | 9.6 | 0.208 |
| Tatoeba-test.lad-eng.lad.eng | 30.1 | 0.475 |
| Tatoeba-test.lah-eng.lah.eng | 11.6 | 0.284 |
| Tatoeba-test.lao-eng.lao.eng | 4.5 | 0.214 |
| Tatoeba-test.lat-eng.lat.eng | 21.5 | 0.402 |
| Tatoeba-test.lav-eng.lav.eng | 40.2 | 0.577 |
| Tatoeba-test.ldn-eng.ldn.eng | 0.8 | 0.115 |
| Tatoeba-test.lfn-eng.lfn.eng | 23.0 | 0.433 |
| Tatoeba-test.lij-eng.lij.eng | 9.3 | 0.287 |
| Tatoeba-test.lin-eng.lin.eng | 2.4 | 0.196 |
| Tatoeba-test.lit-eng.lit.eng | 44.0 | 0.597 |
| Tatoeba-test.liv-eng.liv.eng | 1.6 | 0.115 |
| Tatoeba-test.lkt-eng.lkt.eng | 2.0 | 0.113 |
| Tatoeba-test.lld-eng.lld.eng | 18.3 | 0.312 |
| Tatoeba-test.lmo-eng.lmo.eng | 25.4 | 0.395 |
| Tatoeba-test.ltz-eng.ltz.eng | 35.9 | 0.509 |
| Tatoeba-test.lug-eng.lug.eng | 5.1 | 0.357 |
| Tatoeba-test.mad-eng.mad.eng | 2.8 | 0.123 |
| Tatoeba-test.mah-eng.mah.eng | 5.7 | 0.175 |
| Tatoeba-test.mai-eng.mai.eng | 56.3 | 0.703 |
| Tatoeba-test.mal-eng.mal.eng | 37.5 | 0.534 |
| Tatoeba-test.mar-eng.mar.eng | 22.8 | 0.470 |
| Tatoeba-test.mdf-eng.mdf.eng | 2.0 | 0.110 |
| Tatoeba-test.mfe-eng.mfe.eng | 59.2 | 0.764 |
| Tatoeba-test.mic-eng.mic.eng | 9.0 | 0.199 |
| Tatoeba-test.mkd-eng.mkd.eng | 44.3 | 0.593 |
| Tatoeba-test.mlg-eng.mlg.eng | 31.9 | 0.424 |
| Tatoeba-test.mlt-eng.mlt.eng | 38.6 | 0.540 |
| Tatoeba-test.mnw-eng.mnw.eng | 2.5 | 0.101 |
| Tatoeba-test.moh-eng.moh.eng | 0.3 | 0.110 |
| Tatoeba-test.mon-eng.mon.eng | 13.5 | 0.334 |
| Tatoeba-test.mri-eng.mri.eng | 8.5 | 0.260 |
| Tatoeba-test.msa-eng.msa.eng | 33.9 | 0.520 |
| Tatoeba-test.multi.eng | 34.7 | 0.518 |
| Tatoeba-test.mwl-eng.mwl.eng | 37.4 | 0.630 |
| Tatoeba-test.mya-eng.mya.eng | 15.5 | 0.335 |
| Tatoeba-test.myv-eng.myv.eng | 0.8 | 0.118 |
| Tatoeba-test.nau-eng.nau.eng | 9.0 | 0.186 |
| Tatoeba-test.nav-eng.nav.eng | 1.3 | 0.144 |
| Tatoeba-test.nds-eng.nds.eng | 30.7 | 0.495 |
| Tatoeba-test.nep-eng.nep.eng | 3.5 | 0.168 |
| Tatoeba-test.niu-eng.niu.eng | 42.7 | 0.492 |
| Tatoeba-test.nld-eng.nld.eng | 47.9 | 0.640 |
| Tatoeba-test.nog-eng.nog.eng | 12.7 | 0.284 |
| Tatoeba-test.non-eng.non.eng | 43.8 | 0.586 |
| Tatoeba-test.nor-eng.nor.eng | 45.5 | 0.619 |
| Tatoeba-test.nov-eng.nov.eng | 26.9 | 0.472 |
| Tatoeba-test.nya-eng.nya.eng | 33.2 | 0.456 |
| Tatoeba-test.oci-eng.oci.eng | 17.9 | 0.370 |
| Tatoeba-test.ori-eng.ori.eng | 14.6 | 0.305 |
| Tatoeba-test.orv-eng.orv.eng | 11.0 | 0.283 |
| Tatoeba-test.oss-eng.oss.eng | 4.1 | 0.211 |
| Tatoeba-test.ota-eng.ota.eng | 4.1 | 0.216 |
| Tatoeba-test.pag-eng.pag.eng | 24.3 | 0.468 |
| Tatoeba-test.pan-eng.pan.eng | 16.4 | 0.358 |
| Tatoeba-test.pap-eng.pap.eng | 53.2 | 0.628 |
| Tatoeba-test.pau-eng.pau.eng | 3.7 | 0.173 |
| Tatoeba-test.pdc-eng.pdc.eng | 45.3 | 0.569 |
| Tatoeba-test.pms-eng.pms.eng | 14.0 | 0.345 |
| Tatoeba-test.pol-eng.pol.eng | 41.7 | 0.588 |
| Tatoeba-test.por-eng.por.eng | 51.4 | 0.669 |
| Tatoeba-test.ppl-eng.ppl.eng | 0.4 | 0.134 |
| Tatoeba-test.prg-eng.prg.eng | 4.1 | 0.198 |
| Tatoeba-test.pus-eng.pus.eng | 6.7 | 0.233 |
| Tatoeba-test.quc-eng.quc.eng | 3.5 | 0.091 |
| Tatoeba-test.qya-eng.qya.eng | 0.2 | 0.090 |
| Tatoeba-test.rap-eng.rap.eng | 17.5 | 0.230 |
| Tatoeba-test.rif-eng.rif.eng | 4.2 | 0.164 |
| Tatoeba-test.roh-eng.roh.eng | 24.6 | 0.464 |
| Tatoeba-test.rom-eng.rom.eng | 3.4 | 0.212 |
| Tatoeba-test.ron-eng.ron.eng | 45.2 | 0.620 |
| Tatoeba-test.rue-eng.rue.eng | 21.4 | 0.390 |
| Tatoeba-test.run-eng.run.eng | 24.5 | 0.392 |
| Tatoeba-test.rus-eng.rus.eng | 42.7 | 0.591 |
| Tatoeba-test.sag-eng.sag.eng | 3.4 | 0.187 |
| Tatoeba-test.sah-eng.sah.eng | 5.0 | 0.177 |
| Tatoeba-test.san-eng.san.eng | 2.0 | 0.172 |
| Tatoeba-test.scn-eng.scn.eng | 35.8 | 0.410 |
| Tatoeba-test.sco-eng.sco.eng | 34.6 | 0.520 |
| Tatoeba-test.sgs-eng.sgs.eng | 21.8 | 0.299 |
| Tatoeba-test.shs-eng.shs.eng | 1.8 | 0.122 |
| Tatoeba-test.shy-eng.shy.eng | 1.4 | 0.104 |
| Tatoeba-test.sin-eng.sin.eng | 20.6 | 0.429 |
| Tatoeba-test.sjn-eng.sjn.eng | 1.2 | 0.095 |
| Tatoeba-test.slv-eng.slv.eng | 37.0 | 0.545 |
| Tatoeba-test.sma-eng.sma.eng | 4.4 | 0.147 |
| Tatoeba-test.sme-eng.sme.eng | 8.9 | 0.229 |
| Tatoeba-test.smo-eng.smo.eng | 37.7 | 0.483 |
| Tatoeba-test.sna-eng.sna.eng | 18.0 | 0.359 |
| Tatoeba-test.snd-eng.snd.eng | 28.1 | 0.444 |
| Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 |
| Tatoeba-test.spa-eng.spa.eng | 47.9 | 0.645 |
| Tatoeba-test.sqi-eng.sqi.eng | 46.9 | 0.634 |
| Tatoeba-test.stq-eng.stq.eng | 8.1 | 0.379 |
| Tatoeba-test.sun-eng.sun.eng | 23.8 | 0.369 |
| Tatoeba-test.swa-eng.swa.eng | 6.5 | 0.193 |
| Tatoeba-test.swe-eng.swe.eng | 51.4 | 0.655 |
| Tatoeba-test.swg-eng.swg.eng | 18.5 | 0.342 |
| Tatoeba-test.tah-eng.tah.eng | 25.6 | 0.249 |
| Tatoeba-test.tam-eng.tam.eng | 29.1 | 0.437 |
| Tatoeba-test.tat-eng.tat.eng | 12.9 | 0.327 |
| Tatoeba-test.tel-eng.tel.eng | 21.2 | 0.386 |
| Tatoeba-test.tet-eng.tet.eng | 9.2 | 0.215 |
| Tatoeba-test.tgk-eng.tgk.eng | 12.7 | 0.374 |
| Tatoeba-test.tha-eng.tha.eng | 36.3 | 0.531 |
| Tatoeba-test.tir-eng.tir.eng | 9.1 | 0.267 |
| Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.084 |
| Tatoeba-test.tly-eng.tly.eng | 2.1 | 0.128 |
| Tatoeba-test.toi-eng.toi.eng | 5.3 | 0.150 |
| Tatoeba-test.ton-eng.ton.eng | 39.5 | 0.473 |
| Tatoeba-test.tpw-eng.tpw.eng | 1.5 | 0.160 |
| Tatoeba-test.tso-eng.tso.eng | 44.7 | 0.526 |
| Tatoeba-test.tuk-eng.tuk.eng | 18.6 | 0.401 |
| Tatoeba-test.tur-eng.tur.eng | 40.5 | 0.573 |
| Tatoeba-test.tvl-eng.tvl.eng | 55.0 | 0.593 |
| Tatoeba-test.tyv-eng.tyv.eng | 19.1 | 0.477 |
| Tatoeba-test.tzl-eng.tzl.eng | 17.7 | 0.333 |
| Tatoeba-test.udm-eng.udm.eng | 3.4 | 0.217 |
| Tatoeba-test.uig-eng.uig.eng | 11.4 | 0.289 |
| Tatoeba-test.ukr-eng.ukr.eng | 43.1 | 0.595 |
| Tatoeba-test.umb-eng.umb.eng | 9.2 | 0.260 |
| Tatoeba-test.urd-eng.urd.eng | 23.2 | 0.426 |
| Tatoeba-test.uzb-eng.uzb.eng | 19.0 | 0.342 |
| Tatoeba-test.vec-eng.vec.eng | 41.1 | 0.409 |
| Tatoeba-test.vie-eng.vie.eng | 30.6 | 0.481 |
| Tatoeba-test.vol-eng.vol.eng | 1.8 | 0.143 |
| Tatoeba-test.war-eng.war.eng | 15.9 | 0.352 |
| Tatoeba-test.wln-eng.wln.eng | 12.6 | 0.291 |
| Tatoeba-test.wol-eng.wol.eng | 4.4 | 0.138 |
| Tatoeba-test.xal-eng.xal.eng | 0.9 | 0.153 |
| Tatoeba-test.xho-eng.xho.eng | 35.4 | 0.513 |
| Tatoeba-test.yid-eng.yid.eng | 19.4 | 0.387 |
| Tatoeba-test.yor-eng.yor.eng | 19.3 | 0.327 |
| Tatoeba-test.zho-eng.zho.eng | 25.8 | 0.448 |
| Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.567 |
| Tatoeba-test.zza-eng.zza.eng | 1.6 | 0.125 |
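The chr-F column above reports chrF2, the character n-gram F-score with recall weighted by β = 2. As an illustrative sketch only (the reference implementation in sacreBLEU handles whitespace, multiple references, and averaging differently), the metric can be approximated like this:

```python
from collections import Counter

def char_ngrams(text, n):
    """Character n-grams of a string (whitespace removed, as a simplification)."""
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_order=6, beta=2.0):
    """Simplified chrF: average F-beta over character n-gram orders 1..max_order."""
    f_scores = []
    for n in range(1, max_order + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # string too short for this n-gram order
        overlap = sum((hyp & ref).values())
        precision = overlap / sum(hyp.values())
        recall = overlap / sum(ref.values())
        if precision + recall == 0:
            f_scores.append(0.0)
            continue
        f_scores.append((1 + beta**2) * precision * recall
                        / (beta**2 * precision + recall))
    return sum(f_scores) / len(f_scores) if f_scores else 0.0
```

A perfect match scores 1.0 and a disjoint pair scores 0.0; the table values (e.g. 0.672 for afr-eng) sit in between.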
b0f16068c90a93c8569c5dcf2907f67a
apache-2.0
['translation']
false
System Info:
- hf_name: mul-eng
- source_languages: mul
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul', 'en']
- src_constituents: {'sjn_Latn', 'cat', 'nan', 'spa', 'ile_Latn', 'pap', 'mwl', 'uzb_Latn', 'mww', 'hil', 'lij', 'avk_Latn', 'lad_Latn', 'lat_Latn', 'bos_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi_Latn', 'awa', 'swg', 'zsm_Latn', 'zho_Hant', 'gcf_Latn', 'uzb_Cyrl', 'isl', 'lfn_Latn', 'shs_Latn', 'nov_Latn', 'bho', 'ltz', 'lzh', 'kur_Latn', 'sun', 'arg', 'pes_Thaa', 'sqi', 'uig_Arab', 'csb_Latn', 'fra', 'hat', 'liv_Latn', 'non_Latn', 'sco', 'cmn_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul_Latn', 'amh', 'lfn_Cyrl', 'eus', 'fkv_Latn', 'tur', 'pus', 'afr', 'brx_Latn', 'nya', 'acm', 'ota_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho_Hans', 'tmw_Latn', 'kjh', 'ota_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh_Hans', 'ara', 'tly_Latn', 'lug', 'brx', 'bul', 'bel', 'vol_Latn', 'kat', 'gan', 'got_Goth', 'vro', 'ext', 'afh_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif_Latn', 'cjy_Hant', 'bre', 'ceb', 'mah', 'nob_Hebr', 'crh_Latn', 'prg_Latn', 'khm', 'ang_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie_Hani', 'arz', 'yue', 'kha', 'san_Deva', 'jbo_Latn', 'gos', 'hau_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig_Cyrl', 'fao', 'mnw', 'zho', 'orv_Cyrl', 'iba', 'bel_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc_Grek', 'tpw_Latn', 'oci', 'mfe', 'sna', 'kir_Cyrl', 'tat_Latn', 'gom', 'ido_Latn', 'sgs', 'pau', 'tgk_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp_Latn', 'wuu', 'dtp', 'jbo_Cyrl', 'tet', 'bod', 'yue_Hans', 'zlm_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif_Latn', 'vie', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina_Latn', 'cjy_Hans', 'jdt_Cyrl', 'gsw', 'glv', 'khm_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd_Arab', 'arq', 'mri', 'kur_Arab', 'por', 'hin', 'shy_Latn', 'sme', 'rap', 'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue_Hant', 'kpv', 'tam', 'est', 'frm_Latn', 'hoc_Latn', 'bam_Latn', 'kek_Latn', 'ksh', 'tlh_Latn', 'ltg', 'pan_Guru', 'hnj_Latn', 'cor', 'gle', 'swe', 'lin', 'qya_Latn', 'kum', 'mad', 'cmn_Hant', 'fuv', 'nau', 'mon', 'akl_Latn', 'guj', 'kaz_Latn', 'wln', 'tuk_Latn', 'jav_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws_Latn', 'urd', 'stq', 'tat_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld_Latn', 'tzl_Latn', 'mdf', 'ike_Latn', 'ces', 'ldn_Latn', 'egl', 'heb', 'vec', 'zul', 'max_Latn', 'pes_Latn', 'yid', 'mal', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt
- src_alpha3: mul
- tgt_alpha3: eng
- short_pair: mul-en
- chrF2_score: 0.518
- bleu: 34.7
- brevity_penalty: 1.0
- ref_len: 72346.0
- src_name: Multiple languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: mul
- tgt_alpha2: en
- prefer_old: False
- long_pair: mul-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
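The brevity_penalty field is BLEU's length penalty; a value of 1.0 means the system output was at least as long as the reference (ref_len 72346.0 tokens), so the reported BLEU of 34.7 was not scaled down. The standard formula, as a sketch:

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """BLEU brevity penalty: 1 when the hypothesis is at least as long as the
    reference, exp(1 - ref_len/hyp_len) when it is shorter."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

BLEU is then the geometric mean of the modified n-gram precisions multiplied by this factor.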
26c7d57662d07c480349b541c9c2205e
mit
['generated_from_trainer']
false
language-modeling

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified (`None`) dataset. It achieves the following results on the evaluation set:
- Loss: 1.4229
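For a language-modeling objective, the evaluation loss is the mean per-token cross-entropy, so it maps directly to perplexity: exp(1.4229) ≈ 4.15. As a one-line sketch:

```python
import math

def perplexity(mean_cross_entropy_loss):
    """Perplexity is the exponential of the mean per-token cross-entropy."""
    return math.exp(mean_cross_entropy_loss)
```

This is often a more interpretable way to compare masked or causal LM checkpoints than the raw loss.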
58a1d3ea85e91aa5e7db119b8a47d66a
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-en-to-ro-epoch.04375

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset. It achieves the following results on the evaluation set:
- Loss: 1.4137
- Bleu: 7.3292
- Gen Len: 18.2541
af908d268e979bb27114c76cb54d5fb6
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.04375
- mixed_precision_training: Native AMP
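With no warmup steps listed, the `linear` scheduler here simply decays the learning rate from its initial value to zero over the total number of training steps. A minimal sketch (the step count 1669 below is taken from the training-results table for this run):

```python
def linear_lr(step, total_steps, initial_lr=2e-05):
    """Learning rate decayed linearly from initial_lr to 0 over total_steps."""
    remaining = max(0.0, (total_steps - step) / total_steps)
    return initial_lr * remaining
```

So at step 0 the rate is 2e-05, at the midpoint roughly 1e-05, and at the final step 0.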
d76b0ff66e1fe18c4395dd841dcbe1b5
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6211        | 0.04  | 1669 | 1.4137          | 7.3292 | 18.2541 |
da955dfa2b3d65d24f1a17e1463cc76d
apache-2.0
['generated_from_keras_callback']
false
KubiakJakub01/finetuned-distilbert-base-augumented

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.4522
- Validation Loss: 0.4260
- Train Accuracy: 0.8129
- Epoch: 0
1f372c7f281e84aabe646ac8853467b8
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 470, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 100, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
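The serialized optimizer config describes a linear warmup over the first 100 steps followed by a polynomial decay (power 1.0, i.e. linear) to zero over 470 steps. A pure-Python sketch of that shape, assuming the decay clock starts after warmup (Keras' `WarmUp` wrapper may offset steps slightly differently):

```python
def warmup_polynomial_lr(step, initial_lr=2e-05, warmup_steps=100,
                         decay_steps=470, end_lr=0.0, power=1.0):
    """Linear warmup to initial_lr, then polynomial decay toward end_lr."""
    if step < warmup_steps:
        return initial_lr * step / warmup_steps
    decay_step = min(step - warmup_steps, decay_steps)  # clamp: cycle=False
    frac = 1.0 - decay_step / decay_steps
    return (initial_lr - end_lr) * frac**power + end_lr
```

The rate peaks at 2e-05 at step 100 and reaches the end_learning_rate of 0.0 once the decay steps are exhausted.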
8a33087798e297517adb3300ab034a12
mit
[]
false
Freefonix-Style on Stable Diffusion

This is the `<Freefonix>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<Freefonix> 0](https://huggingface.co/sd-concepts-library/freefonix-style/resolve/main/concept_images/2.jpeg)
![<Freefonix> 1](https://huggingface.co/sd-concepts-library/freefonix-style/resolve/main/concept_images/3.jpeg)
![<Freefonix> 2](https://huggingface.co/sd-concepts-library/freefonix-style/resolve/main/concept_images/1.jpeg)
![<Freefonix> 3](https://huggingface.co/sd-concepts-library/freefonix-style/resolve/main/concept_images/0.jpeg)
a8448d31f44162d039de6dc7d3ee8b2e
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'bas', 'robust-speech-event', 'hf-asr-leaderboard']
false
wav2vec2-xls-r-300m-bas-CV8-v2

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 0.6121
- Wer: 0.5697
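The Wer figure is word error rate: the word-level edit distance (substitutions, insertions, deletions) between the hypothesis and reference transcripts, divided by the reference length. A minimal sketch of the computation:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

A WER of 0.5697 therefore means roughly 57 word edits per 100 reference words; note that WER can exceed 1.0 when the hypothesis contains many insertions.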
2317d6ff405e522d19d30e783ac074b8
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'bas', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 90
- mixed_precision_training: Native AMP
cd9fc1a5969de0c42e1b2311d481043e
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'bas', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.5211        | 16.13 | 500  | 1.2661          | 0.9153 |
| 0.7026        | 32.25 | 1000 | 0.6245          | 0.6516 |
| 0.3752        | 48.38 | 1500 | 0.6039          | 0.6148 |
| 0.2752        | 64.51 | 2000 | 0.6080          | 0.5808 |
| 0.2155        | 80.63 | 2500 | 0.6121          | 0.5697 |
a2dff43e98f248761f5f466220538145