Columns:
- license: string (2-30 chars)
- tags: string (2-513 chars)
- is_nc: bool (1 class)
- readme_section: string (201-597k chars)
- hash: string (32 chars)
gpl-3.0
['ufd', 'text-classification', 'undersupervised-feature-decomposition']
false
Cross Lingual Cross Domain You can **try out the model** at [SGNLP](https://sgnlp.aisingapore.net/cross-lingual-cross-domain).<br /> For more information, please contact us at [SGNLP-AISingapore](sg-nlp@aisingapore.org).
1fb8d4635cd0a4f9fcac24a886048eaf
gpl-3.0
['ufd', 'text-classification', 'undersupervised-feature-decomposition']
false
Model Details **Model Name:** Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model - **Description:** An implementation of the Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model paper. - **Paper:** Unsupervised domain adaptation of a pretrained cross-lingual language model. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Nov, 2020 (pp. 3672-3678). - **Author(s):** Li, J., He, R., Ye, H., Ng, H. T., Bing, L., & Yan, R. (2020). - **URL:** https://www.ijcai.org/Proceedings/2020/508
c6faf8e65633b32e088f16c0e6fd9489
gpl-3.0
['ufd', 'text-classification', 'undersupervised-feature-decomposition']
false
Install Python package SGnlp is an initiative by AI Singapore's NLP Hub. It aims to bridge the gap between research and industry, promote translational research, and encourage adoption of NLP techniques in the industry. <br><br> Various NLP models, other than cross lingual cross domain, are available in the python package. You can try them out at [SGNLP-Demo](https://sgnlp.aisingapore.net/) | [SGNLP-Github](https://github.com/aisingapore/sgnlp). ```bash pip install sgnlp ```
c182bcc7c0fa7955289de581282e69b7
gpl-3.0
['ufd', 'text-classification', 'undersupervised-feature-decomposition']
false
Examples For the full code guide, please refer to this [documentation](https://sgnlp.aisingapore.net/docs/model/ufd.html). <br> Alternatively, you can also try out the [demo](https://sgnlp.aisingapore.net/cross-lingual-cross-domain) for Cross Lingual Cross Domain. Example of the Unsupervised Feature Decomposition (UFD) model (German language): ```python from sgnlp.models.ufd import UFDModelBuilder, UFDPreprocessor model_builder = UFDModelBuilder(source_domains=["books"], target_languages=["de"], target_domains=["dvd"]) preprocessor = UFDPreprocessor() model_groups = model_builder.build_model_group() ```
a69f76e51f35a9d6803d665fb84faa40
gpl-3.0
['ufd', 'text-classification', 'undersupervised-feature-decomposition']
false
Model predict ('books_de_dvd' model example) instance = """Wolverine is BACK Der Film ist im Grunde wie alle Teile der X-Men für Comic-Fans auf jeden Fall ein muss. Hugh Jackman spielt seine Rolle wie immer so gut was ich von den ein oder anderen Darsteller leider nicht sagen kann. Story und Action sind aber genug Gründe um sich die Blu-ray zu kaufen.""" instance_features = preprocessor([instance]) output = model_groups['books_de_dvd'](**instance_features) ```
5ecf3cc5c771db5f651b89b4681d815c
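The `output` above is an array of raw logits, one per class. A minimal pure-Python sketch of mapping such logits to a predicted class (the logit values here are hypothetical, not actual model output):

```python
import math

def softmax(logits):
    """Convert raw logits into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical two-class sentiment logits
logits = [1.2, -0.8]
probs = softmax(logits)
predicted_class = probs.index(max(probs))
```

In practice one would apply the same transform to the tensor returned by the UFD classifier.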
gpl-3.0
['ufd', 'text-classification', 'undersupervised-feature-decomposition']
false
Training Results - For UFD - **Training Time: (Unsupervised training)** ~3 hours for 30 epochs on a single V100 GPU - **Training Time: (Supervised training)** ~3 hours for 60 epochs on a single V100 GPU
080dae8d6c029d49b1224e5d3dea5f74
gpl-3.0
['ufd', 'text-classification', 'undersupervised-feature-decomposition']
false
Model Parameters - **Model Weights:** [refer to documentation for details](https://sgnlp.aisingapore.net/docs/model/ufd.html) - **Model Config:** [refer to documentation for details](https://sgnlp.aisingapore.net/docs/model/ufd.html) - **Model Inputs:** Raw text. - **Model Outputs:** Array of logits with size equal to the number of classes. - **Model Size:** XLM-Roberta: ~2.2GB, Adaptor Domain: ~8.0MB, Adaptor Global: ~8.0MB, Feature Mapper: ~8.0MB, Classifier: ~9.1KB. - **Model Inference Info:** ~2 sec on Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz. - **Usage Scenarios:** Sentiment analysis for eCommerce with operations across multiple countries.
525520cb7d01e2ebc1d374247628fbb1
apache-2.0
['translation']
false
opus-mt-en-chk * source languages: en * target languages: chk * OPUS readme: [en-chk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-chk/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.eval.txt)
81ebb39272cded32f560fde720a2022d
creativeml-openrail-m
['text-to-image']
false
collage Dreambooth model trained by duja1 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: c123ollage (use that in your prompt)
a0e2d0d581b06f3e3bc823834b736711
apache-2.0
['image-classification', 'timm']
false
Model card for maxvit_small_tf_512.in1k An official MaxViT image classification model. Trained in TensorFlow on ImageNet-1k by the paper authors. Ported from the official TensorFlow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.
2f2aead0fac2e8951e8dc0a268e1a8d3
apache-2.0
['image-classification', 'timm']
false
Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 69.1 - GMACs: 67.3 - Activations (M): 383.8 - Image size: 512 x 512 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - **Dataset:** ImageNet-1k
677dd754921a19ddabda98554859c574
apache-2.0
['image-classification', 'timm']
false
Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('maxvit_small_tf_512.in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ```
9c576acf110661ff52ab33b757e53a0a
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'maxvit_small_tf_512.in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output print(o.shape) ```
39fe04b2ca82deae548c41e1cf5c655a
apache-2.0
['image-classification', 'timm']
false
Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'maxvit_small_tf_512.in1k', pretrained=True, num_classes=0,  # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor ```
cf43bb07ce5141a7cdd82939f5027851
mit
[]
false
luinv2 on Stable Diffusion This is the `<luin-waifu>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<luin-waifu> 0](https://huggingface.co/sd-concepts-library/luinv2/resolve/main/concept_images/0.jpeg) ![<luin-waifu> 1](https://huggingface.co/sd-concepts-library/luinv2/resolve/main/concept_images/2.jpeg) ![<luin-waifu> 2](https://huggingface.co/sd-concepts-library/luinv2/resolve/main/concept_images/4.jpeg) ![<luin-waifu> 3](https://huggingface.co/sd-concepts-library/luinv2/resolve/main/concept_images/1.jpeg) ![<luin-waifu> 4](https://huggingface.co/sd-concepts-library/luinv2/resolve/main/concept_images/3.jpeg)
3c4cfba4cd6612573e8a39c347fd8f6d
apache-2.0
['generated_from_trainer']
false
Dansk-wav2vec2-stt This model is a fine-tuned version of [Siyam/Dansk-wav2vec21](https://huggingface.co/Siyam/Dansk-wav2vec21) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.7500 - Wer: 0.3929
d12ade87ffc2b697d9c9b0fdf43708d6
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0298 | 4.26 | 400 | 0.8420 | 0.4579 | | 0.0479 | 8.51 | 800 | 0.8713 | 0.4461 | | 0.0387 | 12.77 | 1200 | 0.8307 | 0.4404 | | 0.0336 | 17.02 | 1600 | 0.8322 | 0.4144 | | 0.0322 | 21.28 | 2000 | 0.7493 | 0.4081 | | 0.0288 | 25.53 | 2400 | 0.7361 | 0.3951 | | 0.0264 | 29.79 | 2800 | 0.7500 | 0.3929 |
8c409ac7544c808f2063724a9e9db587
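The Wer column above is word error rate: the word-level edit distance between reference and hypothesis transcripts, divided by the number of reference words. A self-contained sketch with illustrative Danish sentences (not taken from common_voice):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(r)][len(h)] / len(r)

# One deleted word over a five-word reference -> WER 0.2
print(wer("det er en god dag", "det er en dag"))
```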
mit
['generated_from_trainer']
false
bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e4 This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8121 - Rouge1: 53.9237 - Rouge2: 34.5683 - Rougel: 36.5547 - Rougelsum: 51.0273 - Gen Len: 142.0
82f382a4888d01825e89ecc0379ff294
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 398 | 0.8673 | 53.562 | 34.4013 | 36.5393 | 50.7868 | 142.0 | | 0.826 | 2.0 | 796 | 0.8119 | 55.0909 | 36.5216 | 38.6034 | 52.718 | 142.0 | | 0.5377 | 3.0 | 1194 | 0.8268 | 54.0198 | 35.9154 | 38.1218 | 51.2782 | 142.0 | | 0.3817 | 4.0 | 1592 | 0.8121 | 53.9237 | 34.5683 | 36.5547 | 51.0273 | 142.0 |
e8e33fd0abc81e1a61fbcfa6c5f7d4ab
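The Rouge1 column above is unigram-overlap F1 between the generated and reference summaries. A minimal sketch of that computation on toy strings (not actual model output):

```python
from collections import Counter

def rouge1_f(reference, candidate):
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 5 of 6 unigrams match in each direction -> F1 = 5/6
print(round(rouge1_f("the cat sat on the mat", "the cat lay on the mat"), 4))
```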
cc-by-4.0
['question generation']
false
Model Card of `research-backup/t5-large-subjqa-vanilla-electronics-qg` This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) for the question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: electronics) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
eb8a0168223b5f8142cb6b686c598231
cc-by-4.0
['question generation']
false
Overview - **Language model:** [t5-large](https://huggingface.co/t5-large) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (electronics) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
8998fdce61bf171697f7e8844c22baa6
cc-by-4.0
['question generation']
false
model prediction ```python questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "research-backup/t5-large-subjqa-vanilla-electronics-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
a53cc9e9609c3bd62dfac9610540f805
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-large-subjqa-vanilla-electronics-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) | | Score | Type | Dataset | |:-----------|--------:|:------------|:-----------------------------------------------------------------| | BERTScore | 81.61 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 5.2 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 1.22 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 0.32 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 0 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 6.49 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 50.46 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 8.01 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
d79807cfb6129a6257fda7bdd47c04d3
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: electronics - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: t5-large - max_length: 512 - max_length_output: 32 - epoch: 1 - batch: 16 - lr: 1e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-large-subjqa-vanilla-electronics-qg/raw/main/trainer_config.json).
223d8f18254a098a275a2e196124a9e7
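Note that with `batch: 16` and `gradient_accumulation_steps: 8`, gradients are accumulated over 8 forward passes before each optimizer step, so each update effectively sees 128 examples:

```python
batch = 16
gradient_accumulation_steps = 8

# Each optimizer step averages gradients over this many examples
effective_batch_size = batch * gradient_accumulation_steps
print(effective_batch_size)  # 128
```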
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
Anything V3 Welcome to Anything V3 - a latent diffusion model for weebs. This model is intended to produce high-quality, highly detailed anime-style images with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags to generate images. e.g. **_1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden_**
b88f92bf6ec4d71ee633226b60a959fb
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion) pipeline documentation. You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](). ```python from diffusers import StableDiffusionPipeline import torch model_id = "Linaqruf/anything-v3.0" branch_name= "diffusers" pipe = StableDiffusionPipeline.from_pretrained(model_id, revision=branch_name, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "pikachu" image = pipe(prompt).images[0] image.save("./pikachu.png") ```
71775ce0c564214516c67ae8041a6aec
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
Examples Below are some examples of images generated using this model: **Anime Girl:** ![Anime Girl](https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/1girl.png) ``` 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden Steps: 50, Sampler: DDIM, CFG scale: 12 ``` **Anime Boy:** ![Anime Boy](https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/1boy.png) ``` 1boy, medium hair, blonde hair, blue eyes, bishounen, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden Steps: 50, Sampler: DDIM, CFG scale: 12 ``` **Scenery:** ![Scenery](https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/scenery.png) ``` scenery, shibuya tokyo, post-apocalypse, ruins, rust, sky, skyscraper, abandoned, blue sky, broken window, building, cloud, crane machine, outdoors, overgrown, pillar, sunset Steps: 50, Sampler: DDIM, CFG scale: 12 ```
7c2b326fd712c2f4a088293271346e9e
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
This is a mixed model made by IWillRemember, obtained from Discord. All credit goes to them for providing the models, thank you. Quote from IWillRemember: "...it's an amazingly accurate mix and it does almost everything really well if the right tags are used, the art style is really soft, photorealism, classicism, ghotic stuff and cyberpunk themed stuff are really... really good with it, personally i love it ... This is the full recipe: Anything V3 A (1a7df6b8) EasterE9 B (b56f765a) 0,5 sum merged A NAI (animefull-final-pruned) B (925997e9) 0,25 sum merged A F222 B (44bf0551) 0,25 sum merged A WD 1.3 B (eide58a9) 0,05 sum merged A SamdoesArt B (e02601f3) 0,05 sum Final result: RememberMix (c3a45486)"
39861dc600435dfb2c363ced333e8842
apache-2.0
['generated_from_trainer']
false
Tagged_One_100v3_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v3_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4863 - Precision: 0.2056 - Recall: 0.0896 - F1: 0.1248 - Accuracy: 0.8124
4e74f1e62c523fab4f391303c46f8ec7
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 26 | 0.6246 | 0.1111 | 0.0003 | 0.0005 | 0.7773 | | No log | 2.0 | 52 | 0.5272 | 0.1238 | 0.0296 | 0.0478 | 0.7948 | | No log | 3.0 | 78 | 0.4863 | 0.2056 | 0.0896 | 0.1248 | 0.8124 |
0ceacc10f7f166273f422c38cb14ad95
apache-2.0
['generated_from_trainer']
false
distil-I-upper This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6060 - Rmse: 0.7785 - Mse: 0.6060 - Mae: 0.6007
2cbf2a5f05370c6477a7fa12eb31783d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | 0.7219 | 1.0 | 492 | 0.6818 | 0.8257 | 0.6818 | 0.5909 | | 0.5932 | 2.0 | 984 | 0.6419 | 0.8012 | 0.6419 | 0.5838 | | 0.5874 | 3.0 | 1476 | 0.6058 | 0.7783 | 0.6058 | 0.6007 | | 0.5883 | 4.0 | 1968 | 0.6211 | 0.7881 | 0.6211 | 0.5875 | | 0.5838 | 5.0 | 2460 | 0.6060 | 0.7785 | 0.6060 | 0.6007 |
951bc1de716a5aab268df9008c77739f
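The Rmse, Mse and Mae columns above are standard regression metrics, and RMSE is simply the square root of MSE, as the paired values in each row reflect. A pure-Python sketch on toy values (not actual model predictions):

```python
import math

def regression_metrics(y_true, y_pred):
    """Return MSE, RMSE and MAE, as reported in the table above."""
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / len(errors)
    mae = sum(abs(e) for e in errors) / len(errors)
    return mse, math.sqrt(mse), mae

# Toy targets and predictions
mse, rmse, mae = regression_metrics([1.0, 2.0, 3.0], [1.5, 2.0, 2.0])
```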
apache-2.0
['translation']
false
gmq-eng * source group: North Germanic languages * target group: English * OPUS readme: [gmq-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-eng/README.md) * model: transformer * source language(s): dan fao isl nno nob nob_Hebr non_Latn swe * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.zip) * test set translations: [opus2m-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.test.txt) * test set scores: [opus2m-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.eval.txt)
14754d0d545543cecd4bdf2dcba9e465
apache-2.0
['translation']
false
System Info: - hf_name: gmq-eng - source_languages: gmq - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq', 'en'] - src_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.test.txt - src_alpha3: gmq - tgt_alpha3: eng - short_pair: gmq-en - chrF2_score: 0.72 - bleu: 58.1 - brevity_penalty: 0.982 - ref_len: 72641.0 - src_name: North Germanic languages - tgt_name: English - train_date: 2020-07-26 - src_alpha2: gmq - tgt_alpha2: en - prefer_old: False - long_pair: gmq-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
dcfc9d7500655976570620b002b35157
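The brevity_penalty field above follows the standard BLEU definition: 1.0 when the hypothesis corpus is at least as long as the reference, otherwise exp(1 - ref_len / hyp_len). A sketch, where the hypothesis length is an illustrative assumption chosen to roughly reproduce the reported 0.982 against ref_len 72641:

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """BLEU brevity penalty: penalizes hypotheses shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# hyp_len below is hypothetical, back-solved for illustration
bp = brevity_penalty(71345, 72641)
print(round(bp, 3))  # 0.982
```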
apache-2.0
['generated_from_trainer']
false
gpt-neo-125M-DOD-LOW This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.0427
c1868336ff0bbbf803ac9172cb6ac6b9
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 261 | 6.4768 | | 6.8863 | 2.0 | 522 | 6.1056 | | 6.8863 | 3.0 | 783 | 6.0427 |
a9523eb2fd3c55ddab971ae3010d7531
mit
['computer vision', 'GAN']
false
Face Frontalization is a generative computer vision task in which the model takes a photo of a person's head taken at an angle between -90 and 90 degrees, and produces an image of what that person's frontal (i.e. 0 degree) view of the face might look like. The present model was first released in [this repository](https://github.com/scaleway/frontalization) by [Scaleway](https://www.scaleway.com/), a European cloud provider originating from France. It has been previously discussed in a [Scaleway blog post](https://blog.scaleway.com/gpu-instances-using-deep-learning-to-obtain-frontal-rendering-of-facial-images/) and presented at [the DataXDay conference in Paris](https://www.youtube.com/watch?v=aL7rhJz8mAI). The model's GAN architecture was inspired by [the work of R. Huang et al](https://arxiv.org/abs/1704.04086).
9bb7b6bdd81f64f3ef8eb236e6a6f628
mit
['computer vision', 'GAN']
false
Model description The Face Frontalization model is the Generator part of a [GAN](https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf) that was trained in a supervised fashion on profile-frontal image pairs. The Discriminator was based on a fairly standard [DCGAN](https://arxiv.org/abs/1511.06434) architecture, where the input is a 128x128x3 image that is processed through multiple convolutional layers, to be classified as either Real or Fake. The Generator had to be modified in order to fit the supervised learning scenario. It consists of convolutional layers (the Encoder of the input image), followed by a 512-dimensional hidden representation that is then fed into the Decoder made up of deconvolutional layers, which produces the output image. For more details on the model's architecture, see [this blog post](https://blog.scaleway.com/gpu-instances-using-deep-learning-to-obtain-frontal-rendering-of-facial-images/).
45759e1498d5cad542481b2647345871
mit
['computer vision', 'GAN']
false
Intended uses & limitations The present Face Frontalization model was not intended to represent the state of the art for this machine learning task. Instead, the goals were: (a) to demonstrate the benefits of using a GAN for supervised machine learning tasks (whereas the original GAN is an unsupervised generative algorithm; see [this conference talk](https://www.youtube.com/watch?v=aL7rhJz8mAI) for more details); (b) to show how a complex generative computer vision project can be accomplished on a [Scaleway cloud RENDER-S instance](https://www.scaleway.com/en/gpu-instances/) within about a day.
914a8eadc2bfc10fff3c75b8d99043b2
mit
['computer vision', 'GAN']
false
How to use The Face Frontalization model is a saved PyTorch model that can be loaded provided the included *network* package is present in the directory. It takes in 3-channel color images resized to 128x128 pixels in the form of [N, 3, 128, 128] tensors (where N is the size of the batch). Ideally, the input images should be closely-cropped photos of faces, taken in good lighting conditions. Here is how the model can be used for inference with a *gradio* image widget, e.g. in a Jupyter notebook: ``` import gradio as gr import numpy as np import torch from torchvision import transforms from torch.autograd import Variable from PIL import Image import matplotlib.pyplot as plt import warnings warnings.filterwarnings('ignore') ```
f3eb6b9094d62ce9c7dc05b45d31bc5c
mit
['computer vision', 'GAN']
false
``` # Resize the input to 128x128 pixels (as required by the frontalization model) preprocess = transforms.Compose((transforms.ToPILImage(), transforms.Resize(size = (128, 128)), transforms.ToTensor())) input_tensor = torch.unsqueeze(preprocess(image), 0) ```
7fcb0528416764ddb54c1cf98e5b6160
mit
['computer vision', 'GAN']
false
``` # The generator outputs pixel values in [-1, 1], and this will need to get fixed before the output is displayed generated_image = saved_model(Variable(input_tensor.type('torch.FloatTensor'))) generated_image = generated_image.detach().squeeze().permute(1, 2, 0).numpy() generated_image = (generated_image + 1.0) / 2.0 return generated_image iface = gr.Interface(frontalize, gr.inputs.Image(type="numpy"), "image") iface.launch() ```
8fa79df528faeebc8d46137387d9ea39
mit
['computer vision', 'GAN']
false
Limitations and bias As mentioned in the **Intended uses** section, the present model's performance is not intended to compete with the state of the art. Additionally, as the training data had a disproportionately high number of images of caucasian and asian males in their 20s, the model does not perform as well when supplied with images of people not belonging to this limited demographic.
56ba831bf5849e1d10ecf4d92435cbc6
mit
['computer vision', 'GAN']
false
Training data The present model was trained on [the CMU Multi-PIE Face Database that is available commercially](https://www.cs.cmu.edu/afs/cs/project/PIE/MultiPie/Multi-Pie/Home.html). The input images were closely cropped to include the face of a person photographed at an angle between -90 and 90 degrees. The target frontal images were cropped and aligned so that the center of the person's left eye was at the same relative position in all of them. Having a precise alignment for the target images turned out to play a key role in the training of the model.
9b8c421a34fbf999006e716725693696
mit
['computer vision', 'GAN']
false
Training procedure The training of the model was performed in a similar manner to that of a regular unsupervised [GAN](https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf), except that in addition to the binary cross entropy loss for the Discriminator, a pixelwise loss function was introduced for the Generator (see [the blog post](https://blog.scaleway.com/gpu-instances-using-deep-learning-to-obtain-frontal-rendering-of-facial-images/) for details). The exact weights given to the L1 and L2 pixelwise losses, as well as the BCE (GAN) loss were as follows: ``` L1_factor = 1 L2_factor = 1 GAN_factor = 0.001 ``` The model was trained for 18 epochs, with the training batch size equal to 30. The following optimizers were used for the Discriminator and the Generator: ``` optimizerD = optim.Adam(netD.parameters(), lr = 0.0002, betas = (0.5, 0.999)) optimizerG = optim.Adam(netG.parameters(), lr = 0.0002, betas = (0.5, 0.999), eps = 1e-8) ```
4171741b6c88d0d255fe79251052938f
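Assuming the factors above combine as a weighted sum (the natural reading of the description, not the repository's exact code), the generator objective would look like this, with illustrative loss values:

```python
L1_factor = 1
L2_factor = 1
GAN_factor = 0.001

def generator_loss(l1_loss, l2_loss, bce_gan_loss):
    """Pixelwise L1/L2 terms dominate; the adversarial BCE term is heavily down-weighted."""
    return L1_factor * l1_loss + L2_factor * l2_loss + GAN_factor * bce_gan_loss

# Illustrative loss values: 0.10 + 0.05 + 0.001 * 0.7 = 0.1507
total = generator_loss(0.10, 0.05, 0.7)
```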
mit
['computer vision', 'GAN']
false
Evaluation results GANs are notoriously difficult to train, with the losses for the Discriminator and the Generator often failing to converge even when producing what looks to be a highly realistic result to a human eye. The pixelwise loss for the test images also serves as a poor indicator of the model's performance because any variation in the lighting between the real target photo and the generated image could result in a deceptively high discrepancy between the two. The best evaluation method that remains is the manual inspection of the generated results. We have found that the present model performs reasonably well on the test data from the CMU Multi-PIE Face Database (naturally, all of the photos of the individuals included in the test set were removed from training): ![test examples](https://github.com/scaleway/frontalization/raw/master/pretrained/test-Pie.jpg) (Top row: inputs; middle row: model outputs; bottom row: ground truth images)
eab9a2dc8ce0032b0a368ffee6f7c2c5
mit
['roberta-base', 'roberta-base-epoch_25']
false
RoBERTa, Intermediate Checkpoint - Epoch 25 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained as part of a work that studies how simple statistics from data, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_25.
480ed5ef3c73ae8f70afc4ab6ac9986e
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-TT2-exam This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0620 - Precision: 0.9222 - Recall: 0.9369 - F1: 0.9295 - Accuracy: 0.9835
d671370e9bd7301c474189bae6d8187d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2509 | 1.0 | 879 | 0.0733 | 0.8855 | 0.9212 | 0.9030 | 0.9777 | | 0.0505 | 2.0 | 1758 | 0.0618 | 0.9221 | 0.9330 | 0.9275 | 0.9827 | | 0.0309 | 3.0 | 2637 | 0.0620 | 0.9222 | 0.9369 | 0.9295 | 0.9835 |
b94b2f2b8a587edddb5e323eed68aa7e
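The F1 column above is the harmonic mean of the Precision and Recall columns; a quick check against the final-epoch row:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Final-epoch values from the table above
score = f1(0.9222, 0.9369)
print(round(score, 4))  # 0.9295, matching the reported F1
```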
mit
['torch']
false
How to use Here is how to use this model in PyTorch: ```python >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> >>> model_id = "rmihaylov/roberta-base-sentiment-bg" >>> model = AutoModel.from_pretrained(model_id, trust_remote_code=True) >>> tokenizer = AutoTokenizer.from_pretrained(model_id) >>> >>> inputs = tokenizer.batch_encode_plus(['Това е умно.', 'Това е тъпо.'], return_tensors='pt') >>> outputs = model(**inputs) >>> torch.softmax(outputs, dim=1).tolist() [[0.0004746630438603461, 0.9995253086090088], [0.9986956715583801, 0.0013043134240433574]] ```
4873ad4a1d7cdef38018153b5733a1e2
mit
['stable-diffusion', 'dreamfusion', 'text2mesh']
false
Stable-Dreamfusion A pytorch implementation of the text-to-3D model **Dreamfusion**, powered by the [Stable Diffusion](https://github.com/CompVis/stable-diffusion) text-to-2D model. The original paper's project page: [_DreamFusion: Text-to-3D using 2D Diffusion_](https://dreamfusion3d.github.io/). Colab notebook for usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1MXT3yfOFvO0ooKEfiUUvTKwUkrrlCHpF?usp=sharing) Examples generated from text prompt `a high quality photo of a pineapple` viewed with the GUI in real time: https://user-images.githubusercontent.com/25863658/194241493-f3e68f78-aefe-479e-a4a8-001424a61b37.mp4
6d92a950f0b3259fef07f0ac0a47fa0b
mit
['stable-diffusion', 'dreamfusion', 'text2mesh']
false
Important Notice This project is a **work-in-progress**, and contains lots of differences from the paper. Also, many features are still not implemented now. **The current generation quality cannot match the results from the original paper, and many prompts still fail badly!**
de3d0dc023c272a7d2a8a02459bc2b53
mit
['stable-diffusion', 'dreamfusion', 'text2mesh']
false
Notable differences from the paper * Since the Imagen model is not publicly available, we use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) to replace it (implementation from [diffusers](https://github.com/huggingface/diffusers)). Different from Imagen, Stable-Diffusion is a latent diffusion model, which diffuses in a latent space instead of the original image space. Therefore, we need the loss to propagate back from the VAE's encoder part too, which introduces extra time cost in training. Currently, 10000 training steps take about 3 hours to train on a V100. * We use the [multi-resolution grid encoder](https://github.com/NVlabs/instant-ngp/) to implement the NeRF backbone (implementation from [torch-ngp](https://github.com/ashawkey/torch-ngp)), which enables much faster rendering (~10FPS at 800x800). * We use the Adam optimizer with a larger initial learning rate.
88a6e9fbfc131aacefc62b55547dcdf5
mit
['stable-diffusion', 'dreamfusion', 'text2mesh']
false
Install ```bash git clone https://github.com/ashawkey/stable-dreamfusion.git cd stable-dreamfusion ``` **Important**: To download the Stable Diffusion model checkpoint, you should provide your [access token](https://huggingface.co/settings/tokens). You could choose either of the following ways: * Run `huggingface-cli login` and enter your token. * Create a file called `TOKEN` under this directory (i.e., `stable-dreamfusion/TOKEN`) and copy your token into it.
15b96f12982b12f492d3c68cfaa4e6a3
mit
['stable-diffusion', 'dreamfusion', 'text2mesh']
false
dreamfields (CLIP) setting
```bash
python main.py --text "a hamburger" --workspace trial_clip -O --guidance clip
python main.py --text "a hamburger" --workspace trial_clip -O --test --gui --guidance clip
```
99dfc580ab92fc1d3d09add8e66128f3
mit
['stable-diffusion', 'dreamfusion', 'text2mesh']
false
Code organization & Advanced tips This is a simple description of the most important implementation details. If you are interested in improving this repo, this might be a starting point. Any contribution would be greatly appreciated! * The SDS loss is located at `./nerf/sd.py > StableDiffusion > train_step`: ```python
65284ef093a06747c1a205e989f21d76
mit
['stable-diffusion', 'dreamfusion', 'text2mesh']
false
# 3. the SDS loss: since the UNet part is ignored and cannot simply be autodiffed, we manually set the grad for latents.
    w = self.alphas[t] ** 0.5 * (1 - self.alphas[t])
    grad = w * (noise_pred - noise)
    latents.backward(gradient=grad, retain_graph=True)
    ```
* Other regularizations are in `./nerf/utils.py > Trainer > train_step`.
* The generation seems quite sensitive to regularizations on weights_sum (alphas for each ray). The original opacity loss tends to make NeRF disappear (zero density everywhere), so we use an entropy loss to replace it for now (encourages alpha to be either 0 or 1).
* NeRF rendering core function: `./nerf/renderer.py > NeRFRenderer > run_cuda`.
* The occupancy-grid-based training acceleration (instant-ngp-like, enabled by `--cuda_ray`) may harm the generation progress, since once a grid cell is marked as empty, rays won't pass through it later...
* Not using `--cuda_ray` also works now:
```bash
a3202fe198afcd88ba63ae6a4ba182a0
mit
['stable-diffusion', 'dreamfusion', 'text2mesh']
false
# faster training, but slower rendering
```
Training is faster if only 128 points are sampled uniformly per ray (5h --> 2.5h). More testing is needed...
* Shading & normal evaluation: `./nerf/network*.py > NeRFNetwork > forward`. The current implementation harms training and is disabled.
* Light direction: the current implementation uses a plane light source instead of a point light source...
* View-dependent prompting: `./nerf/provider.py > get_view_direction`.
* Use `--angle_overhead, --angle_front` to set the borders. How to better divide front/back/side regions?
* Network backbone (`./nerf/network*.py`) can be chosen by the `--backbone` option, but `tcnn` and `vanilla` are not well tested.
* Spatial density bias (gaussian density blob): `./nerf/network*.py > NeRFNetwork > gaussian`.
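The manual gradient injection described above (`latents.backward(gradient=grad, ...)`) can be illustrated on toy tensors. This is an illustrative sketch only — in the real code the gradient flows back through the VAE encoder and the NeRF renderer, which here are replaced by a dummy `* 2.0` op:

```python
import torch

x = torch.randn(1, 4, 8, 8, requires_grad=True)  # stand-in for upstream parameters
latents = x * 2.0                                # stand-in for the VAE encoder

noise_pred = torch.randn_like(latents)  # pretend UNet prediction (no grad needed)
noise = torch.randn_like(latents)       # the noise that was added
w = 0.5                                 # stand-in for alphas[t]**0.5 * (1 - alphas[t])
grad = w * (noise_pred - noise)

# Inject grad as d(loss)/d(latents); autograd then backpropagates it through
# the "encoder" (here just *2.0) without ever defining a scalar SDS loss.
latents.backward(gradient=grad)
print(torch.allclose(x.grad, 2.0 * grad))  # True
```

This is exactly why the UNet can be excluded from autodiff: the SDS update only needs `noise_pred - noise` as a ready-made gradient at the latents, not gradients through the UNet itself.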
a54f8e21945e0b17aebb94f58f4bcdcb
mit
['stable-diffusion', 'dreamfusion', 'text2mesh']
false
Acknowledgement * The amazing original work: [_DreamFusion: Text-to-3D using 2D Diffusion_](https://dreamfusion3d.github.io/). ``` @article{poole2022dreamfusion, author = {Poole, Ben and Jain, Ajay and Barron, Jonathan T. and Mildenhall, Ben}, title = {DreamFusion: Text-to-3D using 2D Diffusion}, journal = {arXiv}, year = {2022}, } ``` * Huge thanks to the [Stable Diffusion](https://github.com/CompVis/stable-diffusion) and the [diffusers](https://github.com/huggingface/diffusers) library. ``` @misc{rombach2021highresolution, title={High-Resolution Image Synthesis with Latent Diffusion Models}, author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer}, year={2021}, eprint={2112.10752}, archivePrefix={arXiv}, primaryClass={cs.CV} } @misc{von-platen-etal-2022-diffusers, author = {Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Thomas Wolf}, title = {Diffusers: State-of-the-art diffusion models}, year = {2022}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/huggingface/diffusers}} } ``` * The GUI is developed with [DearPyGui](https://github.com/hoffstadt/DearPyGui).
e8433f9b84c4a477f0a9b08620963a78
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8076 - Matthews Correlation: 0.5513
7e02f397ef6dd1a3dc1b3076d2eb9980
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5264 | 1.0 | 535 | 0.5380 | 0.4135 |
| 0.3486 | 2.0 | 1070 | 0.5007 | 0.4923 |
| 0.2404 | 3.0 | 1605 | 0.5373 | 0.5358 |
| 0.1757 | 4.0 | 2140 | 0.7435 | 0.5414 |
| 0.122 | 5.0 | 2675 | 0.8076 | 0.5513 |
bc7f03aa5630dea7611b9cb9f984dcdd
creativeml-openrail-m
['text-to-image', 'stable-diffusion', 'dreambooth']
false
SD-1.5-TheRock Dreambooth model trained by Azuremis with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook. This is a Stable Diffusion model fine-tuned via DreamBooth to produce images of The Rock (Dwayne Johnson). [The Rock AI Avatar Generator](https://github.com/azuremis/avatar_the_rock/tree/Doc/Readme_Updates) uses this model for generations. You can check out a web demo of this project [here](https://avatartherock-production.up.railway.app/) Sample generated pictures of the model in action: ![0](https://huggingface.co/Azuremis/sd-1-5-therock/resolve/main/sample_images/worried_rock_1.png) ![1](https://huggingface.co/Azuremis/sd-1-5-therock/resolve/main/sample_images/video_game_rock_2.png) ![2](https://huggingface.co/Azuremis/sd-1-5-therock/resolve/main/sample_images/nerd_rock_2.png) ![3](https://huggingface.co/Azuremis/sd-1-5-therock/resolve/main/sample_images/chaacter_portrait_1.png) ![4](https://huggingface.co/Azuremis/sd-1-5-therock/resolve/main/sample_images/wizard_rock.png) ![5](https://huggingface.co/Azuremis/sd-1-5-therock/resolve/main/sample_images/suit_rock.png) ![6](https://huggingface.co/Azuremis/sd-1-5-therock/resolve/main/sample_images/BW_rock_1.png) ![7](https://huggingface.co/Azuremis/sd-1-5-therock/resolve/main/sample_images/prince_rock_2.png) ![8](https://huggingface.co/Azuremis/sd-1-5-therock/resolve/main/sample_images/shades_rock.png) ![9](https://huggingface.co/Azuremis/sd-1-5-therock/resolve/main/sample_images/bangkok_portrait_2.png) ![10](https://huggingface.co/Azuremis/sd-1-5-therock/resolve/main/sample_images/swole_rock_1.png)
7663391231203118a1a76bb09c8876a8
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4275, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16
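The `PolynomialDecay` schedule in the config above interpolates from the initial learning rate down to the end rate over `decay_steps` (linearly, since `power` is 1.0 and `cycle` is off). A minimal sketch of that formula, independent of Keras:

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0, decay_steps=4275, power=1.0):
    """Value of a non-cycling polynomial decay schedule at `step`."""
    step = min(step, decay_steps)  # clamp: the schedule holds end_lr after decay_steps
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))     # 2e-05 (the configured initial_learning_rate)
print(polynomial_decay(4275))  # 0.0   (the configured end_learning_rate)
```

With `power=1.0` this is just a straight line from 2e-05 to 0 over the 4275 decay steps.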
b1b2bf847a6114ecaf3ebea680465804
apache-2.0
['generated_from_keras_callback']
false
Rocketknight1/distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2026 - Validation Loss: 0.0726 - Train Precision: 0.8945 - Train Recall: 0.9220 - Train F1: 0.9081 - Train Accuracy: 0.9793 - Epoch: 0
baa12a6aafd636a207e77d8b6c448d20
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.2026 | 0.0726 | 0.8945 | 0.9220 | 0.9081 | 0.9793 | 0 |
5e55fee70a67083034c2abb0ad4dda60
apache-2.0
['image-classification', 'timm']
false
Model card for maxvit_large_tf_512.in1k An official MaxViT image classification model. Trained in tensorflow on ImageNet-1k by paper authors. Ported from official Tensorflow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.
fe939eb8fca6ad449576bb3f716456b8
apache-2.0
['image-classification', 'timm']
false
Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 212.3 - GMACs: 244.8 - Activations (M): 942.1 - Image size: 512 x 512 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - **Dataset:** ImageNet-1k
a89f38140829b49bdac24b493000f06e
apache-2.0
['image-classification', 'timm']
false
Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('maxvit_large_tf_512.in1k', pretrained=True)
model = model.eval()

# get model-specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
3b7a8d2e79123329aaeabc1c1411e29f
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'maxvit_large_tf_512.in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model-specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # a list of feature maps, one per stage

for o in output:
    print(o.shape)
```
d6d8e309b82d7670531ab5d56deec5ce
apache-2.0
['image-classification', 'timm']
false
Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'maxvit_large_tf_512.in1k',
    pretrained=True,
    num_classes=0,  # remove the classifier nn.Linear
)
model = model.eval()

# get model-specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # a (batch_size, num_features) tensor
```
36296f607b0e8f893f516149c819595f
apache-2.0
['generated_from_keras_callback']
false
caotianyu1996/bert_finetuned_ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0247 - Validation Loss: 0.0593 - Epoch: 2
ab17df2e90fc95c593803876c1111e33
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1275 | 0.0574 | 0 | | 0.0414 | 0.0569 | 1 | | 0.0247 | 0.0593 | 2 |
f9f29eef6663fc4287089168774411d3
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'ug']
false
XLS-R-300M Uyghur CV8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UG dataset. It achieves the following results on the evaluation set: - Loss: 0.2026 - Wer: 0.3248
40ef9bb2368df0049db3e260f95a15da
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'ug']
false
Training procedure The featurization layers of the XLS-R model are frozen while tuning a final CTC/LM layer on the Uyghur CV8 example sentences. A ramped learning rate is used with an initial warmup phase of 2000 steps, a max of 0.0001, and cooling back towards 0 for the remainder of the 9400 steps (100 epochs).
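The ramp described above — linear warmup to the peak over 2000 steps, then cooling back toward 0 over the remaining steps — can be sketched as a plain function. This is a hypothetical illustration of the shape, not the trainer's actual scheduler code:

```python
def ramped_lr(step, peak=1e-4, warmup_steps=2000, total_steps=9400):
    """Linear warmup to `peak`, then linear decay to 0 at `total_steps`."""
    if step < warmup_steps:
        return peak * step / warmup_steps
    return peak * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(ramped_lr(1000))  # halfway through warmup -> 5e-05
print(ramped_lr(9400))  # end of training -> 0.0
```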
1cfe3c3752c74ee1cdaa2b8b4cd26af4
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'ug']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP
7859af31936a43723c87d096f5819264
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'ug']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.3036 | 5.32 | 500 | 3.2628 | 1.0 | | 2.9734 | 10.63 | 1000 | 2.5677 | 0.9980 | | 1.3466 | 15.95 | 1500 | 0.4455 | 0.6306 | | 1.2424 | 21.28 | 2000 | 0.3603 | 0.5301 | | 1.1655 | 26.59 | 2500 | 0.3165 | 0.4740 | | 1.1026 | 31.91 | 3000 | 0.2930 | 0.4400 | | 1.0655 | 37.23 | 3500 | 0.2675 | 0.4159 | | 1.0239 | 42.55 | 4000 | 0.2580 | 0.3913 | | 0.9938 | 47.87 | 4500 | 0.2373 | 0.3698 | | 0.9655 | 53.19 | 5000 | 0.2379 | 0.3675 | | 0.9374 | 58.51 | 5500 | 0.2486 | 0.3795 | | 0.9065 | 63.83 | 6000 | 0.2243 | 0.3405 | | 0.888 | 69.15 | 6500 | 0.2157 | 0.3277 | | 0.8646 | 74.47 | 7000 | 0.2103 | 0.3288 | | 0.8602 | 79.78 | 7500 | 0.2088 | 0.3238 | | 0.8442 | 85.11 | 8000 | 0.2045 | 0.3266 | | 0.8335 | 90.42 | 8500 | 0.2038 | 0.3241 | | 0.8288 | 95.74 | 9000 | 0.2024 | 0.3280 |
d92cca9d442e0348c99d0cb5557b93d0
mit
['generated_from_trainer']
false
Klassifizierung-Gewerke This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0964 - F1: 0.9822
e19b00de95dc3d2f80dafa9d1c985db1
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6216 | 1.0 | 91 | 0.1944 | 0.9415 | | 0.1465 | 2.0 | 182 | 0.1180 | 0.9695 | | 0.0651 | 3.0 | 273 | 0.0964 | 0.9822 |
a38027683ced7d714a4bf2791bc7617c
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_cola This model is a fine-tuned version of [gokuls/mobilebert_sa_pre-training-complete](https://huggingface.co/gokuls/mobilebert_sa_pre-training-complete) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.3784 - Matthews Correlation: 0.5440
4770a0fff9cf63f0e49eb48f457d31d3
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6286 | 1.0 | 67 | 0.4351 | 0.4469 | | 0.3982 | 2.0 | 134 | 0.4004 | 0.4858 | | 0.2722 | 3.0 | 201 | 0.3867 | 0.5372 | | 0.2056 | 4.0 | 268 | 0.3784 | 0.5440 | | 0.1649 | 5.0 | 335 | 0.4274 | 0.5624 | | 0.135 | 6.0 | 402 | 0.4440 | 0.5261 | | 0.1174 | 7.0 | 469 | 0.4543 | 0.5495 | | 0.108 | 8.0 | 536 | 0.3885 | 0.5506 | | 0.1002 | 9.0 | 603 | 0.4125 | 0.5423 |
8045a976dc47be846d5648fcb8d4d2e4
creativeml-openrail-m
['text-to-image']
false
model by estelleflores ![image 20](https://i.ibb.co/6b0ZBtr/1.png) This is a Stable Diffusion 2 model fine-tuned to the CRIsimsEstelle concept taught to Stable Diffusion with Dreambooth. ![image 21](https://i.ibb.co/7tL6Vs4/59.png) It can be used by modifying the `instance_prompt`: **3d render in \<cri-sims> style**, or just using the initializer '\<cri-sims> style' somewhere in your prompt. ![image 22](https://i.ibb.co/H21FX8Q/37.png) Images used for training this concept come from the [project Contain Real Ingredients](https://teia.art/estelle), an art practice inside the game The Sims 4 by artist Estelle Flores: ![image 0](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/13.jpeg) ![image 1](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/0.jpeg) ![image 2](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/7.jpeg) ![image 3](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/11.jpeg) ![image 4](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/10.jpeg) ![image 5](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/2.jpeg) ![image 6](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/8.jpeg) ![image 7](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/3.jpeg) ![image 8](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/12.jpeg) ![image 9](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/4.jpeg) ![image 10](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/5.jpeg) ![image 11](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/1.jpeg) ![image 12](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/14.jpeg) ![image 13](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/9.jpeg) ![image 14](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/17.jpeg) ![image 15](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/16.jpeg) ![image 16](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/18.jpeg) ![image 17](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/6.jpeg) ![image 18](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/15.jpeg) ![image 19](https://huggingface.co/sd-dreambooth-library/crisimsestelle/resolve/main/concept_images/19.jpeg) You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
6313a5487b3513aa0d718760dc66d012
apache-2.0
['generated_from_keras_callback']
false
AmitBHuji/mt5-small-finetuned-mt5-simplification-1epoch This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 6.2240 - Epoch: 7
940393c1fe79c7012e38396cb2a7b949
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 1192, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16
93c6e1ac81e0ce58785f1a06eb2fde89
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Epoch | |:----------:|:-----:| | 15.1244 | 0 | | 9.6794 | 1 | | 7.9758 | 2 | | 7.1858 | 3 | | 6.6506 | 4 | | 6.5284 | 5 | | 6.2093 | 6 | | 6.2240 | 7 |
f392c7e359c7ed7233e45eb68f2f1a27
mit
[]
false
Anime girl on Stable Diffusion This is the `<anime-girl>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<anime-girl> 0](https://huggingface.co/sd-concepts-library/anime-girl/resolve/main/concept_images/0.jpeg) ![<anime-girl> 1](https://huggingface.co/sd-concepts-library/anime-girl/resolve/main/concept_images/4.jpeg) ![<anime-girl> 2](https://huggingface.co/sd-concepts-library/anime-girl/resolve/main/concept_images/1.jpeg) ![<anime-girl> 3](https://huggingface.co/sd-concepts-library/anime-girl/resolve/main/concept_images/3.jpeg) ![<anime-girl> 4](https://huggingface.co/sd-concepts-library/anime-girl/resolve/main/concept_images/2.jpeg)
d5cdd6fd9dc1fc31053d14ffc639fc2c
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/csmsc_tts_train_fastspeech2_raw_phn_pypinyin_g2p_phone_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4031953/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
27e7bd6b7189325c368357684adebd79
apache-2.0
['generated_from_keras_callback']
false
train_basic_M_V3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
55f790ba553dac520240bea1162b17d5
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 204258, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32
ae5fd46df338285d5aa0beb25a77a752
['apache-2.0', 'bsd-3-clause']
['summarization', 'summary', 'booksum', 'long-document', 'long-form']
false
pszemraj/pegasus-x-large-book-summary <a href="https://colab.research.google.com/gist/pszemraj/6c326c0649233ab017d63adc36958d1a/pegasus-x-large-booksum-demo.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> Get SparkNotes-esque summaries of arbitrary text! Due to the model size, it's recommended to try it out in Colab (linked above), as the API textbox may time out. This model is a fine-tuned version of [google/pegasus-x-large](https://huggingface.co/google/pegasus-x-large) on the `kmfoda/booksum` dataset for approximately eight epochs.
55968dd369a689a6e9fc54a1b098e295
['apache-2.0', 'bsd-3-clause']
['summarization', 'summary', 'booksum', 'long-document', 'long-form']
false
Epochs 5 & 6 The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: _ADAN_ using lucidrains' `adan-pytorch` with default betas - lr_scheduler_type: constant_with_warmup - data type: TF32 - num_epochs: 2
39295523f61a838acd421a8b3739f833
['apache-2.0', 'bsd-3-clause']
['summarization', 'summary', 'booksum', 'long-document', 'long-form']
false
Epochs 7 & 8 - epochs 5 & 6 were trained with 12288 tokens input - this fixes that with 2 epochs at 16384 tokens input The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: _ADAN_ using lucidrains' `adan-pytorch` with default betas - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2
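In both phases, the listed total train batch size is the per-device batch size multiplied by the gradient-accumulation steps (with any multi-GPU factor already folded into the totals shown). A quick check of the figures above:

```python
def total_batch_size(per_device, grad_accum_steps, n_devices=1):
    # Effective batch size seen by each optimizer update.
    return per_device * grad_accum_steps * n_devices

print(total_batch_size(4, 32))  # epochs 5 & 6 -> 128
print(total_batch_size(4, 16))  # epochs 7 & 8 -> 64
```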
2bd299e67ee9d2753f046e8fde9f898d
apache-2.0
['generated_from_keras_callback']
false
veb/twitch-distilbert-base-cased-finetuned This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.5140 - Validation Loss: 5.4524 - Epoch: 0
588d0c0142b0eb26aa53dc194df4a01d
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -982, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16
02658e706920d8575cfc8b6097b4546c
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4102 - Wer: 0.3165
22003e1b803c50c1aaea7e59c8eb4dfb
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.9393 | 3.67 | 400 | 0.6784 | 0.7123 | | 0.4104 | 7.34 | 800 | 0.4521 | 0.4865 | | 0.1929 | 11.01 | 1200 | 0.4470 | 0.4802 | | 0.1301 | 14.68 | 1600 | 0.4377 | 0.4384 | | 0.0999 | 18.35 | 2000 | 0.4391 | 0.4067 | | 0.0799 | 22.02 | 2400 | 0.4073 | 0.3456 | | 0.0624 | 25.69 | 2800 | 0.4039 | 0.3286 | | 0.0491 | 29.36 | 3200 | 0.4102 | 0.3165 |
64ebfdbbb7060302fbc883cd6df2e7a6
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2t_fr_vp-100k_s509 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
979ac5538b25475dc175349f3a99618e
other
['vision', 'image-segmentation']
false
Mask2Former Mask2Former model trained on ADE20k semantic segmentation (base-IN21k version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/). Disclaimer: The team releasing Mask2Former did not write a model card for this model, so this model card has been written by the Hugging Face team.
b6d042913990a5358dddcf558784c4df
other
['vision', 'image-segmentation']
false
load Mask2Former fine-tuned on ADE20k semantic segmentation processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-IN21k-ade-semantic") model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-IN21k-ade-semantic") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs)
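After the forward pass above, the `transformers` processor turns the raw outputs into a per-pixel label map (via `processor.post_process_semantic_segmentation`). At its core that map is an argmax over per-pixel class scores; a toy illustration of that final step on hand-made tensors (not the model's actual post-processing code):

```python
import torch

# Toy per-pixel class scores: (num_classes=2, H=2, W=2)
scores = torch.tensor([[[0.1, 0.9],
                        [0.8, 0.2]],
                       [[0.9, 0.1],
                        [0.2, 0.8]]])

# The predicted segmentation map is the argmax over the class axis.
seg_map = scores.argmax(dim=0)
print(seg_map.tolist())  # [[1, 0], [0, 1]]
```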
a83becc384c7e9d37a7b63170a06a412
mit
['SlovakBERT']
false
SlovakBERT (base-sized model) SlovakBERT pretrained model on Slovak language using a masked language modeling (MLM) objective. This model is case-sensitive: it makes a difference between slovensko and Slovensko.
5de9f5a9ddb830ace711e9c1381a69e5
mit
['SlovakBERT']
false
Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. **IMPORTANT**: The model was not trained on the “ and ” (curly double quote) characters, so before tokenizing text it is advised to replace every “ and ” with a plain " (straight double quote).
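A minimal sketch of the recommended preprocessing step, replacing curly quotes before tokenization:

```python
def normalize_quotes(text: str) -> str:
    """Replace curly double quotes (U+201C, U+201D) with a plain double quote."""
    return text.replace("\u201c", '"').replace("\u201d", '"')

print(normalize_quotes("Povedal \u201cahoj\u201d."))  # Povedal "ahoj".
```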
28073e5d3e8b62dce4880f786f4a496b