license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
other
['vision', 'image-segmentation']
false
Mask2Former This is a Mask2Former model trained on Cityscapes semantic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/). Disclaimer: The team releasing Mask2Former did not write a model card for this model, so this model card has been written by the Hugging Face team.
3f956b9e1d387e4884454efaef3305b1
other
['vision', 'image-segmentation']
false
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# load Mask2Former fine-tuned on Cityscapes semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-semantic")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
```
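The forward pass above returns class and mask logits rather than a finished segmentation map; the processor's `post_process_semantic_segmentation` method merges them into per-pixel class ids. A self-contained sketch of that last step:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-semantic")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# post_process_semantic_segmentation combines the class and mask queries into a
# single (height, width) tensor of per-pixel class ids; target_sizes takes
# (height, width), hence image.size[::-1]
semantic_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(semantic_map.shape)
```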
735ae7c11c873a87809c91d83d8f731c
creativeml-openrail-m
['text-to-image']
false
clay_icon Dreambooth model trained by andrewburns with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: clay (use that on your prompt) ![clay 0](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%281%29.jpg)![clay 1](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%282%29.jpg)![clay 2](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%283%29.jpg)![clay 3](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%284%29.jpg)![clay 4](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%285%29.jpg)![clay 5](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%286%29.jpg)![clay 6](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%287%29.jpg)![clay 7](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%288%29.jpg)![clay 8](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%289%29.jpg)![clay 9](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2810%29.jpg)![clay 10](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2811%29.jpg)![clay 11](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2812%29.jpg)![clay 12](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2813%29.jpg)![clay 13](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2814%29.jpg)![clay 
14](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2815%29.jpg)![clay 15](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2816%29.jpg)![clay 16](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2817%29.jpg)![clay 17](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2818%29.jpg)![clay 18](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2819%29.jpg)![clay 19](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2820%29.jpg)![clay 20](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2821%29.jpg)![clay 21](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2822%29.jpg)![clay 22](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2823%29.jpg)![clay 23](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2824%29.jpg)![clay 24](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2825%29.jpg)![clay 25](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2826%29.jpg)![clay 26](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2827%29.jpg)![clay 27](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2828%29.jpg)![clay 28](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2829%29.jpg)![clay 29](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2830%29.jpg)![clay 30](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2831%29.jpg)![clay 31](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2832%29.jpg)![clay 
32](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2833%29.jpg)![clay 33](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2834%29.jpg)![clay 34](https://huggingface.co/andrewburns/clay-icon/resolve/main/concept_images/clay_sks_%2835%29.jpg)
b300efefa3743a02dc0baa106cf6d10d
apache-2.0
['generated_from_trainer']
false
w2v2-ami This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8686 - Wer: 0.2861
c6a69c875403203fad5b0ea9d5642d93
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.6021 | 3.07 | 500 | 2.9176 | 0.9997 | | 2.5006 | 6.13 | 1000 | 1.0535 | 0.3617 | | 0.9926 | 9.2 | 1500 | 0.8614 | 0.3036 | | 0.809 | 12.27 | 2000 | 0.8676 | 0.2921 | | 0.73 | 15.34 | 2500 | 0.8190 | 0.2966 | | 0.6658 | 18.4 | 3000 | 0.8707 | 0.2900 | | 0.6295 | 21.47 | 3500 | 0.8660 | 0.2821 | | 0.6089 | 24.54 | 4000 | 0.8767 | 0.2829 | | 0.5974 | 27.61 | 4500 | 0.8686 | 0.2861 |
d9140ce8365990a00fa8249b87ac1c12
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 0.4842 | 0.8129 |
22b002bbf32de1b7ce3116102aaef746
apache-2.0
['translation']
false
opus-mt-pis-fr * source languages: pis * target languages: fr * OPUS readme: [pis-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.eval.txt)
73740c83bb12083ad248718f10d22a25
apache-2.0
['image-classification', 'timm', 'normalization-free', 'efficient-channel-attention']
false
Model description This model variant was slimmed down from the original F0 variant in the paper for improved runtime characteristics (throughput, memory use) in PyTorch, on a GPU accelerator. It utilizes [Efficient Channel Attention (ECA)](https://arxiv.org/abs/1910.03151) instead of Squeeze-Excitation. It also features SiLU activations instead of the usual GELU. Like other models in the NF family, this model contains no normalization layers (batch, group, etc). The models make use of [Weight Standardized](https://arxiv.org/abs/1903.10520) convolutions with additional scaling values in lieu of normalization layers.
8196ab8818fa9423c2a56140f06ed7ed
apache-2.0
['image-classification', 'timm', 'normalization-free', 'efficient-channel-attention']
false
Intended uses & limitations You can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head to fine-tune it on a downstream task (another classification task with different labels, image segmentation or object detection, to name a few).
256fd4101cf934baf58e2c53d4f1f0a6
apache-2.0
['image-classification', 'timm', 'normalization-free', 'efficient-channel-attention']
false
How to use You can use this model with the usual factory method in [`timm`](https://github.com/rwightman/pytorch-image-models): ```python
import PIL
import timm
import torch

model = timm.create_model("hf_hub:timm/eca_nfnet_l0")
config = model.default_cfg
img_size = config["test_input_size"][-1] if "test_input_size" in config else config["input_size"][-1]

transform = timm.data.transforms_factory.transforms_imagenet_eval(
    img_size=img_size,
    interpolation=config["interpolation"],
    mean=config["mean"],
    std=config["std"],
    crop_pct=config["crop_pct"],
)

img = PIL.Image.open(path_to_an_image)
img = img.convert("RGB")
input_tensor = transform(img)  # fixed: was transform(cat_img), but `cat_img` is undefined
input_tensor = input_tensor.unsqueeze(0)
```
35452b9d2227e79afbc25c8d4b495c5a
apache-2.0
['image-classification', 'timm', 'normalization-free', 'efficient-channel-attention']
false
Limitations and bias The training images in the dataset are usually photos clearly representing one of the 1,000 labels. The model will probably not generalize well on drawings or images containing multiple objects with different labels. The training images in the dataset come mostly from the US (45.4%) and Great Britain (7.6%). As such the model or models created by fine-tuning this model will work better on images picturing scenes from these countries (see [this paper](https://arxiv.org/abs/1906.02659) for examples). More generally, [recent research](https://arxiv.org/abs/2010.15052) has shown that even models trained in an unsupervised fashion on ImageNet (i.e. without using the labels) will pick up racial and gender bias represented in the training images.
8bac53e6e33df211def545d17afc3b1f
apache-2.0
['image-classification', 'timm', 'normalization-free', 'efficient-channel-attention']
false
Training procedure For stability during training it is highly recommended to train all NFNet variants with gradient clipping enabled. This model was trained with an Adaptive Gradient Clipping (AGC) factor of 0.015 as described in [the paper](https://arxiv.org/abs/2102.06171). Similar to the paper, a cosine learning rate decay was employed using SGD w/ nesterov. Moderate to heavy augmentation ([RandAugment](https://arxiv.org/abs/1909.13719)) and regularization (dropout, stochastic depth) is recommended for training.
6096545d667e8829ce9b4614de3d024b
apache-2.0
['image-classification', 'timm', 'normalization-free', 'efficient-channel-attention']
false
BibTeX entry and citation info NFNet model architecture: ```bibtex @article{brock2021high, author={Andrew Brock and Soham De and Samuel L. Smith and Karen Simonyan}, title={High-Performance Large-Scale Image Recognition Without Normalization}, journal={arXiv preprint arXiv:2102.06171}, year={2021} } ``` L0 model variant & pretraining: ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
6bb854fb2b9128f26074ed2bf9b7504b
apache-2.0
['translation']
false
opus-mt-bzs-fi * source languages: bzs * target languages: fi * OPUS readme: [bzs-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.eval.txt)
66014a265f50095a38c846a3a0ce8368
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
beyaynetu Dreambooth model trained by Tinsae with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/Tinsae/beyaynetu/resolve/main/sample_images/beyaynetu_(24).png) ![1](https://huggingface.co/Tinsae/beyaynetu/resolve/main/sample_images/beyaynetu_(12).png) ![2](https://huggingface.co/Tinsae/beyaynetu/resolve/main/sample_images/beyaynetu_(20).png) ![3](https://huggingface.co/Tinsae/beyaynetu/resolve/main/sample_images/beyaynetu_(16).png)
aed5cda925dd901270a9ac9f33f33c69
apache-2.0
['generated_from_trainer']
false
tuto-bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0827 - Precision: 0.9380 - Recall: 0.9525 - F1: 0.9452 - Accuracy: 0.9867
69b5ff65c4f952f2300933374af4df24
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0218 | 1.0 | 1756 | 0.0714 | 0.9372 | 0.9524 | 0.9447 | 0.9862 | | 0.0123 | 2.0 | 3512 | 0.0761 | 0.9347 | 0.9510 | 0.9428 | 0.9859 | | 0.0063 | 3.0 | 5268 | 0.0827 | 0.9380 | 0.9525 | 0.9452 | 0.9867 |
2b606371ef95c5953dfb680dd02734aa
creativeml-openrail-m
[]
false
JellyCute7 (artist) Style [Hypernetwork] Hypernetwork trained on art by artist [JellyCute7](https://www.pixiv.net/en/users/1053112). [!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/stricky)
565554d8151bc7c95d9dfd393c494f1f
creativeml-openrail-m
[]
false
Settings ``` Model: NAI Layer structure: (1, 2, 1) Activation function: relu Layer normalization: False Use dropout: False Raw dataset size: 208 images Final dataset size: 832 images Size: 512x512 Create flipped copies: True Split oversized images: True Captions: DeepBooru Learning rate: 0.000005 -> 13000 steps Recommended: 13000 steps ```
7fb027fd53c2706ba918a3a708c3e073
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2309 - Accuracy: 0.9319 - F1: 0.9323
c1e57d445362f4523e998132907ba0b3
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2r_en_vp-100k_gender_male-0_female-10_s169 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
29e1177aaafbfa00c1950ed67d024ab3
apache-2.0
[]
false
Model Description LUKE (Language Understanding with Knowledge-based Embeddings) is a new pretrained contextualized representation of words and entities based on transformer. - **Developed by:** Studio Ousia - **Shared by [Optional]:** More information needed - **Model type:** EntitySpanClassification - **Language(s) (NLP):** More information needed - **License:** Apache-2.0 - **Related Models:** [Luke-large](https://huggingface.co/studio-ousia/luke-large?text=Paris+is+the+%3Cmask%3E+of+France.) - **Parent Model:** Luke - **Resources for more information:** - [GitHub Repo](https://github.com/studio-ousia/luke) - [Associated Paper](https://arxiv.org/abs/2010.01057)
45ab38f8d74073f2ddd83d08e793c1cb
apache-2.0
[]
false
Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
e0e5945a4ea4e9289a55ca5f243e3f3d
apache-2.0
[]
false
Metrics LUKE achieves state-of-the-art results on five popular NLP benchmarks including * **[SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/)** (extractive question answering), * **[CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/)** (named entity recognition), * **[ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/)** (cloze-style question answering), * **[TACRED](https://nlp.stanford.edu/projects/tacred/)** (relation classification), and * **[Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html)** (entity typing).
0c08eb0f5461bf4b934f201082e08fa1
apache-2.0
[]
false
Results The experimental results are provided as follows: | Task | Dataset | Metric | LUKE-large | luke-base | Previous SOTA | | ------------------------------ | ---------------------------------------------------------------------------- | ------ | ----------------- | --------- | ------------------------------------------------------------------------- | | Extractive Question Answering | [SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) | EM/F1 | **90.2**/**95.4** | 86.1/92.3 | 89.9/95.1 ([Yang et al., 2019](https://arxiv.org/abs/1906.08237)) | | Named Entity Recognition | [CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/) | F1 | **94.3** | 93.3 | 93.5 ([Baevski et al., 2019](https://arxiv.org/abs/1903.07785)) | | Cloze-style Question Answering | [ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/) | EM/F1 | **90.6**/**91.2** | - | 83.1/83.7 ([Li et al., 2019](https://www.aclweb.org/anthology/D19-6011/)) | | Relation Classification | [TACRED](https://nlp.stanford.edu/projects/tacred/) | F1 | **72.7** | - | 72.0 ([Wang et al. , 2020](https://arxiv.org/abs/2002.01808)) | | Fine-grained Entity Typing | [Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html) | F1 | **78.2** | - | 77.6 ([Wang et al. , 2020](https://arxiv.org/abs/2002.01808)) | Please check the [Github repository](https://github.com/studio-ousia/luke) for more details and updates.
1fc8b46bd795b0f513ab38a9a96d1836
apache-2.0
[]
false
compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed
09262095bd06e63766dd4fcc7dbcab6c
apache-2.0
[]
false
Citation **BibTeX:** ``` @inproceedings{yamada2020luke, title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention}, author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto}, booktitle={EMNLP}, year={2020} } ```
b6cc531f3534d87e65256fed04077a59
apache-2.0
[]
false
How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, LukeForEntitySpanClassification tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003") model = LukeForEntitySpanClassification.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003") ``` </details>
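The snippet above only loads the model; a hedged sketch of actually classifying entity spans, following the usage pattern documented for `LukeForEntitySpanClassification` (the example sentence and character spans are assumptions for illustration):

```python
import torch
from transformers import AutoTokenizer, LukeForEntitySpanClassification

tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
model = LukeForEntitySpanClassification.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")

text = "Beyoncé lives in Los Angeles"
# candidate spans are (start, end) character offsets into the text
entity_spans = [(0, 7), (17, 28)]

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# one logit vector per candidate span; argmax gives the predicted NER label
predicted_indices = outputs.logits.argmax(-1)
print([model.config.id2label[idx.item()] for idx in predicted_indices[0]])
```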
99f05624bb79e393d44fe9b2707738e4
creativeml-openrail-m
['text-to-image', 'safetensors', 'stable-diffusion']
false
Broken-Mirror-Style Dreambooth model trained by Kagerage with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) In my experience, the best prompting style is "brkmror cracked and shattered mirror, professional photo, descriptor of location, portrait of person, bokeh", potentially with a negative prompt of "closed eyes" Sampler DPM++ SDE Karras with at least 25 steps gives slightly better faces from my testing, CFG 7 seems fine, and since this is trained on top of 2.1, the minimum resolution is 768x768. Of course though, feel free to experiment! \(NOTE!!! 
This 2.1 based model is FP16, so it may not render outputs properly without Xformers\) Sample pictures of this concept: ![0](https://huggingface.co/Kagerage/broken-mirror-style/resolve/main/sample_images/00036-4212333230-brkmror_cracked_and_shattered_mirror,_professional_photo,_field_of_sunflowers,_portrait_of_woman,_bokeh.png) ![1](https://huggingface.co/Kagerage/broken-mirror-style/resolve/main/sample_images/00039-140956230-brkmror_cracked_and_shattered_mirror,_professional_photo,_snow_covered_mountain_peak,_portrait_of_black_woman,_bokeh.png) ![2](https://huggingface.co/Kagerage/broken-mirror-style/resolve/main/sample_images/00041-1640278942-brkmror_cracked_and_shattered_mirror,_professional_photo,_early_summer_beach_at_dusk_with_shipwreck,_portrait_of_youthful_japane.png) ![3](https://huggingface.co/Kagerage/broken-mirror-style/resolve/main/sample_images/00038-1234041848-brkmror_cracked_and_shattered_mirror,_professional_photo,_field_of_roses,_portrait_of_black_woman,_bokeh.png)
fb2f79feee31a1c2d64ac36e7834b82f
mit
['generated_from_trainer']
false
edos-2023-baseline-microsoft-deberta-v3-base-label_vector This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5524 - F1: 0.3162
81750ffd23e63fcc3446a3395023e1c1
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 12 - mixed_precision_training: Native AMP
80a3afaace29ea5eb8b9a0cede238d1c
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.1209 | 1.18 | 100 | 1.9990 | 0.0801 | | 1.7997 | 2.35 | 200 | 1.7293 | 0.1349 | | 1.5749 | 3.53 | 300 | 1.6080 | 0.2431 | | 1.3674 | 4.71 | 400 | 1.5411 | 0.2793 | | 1.2214 | 5.88 | 500 | 1.5285 | 0.2980 | | 1.0752 | 7.06 | 600 | 1.5165 | 0.3054 | | 0.9899 | 8.24 | 700 | 1.5210 | 0.3186 | | 0.8733 | 9.41 | 800 | 1.5385 | 0.3134 | | 0.8578 | 10.59 | 900 | 1.5524 | 0.3162 |
51ece7384cb1fb398875fa9d651a998b
creativeml-openrail-m
['text-to-image']
false
Franck Dreambooth model trained by duja1 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: f123ranck (use that on your prompt)
1872766ae07da46b81536bb9a4e3e28f
apache-2.0
['image-to-text', 'image-captioning']
false
This is an image captioning model trained by Zayn ```python
import torch
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer

model = VisionEncoderDecoderModel.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically")
feature_extractor = ViTFeatureExtractor.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically")
tokenizer = AutoTokenizer.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

max_length = 20
num_beams = 8
gen_kwargs = {"max_length": max_length, "num_beams": num_beams}

def predict_step(image_paths):
    images = []
    for image_path in image_paths:
        i_image = Image.open(image_path)
        if i_image.mode != "RGB":
            i_image = i_image.convert(mode="RGB")
        images.append(i_image)
    pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values
    pixel_values = pixel_values.to(device)
    output_ids = model.generate(pixel_values, **gen_kwargs)
    preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    preds = [pred.strip() for pred in preds]
    return preds

predict_step(['Image URL.jpg'])
```
ac81624ec7e1bcbac8eb7244033b5f27
apache-2.0
['generated_from_trainer']
false
albert-base-v2_mnli_bc This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.2952 - Accuracy: 0.9399
3e7124f5e9d767d883100792f28b11c1
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP
afe3245ebcee31cd0c3e2fa04165eed2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2159 | 1.0 | 16363 | 0.2268 | 0.9248 | | 0.1817 | 2.0 | 32726 | 0.2335 | 0.9347 | | 0.0863 | 3.0 | 49089 | 0.3014 | 0.9401 |
12d6be64f5a31e6867b13c921a483209
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_logit_kd_stsb_256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 1.1268 - Pearson: nan - Spearmanr: nan - Combined Score: nan
84086e2de513faa4f071ee9307889ffe
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 3.1622 | 1.0 | 23 | 1.7502 | -0.0248 | -0.0193 | -0.0221 | | 1.8579 | 2.0 | 46 | 1.3087 | -0.0465 | -0.0476 | -0.0470 | | 1.3508 | 3.0 | 69 | 1.1268 | nan | nan | nan | | 1.1078 | 4.0 | 92 | 1.1974 | 0.0294 | 0.0287 | 0.0290 | | 1.0747 | 5.0 | 115 | 1.1797 | 0.0509 | 0.0597 | 0.0553 | | 1.024 | 6.0 | 138 | 1.2292 | 0.0554 | 0.0782 | 0.0668 | | 0.944 | 7.0 | 161 | 1.2819 | 0.1274 | 0.1441 | 0.1358 | | 0.795 | 8.0 | 184 | 1.2143 | 0.1987 | 0.2082 | 0.2035 |
a56cc3950289afa0f6549840c014f9e3
apache-2.0
['t5', 'translation', 'seq2seq']
false
t5-small-24L-ccmatrix-multi A [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) model finetuned for Dutch to English and English to Dutch translation on the CCMatrix dataset. Evaluation metrics of this model are listed in the **Translation models** section below. You can use this model directly with a pipeline for text translation: ```python model_name = "yhavinga/t5-small-24L-ccmatrix-multi" from transformers import AutoTokenizer from transformers import AutoModelForSeq2SeqLM from transformers import pipeline import torch device_num = 0 if torch.cuda.is_available() else -1 device = "cpu" if device_num < 0 else f"cuda:{device_num}" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device) params = {"max_length": 128, "num_beams": 4, "early_stopping": True} en_to_nl = pipeline("translation_en_to_nl", tokenizer=tokenizer, model=model, device=device_num) print(en_to_nl("""Young Wehling was hunched in his chair, his head in his hand. He was so rumpled, so still and colorless as to be virtually invisible.""", **params)[0]['translation_text']) nl_to_en = pipeline("translation_nl_to_en", tokenizer=tokenizer, model=model, device=device_num) print(nl_to_en("""De jonge Wehling zat gebogen in zijn stoel, zijn hoofd in zijn hand. Hij was zo stoffig, zo stil en kleurloos dat hij vrijwel onzichtbaar was.""", **params)[0]['translation_text']) ``` This **t5 eff** model has **249M** parameters. It was pre-trained with masked language modeling (denoise token span corruption) objective on the dataset `mc4_nl_cleaned` config `large_en_nl` for **1** epoch(s) and a duration of **4d10h**, with a sequence length of **512**, batch size **128** and **851852** total steps (**56B** tokens). Pre-training evaluation loss and accuracy are **1,18** and **0,74**. Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation.
5e4a482ec03ce6129bae2434641d3598
apache-2.0
['generated_from_trainer']
false
t5-base-extraction-cnndm_fs0.01-all This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8747
a61179ff4109ae63723c22f56b48ac64
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 1799 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP
bc090ff2a722f3ca30c1e55092946a45
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3573 | 2.25 | 200 | 1.9379 | | 1.9645 | 4.49 | 400 | 1.9068 | | 1.862 | 6.74 | 600 | 1.8823 | | 1.7958 | 8.99 | 800 | 1.8796 | | 1.7493 | 11.24 | 1000 | 1.8759 | | 1.7053 | 13.48 | 1200 | 1.8747 | | 1.6773 | 15.73 | 1400 | 1.8786 | | 1.6631 | 17.98 | 1600 | 1.8796 |
b15a1510026449077cf544d819462b93
apache-2.0
['automatic-speech-recognition', 'id']
false
exp_w2v2t_id_hubert_s246 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
fdca3583cbe034a4be26c8c26a619ff4
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_logit_kd_mnli_96 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.5438 - Accuracy: 0.5431
9ecc2ddb027d94b1d66fd3c02aaccd41
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6023 | 1.0 | 1534 | 0.5718 | 0.4960 | | 0.5673 | 2.0 | 3068 | 0.5547 | 0.5184 | | 0.5555 | 3.0 | 4602 | 0.5505 | 0.5278 | | 0.5481 | 4.0 | 6136 | 0.5466 | 0.5381 | | 0.5426 | 5.0 | 7670 | 0.5454 | 0.5403 | | 0.5382 | 6.0 | 9204 | 0.5454 | 0.5354 | | 0.5341 | 7.0 | 10738 | 0.5452 | 0.5344 | | 0.5308 | 8.0 | 12272 | 0.5428 | 0.5410 | | 0.5271 | 9.0 | 13806 | 0.5460 | 0.5451 | | 0.5239 | 10.0 | 15340 | 0.5450 | 0.5462 | | 0.5209 | 11.0 | 16874 | 0.5447 | 0.5449 | | 0.5179 | 12.0 | 18408 | 0.5452 | 0.5475 | | 0.5152 | 13.0 | 19942 | 0.5495 | 0.5454 |
4ac928aa8a7568588ded5e3ec7343ade
apache-2.0
['automatic-speech-recognition', 'zh-CN']
false
exp_w2v2t_zh-cn_no-pretraining_s730 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
0d5192809b58b3e76b201baa82f7a532
mit
['generated_from_trainer']
false
gpt2-ft-with-non-challenging This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.9906
f10337df75d03f3a77a948970c196e21
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
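The `total_train_batch_size` in these hyperparameters is not an independent setting: with gradient accumulation, the optimizer steps once every `gradient_accumulation_steps` micro-batches, so the effective batch size is the product of the two (times the device count on multi-GPU setups). A minimal sketch of that arithmetic (the function name is illustrative, not from the Trainer API):

```python
def effective_batch_size(train_batch_size: int,
                         gradient_accumulation_steps: int,
                         num_devices: int = 1) -> int:
    """Effective (per-optimizer-step) batch size under gradient accumulation."""
    return train_batch_size * gradient_accumulation_steps * num_devices

# Matches the hyperparameters listed above: 2 * 32 = 64.
print(effective_batch_size(2, 32))
```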
8a84757a96116ac4b1c3ef443cce7d05
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 4.0984 |
| No log | 2.0 | 2 | 4.0802 |
| No log | 3.0 | 3 | 4.0443 |
| No log | 4.0 | 4 | 3.9906 |
8f11f6e499f8384327212daa627afe5a
apache-2.0
['setfit', 'sentence-transformers', 'text-classification']
false
fathyshalab/massive_calendar-roberta-large-v1-3-93

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
69fd715273342d27a047be442f4a2e9c
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3163 - Accuracy: 0.8667 - F1: 0.8693
60c2bf8310a8f1f68a5fd838b1c19b68
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased__sst2__train-16-4 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1501 - Accuracy: 0.6387
deb1e2473864c98c90cb69060e9b8abe
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7043 | 1.0 | 7 | 0.7139 | 0.2857 |
| 0.68 | 2.0 | 14 | 0.7398 | 0.2857 |
| 0.641 | 3.0 | 21 | 0.7723 | 0.2857 |
| 0.5424 | 4.0 | 28 | 0.8391 | 0.2857 |
| 0.5988 | 5.0 | 35 | 0.7761 | 0.2857 |
| 0.3698 | 6.0 | 42 | 0.7707 | 0.4286 |
| 0.3204 | 7.0 | 49 | 0.8290 | 0.4286 |
| 0.2882 | 8.0 | 56 | 0.6551 | 0.5714 |
| 0.1512 | 9.0 | 63 | 0.5652 | 0.5714 |
| 0.1302 | 10.0 | 70 | 0.5278 | 0.5714 |
| 0.1043 | 11.0 | 77 | 0.4987 | 0.7143 |
| 0.0272 | 12.0 | 84 | 0.5278 | 0.5714 |
| 0.0201 | 13.0 | 91 | 0.5307 | 0.5714 |
| 0.0129 | 14.0 | 98 | 0.5382 | 0.5714 |
| 0.0117 | 15.0 | 105 | 0.5227 | 0.5714 |
| 0.0094 | 16.0 | 112 | 0.5066 | 0.7143 |
| 0.0104 | 17.0 | 119 | 0.4869 | 0.7143 |
| 0.0069 | 18.0 | 126 | 0.4786 | 0.7143 |
| 0.0062 | 19.0 | 133 | 0.4707 | 0.7143 |
| 0.0065 | 20.0 | 140 | 0.4669 | 0.7143 |
| 0.0051 | 21.0 | 147 | 0.4686 | 0.7143 |
| 0.0049 | 22.0 | 154 | 0.4784 | 0.7143 |
| 0.0046 | 23.0 | 161 | 0.4839 | 0.7143 |
| 0.0039 | 24.0 | 168 | 0.4823 | 0.7143 |
| 0.0044 | 25.0 | 175 | 0.4791 | 0.7143 |
| 0.0037 | 26.0 | 182 | 0.4778 | 0.7143 |
| 0.0038 | 27.0 | 189 | 0.4770 | 0.7143 |
| 0.0036 | 28.0 | 196 | 0.4750 | 0.7143 |
| 0.0031 | 29.0 | 203 | 0.4766 | 0.7143 |
| 0.0031 | 30.0 | 210 | 0.4754 | 0.7143 |
fa2d22bb210fcfaa2464757d4985da69
apache-2.0
['generated_from_trainer']
false
presentation_emotion_1234567 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.0237 - F1: 0.7273
6a10f39d3a71edd9d45dc937f99c7a3d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.18796906442746e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
7b60bff8dad9e13eb641a7ba88f0bcdf
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1189 | 1.0 | 408 | 0.6827 | 0.7164 |
| 1.0678 | 2.0 | 816 | 0.6916 | 0.7396 |
| 0.6582 | 3.0 | 1224 | 0.9281 | 0.7276 |
| 0.0024 | 4.0 | 1632 | 1.0237 | 0.7273 |
c01d5f539cd6c0ad1e989b54b2e3198d
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
list-of-languages) for all available.

**Purpose:** This module takes a search query [1] and a passage [2] and calculates whether the passage matches the query. It can be used to improve Elasticsearch results, boosting relevancy by up to 100%.

**Architecture:** On top of BERT there is a densely connected NN which takes the 768-dimensional [CLS] token as input and provides the output ([Arxiv](https://arxiv.org/abs/1901.04085)).

**Output:** A single value between -10 and 10. Better-matching query-passage pairs tend to have a higher score.
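A single relevance score like the one described above can be recovered from a two-class (not relevant / relevant) sequence-classification head, e.g. as the log-odds or the softmax probability of the "relevant" class. A pure-Python sketch of that conversion (the two-logit layout and function names are assumptions for illustration, not confirmed by this card):

```python
import math

def relevance_score(logits):
    """Convert a [not_relevant, relevant] logit pair into a log-odds score.

    Higher means a better query-passage match.
    """
    not_rel, rel = logits
    return rel - not_rel  # log-odds of the "relevant" class

def relevance_prob(logits):
    """Softmax probability of the 'relevant' class, in [0, 1]."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    return exps[1] / sum(exps)

print(relevance_score([-2.0, 3.0]))            # 5.0
print(round(relevance_prob([-2.0, 3.0]), 4))   # 0.9933
```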
1ef1b9b71121cff05ce63c905abffe6c
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
Intended uses & limitations

Both the query [1] and the passage [2] together have to fit in 512 tokens. Since you normally only want to rerank the first few dozen search results, keep in mind the inference time of approximately 300 ms per query.
49bdd42fb7a4c2dc6a99f00bc355fbd6
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
How to use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
```

This model can be used as a drop-in replacement in the [Nboost Library](https://github.com/koursaros-ai/nboost). Through this you can directly improve your Elasticsearch results without any coding.
1a91815f7bc22a7146b529d54a6ccf1b
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
Training data This model is trained using the [**Microsoft MS Marco Dataset**](https://microsoft.github.io/msmarco/ "Microsoft MS Marco"). This training dataset contains approximately 400M tuples of a query, relevant and non-relevant passages. All datasets used for training and evaluating are listed in this [table](https://github.com/microsoft/MSMARCO-Passage-Ranking
351757c5faf8750038a5dcb33630d772
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
data-information-and-formating). The dataset used for training is called *Train Triples Large*, while evaluation was done on *Top 1000 Dev*. There are 6,900 queries in total in the development dataset, where each query is mapped to the top 1,000 passages retrieved using BM25 from the MS MARCO corpus.
8e09678213ecbd1fbdf9abb829c4075d
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
Training procedure

The training is performed the same way as stated in this [README](https://github.com/nyu-dl/dl4marco-bert "NYU Github"). See their excellent paper on [Arxiv](https://arxiv.org/abs/1901.04085). We changed the BERT model from an English-only one to the default multilingual uncased BERT model from [Google](https://huggingface.co/bert-base-multilingual-uncased). Training was done for 400,000 steps, which took 12 hours on a TPU v3-8.
ec025aadcb2383f96bb79b62c7c4d8ec
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
Eval results

We see nearly the same performance as the English-only model on the English [Bing Queries Dataset](http://www.msmarco.org/). Although the training data is English only, internal tests on private data showed far higher accuracy in German than all other available models.

Fine-tuned Models | Dependency | Eval Set | Search Boost<a href='
390943c0d7266dd21d406381c3ad485b
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
benchmarks'> | Speed on GPU ----------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- | ------------------------------------------------------------------ | ----------------------------------------------------- | ---------------------------------- **`amberoad/Multilingual-uncased-MSMARCO`** (This Model) | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-blue"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+61%** <sub><sup>(0.29 vs 0.18)</sup></sub> | ~300 ms/query <a href='
1b56d88ac43e4e208c56145965da0f71
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
footnotes'> `nboost/pt-tinybert-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+45%** <sub><sup>(0.26 vs 0.18)</sup></sub> | ~50ms/query <a href='
0f65be3a6118e522daf9fbac90c35b09
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
footnotes'> `nboost/pt-bert-base-uncased-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+62%** <sub><sup>(0.29 vs 0.18)</sup></sub> | ~300 ms/query<a href='
859776893ca87f111b8ee76314d2a675
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
footnotes'> `nboost/pt-bert-large-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+77%** <sub><sup>(0.32 vs 0.18)</sup></sub> | - `nboost/pt-biobert-base-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='https://github.com/naver/biobert-pretrained'>biomed</a> | **+66%** <sub><sup>(0.17 vs 0.10)</sup></sub> | ~300 ms/query<a href='
2bd4dcc3b1fd4c4023aebd60c42e1984
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
Contact Infos

![](https://amberoad.de/images/logo_text.png)

Amberoad is a company focusing on Search and Business Intelligence. We provide:

* Advanced internal company search engines through NLP
* External search engines: find competitors, customers, suppliers

**Get in contact now to benefit from our expertise:**

The training and evaluation was performed by [**Philipp Reissel**](https://reissel.eu/) and [**Igli Manaj**](https://github.com/iglimanaj)

[![Amberoad](https://i.stack.imgur.com/gVE0j.png) Linkedin](https://de.linkedin.com/company/amberoad)
776e10e9a3b82b6f508893c9b8b7a923
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
e32755667002493b2e707b65f2c3a8ee
apache-2.0
['msmarco', 'multilingual', 'passage reranking']
false
[Homepage](https://de.linkedin.com/company/amberoad) | [Email](info@amberoad.de)
7fcefff3e778233ca10d301c4a90add4
mit
['reward-model', 'reward_model', 'RLHF']
false
Reward model trained from human feedback

Reward model (RM) trained to predict which generated answer a human would judge as better, given a question.

RMs are useful in these domains:
- QA model evaluation
- serving as the reward score in RLHF
- detecting potentially toxic responses via ranking

All models are trained on these datasets with the same split seed across datasets (if a validation split wasn't available):
- [webgpt_comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons)
- [summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback)
- [synthetic-instruct-gptj-pairwise](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise)
- [anthropic_hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf)
ab2994995a065ffb329cfb5dae4f2c68
mit
['reward-model', 'reward_model', 'RLHF']
false
How to use

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."
inputs = tokenizer(question, answer, return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
```

**Toxic response detection**

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
question = "I just came out of from jail, any suggestion of my future?"
helpful = "It's great to hear that you have been released from jail."
bad = "Go back to jail you scum"

inputs = tokenizer(question, helpful, return_tensors='pt')
good_score = rank_model(**inputs).logits[0].cpu().detach()

inputs = tokenizer(question, bad, return_tensors='pt')
bad_score = rank_model(**inputs).logits[0].cpu().detach()
print(good_score > bad_score)
```
aebe96a98648e2af397d78d4d307fe3f
mit
['reward-model', 'reward_model', 'RLHF']
false
Performance

Validation split accuracy:

| Model | [WebGPT](https://huggingface.co/datasets/openai/webgpt_comparisons) | [Summary](https://huggingface.co/datasets/openai/summarize_from_feedback) | [SyntheticGPT](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) | [Anthropic RLHF]() |
|---|---|---|---|---|
| [electra-large-discriminator](https://huggingface.co/OpenAssistant/reward-model-electra-large-discriminator) | 59.30 | 68.66 | 99.85 | 54.33 |
| **[deberta-v3-large-v2](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2)** | **61.57** | 71.47 | 99.88 | **69.25** |
| [deberta-v3-large](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large) | 61.13 | 72.23 | **99.94** | 55.62 |
| [deberta-v3-base](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-base) | 59.07 | 66.84 | 99.85 | 54.51 |
| deberta-v2-xxlarge | 58.67 | **73.27** | 99.77 | 66.74 |

It's likely that SyntheticGPT has some kind of surface pattern in its chosen-rejected pairs which makes it trivial to tell the better answer apart.
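The accuracies reported here are pairwise ranking accuracies: the fraction of (chosen, rejected) pairs for which the reward model scores the chosen answer higher. A minimal pure-Python sketch of that metric (names are illustrative, not taken from the OpenAssistant codebase):

```python
def pairwise_accuracy(chosen_scores, rejected_scores):
    """Fraction of pairs where the chosen answer outscores the rejected one."""
    assert len(chosen_scores) == len(rejected_scores)
    correct = sum(c > r for c, r in zip(chosen_scores, rejected_scores))
    return correct / len(chosen_scores)

# Four comparison pairs, three ranked correctly -> 0.75
print(pairwise_accuracy([2.1, 0.3, -1.0, 4.2], [1.0, 0.9, -2.5, 1.1]))
```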
d567460ea44233e0258949e270741207
mit
['reward-model', 'reward_model', 'RLHF']
false
Other Sincere thanks to [stability.ai](https://stability.ai/) for their unwavering support in terms of A100 computational resources. Their contribution was crucial in ensuring the smooth completion of this research project.
5d18e4efdca932a0868bfb6aaa17c1a7
apache-2.0
[]
false
简介 Brief Introduction

基于simcse无监督版本,用搜集整理的中文NLI数据进行simcse有监督任务的训练。在中文句子对任务上有良好的效果。

**Erlangshen-SimCSE-110M-Chinese** is based on the unsupervised version of SimCSE; the supervised SimCSE task is then trained with collected and curated Chinese NLI data. It performs well on Chinese sentence-pair tasks.
9c0a50e808043035ce58438ed6ab5fa9
apache-2.0
[]
false
模型分类 Model Taxonomy

| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言生成 NLU | 二郎神 Erlangshen | Bert | 110M | 中文 Chinese |
93bc6c59d6131ea050bb857a6832aa47
apache-2.0
[]
false
模型信息 Model Information

为了获得一个通用句子向量表征的模型,我们基于bert-base模型用了大量的无监督数据和有监督数据进行对比学习,最终获得了一个无需微调就能够利用模型输出的[CLS]进行相似度判断的模型。与用bert模型在针对任务微调后,再进行句子相似度任务不同,我们的模型在预训练完成后直接具备提取句子向量的能力。在一些任务上有如下的测评效果:

In order to obtain a general sentence-embedding model, we performed contrastive learning on top of the Bert-base model with a large amount of unsupervised and supervised data, and finally obtained a model whose [CLS] output can be used to judge similarity without fine-tuning. Unlike running a sentence-similarity task after fine-tuning a BERT model on it, our model can extract sentence vectors directly after pre-training. Evaluation results on some tasks are as follows:

| 模型 | LCQMC | BQ | PAWSX | ATEC | STS-B |
| :----: | :----: | :----: | :----: | :----: | :----: |
| Bert | 62 | 38.62 | 17.38 | 28.98 | 68.27 |
| Bert-large | 63.78 | 37.51 | 18.63 | 30.24 | 68.87 |
| RoBerta | 67.3 | 39.89 | 16.79 | 30.57 | 69.36 |
| RoBerta large | 67.25 | 38.39 | 19.09 | 30.85 | 69.36 |
| RoFormer | 63.58 | 39.9 | 17.52 | 29.37 | 67.32 |
| SimBERT | 73.43 | 40.98 | 15.87 | 31.24 | 72 |
| Erlangshen-SimCSE-110M-Chinese | 74.94 | 56.97 | 21.84 | 34.12 | 70.5 |

*备注:我们的模型是直接用[cls],无whitening;其余模型是last avg + whitening*

*ps: Our model uses [CLS] directly, with no whitening; the other models use last-layer averaging plus whitening.*
19f14baf6ff6b1097f0558e9a2d26daf
apache-2.0
[]
false
加载模型 Loading Models

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese')
tokenizer = AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese')
```
59d7146b30cf6ae62e1da21a555438d5
apache-2.0
[]
false
使用示例 Usage Examples

```python
import torch
from sklearn.metrics.pairwise import cosine_similarity

texta = '今天天气真不错,我们去散步吧!'
textb = '今天天气真糟糕,还是在宅家里写bug吧!'

inputs_a = tokenizer(texta, return_tensors="pt")
inputs_b = tokenizer(textb, return_tensors="pt")

outputs_a = model(**inputs_a, output_hidden_states=True)
texta_embedding = outputs_a.hidden_states[-1][:, 0, :].squeeze()

outputs_b = model(**inputs_b, output_hidden_states=True)
textb_embedding = outputs_b.hidden_states[-1][:, 0, :].squeeze()
```
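The two [CLS] embeddings extracted in this example are then compared with cosine similarity (that is what the `cosine_similarity` import from sklearn is for). In pure Python the comparison itself looks like this:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_sim([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_sim([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```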
afe0a01bc7782ba85b528c6898daaa3b
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2t_en_vp-nl_s980 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
fffe392e15d15f79d3754cfdecb3509d
apache-2.0
['generated_from_trainer']
false
wav2vec2-hindi-3 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0900 - Wer: 0.7281
96bcbc62d239bc5c321cf9208943a5da
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.609 | 6.41 | 1000 | 1.2290 | 0.7497 |
| 0.3754 | 12.82 | 2000 | 1.5350 | 0.7128 |
| 0.1587 | 19.23 | 3000 | 1.8671 | 0.7322 |
| 0.103 | 25.64 | 4000 | 1.9383 | 0.7300 |
| 0.0761 | 32.05 | 5000 | 2.0767 | 0.7306 |
| 0.0616 | 38.46 | 6000 | 2.0900 | 0.7281 |
7225ac1bdbe77f45d58aa91aebfa29a8
apache-2.0
['translation']
false
nld-epo

* source group: Dutch
* target group: Esperanto
* OPUS readme: [nld-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-epo/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.eval.txt)
1a063949bd10d65c89dcc8df996bd2a0
apache-2.0
['translation']
false
System Info:
- hf_name: nld-epo
- source_languages: nld
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'eo']
- src_constituents: {'nld'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.test.txt
- src_alpha3: nld
- tgt_alpha3: epo
- short_pair: nl-eo
- chrF2_score: 0.355
- bleu: 16.1
- brevity_penalty: 0.9359999999999999
- ref_len: 72293.0
- src_name: Dutch
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: nl
- tgt_alpha2: eo
- prefer_old: False
- long_pair: nld-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
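The `bleu` and `brevity_penalty` fields in this metadata are related: BLEU multiplies the n-gram precision score by a brevity penalty that punishes translations shorter than the reference. A sketch of the standard BLEU brevity penalty (as defined by Papineni et al.; the corpus lengths in the example are illustrative, not the actual hypothesis length of this model):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 if the hypothesis corpus is at least as
    long as the reference, exp(1 - ref/hyp) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# A hypothesis corpus slightly shorter than the ~72k-word reference
# is penalized only a little:
print(round(brevity_penalty(67_700, 72_293), 3))
```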
a143329b152303f3abc5f19858daf718
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-lg

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Luganda using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. The train, validation and other splits were used for training (excluding voices that appear in the test set), and the test data was used for both validation and testing. When using this model, make sure that your speech input is sampled at 16kHz.
cd13fc40dcc979f2881b6591a3178d5e
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
2f8679667fd6975bf7a49a655f1af755
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
eefc0953f5acd0c745a6d292e4030e14
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

The model can be evaluated as follows on the Luganda test data of Common Voice. (Available in Colab [here](https://colab.research.google.com/drive/1XxZ3mJOEXwIn-QH3C23jD_Qpom9aA1vH?usp=sharing).)

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import unidecode

test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
model.to("cuda")

chars_to_ignore_regex = '[\[\],?.!;:%"“”(){}‟ˮʺ″«»/…‽�–]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
1e18b60c720033756413a8714682c341
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

def remove_special_characters(batch):
```
b2c38171bcb200d0c9e236935297426e
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
    # most other punctuation is ignored
    batch["norm_text"] = re.sub(chars_to_ignore_regex, "", batch["norm_text"]).lower().strip()
    batch["norm_text"] = re.sub(r"(-|' | '| +)", " ", batch["norm_text"])
```
aea2d263065fb64825d5b287d8fa55ea
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
    # remove accents from a few characters (from loanwords, not tones)
    batch["norm_text"] = unidecode.unidecode(batch["norm_text"])
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
test_dataset = test_dataset.map(remove_special_characters)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["norm_text"])))
```

**Test Result**: 29.52 %
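The WER reported here is the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A self-contained pure-Python version of the metric, for readers who want to check scores without the `datasets` dependency:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[j] = edit distance between ref[:i] and hyp[:j], rolled over i
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev_diag, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            cur = min(
                dp[j] + 1,             # deletion
                dp[j - 1] + 1,         # insertion
                prev_diag + (r != h),  # substitution / match
            )
            prev_diag, dp[j] = dp[j], cur
    return dp[len(hyp)] / len(ref)

print(word_error_rate("the cat sat", "the cat sat"))   # 0.0
print(word_error_rate("the cat sat", "the bat sat"))   # one substitution -> 1/3
```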
ce4a4fd0faff2d33f95dee2cef4eba3e
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Training The Common Voice `train`, `validation` and `other` datasets were used for training, excluding voices that are in both the `other` and `test` datasets. The data was augmented to twice the original size with added noise and manipulated pitch, phase and intensity. Training proceeded for 60 epochs, on 1 V100 GPU provided by OVHcloud. The `test` data was used for validation. The [script used for training](https://github.com/serapio/transformers/blob/feature/xlsr-finetune/examples/research_projects/wav2vec2/run_common_voice.py) is adapted from the [example script provided in the transformers repo](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py).
b7a647fca73555533ebd9da02f830aec
apache-2.0
['image-classification', 'timm']
false
Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 50.2
  - GMACs: 8.7
  - Activations (M): 21.6
  - Image size: 224 x 224
- **Papers:**
  - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
910c17c27d386a10d322a231e21566fe
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('convnext_small.fb_in1k', pretrained=True)
model = model.eval()
```
a350b5191c01ea021472a819b2a5d4e8
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_small.fb_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()
```
4e1891317fea6dc70f6b8b9efbaa1178
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_small.fb_in1k',
    pretrained=True,
    num_classes=0,
)
model = model.eval()
```
502fb5931ecc6c59139ae369fff31f64
mit
['generated_from_keras_callback', 'text_generator']
false
italian-literature-model-mini This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.7067 - Validation Loss: 5.6842 - Epoch: 2
ae2ec2a5f1b1346dbff958075a7759e9
mit
['generated_from_keras_callback', 'text_generator']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 15686, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
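The optimizer config here describes a linear warmup over 1000 steps up to 5e-05, followed by a polynomial decay with power=1.0 (i.e. linear) down to 0 over 15686 steps. A pure-Python sketch of that schedule (exact step-offset handling may differ slightly from the Keras/transformers implementation):

```python
def lr_at_step(step: int,
               peak_lr: float = 5e-05,
               warmup_steps: int = 1000,
               decay_steps: int = 15686,
               end_lr: float = 0.0) -> float:
    """Linear warmup to peak_lr, then linear (power=1.0) decay to end_lr."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    decay_step = min(step - warmup_steps, decay_steps)
    frac = 1.0 - decay_step / decay_steps
    return end_lr + (peak_lr - end_lr) * frac

print(lr_at_step(500))           # halfway through warmup
print(lr_at_step(1000))          # peak learning rate
print(lr_at_step(1000 + 15686))  # fully decayed -> 0.0
```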
3b548e8e8d8d785fe4d374e96aea9217
mit
['generated_from_keras_callback', 'text_generator']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.7065 | 5.6842 | 0 |
| 5.7065 | 5.6842 | 1 |
| 5.7067 | 5.6842 | 2 |
7e69aaec066ce4eb9fbdc8977fbea0ee