Columns: license (string, 2–30 chars) · tags (string, 2–513 chars) · is_nc (bool, 1 class) · readme_section (string, 201–597k chars) · hash (string, 32 chars)
mit
[]
false
Inference

```python
import pandas as pd
from tabulate import tabulate

text = """When Member States adopt those measures, they shall contain a reference to this Directive or be accompanied by such reference on the occasion of their official publication. They shall also include a statement that references in existing laws, regulations and administrative provisions to Article 9 of Directive 97/23/EC shall be construed as references to Article 13 of this Directive. Member States shall determine how such reference is to be made and how that statement is to be formulated."""

entities = pypeline(text)
df = pd.DataFrame(entities)
print(tabulate(df, showindex=True, headers=df.columns))
```
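The snippet above calls a `pypeline` object that is not defined in this excerpt; presumably it is a standard transformers token-classification pipeline built roughly like the sketch below. The checkpoint name is a placeholder, not something stated in this card.

```python
# Hedged sketch of how the `pypeline` object used above might be constructed.
from transformers import pipeline

pypeline = pipeline(
    "token-classification",
    model="path/to/legal-ner-checkpoint",  # hypothetical checkpoint id
    aggregation_strategy="simple",         # merge sub-tokens into entity_group spans
)
```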
512b95a44968a4c0c09da4976737635d
mit
[]
false
Output

```
    entity_group                       score  word                start    end
--  ------------------------------  --------  ------------------  -----  -----
 0  current_act                     0.999999  Directive              80     89
 1  article_relevant_following_act  0.999995  9                     296    297
 2  another_act                     0.999999  Directive 97/23/EC    301    319
 3  article_relevant_following_act  0.999996  13                    364    366
 4  current_act                     0.999999  Directive             375    384
```
b1d9271cca204a9feb18106d50020eba
mit
[]
false
phoenix-01 on Stable Diffusion This is the `<phoenix-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<phoenix-style> 0](https://huggingface.co/sd-concepts-library/phoenix-01/resolve/main/concept_images/1.jpeg) ![<phoenix-style> 1](https://huggingface.co/sd-concepts-library/phoenix-01/resolve/main/concept_images/2.jpeg) ![<phoenix-style> 2](https://huggingface.co/sd-concepts-library/phoenix-01/resolve/main/concept_images/0.jpeg) ![<phoenix-style> 3](https://huggingface.co/sd-concepts-library/phoenix-01/resolve/main/concept_images/3.jpeg)
b197436201dd7f73eff8d53ddb0ab309
mit
[]
false
anime boy on Stable Diffusion This is the `<myAItestShota>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<myAItestShota> 0](https://huggingface.co/sd-concepts-library/anime-boy/resolve/main/concept_images/3.jpeg) ![<myAItestShota> 1](https://huggingface.co/sd-concepts-library/anime-boy/resolve/main/concept_images/0.jpeg) ![<myAItestShota> 2](https://huggingface.co/sd-concepts-library/anime-boy/resolve/main/concept_images/2.jpeg) ![<myAItestShota> 3](https://huggingface.co/sd-concepts-library/anime-boy/resolve/main/concept_images/1.jpeg) ![<myAItestShota> 4](https://huggingface.co/sd-concepts-library/anime-boy/resolve/main/concept_images/4.jpeg)
56269953512a487d7a31f7ea141e095a
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_logit_kd_cola_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6753 - Matthews Correlation: 0.0
70e0499247602fa0ed8f5a13673b95e0
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.8155 | 1.0 | 67 | 0.6867 | 0.0 | | 0.797 | 2.0 | 134 | 0.6862 | 0.0 | | 0.7961 | 3.0 | 201 | 0.6836 | 0.0 | | 0.7944 | 4.0 | 268 | 0.6821 | 0.0 | | 0.7863 | 5.0 | 335 | 0.6753 | 0.0 | | 0.7138 | 6.0 | 402 | 0.6790 | 0.1085 | | 0.6262 | 7.0 | 469 | 0.7238 | 0.1231 | | 0.5782 | 8.0 | 536 | 0.7285 | 0.1281 | | 0.5482 | 9.0 | 603 | 0.7484 | 0.1281 | | 0.5318 | 10.0 | 670 | 0.7918 | 0.1182 |
3a9b7fd4bf57feed75511ccc8297173a
apache-2.0
['generated_from_trainer']
false
my_awesome_model4 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 25.4886 - Accuracy: 0.0
6cd70b9718ffdf0a7328815596c98f4c
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.02 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2
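As a rough illustration (not taken from this card's actual training script), the values listed above map onto a transformers `TrainingArguments` configuration along these lines; the output directory is an assumption.

```python
# Hedged sketch: TrainingArguments matching the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_model4",      # assumed output directory
    learning_rate=0.02,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer
)
```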
25e6d268c783e5b4ab6509deadf3945c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6252 | 1.0 | 1 | 3.9768 | 0.0 | | 1.0027 | 2.0 | 2 | 25.4886 | 0.0 |
f8ed50e4354cb30c215784c832b1c80a
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_data_aug_mrpc_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 - F1: 1.0 - Combined Score: 1.0
60507cdf93609ebf777c18c8964c4a1e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.1854 | 1.0 | 1959 | 0.0199 | 0.9975 | 0.9982 | 0.9979 | | 0.04 | 2.0 | 3918 | 0.0050 | 0.9975 | 0.9982 | 0.9979 | | 0.0253 | 3.0 | 5877 | 0.0015 | 1.0 | 1.0 | 1.0 | | 0.0175 | 4.0 | 7836 | 0.0003 | 1.0 | 1.0 | 1.0 | | 0.0134 | 5.0 | 9795 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0107 | 6.0 | 11754 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0081 | 7.0 | 13713 | 0.0012 | 1.0 | 1.0 | 1.0 | | 0.0062 | 8.0 | 15672 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0061 | 9.0 | 17631 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0044 | 10.0 | 19590 | 0.0002 | 1.0 | 1.0 | 1.0 | | 0.0041 | 11.0 | 21549 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0034 | 12.0 | 23508 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0029 | 13.0 | 25467 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0016 | 14.0 | 27426 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0019 | 15.0 | 29385 | 0.0140 | 0.9975 | 0.9982 | 0.9979 | | 0.0018 | 16.0 | 31344 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0012 | 17.0 | 33303 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0013 | 18.0 | 35262 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0008 | 19.0 | 37221 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0011 | 20.0 | 39180 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0005 | 21.0 | 41139 | 0.0007 | 1.0 | 1.0 | 1.0 | | 0.0009 | 22.0 | 43098 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0004 | 23.0 | 45057 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0004 | 24.0 | 47016 | 0.0000 | 1.0 | 1.0 | 1.0 |
7e447f9c6a8a5cd39211d293c4bb2940
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 1 - mixed_precision_training: Native AMP
1ca9e6f7f1d1b8e9a32ca2c78c65b09c
apache-2.0
['translation']
false
cpp-cpp * source group: Creoles and pidgins, Portuguese-based * target group: Creoles and pidgins, Portuguese-based * OPUS readme: [cpp-cpp](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-cpp/README.md) * model: transformer * source language(s): ind pap * target language(s): ind pap * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.eval.txt)
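Since the card notes that a sentence-initial `>>id<<` target-language token is required, a hedged loading sketch with the transformers Marian classes would look roughly like the following; the Hub repo id is assumed to follow the usual Helsinki-NLP naming and is not stated in this card.

```python
# Hedged sketch: loading an OPUS-MT Tatoeba model and prepending the target-language token.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-cpp-cpp"  # assumed repo id
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The sentence-initial language token described above selects the target language (e.g. pap).
src = ">>pap<< Saya suka makan nasi."
batch = tokenizer([src], return_tensors="pt")
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```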
55d2d5f6c957589a2ee2118eb8db3171
apache-2.0
['translation']
false
Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.msa-msa.msa.msa | 0.7 | 0.149 | | Tatoeba-test.msa-pap.msa.pap | 31.7 | 0.577 | | Tatoeba-test.multi.multi | 21.1 | 0.369 | | Tatoeba-test.pap-msa.pap.msa | 17.7 | 0.197 |
8d63d6872ba9033db5fdf0a4ae57a657
apache-2.0
['translation']
false
System Info: - hf_name: cpp-cpp - source_languages: cpp - target_languages: cpp - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-cpp/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['id', 'cpp'] - src_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'} - tgt_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'} - src_multilingual: True - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.test.txt - src_alpha3: cpp - tgt_alpha3: cpp - short_pair: cpp-cpp - chrF2_score: 0.369 - bleu: 21.1 - brevity_penalty: 0.882 - ref_len: 18.0 - src_name: Creoles and pidgins, Portuguese-based - tgt_name: Creoles and pidgins, Portuguese-based - train_date: 2020-07-26 - src_alpha2: cpp - tgt_alpha2: cpp - prefer_old: False - long_pair: cpp-cpp - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
c53ac8ff5fd04bdf6e8c57da6bd65746
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_logit_kd_mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.4989 - Accuracy: 0.6525
48eeb10761266fa80b76ea93c02dcd55
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.575 | 1.0 | 1534 | 0.5428 | 0.5554 | | 0.5345 | 2.0 | 3068 | 0.5205 | 0.5987 | | 0.511 | 3.0 | 4602 | 0.5105 | 0.6222 | | 0.4917 | 4.0 | 6136 | 0.5021 | 0.6360 | | 0.4735 | 5.0 | 7670 | 0.5004 | 0.6470 | | 0.4557 | 6.0 | 9204 | 0.4976 | 0.6534 | | 0.4391 | 7.0 | 10738 | 0.4982 | 0.6606 | | 0.4231 | 8.0 | 12272 | 0.4982 | 0.6586 | | 0.4082 | 9.0 | 13806 | 0.5020 | 0.6587 | | 0.394 | 10.0 | 15340 | 0.5082 | 0.6561 | | 0.3816 | 11.0 | 16874 | 0.5140 | 0.6617 |
c1a3e8c5ad7e4bb2e4f2dbf3b5f47719
apache-2.0
['generated_from_trainer']
false
Tagged_Uni_50v0_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v0_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.6180 - Precision: 0.1063 - Recall: 0.0090 - F1: 0.0166 - Accuracy: 0.7870
adc0b4b80155d5a46be89784bd13eae7
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 14 | 0.7325 | 0.0 | 0.0 | 0.0 | 0.7803 | | No log | 2.0 | 28 | 0.6458 | 0.0860 | 0.0039 | 0.0075 | 0.7838 | | No log | 3.0 | 42 | 0.6180 | 0.1063 | 0.0090 | 0.0166 | 0.7870 |
0f804d2181a87cb14b627baa82bc87ca
mit
[]
false
roberta-base-wechsel-ukrainian [`roberta-base`](https://huggingface.co/roberta-base) transferred to Ukrainian using the method from the NAACL2022 paper [WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models](https://aclanthology.org/2022.naacl-main.293/).
4a1c49b0b2527a7d0cc44f4c21a74095
mit
[]
false
Evaluation Evaluation was done on [lang-uk's ner-uk project](https://github.com/lang-uk/ner-uk), the Ukrainian portion of [WikiANN](https://huggingface.co/datasets/wikiann) and the [Ukrainian IU corpus from the Universal Dependencies project](https://github.com/UniversalDependencies/UD_Ukrainian-IU). Evaluation results are the mean of 5 runs with different seeds. __Validation Results__ | | lang-uk NER (Micro F1) | WikiANN (Micro F1) | UD Ukrainian IU POS (Accuracy) | |:-------------------------------------------------|:-------------------------|:-------------|:-------------------------| | roberta-base-wechsel-ukrainian | 88.06 (0.50) | 92.96 (0.08) | 98.70 (0.05) | | roberta-large-wechsel-ukrainian | __89.27 (0.53)__ | __93.22 (0.15)__ | __98.86 (0.03)__ | | | roberta-base-scratch-ukrainian* | 85.49 (0.88) | 91.91 (0.08) | 98.49 (0.04) | | roberta-large-scratch-ukrainian* | 86.54 (0.70) | 92.39 (0.16) | 98.65 (0.09) | | | dbmdz/electra-base-ukrainian-cased-discriminator | 87.49 (0.52) | 93.20 (0.16) | 98.60 (0.03) | | xlm-roberta-base | 86.68 (0.44) | 92.41 (0.13) | 98.53 (0.02) | | xlm-roberta-large | 86.64 (1.61) | 93.01 (0.13) | 98.71 (0.04) | __Test Results__ | | lang-uk NER (Micro F1) | WikiANN (Micro F1) | UD Ukrainian IU POS (Accuracy) | |:-------------------------------------------------|:-------------------------|:-------------|:-------------------------| | roberta-base-wechsel-ukrainian | 90.81 (1.51) | 92.98 (0.12) | 98.57 (0.03) | | roberta-large-wechsel-ukrainian | __91.24 (1.16)__ | __93.22 (0.17)__ | __98.74 (0.06)__ | | | roberta-base-scratch-ukrainian* | 89.57 (1.01) | 92.05 (0.09) | 98.31 (0.08) | | roberta-large-scratch-ukrainian* | 89.96 (0.89) | 92.49 (0.15) | 98.52 (0.04) | | | dbmdz/electra-base-ukrainian-cased-discriminator | 90.43 (1.29) | 92.99 (0.11) | 98.59 (0.06) | | xlm-roberta-base | 90.86 (0.81) | 92.27 (0.09) | 98.45 (0.07) | | xlm-roberta-large | 90.16 (2.98) | 92.92 (0.19) | 98.71 (0.04) | \*trained using the same exact training setup as the wechsel-\* models, but without parameter transfer from WECHSEL.
187c7e038863b49098eeaa80581f73bf
apache-2.0
['3rd', 'generated_from_trainer']
false
bert-finetuned-sem_eval-english This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5536 - F1: 0.5455 - Roc Auc: 0.6968 - Accuracy: 0.1839
58b0fe3018fd8ca98ccde9164ae87334
apache-2.0
['3rd', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30
8906097ed933f1af6563baa28f2c86dd
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2t_en_hubert_s875 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
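Because the card says the model was fine-tuned with HuggingSound, a usage sketch along the lines of that library's standard API would be as follows; the repo id is assumed to sit under the jonatasgrosman namespace, which this card does not state.

```python
# Hedged sketch using the HuggingSound library.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_en_hubert_s875")  # assumed repo id
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

# Input audio must be sampled at 16 kHz, as noted above.
transcriptions = model.transcribe(audio_paths)
```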
cc8319ec1c604c2782e93d9545187389
apache-2.0
['translation']
false
opus-mt-ca-en * source languages: ca * target languages: en * OPUS readme: [ca-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ca-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.eval.txt)
c6af00f258bf8808beebc53a42639768
apache-2.0
[]
false
ParsBERT (v2.0) A Transformer-based Model for Persian Language Understanding We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes! Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
4c584f26ac2f5d0e8864129a270807f9
apache-2.0
[]
false
Persian Sentiment [Digikala, SnappFood, DeepSentiPers] It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, in both binary and multi-class forms.
a31ecc16c1e5a74bccfce706a56065a5
apache-2.0
[]
false
DeepSentiPers, which is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes: two positive (i.e., happy and delighted), two negative (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset. **Binary:** 1. Negative (Furious + Angry) 2. Positive (Happy + Delighted) **Multi:** 1. Furious 2. Angry 3. Neutral 4. Happy 5. Delighted | Label | # |
983a4fe7537c2070cd2fcacbd93255e8
apache-2.0
[]
false
|:---------:|:----:|
| Furious | 236 |
| Angry | 1357 |
| Neutral | 2874 |
| Happy | 2848 |
| Delighted | 2516 |

**Download** You can download the dataset from:
- [SentiPers](https://github.com/phosseini/sentipers)
- [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers)
fe12735c6d39b2924b4e72d11a913ace
apache-2.0
[]
false
Results The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures. | Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------:|:-----------:|:-----:|:-------------:| | SentiPers (Multi Class) | 71.31* | 71.11 | - | 69.33 | | SentiPers (Binary Class) | 92.42* | 92.13 | - | 91.98 |
178a41a3fac47147268052a44db1f42b
apache-2.0
[]
false
How to use :hugs: | Task | Notebook | |---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Sentiment Analysis | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
306d8cc215192c91e5b41f29e0ff092f
apache-2.0
[]
false
BibTeX entry and citation info Please cite in publications as the following: ```bibtex @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ```
29717726973e3959993eaa1ea0718cf5
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased__hate_speech_offensive__train-8-2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1019 - Accuracy: 0.139
24c8cbc22156b2421be5ce92ab28f3c7
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1082 | 1.0 | 5 | 1.1432 | 0.0 | | 1.0524 | 2.0 | 10 | 1.1613 | 0.0 | | 1.0641 | 3.0 | 15 | 1.1547 | 0.0 | | 0.9592 | 4.0 | 20 | 1.1680 | 0.0 | | 0.9085 | 5.0 | 25 | 1.1762 | 0.0 | | 0.8508 | 6.0 | 30 | 1.1809 | 0.2 | | 0.7263 | 7.0 | 35 | 1.1912 | 0.2 | | 0.6448 | 8.0 | 40 | 1.2100 | 0.2 | | 0.5378 | 9.0 | 45 | 1.2037 | 0.2 | | 0.5031 | 10.0 | 50 | 1.2096 | 0.2 | | 0.4041 | 11.0 | 55 | 1.2203 | 0.2 |
504a41dbf42bfd1e4f3d933ec85c20de
apache-2.0
['generated_from_trainer']
false
presentation_hate_31415 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8632 - F1: 0.7730
8f0dd32aa8bb1ff9c054cc1be54e2f2d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.436235805743952e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 31415 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4
0003f0aae84a98a99419ee85a46c5aa4
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.363 | 1.0 | 282 | 0.4997 | 0.7401 | | 0.2145 | 2.0 | 564 | 0.5071 | 0.7773 | | 0.1327 | 3.0 | 846 | 0.7109 | 0.7645 | | 0.0157 | 4.0 | 1128 | 0.8632 | 0.7730 |
c89c6d9e0687ef1cc062921174c98c93
apache-2.0
['generated_from_trainer']
false
bert-finetuned-mutation-recognition-3 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0727 - Dnamutation F1: 0.6484 - Proteinmutation F1: 0.8571 - Snp F1: 1.0 - Precision: 0.7966 - Recall: 0.7625 - F1: 0.7792 - Accuracy: 0.9872
42d83f01c5de291de197578bfd1de8e7
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Dnamutation F1 | Proteinmutation F1 | Snp F1 | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:------------------:|:------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 324 | 0.0323 | 0.5996 | 0.7886 | 1.0 | 0.6583 | 0.7982 | 0.7215 | 0.9901 | | 0.0788 | 2.0 | 648 | 0.0314 | 0.6765 | 0.8783 | 1.0 | 0.7453 | 0.8571 | 0.7973 | 0.9907 | | 0.0788 | 3.0 | 972 | 0.0306 | 0.6391 | 0.8679 | 1.0 | 0.7341 | 0.8232 | 0.7761 | 0.9903 | | 0.0273 | 4.0 | 1296 | 0.0424 | 0.6360 | 0.8714 | 1.0 | 0.7792 | 0.775 | 0.7771 | 0.9885 | | 0.0178 | 5.0 | 1620 | 0.0462 | 0.5885 | 0.8683 | 1.0 | 0.7576 | 0.7589 | 0.7583 | 0.9869 | | 0.0178 | 6.0 | 1944 | 0.0531 | 0.6176 | 0.8701 | 1.0 | 0.7734 | 0.7679 | 0.7706 | 0.9873 | | 0.0165 | 7.0 | 2268 | 0.0573 | 0.6597 | 0.8658 | 1.0 | 0.8022 | 0.775 | 0.7884 | 0.9881 | | 0.0144 | 8.0 | 2592 | 0.0636 | 0.6596 | 0.8454 | 1.0 | 0.7919 | 0.7679 | 0.7797 | 0.9871 | | 0.0144 | 9.0 | 2916 | 0.0710 | 0.6568 | 0.8748 | 1.0 | 0.8159 | 0.7679 | 0.7912 | 0.9872 | | 0.0108 | 10.0 | 3240 | 0.0727 | 0.6484 | 0.8571 | 1.0 | 0.7966 | 0.7625 | 0.7792 | 0.9872 |
0b4a86a493a7a35e26e72f5fc83cde38
apache-2.0
['automatic-speech-recognition', 'nl']
false
exp_w2v2t_nl_unispeech_s683 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
feef46ef8d13a99c4179143ea6ac4faa
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
MultiBERTs Seed 2 Checkpoint 160k (uncased) This is the seed-2 intermediate checkpoint at 160k steps of MultiBERTs, a pretrained BERT model for English trained with a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: the team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).
33879f4a91a42aa587cbdba56a1f5927
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
How to use Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-160k')
model = BertModel.from_pretrained("multiberts-seed-2-160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
da75d682397665b02eeb59856d1b8138
apache-2.0
['hf-ast-leaderboard', 'generated_from_trainer']
false
Whisper Small arb - GP This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Dialect Arabic dataset. It achieves the following results on the evaluation set: - Loss: 2.1489 - Wer: 110.7984
d0d7417c51a8f2e8a42f50f2c4569b6c
apache-2.0
['hf-ast-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9933 | 1.89 | 1000 | 2.0970 | 125.2555 | | 1.3119 | 3.79 | 2000 | 1.9818 | 113.1290 | | 0.7643 | 5.68 | 3000 | 2.0559 | 115.4176 | | 0.5144 | 7.58 | 4000 | 2.1489 | 110.7984 |
92ee8e57a48a121335ee0e0321623c93
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased_fold_8_ternary_v1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8474 - F1: 0.8022
767eec04fe64bf09ce0b5307d145b10f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 289 | 0.5398 | 0.7838 | | 0.5509 | 2.0 | 578 | 0.6062 | 0.7703 | | 0.5509 | 3.0 | 867 | 0.6563 | 0.7666 | | 0.2366 | 4.0 | 1156 | 0.7688 | 0.7961 | | 0.2366 | 5.0 | 1445 | 1.0968 | 0.7690 | | 0.1247 | 6.0 | 1734 | 1.1414 | 0.7924 | | 0.0482 | 7.0 | 2023 | 1.2159 | 0.7875 | | 0.0482 | 8.0 | 2312 | 1.2703 | 0.7887 | | 0.0245 | 9.0 | 2601 | 1.3401 | 0.7985 | | 0.0245 | 10.0 | 2890 | 1.4645 | 0.7961 | | 0.0149 | 11.0 | 3179 | 1.5632 | 0.7801 | | 0.0149 | 12.0 | 3468 | 1.5249 | 0.7875 | | 0.0124 | 13.0 | 3757 | 1.6263 | 0.7948 | | 0.0038 | 14.0 | 4046 | 1.8059 | 0.7764 | | 0.0038 | 15.0 | 4335 | 1.7649 | 0.7776 | | 0.0061 | 16.0 | 4624 | 1.8293 | 0.7850 | | 0.0061 | 17.0 | 4913 | 1.8316 | 0.7887 | | 0.0022 | 18.0 | 5202 | 1.7628 | 0.7973 | | 0.0022 | 19.0 | 5491 | 1.8763 | 0.7862 | | 0.002 | 20.0 | 5780 | 1.8409 | 0.7899 | | 0.0026 | 21.0 | 6069 | 1.8146 | 0.8022 | | 0.0026 | 22.0 | 6358 | 1.8420 | 0.7973 | | 0.0008 | 23.0 | 6647 | 1.8683 | 0.8010 | | 0.0008 | 24.0 | 6936 | 1.8571 | 0.8010 | | 0.0015 | 25.0 | 7225 | 1.8474 | 0.8022 |
52883d35a17dc0e25cf247981002687d
apache-2.0
['align', 'clip']
false
Model Details This is an unofficial implementation of [ALIGN](https://arxiv.org/abs/2102.05918) trained on [COYO-700M](https://github.com/kakaobrain/coyo-dataset). The official ALIGN is trained on its own dataset of 1.8B samples, which has not been released to the public. Instead, we trained our implementation of the ALIGN model on [COYO-700M](https://github.com/kakaobrain/coyo-dataset). It was developed by Kakao Brain to validate the performance of the COYO-700M dataset on a large-scale model. The training took about 8 days on a TPU v3-512.
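The zero-shot scoring sketch below assumes this checkpoint corresponds to the ALIGN integration shipped in recent transformers releases under `kakaobrain/align-base`; that mapping and the repo id are assumptions, not something stated in this card.

```python
# Hedged sketch: zero-shot image-text matching with the transformers ALIGN classes.
import requests
import torch
from PIL import Image
from transformers import AlignProcessor, AlignModel

processor = AlignProcessor.from_pretrained("kakaobrain/align-base")  # assumed repo id
model = AlignModel.from_pretrained("kakaobrain/align-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["an image of a cat", "an image of a dog"]

inputs = processor(text=candidate_labels, images=image, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity scores
```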
b79e0c2777a50217524dd95a5907992e
apache-2.0
['align', 'clip']
false
Evaluation results | | Dataset | ImageNet | Flickr30k | | MsCOCO | | |----------------------------------|:----------:|:--------:|:---------:|:-------:|:-------:|:-------:| | | | KNN | I2T R@1 | T2I R@1 | I2T R@1 | T2I R@1 | | ALIGN-L2-Large(Google) | ALIGN 1.8B | 76.4 | 88.6 | 75.7 | 58.6 | 45.6 | | ALIGN-B7-Base(Google) | ALIGN 1.8B | 69.3 | - | - | 55.4 | 41.7 | | COYO-ALIGN-B7-Base(Kakao Brain) | COYO-700M | 68.6 | 88.1 | 73.2 | 61.2 | 43.1 |
e6a24af04ff794784cb9eb326ccc8526
mit
[]
false
Model miniALBERT is a recursive transformer model which uses cross-layer parameter sharing, embedding factorisation, and bottleneck adapters to achieve high parameter efficiency. Since miniALBERT is a compact model, it is trained using a layer-to-layer distillation technique, using the BioClinicalBERT model as the teacher. This model is trained for 3 epochs on the MIMIC-III notes dataset. In terms of architecture, this model uses an embedding dimension of 312, a hidden size of 768, an MLP expansion rate of 4, and a reduction factor of 16 for bottleneck adapters. In general, this model uses 6 recursions and has a unique parameter count of 18 million parameters.
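As a rough illustration of the bottleneck-adapter component described above (hidden size 768 with a reduction factor of 16), a minimal PyTorch sketch might look like the following; this is illustrative only and not the repository's actual module.

```python
# Illustrative sketch of a bottleneck adapter: 768 hidden units reduced by a factor of 16.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_size=768, reduction_factor=16):
        super().__init__()
        bottleneck = hidden_size // reduction_factor  # 48
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x):
        # residual connection keeps the frozen backbone's representation intact
        return x + self.up(self.act(self.down(x)))
```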
90c1832336202708796178f4e11b4617
mit
[]
false
For Sequence Classification use the below code

```python
model = MiniAlbertForTokenClassification.from_pretrained("nlpie/clinical-miniALBERT-312")
```

In addition, for efficient fine-tuning using the pre-trained bottleneck adapters, use the below code:

```python
model.trainAdaptersOnly()
```
28463ee8f09edfd736845a4e2fea8f79
mit
[]
false
Citation If you use the model, please cite our paper: ```bibtex @misc{https://doi.org/10.48550/arxiv.2302.04725, doi = {10.48550/ARXIV.2302.04725}, url = {https://arxiv.org/abs/2302.04725}, author = {Rohanian, Omid and Nouriborji, Mohammadmahdi and Jauncey, Hannah and Kouchaki, Samaneh and Group, ISARIC Clinical Characterisation and Clifton, Lei and Merson, Laura and Clifton, David A.}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7, 68T50}, title = {Lightweight Transformers for Clinical Natural Language Processing}, publisher = {arXiv}, year = {2023}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
a1b4fd0de578f847a4678aef732ca145
apache-2.0
['automatic-speech-recognition', 'ar']
false
exp_w2v2t_ar_vp-nl_s756 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
493e869b93e52bc4e043d3f9c81ec7af
apache-2.0
['generated_from_trainer']
false
bert-base-multilingual-cased-finetuned-multilingual-ner This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2352 - Precision: 0.8109 - Recall: 0.8332 - F1: 0.8219 - Accuracy: 0.9264
aec9054779273a53a29165a0c0513829
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.7301 | 0.16 | 100 | 0.3827 | 0.6189 | 0.7009 | 0.6573 | 0.8734 | | 0.3841 | 0.32 | 200 | 0.3195 | 0.7057 | 0.7511 | 0.7277 | 0.8922 | | 0.3451 | 0.48 | 300 | 0.2862 | 0.7094 | 0.7750 | 0.7407 | 0.8952 | | 0.3187 | 0.65 | 400 | 0.2735 | 0.7372 | 0.7802 | 0.7581 | 0.9019 | | 0.3058 | 0.81 | 500 | 0.2533 | 0.7536 | 0.8015 | 0.7768 | 0.9052 | | 0.2918 | 0.97 | 600 | 0.2458 | 0.7587 | 0.8085 | 0.7828 | 0.9126 | | 0.2425 | 1.13 | 700 | 0.2379 | 0.7742 | 0.7976 | 0.7857 | 0.9150 | | 0.2387 | 1.29 | 800 | 0.2300 | 0.7772 | 0.8108 | 0.7936 | 0.9165 | | 0.2125 | 1.45 | 900 | 0.2387 | 0.7900 | 0.8130 | 0.8014 | 0.9180 | | 0.2026 | 1.62 | 1000 | 0.2317 | 0.7877 | 0.8152 | 0.8012 | 0.9186 | | 0.1963 | 1.78 | 1100 | 0.2326 | 0.7842 | 0.8269 | 0.8049 | 0.9220 | | 0.2052 | 1.94 | 1200 | 0.2247 | 0.7924 | 0.8234 | 0.8076 | 0.9212 | | 0.1868 | 2.1 | 1300 | 0.2410 | 0.7903 | 0.8282 | 0.8088 | 0.9204 | | 0.1556 | 2.26 | 1400 | 0.2428 | 0.8064 | 0.8317 | 0.8189 | 0.9256 | | 0.153 | 2.42 | 1500 | 0.2316 | 0.8017 | 0.8282 | 0.8147 | 0.9238 | | 0.1484 | 2.58 | 1600 | 0.2379 | 0.8054 | 0.8338 | 0.8194 | 0.9258 | | 0.137 | 2.75 | 1700 | 0.2331 | 0.8101 | 0.8324 | 0.8211 | 0.9270 | | 0.1638 | 2.91 | 1800 | 0.2352 | 0.8109 | 0.8332 | 0.8219 | 0.9264 |
96d020cc7bb4cfcb8d4afb305cd663e4
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - TA dataset. It achieves the following results on the evaluation set: - Loss: inf - Wer: 1.0
f6e698a38d834371646624ae35b067b9
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP
209176b05d852b22eef7c5ffe36330de
mit
['automatic-speech-recognition', 'generated_from_trainer']
false
Model description We fine-tuned a wav2vec 2.0 large XLSR-53 checkpoint with 842h of unlabelled Luxembourgish speech collected from [RTL.lu](https://www.rtl.lu/). Then the model was fine-tuned on 4h of labelled Luxembourgish speech from the same domain. Additionally, we rescore the output transcription with a 5-gram language model trained on text corpora from RTL.lu and the Luxembourgish parliament.
2b484dae6494345141cd4b27dee85497
mit
['bert', 'language-model', 'flaubert', 'flue', 'french', 'flaubert-small', 'cased']
false
FlauBERT: Unsupervised Language Model Pre-training for French **FlauBERT** is a French BERT trained on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/) supercomputer. Along with FlauBERT comes [**FLUE**](https://github.com/getalp/Flaubert/tree/master/flue): an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language. For more details, please refer to the [official website](https://github.com/getalp/Flaubert).
fad28aea2bea4270b9f9cdf33043e52e
mit
['bert', 'language-model', 'flaubert', 'flue', 'french', 'flaubert-small', 'cased']
false
FlauBERT models

| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------: | :---: | :---: | :---: | :---: |
| `flaubert-small-cased` | 6 | 8 | 512 | 54 M |
| `flaubert-base-uncased` | 12 | 12 | 768 | 137 M |
| `flaubert-base-cased` | 12 | 12 | 768 | 138 M |
| `flaubert-large-cased` | 24 | 16 | 1024 | 373 M |

**Note:** `flaubert-small-cased` is partially trained, so performance is not guaranteed. Consider using it for debugging purposes only.
6b75247cf9c6cb10375be15243a938f0
mit
['bert', 'language-model', 'flaubert', 'flue', 'french', 'flaubert-small', 'cased']
false
Load pretrained model and tokenizer

```python
from transformers import FlaubertModel, FlaubertTokenizer

modelname = 'flaubert/flaubert_small_cased'  # this card's checkpoint; see the version note below

flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True)
flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False)
```
3e9469e7a02de3ffcff5161010448d6a
mit
['bert', 'language-model', 'flaubert', 'flue', 'french', 'flaubert-small', 'cased']
false
do_lowercase=False if using cased models, True if using uncased ones

```python
import torch

sentence = "Le chat mange une pomme."
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)])
last_layer = flaubert(token_ids)[0]
print(last_layer.shape)
```
e1b5cc7b2e9478f2a12bfaef2243d695
mit
['bert', 'language-model', 'flaubert', 'flue', 'french', 'flaubert-small', 'cased']
false
The BERT [CLS] token corresponds to the first hidden state of the last layer

```python
cls_embedding = last_layer[:, 0, :]
```

**Notes:** if your `transformers` version is <=2.10.0, `modelname` should take one of the following values:

```
['flaubert-small-cased', 'flaubert-base-uncased', 'flaubert-base-cased', 'flaubert-large-cased']
```
4d1083793b601e0e59214c37233e3cbb
mit
['bert', 'language-model', 'flaubert', 'flue', 'french', 'flaubert-small', 'cased']
false
References If you use FlauBERT or the FLUE Benchmark for your scientific publication, or if you find the resources in this repository useful, please cite one of the following papers: [LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.302.pdf) ``` @InProceedings{le2020flaubert, author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier}, title = {FlauBERT: Unsupervised Language Model Pre-training for French}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2479--2490}, url = {https://www.aclweb.org/anthology/2020.lrec-1.302} } ``` [TALN paper](https://hal.archives-ouvertes.fr/hal-02784776/) ``` @inproceedings{le2020flaubert, title = {FlauBERT: des mod{\`e}les de langue contextualis{\'e}s pr{\'e}-entra{\^\i}n{\'e}s pour le fran{\c{c}}ais}, author = {Le, Hang and Vial, Lo{\"\i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb{\'e}, Beno{\^\i}t and Besacier, Laurent and Schwab, Didier}, booktitle = {Actes de la 6e conf{\'e}rence conjointe Journ{\'e}es d'{\'E}tudes sur la Parole (JEP, 31e {\'e}dition), Traitement Automatique des Langues Naturelles (TALN, 27e {\'e}dition), Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R{\'E}CITAL, 22e {\'e}dition). Volume 2: Traitement Automatique des Langues Naturelles}, pages = {268--278}, year = {2020}, organization = {ATALA} } ```
77800cb422c437608926da203ad08425
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1654 - F1: 0.8590
a2fad795cde60e7908933a2153921e4e
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2845 | 1.0 | 715 | 0.1831 | 0.8249 | | 0.1449 | 2.0 | 1430 | 0.1643 | 0.8479 | | 0.0929 | 3.0 | 2145 | 0.1654 | 0.8590 |
bc46d3376e0478e3bae4abf55d7242a5
apache-2.0
['image-classification', 'timm']
false
Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 26.3 - GMACs: 2.6 - Activations (M): 18.5 - Image size: 224 x 224 - **Original:** https://github.com/snap-research/EfficientFormer - **Papers:** - Rethinking Vision Transformers for MobileNet Size and Speed: https://arxiv.org/abs/2212.08059 - **Dataset:** ImageNet-1k
54bcf031fe9ad150ae5f2ffa289dee20
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('efficientformerv2_l.snap_dist_in1k', pretrained=True)
model = model.eval()
```
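The block above stops after `model.eval()`; a typical continuation in timm usage (a hedged sketch, not copied verbatim from this card) resolves the model's preprocessing config and classifies the image:

```python
# Hedged sketch: standard timm preprocessing and top-5 prediction for the model loaded above.
import torch

data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # shape: (1, num_classes)
top5_prob, top5_idx = torch.topk(output.softmax(dim=1) * 100, k=5)
```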
b08d96ab32f613e0b0abc51daa1c96b5
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'efficientformerv2_l.snap_dist_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
```
32fd026892157270d257eef25144956c
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'efficientformerv2_l.snap_dist_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()
```
805fc779e60cf5a88649203738ff3dd4
apache-2.0
['image-classification', 'timm']
false
Citation ```bibtex @article{li2022rethinking, title={Rethinking Vision Transformers for MobileNet Size and Speed}, author={Li, Yanyu and Hu, Ju and Wen, Yang and Evangelidis, Georgios and Salahi, Kamyar and Wang, Yanzhi and Tulyakov, Sergey and Ren, Jian}, journal={arXiv preprint arXiv:2212.08059}, year={2022} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
aeee581cf1bb420361dc2739614174d4
cc
['text classification']
false
Model information: This model is the [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) model that has been finetuned using radiology report texts from the MIMIC-III database. The task performed was text classification, in order to benchmark this model against a selection of other BERT variants for the classification of MIMIC-III radiology report texts into two classes. Labels of [0,1] were assigned: 1 for radiology reports in MIMIC-III linked to an ICD9 diagnosis code for lung cancer, and 0 for a random sample of reports not linked to any type of cancer diagnosis code at all.
c1929c84659e8d5d894f4a8a65b11a0c
cc
['text classification']
false
Limitations: Note that the dataset and model may not be fully representative of or suitable for all needs. It is recommended that the paper for the dataset and the base model card be reviewed before use: - [MIMIC-III](https://www.nature.com/articles/sdata201635.pdf) - [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT)
b4a744a5063370d3b7fce0fe9851238c
cc
['text classification']
false
How to use: Load the model from the library using the following checkpoints:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/bioclinical-bert-ft-m3-lc")
model = AutoModel.from_pretrained("sarahmiller137/bioclinical-bert-ft-m3-lc")
```
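The snippet above loads the bare encoder with `AutoModel`; since the fine-tuning task was binary text classification, a hedged sketch for obtaining an actual label prediction (assuming the checkpoint ships a sequence-classification head) would be:

```python
# Hedged sketch: classification with the fine-tuned head (1 = lung-cancer-linked report, 0 = other).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "sarahmiller137/bioclinical-bert-ft-m3-lc"
tokenizer = AutoTokenizer.from_pretrained(name)
clf = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("FINDINGS: ...", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = clf(**inputs).logits
pred = logits.argmax(dim=-1).item()
```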
276e256c2b525f933aae9498b09583c5
mit
['conversational']
false
Aeona | Chatbot ![Aeona Banner](https://github.com/deepsarda/Aeona/blob/master/dashboard/static/banner.png?raw=true) A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small). It is recommended to use it along with an [AIML Chatbot](https://github.com/deepsarda/Aeona-Aiml) to reduce load, get better replies, and add a name and personality to your bot. Using an AIML chatbot also allows you to hardcode some replies.
96e2b8f124cf70179f80f70297fa0048
mit
['conversational']
false
AEONA Aeona is a chatbot which hopes to be able to talk with humans as if it were a friend! Its main target platform is Discord. You can invite the bot [here](https://aeona.xyz). To learn more about this project and chat with the AI, you can use this [website](https://aeona.xyz/). Aeona works by using the context of the previous messages, guessing the personality of the human who is talking with it, and adapting its own personality to better talk with the user.
302b95c785cbc1c80216aaf90fd3fb3b
mit
['conversational']
false
Why not an AI on its own? For an AI it is not (realistically) possible to learn about the user and store data on them, compared to an AIML, which can even execute code! The goal of the AI is to generate responses where the AIML fails. Hence the goal becomes to make an AI which has a wide variety of knowledge, yet is as small as possible! So we use three datasets: 1. [Movielines](https://www.kaggle.com/Cornell-University/movie-dialog-corpus) The movie lines promote longer and more thought-out responses, but they can be very random. About 200k lines! 2. [Discord Messages](https://www.kaggle.com/jef1056/discord-data) The messages cover a wide variety of topics, filtered and with spam removed, which makes the AI fairly random but gives it responses to everyday questions. About 120 million messages! 3. A custom dataset scraped from my messages. These messages are very narrow, so teaching on this dataset alone and sending a random reply will make the AI say sorry loads of times!
f8241d72d83ce5814b0681c2d668a006
mit
['conversational']
false
Training The Discord Messages dataset simply dwarfs the other datasets, hence the other datasets are repeated. This leads to them covering each other's issues! The AI has a context of 6 messages, which means it will reply considering up to the 4th previous message from the user. [Example](https://huggingface.co/deepparag/Aeona-Beta/discussions/1)
bab8d6f3dc35ae637e5e5132ff45526a
mit
['conversational']
false
Tips for Hugging Face inference I recommend sending the user input plus the previous 3 AI and human responses. Using more context than this will lead to useless responses; using less is alright, but the responses may be random.
b18d63d3d6fbd639d6a98f6aecd612c3
mit
['conversational']
false
Evaluation Below is a comparison of Aeona vs. other baselines on the mixed dataset given above using automatic evaluation metrics. | Model | Perplexity | |---|---| | Seq2seq Baseline [3] | 29.8 | | Wolf et al. [5] | 16.3 | | GPT-2 baseline | 99.5 | | DialoGPT baseline | 56.6 | | DialoGPT finetuned | 11.4 | | PersonaGPT | 10.2 | | **Aeona** | **7.9** |
3e814e5ceb8b7ba255bdede95b097f11
mit
['conversational']
false
Usage Example:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("deepparag/Aeona")
model = AutoModelWithLMHead.from_pretrained("deepparag/Aeona")
```
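Between loading the model (above) and the `generate()` call shown below, the usual DialoGPT-style step is to encode the user message and append it to the running chat history. The sketch below follows that common pattern; the variable names are assumptions where this card does not show them.

```python
# Hedged sketch of building `bot_input_ids` for the generate() call that follows.
import torch

new_user_input_ids = tokenizer.encode(
    "Hello, how are you?" + tokenizer.eos_token, return_tensors="pt"
)
# chat_history_ids holds the ids of previous turns; on the first turn it is empty.
chat_history_ids = torch.empty((1, 0), dtype=torch.long)
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1)
```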
81d0b439a1b282d46936f74df511a791
mit
['conversational']
false
generate a response while limiting the total chat history to 1000 tokens

```python
chat_history_ids = model.generate(
    bot_input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
    no_repeat_ngram_size=4,
    do_sample=True,
    top_k=100,
    top_p=0.7,
    temperature=0.8,
)
```
c71b00813de3afaecd7a73c9a2fbdee4
cc-by-sa-4.0
['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio']
false
Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_8k`

Description: This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `sep_clean` task of the Libri3Mix dataset.

Training config:
```yml
data:
  n_src: 3
  sample_rate: 8000
  segment: 3
  task: sep_clean
  train_dir: data/wav8k/min/train-360
  valid_dir: data/wav8k/min/dev
filterbank:
  kernel_size: 16
  n_filters: 512
  stride: 8
masknet:
  bn_chan: 128
  hid_chan: 512
  mask_act: relu
  n_blocks: 8
  n_repeats: 3
  n_src: 3
  skip_chan: 128
optim:
  lr: 0.001
  optimizer: adam
  weight_decay: 0.0
training:
  batch_size: 24
  early_stop: true
  epochs: 200
  half_lr: true
  num_workers: 4
```

Results: On the Libri3Mix min test set:
```yaml
si_sdr: 8.581797049575108
si_sdr_imp: 11.977037288467368
sdr: 9.305885208641385
sdr_imp: 12.3943409734845
sir: 16.42030534048559
sir_imp: 19.508759460400984
sar: 10.641943911079238
sar_imp: -56.4345187842095
stoi: 0.8365148408724333
stoi_imp: 0.24401766199806396
```

License notice: This work "ConvTasNet_Libri3Mix_sepclean_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov, used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_8k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris.
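A hedged usage sketch with the Asteroid library follows; `BaseModel.from_pretrained` and `separate` are standard Asteroid entry points, but treat the exact call as an assumption rather than something stated in this card.

```python
# Hedged sketch: loading the checkpoint and separating an 8 kHz mixture with Asteroid.
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri3Mix_sepclean_8k")
# writes the three estimated source files next to the input mixture
model.separate("mixture_8k.wav")
```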
078e79440aa7e7293f04c0b7ac39241b
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
DreamBooth model for the spaeti concept trained by malysheva42 on the malysheva42/spaeti_store dataset. This is a Stable Diffusion model fine-tuned on the spaeti (späti) concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of spaeti store** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
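A hedged sketch of running this DreamBooth concept with diffusers is below; the model repo id is a guess based on the card and may not be exact.

```python
# Hedged sketch: text-to-image inference with the instance prompt from the card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "malysheva42/spaeti-store",   # assumed repo id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of spaeti store").images[0]
image.save("spaeti.png")
```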
08e5cb2caa85ff60824df584d29e8f19
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
Examples 1. a picture of spaeti store in the forest ![a picture of spaeti store in the forest](sample-image-forest.png) 2. a picture of spaeti store on the beach near the sea, best quality ![a picture of spaeti store on the beach near the sea, best quality](sample-image-beach.png) 3. a picture of spaeti store in the snow ![a picture of spaeti store in the snow](sample-image-snow.png)
aa9f9cc6e1a5cea449cef493f79bef52
apache-2.0
['part-of-speech', 'token-classification']
false
XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Croatian This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
f2667054cf6d257edb8128fdabfb348d
apache-2.0
['part-of-speech', 'token-classification']
false
Usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hr")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hr")
```
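Building on the snippet above, a short usage sketch wraps the model in a token-classification pipeline to tag a Croatian sentence; this is standard transformers usage, not copied verbatim from this card.

```python
# Hedged sketch: POS tagging a Croatian sentence with the model loaded above.
from transformers import pipeline

pos = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(pos("Ovo je primjer rečenice."))
```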
b82dcbd649e219a85756fd4af3bc0fa7
apache-2.0
['text-generation', 'text2text-generation']
false
MVP-question-generation The MVP-question-generation model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
ae2de5f0c7048183d78cd65f68bcb446
apache-2.0
['text-generation', 'text2text-generation']
false
Model Description MVP-question-generation is a prompt-based model: MVP further equipped with prompts pre-trained using labeled question generation datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts. MVP-question-generation is specially designed for question generation tasks, such as SQuAD and CoQA.
cee7f4021eb457cf6a8a01ded9585981
apache-2.0
['text-generation', 'text2text-generation']
false
Example

```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-question-generation")

>>> inputs = tokenizer(
...     "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['A bolo punch and a hook are both punches used in what sport?']
```
24ecffcf6239d3442eb4733b181fb586
apache-2.0
['text-generation', 'text2text-generation']
false
Related Models **MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp). **Prompt-based models**: - MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task). - MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization). - MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog). - MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text). - MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story). - MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering). - MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation). - MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog). **Multi-task models**: - MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization). - MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog). - MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text). - MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story). - MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering). - MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation). - MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
1a27159d326f18d2109d659ab1fc29e7
apache-2.0
['text-generation', 'text2text-generation']
false
Citation ```bibtex @article{tang2022mvp, title={MVP: Multi-task Supervised Pre-training for Natural Language Generation}, author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong}, journal={arXiv preprint arXiv:2206.12131}, year={2022}, url={https://arxiv.org/abs/2206.12131}, } ```
ac366490a3203b58a2b8281b639949e6
apache-2.0
['tensorflowtts', 'audio', 'text-to-speech', 'text-to-mel']
false
Tacotron 2 with Guided Attention trained on Synpaflex (Fr) This repository provides a pretrained [Tacotron2](https://arxiv.org/abs/1712.05884) trained with [Guided Attention](https://arxiv.org/abs/1710.08969) on Synpaflex dataset (Fr). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS).
f51f1b443089c3fde75e92674adffc45
apache-2.0
['tensorflowtts', 'audio', 'text-to-speech', 'text-to-mel']
false
Converting your Text to Mel Spectrogram

```python
import numpy as np
import soundfile as sf
import yaml
import tensorflow as tf

from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel

processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-synpaflex-fr")
tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-synpaflex-fr")

text = "Oh, je voudrais tant que tu te souviennes Des jours heureux quand nous étions amis"

input_ids = processor.text_to_sequence(text)

decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
    input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
    input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32),
    speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
)
```
3743294c81defc0262d91cf37d87efe9
apache-2.0
['tensorflowtts', 'audio', 'text-to-speech', 'text-to-mel']
false
Referencing Tacotron 2 ``` @article{DBLP:journals/corr/abs-1712-05884, author = {Jonathan Shen and Ruoming Pang and Ron J. Weiss and Mike Schuster and Navdeep Jaitly and Zongheng Yang and Zhifeng Chen and Yu Zhang and Yuxuan Wang and R. J. Skerry{-}Ryan and Rif A. Saurous and Yannis Agiomyrgiannakis and Yonghui Wu}, title = {Natural {TTS} Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions}, journal = {CoRR}, volume = {abs/1712.05884}, year = {2017}, url = {http://arxiv.org/abs/1712.05884}, archivePrefix = {arXiv}, eprint = {1712.05884}, timestamp = {Thu, 28 Nov 2019 08:59:52 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1712-05884.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
8629cc2a12a457b3c10c0997bab1ae80
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
laywaxys Dreambooth model trained by NOISK8 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
7863143b09d7c7a9eb1db837ce9da0ee
apache-2.0
['text-generation', 'text2text-generation']
false
MTL-question-generation The MTL-question-generation model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
3384d80a60b16c19c5cd6b5d5d589404
apache-2.0
['text-generation', 'text2text-generation']
false
Model Description MTL-question-generation is supervised pre-trained using a mixture of labeled question generation datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture. MTL-question-generation is specially designed for question generation tasks, such as SQuAD and CoQA.
b70d2f03802de089ebe7a30648aad0f5
apache-2.0
['text-generation', 'text2text-generation']
false
Example

```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-question-generation")

>>> inputs = tokenizer(
...     "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['A bolo punch and a hook are both punches used in what sport?']
```
7df267f3bbaf7ea4f9c8f68879cbb364
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-finetuned-squad This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.4571
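A hedged sketch of extractive question answering with this kind of SQuAD-finetuned checkpoint follows; the repo id is a placeholder since the card does not state where the model is published.

```python
# Hedged sketch: extractive QA with a SQuAD-finetuned BERT checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="path/to/bert-base-uncased-finetuned-squad")  # placeholder id
result = qa(
    question="What is the capital of France?",
    context="Paris is the capital and most populous city of France.",
)
print(result["answer"], result["score"])
```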
6fe3882a7bbd914a5bb5a48b70a60f93
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
d8c9f8e17773623b974aa67a1bd77966