Dataset columns: license (string, 2–30 chars), tags (string, 2–513 chars), is_nc (bool, 1 class), readme_section (string, 201–597k chars), hash (string, 32 chars)
cc-by-sa-4.0
['vietnamese', 'token-classification', 'pos', 'dependency-parsing']
false
How to Use

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, TokenClassificationPipeline

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-vietnamese-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-vietnamese-upos")
pipeline = TokenClassificationPipeline(tokenizer=tokenizer, model=model, aggregation_strategy="simple")
nlp = lambda x: [(x[t["start"]:t["end"]], t["entity_group"]) for t in pipeline(x)]
print(nlp("Hai cái đầu thì tốt hơn một."))
```

or

```py
import esupar

nlp = esupar.load("KoichiYasuoka/roberta-base-vietnamese-upos")
print(nlp("Hai cái đầu thì tốt hơn một."))
```
daf8d813bb8462fc4e6cc5d3e8cdd7ed
apache-2.0
['generated_from_trainer']
false
my_awesome_model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.2361
- Accuracy: 0.9313
1e29bf7789a539772f94136c58ac6a49
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2339 | 1.0 | 1563 | 0.1924 | 0.9263 |
| 0.1523 | 2.0 | 3126 | 0.2361 | 0.9313 |
d5eeaa80472b49b0cfb8725b18ec4c8d
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4433200/

This model was trained by kan-bayashi using the jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
efbc29bea2fcf911a4178ae32506e81b
cc-by-sa-4.0
['chinese', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description

This is a RoBERTa model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-chinese-upos](https://huggingface.co/KoichiYasuoka/roberta-base-chinese-upos).
d42f3ffbcf5788e1f74a42d72f408329
cc-by-sa-4.0
['chinese', 'token-classification', 'pos', 'dependency-parsing']
false
```py
text = "+text+"\n"
v=[(s,e) for s,e in w["offset_mapping"] if s<e]
for i,(s,e) in enumerate(v,1):
  q=self.model.config.id2label[p[i,h[i]]].split("|")
  u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"

nlp=UDgoeswith("KoichiYasuoka/roberta-base-chinese-ud-goeswith")
print(nlp("我把这本书看完了"))
```

with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/). Or without ufal.chu-liu-edmonds:

```
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-base-chinese-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("我把这本书看完了"))
```
f7defa6805e02bb1a09887826833b39c
apache-2.0
['generated_from_trainer']
false
distilbert-amazon-shoe-reviews-tensorboard

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.9534
- Accuracy: 0.5779
- F1: [0.63189419 0.46645049 0.50381304 0.55843496 0.73060507]
- Precision: [0.62953754 0.47008547 0.48669202 0.58801498 0.71780957]
- Recall: [0.63426854 0.46287129 0.52218256 0.53168844 0.74386503]
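The per-class F1 values above are the harmonic mean of the corresponding precision and recall; a quick pure-Python sanity check using the numbers from the list above:

```python
# Per-class precision and recall reported above (5 shoe-review rating classes).
precision = [0.62953754, 0.47008547, 0.48669202, 0.58801498, 0.71780957]
recall    = [0.63426854, 0.46287129, 0.52218256, 0.53168844, 0.74386503]

# F1 is the harmonic mean of precision and recall, computed per class.
f1 = [2 * p * r / (p + r) for p, r in zip(precision, recall)]
print([round(v, 4) for v in f1])
```

The result should match the reported F1 array up to rounding.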
64e1e266c3d2fbb6fe1271eb118cb2ec
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---:|:---:|
| 0.8776 | 1.0 | 2813 | 0.9534 | 0.5779 | [0.63189419 0.46645049 0.50381304 0.55843496 0.73060507] | [0.62953754 0.47008547 0.48669202 0.58801498 0.71780957] | [0.63426854 0.46287129 0.52218256 0.53168844 0.74386503] |
2520acfc683116406718e46965f2e764
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.8128
- Matthews Correlation: 0.5364
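The Matthews correlation reported above summarizes a binary confusion matrix in a single coefficient; a minimal pure-Python sketch of the formula (the toy counts below are illustrative only, not the CoLA evaluation):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Toy counts for illustration only.
print(round(matthews_corrcoef(tp=90, tn=40, fp=10, fn=20), 4))
```

A perfect classifier gives 1.0, random prediction gives around 0.0.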
3f93dcdc3a6e172a478a04f8a46b62b9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5227 | 1.0 | 535 | 0.5220 | 0.4210 |
| 0.3467 | 2.0 | 1070 | 0.5048 | 0.4882 |
| 0.2335 | 3.0 | 1605 | 0.5652 | 0.5173 |
| 0.1811 | 4.0 | 2140 | 0.7633 | 0.5200 |
| 0.1333 | 5.0 | 2675 | 0.8128 | 0.5364 |
83db889c55c05d6352b496af3270a01c
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-wikihow_3epoch_b4_lr3e-4

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset. It achieves the following results on the evaluation set:
- Loss: 2.2757
- Rouge1: 27.4024
- Rouge2: 10.7065
- Rougel: 23.3153
- Rougelsum: 26.7336
- Gen Len: 18.5506
48949e2b47b85c8085a7521c9f6ecde3
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.8424 | 0.13 | 5000 | 2.5695 | 25.2232 | 8.7617 | 21.2019 | 24.4949 | 18.4151 |
| 2.7334 | 0.25 | 10000 | 2.5229 | 25.3739 | 9.0477 | 21.5054 | 24.7553 | 18.3802 |
| 2.6823 | 0.38 | 15000 | 2.4857 | 26.341 | 9.6711 | 22.3446 | 25.7256 | 18.449 |
| 2.6607 | 0.51 | 20000 | 2.4540 | 26.0269 | 9.4722 | 22.0822 | 25.3602 | 18.4704 |
| 2.6137 | 0.64 | 25000 | 2.4326 | 26.2966 | 9.6815 | 22.4422 | 25.6326 | 18.3517 |
| 2.6077 | 0.76 | 30000 | 2.4108 | 26.0981 | 9.6221 | 22.1189 | 25.454 | 18.5079 |
| 2.5847 | 0.89 | 35000 | 2.3879 | 26.2675 | 9.6435 | 22.3738 | 25.6122 | 18.4838 |
| 2.5558 | 1.02 | 40000 | 2.3827 | 26.3458 | 9.7844 | 22.4718 | 25.7388 | 18.5097 |
| 2.4902 | 1.14 | 45000 | 2.3725 | 26.4987 | 9.9634 | 22.5398 | 25.8399 | 18.5912 |
| 2.4785 | 1.27 | 50000 | 2.3549 | 26.884 | 10.1136 | 22.8212 | 26.2262 | 18.4763 |
| 2.4822 | 1.4 | 55000 | 2.3467 | 26.8635 | 10.2266 | 22.9161 | 26.2252 | 18.5847 |
| 2.46 | 1.53 | 60000 | 2.3393 | 26.8602 | 10.1785 | 22.8453 | 26.1917 | 18.548 |
| 2.4523 | 1.65 | 65000 | 2.3330 | 26.91 | 10.237 | 22.9309 | 26.2372 | 18.5154 |
| 2.4525 | 1.78 | 70000 | 2.3203 | 27.073 | 10.4317 | 23.1355 | 26.4528 | 18.5063 |
| 2.4566 | 1.91 | 75000 | 2.3109 | 27.3853 | 10.5413 | 23.3455 | 26.7408 | 18.5258 |
| 2.4234 | 2.03 | 80000 | 2.3103 | 27.0836 | 10.4857 | 23.0538 | 26.409 | 18.5326 |
| 2.3686 | 2.16 | 85000 | 2.2986 | 27.311 | 10.6038 | 23.3068 | 26.6636 | 18.4874 |
| 2.3758 | 2.29 | 90000 | 2.2969 | 27.3509 | 10.6502 | 23.2764 | 26.6832 | 18.5438 |
| 2.3777 | 2.42 | 95000 | 2.2907 | 27.39 | 10.5842 | 23.3601 | 26.7433 | 18.5444 |
| 2.3624 | 2.54 | 100000 | 2.2875 | 27.3717 | 10.6098 | 23.3326 | 26.7232 | 18.5521 |
| 2.3543 | 2.67 | 105000 | 2.2811 | 27.4188 | 10.6919 | 23.3022 | 26.7426 | 18.564 |
| 2.366 | 2.8 | 110000 | 2.2763 | 27.4872 | 10.7079 | 23.4135 | 26.829 | 18.5399 |
| 2.3565 | 2.93 | 115000 | 2.2757 | 27.4024 | 10.7065 | 23.3153 | 26.7336 | 18.5506 |
b9e4b5080872a2c901a77166158b68ae
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2185
- Accuracy: 0.9245
- F1: 0.9244
a3d02a51645af3df16a721a25f4aad72
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.847 | 1.0 | 250 | 0.3228 | 0.8985 | 0.8951 |
| 0.2543 | 2.0 | 500 | 0.2185 | 0.9245 | 0.9244 |
38d0703922f3fbe4f47090a2a74b03c3
cc-by-sa-4.0
['long-documents']
false
Model description

[Longformer](https://arxiv.org/abs/2004.05150) is a transformer model for long documents. This version of Longformer was presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529). The model was warm-started from the weights of RoBERTa (Liu et al., 2019) and further pre-trained with MLM on long sequences, following the paradigm of the original Longformer released by Beltagy et al. (2020). It supports sequences of length up to 4,096. Longformer combines sliding-window (local) attention with global attention. Global attention is user-configured based on the task, allowing the model to learn task-specific representations.
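The combination of sliding-window and global attention can be pictured as a sparse attention mask; a small illustrative sketch with toy sizes (not the actual Longformer implementation):

```python
def longformer_mask(seq_len, window, global_idx):
    """Boolean attention mask: entry [i][j] is True where query i may attend
    to key j. Local attention: |i - j| <= window. Global tokens attend to
    every position and are attended to by every position."""
    mask = [[abs(i - j) <= window for j in range(seq_len)] for i in range(seq_len)]
    for g in global_idx:
        for j in range(seq_len):
            mask[g][j] = True   # global token attends to all positions
            mask[j][g] = True   # all positions attend to the global token
    return mask

# e.g. a [CLS]-style global token at position 0
m = longformer_mask(seq_len=8, window=1, global_idx=[0])
```

Non-global token pairs farther apart than the window cannot attend to each other, which is what makes the pattern sparse.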
92fca1e2f988a431d96aba66dba5e277
cc-by-sa-4.0
['long-documents']
false
Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=longformer) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification, or question answering.
6deb392357e83a7fc6912fbb07902351
cc-by-sa-4.0
['long-documents']
false
How to use

You can use this model directly with a pipeline for masked language modeling:

```python
from transformers import pipeline

mlm_model = pipeline('fill-mask', model='kiddothe2b/longformer-base-4096', trust_remote_code=True)
mlm_model("Hello I'm a <mask> model.")
```

You can also fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/longformer-base-4096", trust_remote_code=True)
doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/longformer-base-4096", trust_remote_code=True)
```
c71a52b4aa9527ac878c30985b7afec8
cc-by-sa-4.0
['long-documents']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 50000
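The total_train_batch_size above follows from the other settings (assuming a single device; multiple devices would multiply it further), and the warmup length in steps follows from the warmup ratio:

```python
train_batch_size = 16
gradient_accumulation_steps = 8

# Effective batch size per optimizer step on a single device.
total_train_batch_size = train_batch_size * gradient_accumulation_steps

# Warmup steps implied by lr_scheduler_warmup_ratio * training_steps.
warmup_steps = int(0.1 * 50000)

print(total_train_batch_size, warmup_steps)  # → 128 5000
```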
423edbd55b54ef6b79880f77e7a585ba
cc-by-sa-4.0
['long-documents']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7067 | 0.2 | 10000 | 1.5923 | 0.6714 |
| 1.6532 | 0.4 | 20000 | 1.5494 | 0.6784 |
| 1.622 | 0.6 | 30000 | 1.5208 | 0.6830 |
| 1.588 | 0.8 | 40000 | 1.4880 | 0.6876 |
| 1.5682 | 1.0 | 50000 | 1.4680 | 0.6908 |
c0075682166dcc2090c8b8ae5bb3bf35
cc-by-sa-4.0
['long-documents']
false
Citing

If you use HAT in your research, please cite: [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint).

```
@misc{chalkidis-etal-2022-hat,
  url = {https://arxiv.org/abs/2210.05529},
  author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
  title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
  publisher = {arXiv},
  year = {2022},
}
```

Also cite the original work: [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150).

```
@article{Beltagy2020Longformer,
  title={Longformer: The Long-Document Transformer},
  author={Iz Beltagy and Matthew E. Peters and Arman Cohan},
  journal={arXiv:2004.05150},
  year={2020},
}
```
768ca1461743c3126f3be9251ab867eb
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_1200k']
false
MultiBERTs, Intermediate Checkpoint - Seed 3, Step 1200k

MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is the model for seed 3, captured at pre-training step 1200k.
a05abbec4552a679bfee3fd6ab6a85cd
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_1200k']
false
How to use

Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow:

```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_1200k')
model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_1200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_1200k')
model = BertModel.from_pretrained("google/multiberts-seed_3-step_1200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
64665019a662cdfa07b83066024ec569
apache-2.0
['image-classification', 'timm']
false
Model card for maxvit_rmlp_small_rw_224.sw_in1k

A `timm`-specific MaxViT image classification model (with an MLP Log-CPB: continuous log-coordinate relative position bias, motivated by Swin-V2). Trained in `timm` on ImageNet-1k by Ross Wightman.

ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
a9541d3dc2e1c1b84d739a3dc8cc4cdb
apache-2.0
['image-classification', 'timm']
false
Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 64.9
  - GMACs: 10.7
  - Activations (M): 49.3
  - Image size: 224 x 224
- **Papers:**
  - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
  - Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
84d289e8bba54aeb2ec66ec893c67e9a
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('maxvit_rmlp_small_rw_224.sw_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
1578b1a2e9d4c256a4f39f6557563c65
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'maxvit_rmlp_small_rw_224.sw_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))

for o in output:
    # print shape of each feature map in the output
    print(o.shape)
```
85e0d13fcec8c8cce3558ae1368e31b9
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'maxvit_rmlp_small_rw_224.sw_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```
7ca6b12d3eeae79fddf8beeaf3e43e92
apache-2.0
['translation']
false
ceb-eng

* source group: Cebuano
* target group: English
* OPUS readme: [ceb-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ceb-eng/README.md)
* model: transformer-align
* source language(s): ceb
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.eval.txt)
c78fb5ed6ad44282d923b7412e0cf819
apache-2.0
['translation']
false
System Info:
- hf_name: ceb-eng
- source_languages: ceb
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ceb-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ceb', 'en']
- src_constituents: {'ceb'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.test.txt
- src_alpha3: ceb
- tgt_alpha3: eng
- short_pair: ceb-en
- chrF2_score: 0.387
- bleu: 21.5
- brevity_penalty: 1.0
- ref_len: 2293.0
- src_name: Cebuano
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ceb
- tgt_alpha2: en
- prefer_old: False
- long_pair: ceb-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
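The brevity_penalty of 1.0 above reflects BLEU's rule that no penalty applies when the hypothesis corpus is at least as long as the reference; a minimal sketch of the formula (the lengths below are illustrative, only ref_len=2293 comes from the card):

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """BLEU brevity penalty: 1 when hyp_len >= ref_len, else exp(1 - ref/hyp)."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

print(brevity_penalty(2300, 2293))            # hypothesis longer: no penalty
print(round(brevity_penalty(2000, 2293), 4))  # shorter hypothesis is penalized
```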
344bcb6060dac43c28acf28fd40362ee
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.3016
- Accuracy: 0.8633
- F1: 0.8629
ef6d6031b10ea0c5bee5c58eb4af718b
cc-by-4.0
['audio', 'automatic-speech-recognition', 'icelandic', 'xlrs-53-icelandic', 'iceland', 'reykjavik', 'samromur']
false
wav2vec2-large-xlsr-53-icelandic-ep10-1000h

The "wav2vec2-large-xlsr-53-icelandic-ep10-1000h" is an acoustic model suitable for Automatic Speech Recognition in Icelandic. It is the result of fine-tuning the model "facebook/wav2vec2-large-xlsr-53" for 10 epochs with around 1000 hours of Icelandic data developed by the [Language and Voice Laboratory](https://huggingface.co/language-and-voice-lab). Most of the data is available at public repositories such as [LDC](https://www.ldc.upenn.edu/), [OpenSLR](https://openslr.org/) or [Clarin.is](https://clarin.is/).

The specific list of corpora used to fine-tune the model is:
- [Samrómur 21.05 (114h34m)](http://www.openslr.org/112/)
- [Samrómur Children (127h25m)](https://catalog.ldc.upenn.edu/LDC2022S11)
- [Malrómur (119h03m)](https://clarin.is/en/resources/malromur/)
- [Althingi Parliamentary Speech (514h29m)](https://catalog.ldc.upenn.edu/LDC2021S01)
- L2-Speakers Data (125h55m) **Unpublished material**

The fine-tuning process was performed during December (2022) on the servers of the Language and Voice Laboratory (https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena.
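The "around 1000 hours" figure can be checked by summing the corpus durations listed above:

```python
# Durations (hours, minutes) of the five corpora listed above.
durations = [(114, 34), (127, 25), (119, 3), (514, 29), (125, 55)]

total_minutes = sum(h * 60 + m for h, m in durations)
hours, minutes = divmod(total_minutes, 60)
print(f"{hours}h{minutes:02d}m")  # → 1001h26m
```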
31ebd08b0f8cacecc08feb4973af75de
cc-by-4.0
['audio', 'automatic-speech-recognition', 'icelandic', 'xlrs-53-icelandic', 'iceland', 'reykjavik', 'samromur']
false
Load the processor and model.

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

MODEL_NAME = "carlosdanielhernandezmena/wav2vec2-large-xlsr-53-icelandic-ep10-1000h"
processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME)
```
b6e647efd5632afac8a03035e348281a
cc-by-4.0
['audio', 'automatic-speech-recognition', 'icelandic', 'xlrs-53-icelandic', 'iceland', 'reykjavik', 'samromur']
false
BibTeX entry and citation info

*When publishing results based on these models please refer to:*

```bibtex
@misc{mena2022xlrs53icelandic,
  title={Acoustic Model in Icelandic: wav2vec2-large-xlsr-53-icelandic-ep10-1000h.},
  author={Hernandez Mena, Carlos Daniel},
  year={2022},
  url={https://huggingface.co/carlosdanielhernandezmena/wav2vec2-large-xlsr-53-icelandic-ep10-1000h},
}
```
2ea9d4154650d6a461cde8294b2320e5
apache-2.0
['tensorflowtts', 'audio', 'text-to-speech', 'mel-to-wav']
false
Multi-band MelGAN trained on LJSpeech (En)

This repository provides a pretrained [Multi-band MelGAN](https://arxiv.org/abs/2005.05106) trained on the LJSpeech dataset (English). For details of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS).
9a421b7607aaf3df05db67dbb8d0c272
apache-2.0
['tensorflowtts', 'audio', 'text-to-speech', 'mel-to-wav']
false
Converting your Text to Wav

```python
import soundfile as sf
import numpy as np
import tensorflow as tf

from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel

processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-ljspeech-en")
tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-ljspeech-en")
mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-ljspeech-en")

text = "This is a demo to show how to use our model to generate mel spectrogram from raw text."
input_ids = processor.text_to_sequence(text)
```
1580527cce47d6a3d492ef255c4de7e9
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Citrinet1024', 'NeMo', 'pytorch']
false
Automatically instantiate the model

```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained("ypluit/stt_kr_citrinet1024_PublicCallCenter_1000H_0.22")
```
39e033c5a5f6283c77d37c9341c3450e
afl-3.0
[]
false
Clone this repo. In the /CIFAR100+CIFAR10_weights/CIFAR100+10_model/ directory, there are three weights for the three models trained on the CIFAR100 + CIFAR10 dataset. The names of the weights can be found in my notebook: https://colab.research.google.com/drive/1zInKDML24y8eZTtElMrdxGZjaK4F-vTu?usp=sharing

The weights of the 2 models trained on CIFAR100+SVHN are in the root directory. Link to that notebook: https://colab.research.google.com/drive/1R__SCmY-zu5FoH7ZEjKA6DZntDcCPWIK?authuser=3
33a3159f31e5e5cc9d4f2749f892760b
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-utility-2-16-5-oos

This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.3728
- Accuracy: 0.3956
6253442d29a31dc7a5c0c7c2678bf31d
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-Welsh

Fine-tuned facebook/wav2vec2-large-xlsr-53 on the Welsh Common Voice dataset. The data was augmented using a standard augmentation approach. When using this model, make sure that your speech input is sampled at 16kHz.

Test Result: 29.4%

Usage

The model can be used directly (without a language model) as follows:

```
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "cy", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Srulikbdd/Wav2vec2-large-xlsr-welsh")
model = Wav2Vec2ForCTC.from_pretrained("Srulikbdd/Wav2vec2-large-xlsr-welsh")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
ffd99af96500fdcf5155c8aae743e65d
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays

```
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

Evaluation

The model can be evaluated as follows on the Welsh test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "cy", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Srulikbdd/Wav2Vec2-large-xlsr-welsh")
model = Wav2Vec2ForCTC.from_pretrained("Srulikbdd/Wav2Vec2-large-xlsr-welsh")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\u2013\u2014\;\:\"\%\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
b738b2ae6004a4cd23476433ca059d06
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays

```
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
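The WER printed above is word-level edit distance divided by reference length; for illustration, a minimal pure-Python version of what `load_metric("wer")` computes for a single pair (the Welsh strings below are arbitrary examples, not from the test set):

```python
def wer(reference, hypothesis):
    """Word error rate: Levenshtein distance over words / reference length."""
    r, h = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between r[:i] and h[:j].
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

print(wer("bore da i chi", "bore da chi"))  # one deletion over four words → 0.25
```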
0b3f1f42bd45c98ce2c26c65a5c30f59
mit
['generated_from_keras_callback']
false
turkish-poem-generation

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 7.2815
- Validation Loss: 7.2658
- Epoch: 5
350ccbba51d64cf8ba92a1be93b30db5
mit
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 2660, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.02}
- training_precision: mixed_float16
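The optimizer config above describes a linear warmup (1000 steps up to 5e-05) followed by a polynomial decay with power 1.0 (i.e., linear) down to 0 over 2660 steps. A simplified sketch of that schedule, ignoring the exact step-offset conventions of the Keras WarmUp wrapper:

```python
def lr_at(step, init_lr=5e-05, warmup_steps=1000, decay_steps=2660, end_lr=0.0):
    """Simplified warmup + linear (power=1.0) polynomial decay schedule."""
    if step < warmup_steps:
        return init_lr * step / warmup_steps           # linear warmup
    frac = min(step - warmup_steps, decay_steps) / decay_steps
    return (init_lr - end_lr) * (1.0 - frac) + end_lr  # linear decay to end_lr
```

The learning rate peaks at 5e-05 right at the end of warmup and reaches 0 once the decay steps are exhausted.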
6c3764313b4316919cb9e3ba356b5f4b
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.2815 | 7.2657 | 0 |
| 7.2815 | 7.2659 | 1 |
| 7.2817 | 7.2653 | 2 |
| 7.2815 | 7.2657 | 3 |
| 7.2816 | 7.2660 | 4 |
| 7.2815 | 7.2658 | 5 |
19d3fe4776d81fb5b36d86eed1cddb31
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2r_en_xls-r_accent_us-8_england-2_s946

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
e0b8d4a6dc40642b0555e43691fac726
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the germeval_14 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0744
- F1: 0.8588
8db13102a1565bc5950f5ee83fd90f68
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1261 | 1.0 | 1000 | 0.0769 | 0.8335 |
| 0.0555 | 2.0 | 2000 | 0.0679 | 0.8568 |
| 0.0329 | 3.0 | 3000 | 0.0744 | 0.8588 |
fbf105ea93a677c660db70f96fe9d064
apache-2.0
['whisper-event']
false
Whisper Gujarati Small

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on Gujarati data drawn from multiple publicly available ASR corpora. It has been fine-tuned as part of the Whisper fine-tuning sprint.
ece69fbd5d33699a6a8cf3efdcb6c7a0
apache-2.0
['whisper-event']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.7e-05
- train_batch_size: 48
- eval_batch_size: 32
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- training_steps: 20532 (terminated upon convergence; initially set to 21240 steps)
- mixed_precision_training: True
192069e8f6f697d5833fb3c68c6abad2
apache-2.0
[]
false
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.

Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS|Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS|Miscellaneous entity
B-PER|Beginning of a person's name right after another person's name
B-DERIV-PER|Beginning of a derivative that describes a relation to a person
I-PER|Person's name
B-ORG|Beginning of an organization right after another organization
I-ORG|Organization
B-LOC|Beginning of a location right after another location
I-LOC|Location
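The tag scheme above is the standard BIO convention; grouping token-level tags into entity spans can be sketched in pure Python as follows (an illustrative sketch, not the model's decoding code):

```python
def bio_to_spans(tags):
    """Group a BIO tag sequence into (entity_type, start, end) spans,
    end exclusive. A B- tag opens a new span; a matching I- tag extends it;
    "O" or a type mismatch closes any open span."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((etype, start, i))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and etype == tag[2:]:
            continue  # span stays open
        else:
            if start is not None:
                spans.append((etype, start, i))
            start, etype = None, None
    if start is not None:
        spans.append((etype, start, len(tags)))
    return spans

print(bio_to_spans(["B-PER", "I-PER", "O", "B-LOC"]))
# → [('PER', 0, 2), ('LOC', 3, 4)]
```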
26e715196af1557fbfa96abbf7458c4a
mit
[]
false
Tonal1 on Stable Diffusion

This is the `<Tonal>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<Tonal> 0](https://huggingface.co/sd-concepts-library/tonal1/resolve/main/concept_images/1.jpeg)
![<Tonal> 1](https://huggingface.co/sd-concepts-library/tonal1/resolve/main/concept_images/2.jpeg)
![<Tonal> 2](https://huggingface.co/sd-concepts-library/tonal1/resolve/main/concept_images/0.jpeg)
![<Tonal> 3](https://huggingface.co/sd-concepts-library/tonal1/resolve/main/concept_images/3.jpeg)
1145e1b1a8aebe6ed69469260e969741
apache-2.0
['object-detection']
false
DETR (End-to-End Object Detection) model with ResNet-101 backbone (dilated C5 stage) DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr). Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
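DETR frames detection as direct set prediction: the model outputs a fixed number of box predictions in normalized (center_x, center_y, width, height) coordinates. A minimal sketch of converting such a box to absolute pixel corners (the `cxcywh_to_xyxy` helper is an assumption for illustration, not part of the transformers API):

```python
# Assumed helper: convert a normalized (cx, cy, w, h) box, as predicted by
# DETR, into absolute (x0, y0, x1, y1) pixel coordinates for a given image size.
def cxcywh_to_xyxy(box, img_w, img_h):
    cx, cy, w, h = box
    return (
        (cx - w / 2) * img_w,
        (cy - h / 2) * img_h,
        (cx + w / 2) * img_w,
        (cy + h / 2) * img_h,
    )

print(cxcywh_to_xyxy((0.5, 0.5, 0.25, 0.5), img_w=640, img_h=480))
# -> (240.0, 120.0, 400.0, 360.0)
```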
f35e83808ad5cc00328ffb95e781b89b
apache-2.0
['object-detection']
false
How to use
Here is how to use this model:
```python
from transformers import DetrFeatureExtractor, DetrForObjectDetection
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-101-dc5')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-101-dc5')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# the model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
f1246613121cfddcc7987bd3ad5871e8
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2083 - Accuracy: 0.9245 - F1: 0.9248
6e18dd5921588cb63a16e88254f20b4d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7794 | 1.0 | 250 | 0.2870 | 0.9115 | 0.9099 | | 0.2311 | 2.0 | 500 | 0.2083 | 0.9245 | 0.9248 |
73e7f53d200455c9b3b19e9c579ce486
apache-2.0
['super-image', 'image-super-resolution']
false
Model description
The MDSR is a model that uses both a deeper and wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (since input and output have similar distributions, normalizing intermediate features may not be desirable); instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE): the authors showed empirically that it performs better, and it requires less computation.
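As a toy illustration of that loss choice (illustrative numbers only, not the training code), L1 grows linearly with the error while L2 weights large errors quadratically, so a single outlier pixel dominates L2 much more:

```python
# Toy comparison of the two loss functions discussed above.
def l1_loss(pred, target):
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def l2_loss(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

# two pixels are perfect, one is off by 2
pred, target = [0.1, 0.2, 2.0], [0.1, 0.2, 0.0]
print(l1_loss(pred, target))  # mean absolute error, ~0.667
print(l2_loss(pred, target))  # same outlier weighted quadratically, ~1.333
```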
2327d90b5814ca49699a3e5607db362a
apache-2.0
['super-image', 'image-super-resolution']
false
How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import MdsrModel, ImageLoader
from PIL import Image
import requests

url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)

model = MdsrModel.from_pretrained('eugenesiow/mdsr', scale=2)
inputs = ImageLoader.load_image(image)
preds = model(inputs)

# save the output 2x scaled image and a side-by-side comparison
ImageLoader.save_image(preds, './scaled_2x.png')
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png')
```
dd5e431a741efad3f945fe172f3c2d2b
apache-2.0
['super-image', 'image-super-resolution']
false
Algorithm). Evaluation datasets include: - Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5) - Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14) - BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100) - Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100) The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline. |Dataset |Scale |Bicubic |mdsr | |--- |--- |--- |--- | |Set5 |2x |33.64/0.9292 |**38.04/0.9608** | |Set5 |3x |30.39/0.8678 |**35.11/0.9406** | |Set5 |4x |28.42/0.8101 |**32.26/0.8953** | |Set14 |2x |30.22/0.8683 |**33.71/0.9184** | |Set14 |3x |27.53/0.7737 |**31.06/0.8593** | |Set14 |4x |25.99/0.7023 |**28.77/0.7856** | |BSD100 |2x |29.55/0.8425 |**33.79/0.9256** | |BSD100 |3x |27.20/0.7382 |**29.66/0.8196** | |BSD100 |4x |25.96/0.6672 |**28.53/0.7653** | |Urban100 |2x |26.66/0.8408 |**32.14/0.9283** | |Urban100 |3x | |**29.29/0.8738** | |Urban100 |4x |23.14/0.6573 |**26.07/0.7851** | ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2](images/mdsr_2_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2") You can find a notebook to easily run evaluation on pretrained models below: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
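The `PSNR` halves of the `PSNR/SSIM` pairs above follow the standard definition; as a rough sketch (assuming 8-bit images with peak value 255, and not the evaluation script actually used for the table):

```python
import math

# Sketch of the PSNR metric: PSNR = 20*log10(MAX) - 10*log10(MSE),
# assuming 8-bit pixel values (MAX = 255).
def psnr(pred, target, max_val=255.0):
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * math.log10(max_val) - 10 * math.log10(mse)

print(round(psnr([52, 55, 61], [50, 55, 60]), 2))  # -> 45.91
```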
88dde91d625ca96f595fe7349d0a212c
apache-2.0
['super-image', 'image-super-resolution']
false
BibTeX entry and citation info ```bibtex @article{ahn2018fast, title={Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network}, author={Ahn, Namhyuk and Kang, Byungkon and Sohn, Kyung-Ah}, journal={arXiv preprint arXiv:1803.08664}, year={2018} } ```
a32f10ae02ff4a9401d7659bbc07b4a0
apache-2.0
['translation']
false
eng-urd
* source group: English
* target group: Urdu
* OPUS readme: [eng-urd](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urd/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): urd
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.eval.txt)
9286a82075acd3f75da19c452f1544b1
apache-2.0
['translation']
false
System Info: - hf_name: eng-urd - source_languages: eng - target_languages: urd - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urd/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ur'] - src_constituents: {'eng'} - tgt_constituents: {'urd'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.test.txt - src_alpha3: eng - tgt_alpha3: urd - short_pair: en-ur - chrF2_score: 0.39 - bleu: 12.1 - brevity_penalty: 1.0 - ref_len: 12155.0 - src_name: English - tgt_name: Urdu - train_date: 2020-06-17 - src_alpha2: en - tgt_alpha2: ur - prefer_old: False - long_pair: eng-urd - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
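The `brevity_penalty: 1.0` entry above is the BLEU brevity penalty; as a hedged sketch of its standard definition (not the exact scoring script used for these numbers):

```python
import math

# BLEU brevity penalty: 1 when the candidate is at least as long as the
# reference, otherwise exp(1 - ref_len / cand_len).
def brevity_penalty(cand_len, ref_len):
    if cand_len >= ref_len:
        return 1.0
    return math.exp(1 - ref_len / cand_len)

print(brevity_penalty(12155, 12155.0))            # length matches ref_len -> 1.0
print(round(brevity_penalty(10000, 12155.0), 3))  # shorter output is penalized -> 0.806
```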
cc259fc0d722feccb9fc2ac44199a89d
gpl-3.0
['image-classification', 'computer-vision', 'vision', 'yolo', 'yolov5']
false
# perform inference
results = model(img)
```
- Finetune the model on your custom dataset:
```bash
yolov5 classify train --img 128 --data mnist2560 --model fcakyon/yolov5n-cls-v7.0 --epochs 1 --device cpu
```
44b92d8d844be1d00ef62ac23d2474a6
apache-2.0
['sagemaker', 'ruperta', 'TextClassification', 'SentimentAnalysis']
false
**A finetuned model for sentiment analysis in Spanish**

This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container. The base model is **RuPERTa-base (uncased)**, a RoBERTa model trained on an uncased version of a big Spanish corpus. It was trained by Manuel Romero (mrm8488). [Link to base model](https://huggingface.co/mrm8488/RuPERTa-base)
30e9c7c2e035301084838ec83217f83b
apache-2.0
['sagemaker', 'ruperta', 'TextClassification', 'SentimentAnalysis']
false
Dataset
The dataset is a collection of movie reviews in Spanish, about 50,000 reviews. The dataset is balanced and provides every review in English, in Spanish, and the label in both languages.

Sizes of datasets:
- Train dataset: 42,500
- Validation dataset: 3,750
- Test dataset: 3,750
8e9eddd63d15370b8ba102c9b549fdc1
apache-2.0
['sagemaker', 'ruperta', 'TextClassification', 'SentimentAnalysis']
false
Hyperparameters
{
"epochs": "4",
"train_batch_size": "32",
"eval_batch_size": "8",
"fp16": "true",
"learning_rate": "3e-05",
"model_name": "\"mrm8488/RuPERTa-base\"",
"sagemaker_container_log_level": "20",
"sagemaker_program": "\"train.py\""
}
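SageMaker passes hyperparameters as JSON strings, so numeric values and the extra quoting around `model_name` need unpacking on the training side; a small sketch (the parsing shown is an assumption for illustration, not the actual `train.py`):

```python
import json

# Illustrative subset of the hyperparameters above, as SageMaker delivers them:
# every value is a string, and string-typed values carry escaped quotes.
raw = '''{
  "epochs": "4",
  "train_batch_size": "32",
  "learning_rate": "3e-05",
  "model_name": "\\"mrm8488/RuPERTa-base\\""
}'''
hp = json.loads(raw)
print(int(hp["epochs"]))            # -> 4
print(float(hp["learning_rate"]))   # -> 3e-05
print(hp["model_name"].strip('"'))  # -> mrm8488/RuPERTa-base
```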
09494c7761425e87d2c29024ffcd3dda
apache-2.0
['sagemaker', 'ruperta', 'TextClassification', 'SentimentAnalysis']
false
Usage for Sentiment Analysis ```python import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("edumunozsala/RuPERTa_base_sentiment_analysis_es") model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/RuPERTa_base_sentiment_analysis_es") text ="Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal" input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0) outputs = model(input_ids) output = outputs.logits.argmax(1) ``` Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
ebd593cc93c08f3c690ca9bc45c0486e
apache-2.0
['translation']
false
bul-fra
* source group: Bulgarian
* target group: French
* OPUS readme: [bul-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-fra/README.md)
* model: transformer
* source language(s): bul
* target language(s): fra
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.eval.txt)
4c375b5bd9ff39b065923cabf0aca3ac
apache-2.0
['translation']
false
System Info: - hf_name: bul-fra - source_languages: bul - target_languages: fra - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-fra/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'fr'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'fra'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.test.txt - src_alpha3: bul - tgt_alpha3: fra - short_pair: bg-fr - chrF2_score: 0.693 - bleu: 53.7 - brevity_penalty: 0.977 - ref_len: 3669.0 - src_name: Bulgarian - tgt_name: French - train_date: 2020-07-03 - src_alpha2: bg - tgt_alpha2: fr - prefer_old: False - long_pair: bul-fra - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
c66ca7b7e41ff1a4f865897af1da9837
mit
['generated_from_trainer']
false
finetuned_gpt2-large_sst2_negation0.01_pretrainedTrue_epochs1 This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 2.8378
db3d586ca4c5501185e43140feff6ef0
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-wnli-target-glue-qnli This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-wnli](https://huggingface.co/muhtasham/tiny-mlm-glue-wnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4737 - Accuracy: 0.7794
0b1132b0ed3e1952a547494b158db649
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6298 | 0.15 | 500 | 0.5598 | 0.7249 | | 0.563 | 0.31 | 1000 | 0.5282 | 0.7435 | | 0.5386 | 0.46 | 1500 | 0.5010 | 0.7571 | | 0.527 | 0.61 | 2000 | 0.5312 | 0.7426 | | 0.5221 | 0.76 | 2500 | 0.4837 | 0.7743 | | 0.5131 | 0.92 | 3000 | 0.4730 | 0.7785 | | 0.4991 | 1.07 | 3500 | 0.4643 | 0.7860 | | 0.4896 | 1.22 | 4000 | 0.4685 | 0.7809 | | 0.4755 | 1.37 | 4500 | 0.4734 | 0.7783 | | 0.4829 | 1.53 | 5000 | 0.4737 | 0.7794 |
36044c3faa2570e8da4561d5a61764e6
apache-2.0
['stanza', 'token-classification']
false
Stanza model for Upper_Sorbian (hsb) Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo Last updated 2022-09-25 01:29:15.088
1973559e60e6976b958829e6f097e172
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small Hi - Sanchit Gandhi This model is a fine-tuned version of [Taqwa/whisper-small-hiTaqwa](https://huggingface.co/Taqwa/whisper-small-hiTaqwa) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3353 - Wer: 35.7403
43515c2c6d851aa98642f54abe877aa4
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 500 - mixed_precision_training: Native AMP
f96b13189b1d3302bca63567d4a254d2
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0762 | 0.31 | 125 | 0.2818 | 33.3573 | | 0.0653 | 0.61 | 250 | 0.2930 | 33.9584 | | 0.062 | 0.92 | 375 | 0.3060 | 34.7456 | | 0.0518 | 1.22 | 500 | 0.3353 | 35.7403 |
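The `Wer` column above is the word error rate; a hedged sketch of the standard word-level Levenshtein formulation (not the exact evaluation script used for this run):

```python
# WER = (substitutions + insertions + deletions) / reference length,
# computed with a standard word-level edit-distance table.
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion, ~0.167
```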
9fcb6226ad947cc11046ca1fb12b8f6c
apache-2.0
['generated_from_keras_callback']
false
TestZee/t5-base-finetuned-question-generation-data-t5-base This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.0855 - Validation Loss: 4.4354 - Train Rouge1: 27.4892 - Train Rouge2: 8.6370 - Train Rougel: 24.3146 - Train Rougelsum: 24.3146 - Train Gen Len: 19.0 - Epoch: 0
dd8970c8a3c21d25d8883f3930a16362
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 4.0855 | 4.4354 | 27.4892 | 8.6370 | 24.3146 | 24.3146 | 19.0 | 0 |
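The `Train Rouge1` column above is a unigram-overlap F-measure; as a rough sketch of ROUGE-1 (without the stemming and tokenization the real scorer applies):

```python
from collections import Counter

# ROUGE-1 F-measure: F1 over unigram overlap between reference and candidate.
def rouge1_f(reference, candidate):
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f("the cat sat on the mat", "the cat is on the mat"), 4))  # -> 0.8333
```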
ef703ff6f697ba1d4d25d33997dcc3a6
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
MultiBERTs Seed 2 Checkpoint 1600k (uncased)
Seed 2 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).
aa4c48c2f2fe32a2c2683e48ceeac4eb
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1600k') model = BertModel.from_pretrained("multiberts-seed-2-1600k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
40758bd6ff9d89f83516a03f370ecf45
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1580
45495f67fffb5a412c335eb9e3d23f5c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2246 | 1.0 | 5533 | 1.1484 | | 0.9433 | 2.0 | 11066 | 1.1294 | | 0.7625 | 3.0 | 16599 | 1.1580 |
2ef96ba988ab9a7618e4b09e11fe6ee3
apache-2.0
['generated_from_trainer']
false
biomedical-roberta-finetuned-cantemist-test This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist) on the cantemist-ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0597 - F1: 0.8379
d5c866ce81c242ab9c06ff6776218ffb
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0015 | 1.0 | 607 | 0.0597 | 0.8379 |
5c094dd1d2258618c2c9fbd830919145
apache-2.0
['automatic-speech-recognition', 'it']
false
exp_w2v2t_it_no-pretraining_s615 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
8550c383776ce92000fe6eaa075bc814
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7903 - Matthews Correlation: 0.5596
c7c20f53ce857c05459f8fec1ee87dc0
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5224 | 1.0 | 535 | 0.5373 | 0.3974 | | 0.3503 | 2.0 | 1070 | 0.5142 | 0.4942 | | 0.2328 | 3.0 | 1605 | 0.5449 | 0.5449 | | 0.1775 | 4.0 | 2140 | 0.7457 | 0.5487 | | 0.1235 | 5.0 | 2675 | 0.7903 | 0.5596 |
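The Matthews correlation tracked above is computed from the binary confusion counts; as a sketch of its standard definition (the counts below are illustrative, not from this run):

```python
import math

# MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)),
# with the conventional value 0 when the denominator vanishes.
def matthews_corrcoef(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

print(round(matthews_corrcoef(tp=90, tn=40, fp=20, fn=10), 4))  # -> ~0.5919
```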
984086f6b0853185c7bcbd3224e8aa48
apache-2.0
['automatic-speech-recognition', 'zh-CN']
false
exp_w2v2t_zh-cn_unispeech-ml_s515 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
141d17413ea1b09a38824a1cd1139b41
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape', 'heywhale']
false
DreamBooth model for the taolu concept trained by chenglu. This is a Stable Diffusion model fine-tuned on the taolu concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of taolu road** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
a6ebd3d1978383c2600ec58b8fd12d74
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape', 'heywhale']
false
Description
This is a Stable Diffusion model fine-tuned on `road` images for the landscape theme, built for the HF DreamBooth Hackathon by the Hugging Face China community in collaboration with the HeyWhale platform.
d06317d2fcd6760515273d7d60dda550
apache-2.0
['classical chinese', 'literary chinese', 'ancient chinese', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char) and [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto).
0d1d4cbec3bc40f26e00c2c5070840ab
apache-2.0
['classical chinese', 'literary chinese', 'ancient chinese', 'token-classification', 'pos', 'dependency-parsing']
false
u="# text = "+text+"\n"
v=[(s,e) for s,e in w["offset_mapping"] if s<e]
for i,(s,e) in enumerate(v,1):
  q=self.model.config.id2label[p[i,h[i]]].split("|")
  u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=UDgoeswith("KoichiYasuoka/roberta-classical-chinese-base-ud-goeswith")
print(nlp("孟子見梁惠王"))
```
with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/).
Or without ufal.chu-liu-edmonds:
```
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-classical-chinese-base-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("孟子見梁惠王"))
```
bc408fec7f10d759a1a5a3e594039283
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [tkubotake/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/tkubotake/xlm-roberta-base-finetuned-panx-de) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2290 - F1: 0.8629
537b782145a44511820b74378883d53d
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1259 | 1.0 | 835 | 0.1879 | 0.8478 | | 0.078 | 2.0 | 1670 | 0.2121 | 0.8582 | | 0.0439 | 3.0 | 2505 | 0.2290 | 0.8629 |
406796da698397f06b27a10fc79559c0
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SV-SE dataset. It achieves the following results on the evaluation set: **Without LM**: - Wer: 0.2465 - Cer: 0.0717 **With LM**: - Wer: 0.1710 - Cer: 0.0569
fd04990b53fc7f0c3de9ac5c6ac045a0
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.3224 | 1.37 | 500 | 3.2676 | 1.0 | | 2.9319 | 2.74 | 1000 | 2.9287 | 1.0000 | | 2.1173 | 4.11 | 1500 | 1.1478 | 0.8788 | | 1.6973 | 5.48 | 2000 | 0.6749 | 0.6547 | | 1.5865 | 6.85 | 2500 | 0.5500 | 0.5634 | | 1.5094 | 8.22 | 3000 | 0.4840 | 0.5430 | | 1.4644 | 9.59 | 3500 | 0.4844 | 0.4142 | | 1.4061 | 10.96 | 4000 | 0.4356 | 0.3808 | | 1.3584 | 12.33 | 4500 | 0.4192 | 0.3698 | | 1.3438 | 13.7 | 5000 | 0.3980 | 0.3584 | | 1.3332 | 15.07 | 5500 | 0.3896 | 0.3572 | | 1.3025 | 16.44 | 6000 | 0.3835 | 0.3487 | | 1.2979 | 17.81 | 6500 | 0.3781 | 0.3417 | | 1.2736 | 19.18 | 7000 | 0.3734 | 0.3270 | | 1.2415 | 20.55 | 7500 | 0.3637 | 0.3316 | | 1.2255 | 21.92 | 8000 | 0.3546 | 0.3147 | | 1.2193 | 23.29 | 8500 | 0.3524 | 0.3196 | | 1.2104 | 24.66 | 9000 | 0.3403 | 0.3097 | | 1.1965 | 26.03 | 9500 | 0.3508 | 0.3093 | | 1.1976 | 27.4 | 10000 | 0.3419 | 0.3071 | | 1.182 | 28.77 | 10500 | 0.3364 | 0.2963 | | 1.158 | 30.14 | 11000 | 0.3338 | 0.2932 | | 1.1414 | 31.51 | 11500 | 0.3376 | 0.2940 | | 1.1402 | 32.88 | 12000 | 0.3370 | 0.2891 | | 1.1213 | 34.25 | 12500 | 0.3201 | 0.2874 | | 1.1207 | 35.62 | 13000 | 0.3261 | 0.2826 | | 1.1074 | 36.98 | 13500 | 0.3117 | 0.2786 | | 1.0818 | 38.36 | 14000 | 0.3194 | 0.2776 | | 1.0889 | 39.73 | 14500 | 0.3188 | 0.2738 | | 1.0672 | 41.1 | 15000 | 0.3196 | 0.2773 | | 1.0838 | 42.47 | 15500 | 0.3130 | 0.2739 | | 1.0553 | 43.83 | 16000 | 0.3165 | 0.2704 | | 1.0786 | 45.21 | 16500 | 0.3108 | 0.2706 | | 1.0546 | 46.57 | 17000 | 0.3102 | 0.2677 | | 1.0425 | 47.94 | 17500 | 0.3115 | 0.2679 | | 1.0398 | 49.31 | 18000 | 0.3131 | 0.2666 |
9de6106d5ff6be65ec0fe130c652e5d1
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-kitchen_and_dining-8-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3560 - Accuracy: 0.2692
a57301ba70f0fff1f8f027a7961fe6cb
apache-2.0
[]
false
How to use the discriminator in `transformers` ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator") tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator") sentence = "The quick brown fox jumps over the lazy dog" fake_sentence = "The quick brown fox fake over the lazy dog" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()] ```
17451baf9cd08e75a1fd47abff597c60
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0628 - Precision: 0.9254 - Recall: 0.9352 - F1: 0.9303 - Accuracy: 0.9835
5c600dc19e87829f0a91207f0aac5641
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2388 | 1.0 | 878 | 0.0723 | 0.9108 | 0.9186 | 0.9147 | 0.9798 | | 0.0526 | 2.0 | 1756 | 0.0633 | 0.9176 | 0.9290 | 0.9232 | 0.9817 | | 0.0303 | 3.0 | 2634 | 0.0628 | 0.9254 | 0.9352 | 0.9303 | 0.9835 |
171f24d0e4df47e68f43c0733afbc57c
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0731 - Precision: 0.9331 - Recall: 0.9432 - F1: 0.9381 - Accuracy: 0.9851
d417d134e1d7723538b98a6ca57e2142