license | tags | is_nc | readme_section | hash |
|---|---|---|---|---|
apache-2.0 | ['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107'] | false | Intended uses & limitations The model has two uses: - use 'as is' for spoken language recognition - use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data The model is trained on automatically collected YouTube data. For more information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/). | 67d1987047d91054be2ced8e84c210b6 |
apache-2.0 | ['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107'] | false | Download Thai language sample from Omniglot and convert it to a suitable form signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3") prediction = language_id.classify_batch(signal) print(prediction) | 181a3ab4bfb8868046e0249da5e27cf8 |
apache-2.0 | ['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107'] | false | torch.Size([1, 1, 256]) ``` To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*. | 5f4e574ce4c9a749e91ea49063618a79 |
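A minimal sketch of preparing a recording in the expected 16 kHz mono format before calling *encode_batch*; the `speechbrain/lang-id-voxlingua107-ecapa` source string and the input file name are assumptions for illustration, not taken from the card:

```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier

# Load the pretrained language-ID model (repo id assumed for illustration).
language_id = EncoderClassifier.from_hparams(
    source="speechbrain/lang-id-voxlingua107-ecapa", savedir="tmp_lid"
)

# Load an arbitrary recording, mix down to mono, and resample to the
# 16 kHz rate the model was trained on before extracting embeddings.
signal, sr = torchaudio.load("my_recording.wav")   # hypothetical file
signal = signal.mean(dim=0, keepdim=True)          # mono mixdown
signal = torchaudio.transforms.Resample(sr, 16_000)(signal)

embeddings = language_id.encode_batch(signal)      # utterance-level embedding
```

If the sketch holds, the result matches the `torch.Size([1, 1, 256])` embedding shape shown above.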
apache-2.0 | ['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107'] | false | Limitations and bias Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are: - Its accuracy on smaller languages is probably quite limited - It probably works worse on female speech than on male speech (because the YouTube data includes much more male speech) - Based on subjective experiments, it doesn't work well on speech with a foreign accent - It probably doesn't work well on children's speech or on speech from persons with speech disorders | e18b8bfaef4b08de9919df1824b185b2 |
apache-2.0 | ['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107'] | false | Training data The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/). VoxLingua107 is a speech dataset for training spoken language identification models. The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives. VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours. The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language. | 81f3fb4e7452e956103fd5a9b6795fdd |
apache-2.0 | ['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107'] | false | Referencing SpeechBrain ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` | ef5ad66fe5c019cb772b4a077356110c |
apache-2.0 | ['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107'] | false | Referencing VoxLingua107 ```bibtex @inproceedings{valk2021slt, title={{VoxLingua107}: a Dataset for Spoken Language Recognition}, author={J{\"o}rgen Valk and Tanel Alum{\"a}e}, booktitle={Proc. IEEE SLT Workshop}, year={2021}, } ``` | 9c4abc05dc2a403b4b878ef4439fc79a |
apache-2.0 | ['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107'] | false | About SpeechBrain SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains. Website: https://speechbrain.github.io/ GitHub: https://github.com/speechbrain/speechbrain | c70fdf0e087def87206fc5520ca2e580 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | Wav2Vec2-Large-XLSR-53-Lithuanian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Lithuanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. | 8e7a77ab5feae85f8bad4161b20e40ec |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "lt", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian") model = Wav2Vec2ForCTC.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian") resampler = torchaudio.transforms.Resample(48_000, 16_000) | 31ef45240d02031f806b1f7cdd757468 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` | f1bdc4d76cf09331ab1e9b7e901741f9 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | Evaluation The model can be evaluated as follows on the Lithuanian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "lt", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian") model = Wav2Vec2ForCTC.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) | fc0dfeb304ce089dafd03569753dc3ac |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) | 6bbf1892ce3e38711cb835ec4a7aaa40 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 56.55 % | cca5e8e1c7f0dd5e1754a616e323d49b |
apache-2.0 | ['text2text-generation', 'Guyanese Creole', 'Caribbean dialect'] | false | Guyanese English Creole to English Translator This model utilises the pre-trained T5-base model. It was fine-tuned using a custom dataset for translation of Guyanese English Creole to English. This model will be updated periodically as more data is compiled. For more on the Caribbean English Creoles, check out the library [Caribe](https://pypi.org/project/Caribe/). ___ | b157493e442006154ceb7f79bbd2add1 |
apache-2.0 | ['text2text-generation', 'Guyanese Creole', 'Caribbean dialect'] | false | Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("KES/GEC-English") model = AutoModelForSeq2SeqLM.from_pretrained("KES/GEC-English") text = "Ah waan ah phone" inputs = tokenizer("guy:"+text, truncation=True, return_tensors='pt') output = model.generate(inputs['input_ids'], num_beams=4, max_length=512, early_stopping=True) translation=tokenizer.batch_decode(output, skip_special_tokens=True) print("".join(translation)) | 26ac7561ecc95e5ba63e6c1b5f840556 |
mit | ['generated_from_trainer'] | false | xlm-roberta-large-finetuned-ner This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the hi_ner_config dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2329 - eval_precision: 0.7110 - eval_recall: 0.6854 - eval_f1: 0.6980 - eval_accuracy: 0.9332 - eval_runtime: 162.3478 - eval_samples_per_second: 66.9 - eval_steps_per_second: 16.73 - epoch: 2.64 - step: 50198 | fe2b278e69dcc596454df7edc4064299 |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 | 5a433b9f3ea4a7064df6f3f11cba4086 |
apache-2.0 | ['generated_from_trainer'] | false | whisper-tiny-ar-quran This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0928 - Wer: 7.0535 | 79ae98872e535ac11710c206250d2d57 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP | 443f7cb45f1b27cde9fc3702f8310156 |
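A hedged sketch of how the hyperparameters listed above could be expressed with `Seq2SeqTrainingArguments` from `transformers`; the output directory is assumed and the card does not show the actual training script:

```python
from transformers import Seq2SeqTrainingArguments

# Rough mapping of the listed hyperparameters; values mirror the card,
# everything else (output_dir, logging, dataset wiring) is assumed.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-ar-quran",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # corresponds to "Native AMP" mixed-precision training
)
```

The Adam betas and epsilon in the card match the Trainer defaults, so they are not set explicitly here.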
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1766 | 0.05 | 500 | 0.2829 | 20.0236 | | 0.1129 | 0.09 | 1000 | 0.1981 | 13.8364 | | 0.0775 | 0.14 | 1500 | 0.1763 | 12.5450 | | 0.0678 | 0.19 | 2000 | 0.1485 | 10.7302 | | 0.0437 | 0.23 | 2500 | 0.1336 | 9.6693 | | 0.0341 | 0.28 | 3000 | 0.1244 | 8.9602 | | 0.0302 | 0.33 | 3500 | 0.1059 | 8.2224 | | 0.0189 | 0.37 | 4000 | 0.1044 | 7.6902 | | 0.0167 | 0.42 | 4500 | 0.0966 | 7.2643 | | 0.0151 | 0.47 | 5000 | 0.0928 | 7.0535 | | db356ace2c9af81232b3a34143cedd34 |
mit | [] | false | ZINC-t5 This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sagawa/ZINC-canonicalized dataset. It achieves the following results on the evaluation set: - Loss: 0.1228 - Accuracy: 0.9476 | 15f62776eafca0771148f0f2abe3ddf1 |
mit | [] | false | Model description We trained T5 on SMILES from ZINC using the masked-language modeling (MLM) task. Compared to ZINC-t5, ZINC-t5-v2 uses a character-level tokenizer and was likewise trained on ZINC. | b5ca1bdaaf5bfff7461ae409544a7b01 |
mit | [] | false | Intended uses & limitations This model can be used for the prediction of molecules' properties, reactions, or interactions with proteins by changing the way of fine-tuning. As an example, we fine-tuned this model to predict products. The model is [here](https://huggingface.co/sagawa/ZINC-t5-productpredicition), and you can use the demo [here](https://huggingface.co/spaces/sagawa/predictproduct-t5). Using its encoder, we trained a regression model to predict a reaction yield. You can use this demo [here](https://huggingface.co/spaces/sagawa/predictyield-t5). | cf3f55787a67081d70fea2b425fd87f9 |
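As a loose illustration of the yield-regression idea mentioned above, the sketch below mean-pools the T5 encoder states and attaches a linear head; the `sagawa/ZINC-t5` repository id, the example SMILES, and the head itself are assumptions, not the authors' actual yield model:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("sagawa/ZINC-t5")   # repo id assumed
encoder = T5EncoderModel.from_pretrained("sagawa/ZINC-t5")    # repo id assumed
regression_head = torch.nn.Linear(encoder.config.d_model, 1)  # hypothetical yield head

inputs = tokenizer("CCO.CC(=O)O", return_tensors="pt")        # illustrative SMILES input
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state              # [1, seq_len, d_model]
pooled = hidden.mean(dim=1)                                   # mean-pool over tokens
predicted_yield = regression_head(pooled)                     # head would be trained on labeled yields
```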
mit | [] | false | Training and evaluation data We downloaded [ZINC data](https://drive.google.com/drive/folders/1lSPCqh31zxTVEhuiPde7W3rZG8kPgp-z) and canonicalized them using RDKit. Then, we dropped duplicates. The resulting dataset contains 22,992,522 molecules, which were randomly split into train:validation = 10:1. | 43cab4c522481352cc0a15cf80296da5 |
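The canonicalization and deduplication steps described above can be reproduced with standard RDKit calls; a minimal sketch with illustrative inputs:

```python
from rdkit import Chem

raw_smiles = ["C1=CC=CC=C1", "c1ccccc1"]  # illustrative inputs (both are benzene)

# Canonicalize with RDKit, then drop duplicates by collecting into a set,
# mirroring the preprocessing described above.
canonical = {
    Chem.MolToSmiles(mol)
    for smi in raw_smiles
    if (mol := Chem.MolFromSmiles(smi)) is not None
}
print(canonical)  # {'c1ccccc1'}
```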
mit | [] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-03 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 | 516fe8f663405bbdb0b2a8503511c2a7 |
mit | [] | false | Training results | Training Loss | Step | Accuracy | Validation Loss | |:-------------:|:------:|:--------:|:---------------:| | 0.2090 | 100000 | 0.9264 | 0.1860 | | 0.1628 | 200000 | 0.9349 | 0.1613 | | 0.1632 | 300000 | 0.9395 | 0.1467 | | 0.1451 | 400000 | 0.9435 | 0.1345 | | 0.1311 | 500000 | 0.9465 | 0.1261 | | 9c166915cd962771d30bb72d2ca88423 |
apache-2.0 | ['speech', 'audio', 'automatic-speech-recognition', 'hf-asr-leaderboard'] | false | Hubert-Large-Finetuned [Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression) The large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. The model is a fine-tuned version of [hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k). [Paper](https://arxiv.org/abs/2106.07447) Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed **Abstract** Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert. | d03a428d25bdc192a3caa80bc9b0844b |
apache-2.0 | ['speech', 'audio', 'automatic-speech-recognition', 'hf-asr-leaderboard'] | false | Usage The model can be used for automatic-speech-recognition as follows: ```python import torch from transformers import Wav2Vec2Processor, HubertForCTC from datasets import load_dataset processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft") model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft") ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values | 427bd053393d3a2f31eb6a06b27ad6ea |
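The usage snippet above stops after computing `input_values`; a hedged continuation of the same example that decodes a transcription would look roughly like this:

```python
# Continuation sketch: greedy CTC decoding of the prepared input_values
# (reuses model, processor, and input_values from the snippet above).
import torch

with torch.no_grad():
    logits = model(input_values).logits          # [batch, time, vocab]

predicted_ids = torch.argmax(logits, dim=-1)     # best token per frame
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```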
apache-2.0 | ['image-classification', 'timm'] | false | Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 28.6 - GMACs: 4.5 - Activations (M): 13.4 - Image size: 224 x 224 - **Papers:** - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - **Original:** https://github.com/facebookresearch/ConvNeXt - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-22k | efa450e63660cdc2893b88a6e0463b3a |
apache-2.0 | ['image-classification', 'timm'] | false | Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('convnext_tiny.fb_in22k_ft_in1k', pretrained=True) model = model.eval() | 5784b1d11dbb8c459834127c30f5312f |
apache-2.0 | ['image-classification', 'timm'] | false | Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'convnext_tiny.fb_in22k_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() | 8225d48349b34a804b6894acf09a47f6 |
apache-2.0 | ['image-classification', 'timm'] | false | Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'convnext_tiny.fb_in22k_ft_in1k', pretrained=True, num_classes=0, | 10f4a9fa337ff77bd2361f879f6d905f |
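The three timm snippets above are cut off before the forward pass; a sketch of the steps that would typically follow (building the model-specific transforms from timm's data config and running the model), continuing from the `model` and `img` defined above:

```python
# Sketch of the steps that usually follow timm.create_model(...);
# requires a reasonably recent timm release.
import timm
import torch

data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

with torch.no_grad():
    output = model(transforms(img).unsqueeze(0))
    # classification logits, a list of feature maps, or an embedding,
    # depending on how the model was created above
```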
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 | b6e8822344441fb58293e05eef08a1bd |
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4043 - F1: 0.6886 | 1658f5637cd1967fc7f0c47187dfbc00 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1347 | 1.0 | 50 | 0.5771 | 0.4880 | | 0.5066 | 2.0 | 100 | 0.4209 | 0.6582 | | 0.3631 | 3.0 | 150 | 0.4043 | 0.6886 | | a23d024d738409d8e6eb0d0271c5de28 |
apache-2.0 | ['generated_from_trainer'] | false | longformer_summarise This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the scientific_papers dataset. It achieves the following results on the evaluation set: - Loss: 2.3003 - Rouge2 Precision: 0.1654 - Rouge2 Recall: 0.0966 - Rouge2 Fmeasure: 0.1118 | b1bff6c588a93a0b9efd299e08e85daa |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP | 1b078cb958e15501460a5578bc3e3c3e |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | 2.909 | 0.08 | 10 | 2.8969 | 0.09 | 0.1439 | 0.0953 | | 2.615 | 0.16 | 20 | 2.6182 | 0.1232 | 0.0865 | 0.0924 | | 2.581 | 0.24 | 30 | 2.4687 | 0.1357 | 0.0733 | 0.09 | | 2.1294 | 0.32 | 40 | 2.5215 | 0.1495 | 0.0932 | 0.1044 | | 2.8083 | 0.4 | 50 | 2.3870 | 0.1794 | 0.1054 | 0.1224 | | 3.0704 | 0.48 | 60 | 2.3676 | 0.1572 | 0.0989 | 0.1108 | | 2.4716 | 0.56 | 70 | 2.3554 | 0.1707 | 0.1039 | 0.1198 | | 2.454 | 0.64 | 80 | 2.3411 | 0.1619 | 0.0943 | 0.1115 | | 2.3046 | 0.72 | 90 | 2.3105 | 0.1547 | 0.0965 | 0.1116 | | 1.7467 | 0.8 | 100 | 2.3417 | 0.1551 | 0.0877 | 0.1046 | | 2.7696 | 0.88 | 110 | 2.3226 | 0.1543 | 0.0954 | 0.1085 | | 2.4999 | 0.96 | 120 | 2.3003 | 0.1654 | 0.0966 | 0.1118 | | 91a66d5811f8a4024f313ee6aaefb20f |
apache-2.0 | ['automatic-speech-recognition', 'id'] | false | exp_w2v2t_id_wav2vec2_s226 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 4f023a92bf82f11b904abb6ab859c10b |
mit | ['spacy', 'token-classification'] | false | de_core_news_lg German pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner. | Feature | Description | | --- | --- | | **Name** | `de_core_news_lg` | | **Version** | `3.5.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` | | **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` | | **Vectors** | 500000 keys, 500000 unique vectors (300 dimensions) | | **Sources** | [TIGER Corpus](https://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger.html) (Brants, Sabine, Stefanie Dipper, Peter Eisenberg, Silvia Hansen, Esther König, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkoreit)<br />[Tiger2Dep](https://www.ims.uni-stuttgart.de/forschung/ressourcen/werkzeuge/tiger2dep/) (Wolfgang Seeker)<br />[WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) (Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, James R Curran)<br />[Explosion fastText Vectors (cbow, OSCAR Common Crawl + Wikipedia)](https://spacy.io) (Explosion) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) | | 84e0ad76877a92520363c05ea16669d6 |
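A brief usage sketch for this pipeline, assuming the package has been installed (e.g. via `python -m spacy download de_core_news_lg`); the sample sentence is illustrative:

```python
import spacy

nlp = spacy.load("de_core_news_lg")
doc = nlp("Die Firma Beispiel GmbH eröffnet ein Büro in Berlin.")  # illustrative sentence

# Tags, dependencies, and lemmas from the tagger/parser/lemmatizer components
for token in doc:
    print(token.text, token.pos_, token.dep_, token.lemma_)

# Named entities from the ner component (LOC, MISC, ORG, PER)
for ent in doc.ents:
    print(ent.text, ent.label_)
```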
mit | ['spacy', 'token-classification'] | false | Label Scheme <details> <summary>View label scheme (772 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$(`, `$,`, `$.`, `ADJA`, `ADJD`, `ADV`, `APPO`, `APPR`, `APPRART`, `APZR`, `ART`, `CARD`, `FM`, `ITJ`, `KOKOM`, `KON`, `KOUI`, `KOUS`, `NE`, `NN`, `NNE`, `PDAT`, `PDS`, `PIAT`, `PIS`, `PPER`, `PPOSAT`, `PPOSS`, `PRELAT`, `PRELS`, `PRF`, `PROAV`, `PTKA`, `PTKANT`, `PTKNEG`, `PTKVZ`, `PTKZU`, `PWAT`, `PWAV`, `PWS`, `TRUNC`, `VAFIN`, `VAIMP`, `VAINF`, `VAPP`, `VMFIN`, `VMINF`, `VMPP`, `VVFIN`, `VVIMP`, `VVINF`, `VVIZU`, `VVPP`, `XY`, `_SP` | | **`morphologizer`** | `POS=PUNCT`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=ADV`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `POS=VERB\|VerbForm=Part`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Degree=Pos\|POS=ADV`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=ADP`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=CCONJ`, `POS=SCONJ`, `Case=Acc\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `POS=VERB\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PART`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Plur\|POS=PROPN`, `POS=PRON\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, 
`Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PROPN`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=NUM`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADP`, `Gender=Neut\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=INTJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=SCONJ\|PronType=Int`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, 
`Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADP`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Gen\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Degree=Cmp\|POS=ADV`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADP`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Dat\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=X`, `Case=Dat\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, 
`Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=SPACE`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Pos\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `POS=AUX\|VerbForm=Inf`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV\|PronType=Int`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `POS=AUX\|VerbForm=Part`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, 
`Case=Dat\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Ind`, `Degree=Sup\|POS=ADV`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Fem\|POS=NOUN`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, 
`Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Plur\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Dat\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, 
`Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Neut\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Gen\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|POS=PROPN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Masc\|POS=NOUN`, `Case=Dat\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=NOUN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|POS=PROPN`, `Case=Gen\|Definite=Def\|POS=DET\|PronType=Art`, `Case=Gen\|POS=PROPN`, `Case=Acc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, 
`Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2`, `Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Gen\|POS=PRON\|PronType=Dem`, `Definite=Ind\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Dat\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Number=Sing\|POS=ADJ`, `POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, 
`Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Neut\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, _(truncated: full list in pipeline meta)_ | | **`parser`** | `ROOT`, `ac`, `adc`, `ag`, `ams`, `app`, `avc`, `cc`, `cd`, `cj`, `cm`, `cp`, `cvc`, `da`, `dep`, `dm`, `ep`, `ju`, `mnr`, `mo`, `ng`, `nk`, `nmc`, `oa`, `oc`, `og`, `op`, `par`, `pd`, `pg`, `ph`, `pm`, `pnc`, `punct`, `rc`, `re`, `rs`, `sb`, `sbp`, `svp`, `uc`, `vo` | | **`ner`** | `LOC`, `MISC`, `ORG`, `PER` | </details> | 7d07b4c5a3eb18900139916435b98c2a |
mit | ['spacy', 'token-classification'] | false | Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.96 | | `TOKEN_P` | 99.92 | | `TOKEN_R` | 99.90 | | `TOKEN_F` | 99.91 | | `TAG_ACC` | 97.96 | | `POS_ACC` | 98.41 | | `MORPH_ACC` | 92.06 | | `MORPH_MICRO_P` | 96.01 | | `MORPH_MICRO_R` | 95.99 | | `MORPH_MICRO_F` | 96.00 | | `SENTS_P` | 95.18 | | `SENTS_R` | 96.48 | | `SENTS_F` | 95.41 | | `DEP_UAS` | 92.66 | | `DEP_LAS` | 90.78 | | `LEMMA_ACC` | 97.91 | | `ENTS_P` | 85.27 | | `ENTS_R` | 84.44 | | `ENTS_F` | 84.85 | | d61b6b65d83b4ed7fd3182e269077f99 |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | opus-mt-tc-base-tr-uk Neural machine translation model for translating from Turkish (tr) to Ukrainian (uk). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` | 0429aa4f917a3e8f6198f03cbb812fca |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Model info * Release: 2022-03-07 * source language(s): * target language(s): ukr * model: transformer-align * data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+pbt_transformer-align_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.zip) * more information on released models: [OPUS-MT tur-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md) | 3fa61d8af2e7fa0d17919b95ee2713e2 |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "1000 yen yeterli mi?", "Zürih, İsviçre'de bir şehirdir." ] model_name = "pytorch-models/opus-mt-tc-base-tr-uk" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) | 04d62f29071af33dbf53550c7d213e71 |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Цюрих - місто в Швейцарії. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-tr-uk") print(pipe("1000 yen yeterli mi?")) | 2f859f29cfd1b56515f1f0651e7b4e37 |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Benchmarks * test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt) * test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | | 1bea2412dc4cde6314adc11c2e3508a2 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2243 - Accuracy: 0.925 - F1: 0.9251 | 359de735d4742117c6e5329f04688264 |
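The row above reports metrics only, so here is a minimal inference sketch for such an emotion classifier using the `transformers` pipeline; the hub id is a placeholder for the actual fine-tuned checkpoint.

```python
from transformers import pipeline

# Placeholder hub id; substitute the real location of the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="your-username/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for all emotion labels instead of only the top one
)
print(classifier("I can't believe how well this worked!"))
```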
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.866 | 1.0 | 250 | 0.3365 | 0.896 | 0.8905 | | 0.2626 | 2.0 | 500 | 0.2243 | 0.925 | 0.9251 | | 4ceced0063893be2bdb08b6c38f88eae |
apache-2.0 | ['thai', 'token-classification', 'pos', 'dependency-parsing'] | false | Model Description This is a DeBERTa(V2) model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [deberta-base-thai](https://huggingface.co/KoichiYasuoka/deberta-base-thai). | 25ee7b657c25dd0f360011c65801081d |
apache-2.0 | ['thai', 'token-classification', 'pos', 'dependency-parsing'] | false | How to Use ```py class UDgoeswith(object): def __init__(self,bert): from transformers import AutoTokenizer,AutoModelForTokenClassification self.tokenizer=AutoTokenizer.from_pretrained(bert) self.model=AutoModelForTokenClassification.from_pretrained(bert) def __call__(self,text): import numpy,torch,ufal.chu_liu_edmonds w=self.tokenizer(text,return_offsets_mapping=True) v=w["input_ids"] x=[v[0:i]+[self.tokenizer.mask_token_id]+v[i+1:]+[j] for i,j in enumerate(v[1:-1],1)] with torch.no_grad(): e=self.model(input_ids=torch.tensor(x)).logits.numpy()[:,1:-2,:] r=[1 if i==0 else -1 if j.endswith("|root") else 0 for i,j in sorted(self.model.config.id2label.items())] e+=numpy.where(numpy.add.outer(numpy.identity(e.shape[0]),r)==0,0,numpy.nan) g=self.model.config.label2id["X|_|goeswith"] r=numpy.tri(e.shape[0]) for i in range(e.shape[0]): for j in range(i+2,e.shape[1]): r[i,j]=r[i,j-1] if numpy.nanargmax(e[i,j-1])==g else 1 e[:,:,g]+=numpy.where(r==0,0,numpy.nan) m=numpy.full((e.shape[0]+1,e.shape[1]+1),numpy.nan) m[1:,1:]=numpy.nanmax(e,axis=2).transpose() p=numpy.zeros(m.shape) p[1:,1:]=numpy.nanargmax(e,axis=2).transpose() for i in range(1,m.shape[0]): m[i,0],m[i,i],p[i,0]=m[i,i],numpy.nan,p[i,i] h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0] if [0 for i in h if i==0]!=[0]: m[:,0]+=numpy.where(m[:,0]==numpy.nanmax(m[[i for i,j in enumerate(h) if j==0],0]),0,numpy.nan) m[[i for i,j in enumerate(h) if j==0]]+=[0 if i==0 or j==0 else numpy.nan for i,j in enumerate(h)] h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0] u=" | ff0116ac72525445b64015b41f9fd65b |
apache-2.0 | ['thai', 'token-classification', 'pos', 'dependency-parsing'] | false | text = "+text+"\n" v=[(s,e) for s,e in w["offset_mapping"] if s<e] for i,(s,e) in enumerate(v,1): q=self.model.config.id2label[p[i,h[i]]].split("|") u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n" return u+"\n" nlp=UDgoeswith("KoichiYasuoka/deberta-base-thai-ud-goeswith") print(nlp("หลายหัวดีกว่าหัวเดียว")) ``` with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/). Or without ufal.chu-liu-edmonds: ``` from transformers import pipeline nlp=pipeline("universal-dependencies","KoichiYasuoka/deberta-base-thai-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple") print(nlp("หลายหัวดีกว่าหัวเดียว")) ``` | 8f36f06f16bfe367c651744a6859c3e8 |
mit | ['generated_from_trainer'] | false | bert_base_tcm_no_objeto_0.8 This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0076 - Criterio Julgamento Precision: 0.7444 - Criterio Julgamento Recall: 0.8684 - Criterio Julgamento F1: 0.8016 - Criterio Julgamento Number: 114 - Data Sessao Precision: 0.7297 - Data Sessao Recall: 0.9153 - Data Sessao F1: 0.8120 - Data Sessao Number: 59 - Modalidade Licitacao Precision: 0.9412 - Modalidade Licitacao Recall: 0.9697 - Modalidade Licitacao F1: 0.9552 - Modalidade Licitacao Number: 462 - Numero Exercicio Precision: 0.9018 - Numero Exercicio Recall: 0.9619 - Numero Exercicio F1: 0.9309 - Numero Exercicio Number: 210 - Valor Objeto Precision: 0.7778 - Valor Objeto Recall: 0.8537 - Valor Objeto F1: 0.8140 - Valor Objeto Number: 41 - Overall Precision: 0.8803 - Overall Recall: 0.9458 - Overall F1: 0.9119 - Overall Accuracy: 0.9983 | 2635cadda694fb7d6c4ba257e8f05f60 |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 | 052136068ca96cd9fac1521cfc86ed47 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:---------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.012 | 1.0 | 2863 | 0.0099 | 0.7059 | 0.8421 | 0.7680 | 114 | 0.7013 | 0.9153 | 0.7941 | 59 | 0.9366 | 0.9589 | 0.9476 | 462 | 0.9136 | 0.9571 | 0.9349 | 210 | 0.5902 | 0.8780 | 0.7059 | 41 | 0.8583 | 0.9368 | 0.8958 | 0.9974 | | 0.0095 | 2.0 | 5726 | 0.0076 | 0.8095 | 0.8947 | 0.8500 | 114 | 0.6935 | 0.7288 | 0.7107 | 59 | 0.9346 | 0.9589 | 0.9466 | 462 | 0.9054 | 0.9571 | 0.9306 | 210 | 0.8409 | 0.9024 | 0.8706 | 41 | 0.8901 | 0.9323 | 0.9107 | 0.9981 | | 0.005 | 3.0 | 8589 | 0.0091 | 0.7574 | 0.9035 | 0.8240 | 114 | 0.6471 | 0.9322 | 0.7639 | 59 | 0.9371 | 0.9675 | 0.9521 | 462 | 0.9091 | 0.9524 | 0.9302 | 210 | 0.7660 | 0.8780 | 0.8182 | 41 | 0.8715 | 0.9492 | 0.9087 | 0.9978 | | 0.0042 | 4.0 | 11452 | 0.0076 | 0.7444 | 0.8684 | 0.8016 | 114 | 0.7297 | 0.9153 | 0.8120 | 59 | 0.9412 | 0.9697 | 0.9552 | 462 | 0.9018 | 0.9619 | 0.9309 | 210 | 0.7778 | 0.8537 | 0.8140 | 41 | 0.8803 | 0.9458 | 0.9119 | 0.9983 | | 0.004 | 5.0 | 14315 | 0.0100 | 0.7373 | 0.7632 | 0.7500 | 114 | 0.7534 | 0.9322 | 0.8333 | 59 | 0.9124 | 0.9697 | 0.9402 | 462 | 0.9196 | 0.9810 | 0.9493 | 210 | 0.76 | 0.9268 | 0.8352 | 41 | 0.8724 | 0.9413 | 0.9055 | 0.9979 | | 0.0041 | 6.0 | 17178 | 0.0103 | 0.7377 | 0.7895 | 0.7627 | 114 | 0.75 | 0.8644 | 0.8031 | 59 | 0.9492 | 0.9697 | 0.9593 | 462 | 0.92 | 0.9857 | 0.9517 | 210 | 0.7872 | 0.9024 | 0.8409 | 41 | 0.8919 | 0.9402 | 0.9154 | 0.9980 | | 0.002 | 7.0 | 20041 | 0.0092 | 0.7984 | 0.8684 | 0.8319 | 114 | 0.68 | 0.8644 | 0.7612 | 59 | 0.9471 | 0.9697 | 0.9583 | 462 | 0.9196 | 0.9810 | 0.9493 | 210 | 0.7872 | 0.9024 | 0.8409 | 41 | 0.8918 | 0.9492 | 0.9196 | 0.9983 | | 0.0014 | 8.0 | 22904 | 0.0100 | 0.8033 | 0.8596 | 0.8305 | 114 | 0.7612 | 0.8644 | 0.8095 | 59 | 0.9532 | 0.9697 | 0.9614 | 462 | 0.9186 | 0.9667 | 0.9420 | 210 | 0.8222 | 0.9024 | 0.8605 | 41 | 0.9049 | 0.9447 | 0.9244 | 0.9983 | | 0.0015 | 9.0 | 25767 | 0.0108 | 0.7787 | 0.8333 | 0.8051 | 114 | 0.7067 | 0.8983 | 0.7910 | 59 | 0.9513 | 0.9719 | 0.9615 | 462 | 0.9107 | 0.9714 | 0.9401 | 210 | 0.8409 | 0.9024 | 0.8706 | 41 | 0.8943 | 0.9458 | 0.9194 | 0.9984 | | 0.0008 | 10.0 | 28630 | 0.0112 | 0.7934 | 0.8421 | 0.8170 | 114 | 0.7222 | 0.8814 | 0.7939 | 59 | 0.9533 | 0.9719 | 0.9625 | 462 | 0.9193 | 0.9762 | 0.9469 
| 210 | 0.8409 | 0.9024 | 0.8706 | 41 | 0.9012 | 0.9470 | 0.9235 | 0.9984 | | 5fbcb1903d473ae8096c788ca042af37 |
cc-by-sa-4.0 | [] | false | electra-base-cyberbullying This is a BERT Base model for the Japanese language finetuned for automatic cyberbullying detection. The model was based on [daigo's BERT Base for Japanese sentiment analysis](https://huggingface.co/daigo/bert-base-japanese-sentiment), and later finetuned on a balanced dataset created by unifying two datasets, namely "Harmful BBS Japanese comments dataset" and "Twitter Japanese cyberbullying dataset". | 64f0405d1a9495e4fd27c8ec15ce5f97 |
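A minimal usage sketch for the cyberbullying detector described above, assuming the repository id that appears in the citation URL of this card and an installed Japanese tokenizer backend (e.g. `fugashi`); the label names are not specified in the card.

```python
from transformers import pipeline

# Repository id taken from the citation URL in this card; verify before relying on it.
detector = pipeline(
    "text-classification",
    model="kit-nlp/bert-base-japanese-sentiment-cyberbullying",
)
print(detector("これはいじめではない普通の文章です。"))  # returns a label and a confidence score
```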
cc-by-sa-4.0 | [] | false | Licenses The finetuned model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License. <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a> | 535cbc01ecb038f49eebbb99d27cb222 |
cc-by-sa-4.0 | [] | false | Citations Please, cite this model using the following citation. ``` @inproceedings{tanabe2022bert-base-cyberbullying, title={北見工業大学 テキスト情報処理研究室 BERT Base ネットいじめ検出モデル (Daigo ver.)}, author={田邊 威裕 and プタシンスキ ミハウ and エロネン ユーソ and 桝井 文人}, publisher={HuggingFace}, year={2022}, url = "https://huggingface.co/kit-nlp/bert-base-japanese-sentiment-cyberbullying" } ``` | 44179d4ff361639fe89e2020d887868a |
cc-by-sa-4.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 | 4e77918734c6ce7ae9a9b6b2f6e9c998 |
apache-2.0 | ['PyTorch', 'Transformers'] | false | SbertPuncCase SbertPuncCase is a punctuation and capitalization restoration model for Russian. The model can insert periods, commas and question marks, and determine the case of each word: lowercase, first letter capitalized, or all uppercase. The model was developed to restore text after speech recognition, so it operates on lowercase strings. It is based on [sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru). Textual transcripts of interviews were used as training data. | acd46f1f92176c14428f770f7329fc94 |
apache-2.0 | ['PyTorch', 'Transformers'] | false | How it works 1. The text is lowercased and split into words. 2. The words are split into tokens. 3. The model (analogously to an NER task) predicts a class for each token. Classification into 12 classes: (3 + 1) punctuation options * 3 case variants (see the illustrative sketch below). 4. A decoding function restores the text according to the predicted classes. | 8a95a25cd5d20f95e7c236df218a8903 |
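As an illustration of step 3 above, a tiny sketch of what a 12-class label set could look like; the label strings are assumptions for illustration only, not the actual labels used by SbertPuncCase.

```python
# Illustrative label scheme: (3 punctuation marks + "no punctuation") x 3 case variants = 12 classes.
PUNCTUATION = ["O", "PERIOD", "COMMA", "QUESTION"]   # "O" = no punctuation after the word
CASE = ["LOWER", "UPPER_FIRST", "UPPER"]             # lowercase, Capitalized, ALL CAPS

LABELS = [f"{case}_{punct}" for case in CASE for punct in PUNCTUATION]
assert len(LABELS) == 12
print(LABELS)
```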
apache-2.0 | ['PyTorch', 'Transformers'] | false | How to use The model code is located in the file `sbert-punc-case-ru/sbertpunccase.py`. For a quick installation you can run: ``` pip install git+https://huggingface.co/kontur-ai/sbert_punc_case_ru ``` Using the model: ``` from sbert_punc_case_ru import SbertPuncCase model = SbertPuncCase() model.punctuate("sbert punc case расставляет точки запятые и знаки вопроса вам нравится") ``` | 9bc44a5a19def9b1228205f12f4ccbd1 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 | 968978355f232be53eded9ab3b43da61 |
mit | [] | false | PGT PGT is a GPT-2 prompt-based model trained to facilitate 3 patent generation-related tasks, namely: *part-of-patent generation*, *part-of-patent editing* and *patent coherence check*. For more information about the dataset and the training procedure we refer the reader to [our paper](https://openreview.net/pdf?id=dLHtwZKvJmE). The task is specified by appending a short sentence at the end of a given input. The general format is: `input <|sep|> task specific prompt <|sep|>` In all cases, the generated output ends with the special token <|endoftext|> to facilitate postprocessing. | 3d0a54a43654f1f7f8371e685954324a |
mit | [] | false | Supported tasks **Part-of-patent generation** attempts to generate a part of a patent given as input another, already existing part of it. The model has been trained to perform title-to-abstract, abstract-to-claim as well as their inverse generations. For the claim case, the model was only exposed to independent claims during training. Input example for part-of-patent generation for the abstract-to-title case: `An interesting patent abstract. <|sep|> Given the above abstract, suggest a title <|sep|>` **Part-of-patent editing** attempts to suggest alternatives for some highlighted parts of a patent abstract or claim. These parts are marked in the input with the special [MASK] token. The expected size of these masked parts can range from a single word to a small phrase. If more than one mask is given in the input, the generated suggestions are separated in the output by the special <|mask_sep|> token. Input example for part-of-patent editing working on a claim input: `An interesting patent claim with a [MASK] part. <|sep|> Replace the [MASK] tokens in the above claim <|sep|>` The **coherence check** assesses the quality of a patent by examining whether two given parts of a patent could belong to the same patent in terms of content and syntax. The input patent parts can be a title, an abstract or a claim. The expected output is Yes or No. Input example for the coherence check task having as input a title and a claim: `A patent title <|sep|> An interesting patent claim. <|sep|> Do the above title and claim belong to the same patent? <|sep|>` Further prompts and tasks can be tried in a zero-shot fashion. The model and the tasks are also integrated and available via the [GT4SD Python library](https://github.com/GT4SD/gt4sd-core/blob/main/notebooks/explore-pgt.ipynb). | bd88aa1f4dab8dd3e8e5d5d9f1984018 |
mit | [] | false | Example A full example of part-of-patent generation ``` from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("christofid/pgt") model = AutoModelForCausalLM.from_pretrained("christofid/pgt") text = "Automated patent generation <|sep|> Given the above title, suggest an abstract <|sep|>" text_encoded = tokenizer.encode(text, return_tensors="pt") generated = model.generate(text_encoded, do_sample=True, top_k=50, num_return_sequences = 3, max_length=512) generated_text = [tokenizer.decode(case).split("<|endoftext|>")[0].strip() for case in generated] ``` | 50165c1434f9b47c681f9f25ea7fd874 |
mit | [] | false | BibTeX entry and citation info ``` @inproceedings{christofidellis2022pgt, title={PGT: a prompt based generative transformer for the patent domain}, author={Christofidellis, Dimitrios and Torres, Antonio Berrios and Dave, Ashish and Roveri, Manuel and Schmidt, Kristin and Swaminathan, Sarath and Vandierendonck, Hans and Zubarev, Dmitry and Manica, Matteo}, booktitle={ICML 2022 Workshop on Knowledge Retrieval and Language Models}, year={2022} } ``` | 4bfeb6dad123b0ecd17ef473dc341371 |
apache-2.0 | ['generated_from_trainer'] | false | model_en This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8610 - Wer: 0.2641 | fd736725ccd5ba7e1d3d93f5290bfdfc |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 200 | e62b33db5fe6ef74f64342c8832c18b8 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 6.3443 | 3.05 | 250 | 3.0966 | 1.0 | | 2.9847 | 6.1 | 500 | 3.0603 | 1.0 | | 2.9263 | 9.15 | 750 | 2.9131 | 1.0 | | 2.2584 | 12.19 | 1000 | 1.4318 | 0.6575 | | 1.2603 | 15.24 | 1250 | 1.1964 | 0.4994 | | 0.9182 | 18.29 | 1500 | 1.1494 | 0.4485 | | 0.7462 | 21.34 | 1750 | 1.2171 | 0.4357 | | 0.6129 | 24.39 | 2000 | 1.0557 | 0.3468 | | 0.5364 | 27.44 | 2250 | 1.1069 | 0.4222 | | 0.4607 | 30.48 | 2500 | 1.3270 | 0.3370 | | 0.4139 | 33.53 | 2750 | 1.1814 | 0.3658 | | 0.3587 | 36.58 | 3000 | 1.2423 | 0.3419 | | 0.321 | 39.63 | 3250 | 1.2931 | 0.3211 | | 0.2961 | 42.68 | 3500 | 1.1409 | 0.3315 | | 0.2635 | 45.73 | 3750 | 1.4537 | 0.3241 | | 0.2498 | 48.78 | 4000 | 1.2643 | 0.3192 | | 0.2352 | 51.82 | 4250 | 1.2789 | 0.3278 | | 0.2193 | 54.87 | 4500 | 1.4220 | 0.3021 | | 0.2068 | 57.92 | 4750 | 1.3567 | 0.3713 | | 0.2055 | 60.97 | 5000 | 1.5375 | 0.3051 | | 0.198 | 64.02 | 5250 | 1.2676 | 0.2782 | | 0.1835 | 67.07 | 5500 | 1.3905 | 0.2825 | | 0.1655 | 70.12 | 5750 | 1.7000 | 0.2978 | | 0.1677 | 73.17 | 6000 | 1.4250 | 0.2812 | | 0.1522 | 76.22 | 6250 | 1.4220 | 0.2941 | | 0.1522 | 79.27 | 6500 | 1.5195 | 0.3021 | | 0.1344 | 82.32 | 6750 | 1.3749 | 0.2996 | | 0.1298 | 85.36 | 7000 | 1.6663 | 0.2849 | | 0.1293 | 88.41 | 7250 | 1.4564 | 0.2892 | | 0.1264 | 91.46 | 7500 | 1.4373 | 0.2935 | | 0.1243 | 94.51 | 7750 | 1.6572 | 0.2972 | | 0.1141 | 97.56 | 8000 | 1.4936 | 0.2892 | | 0.1086 | 100.61 | 8250 | 1.5231 | 0.2868 | | 0.1056 | 103.65 | 8500 | 1.3733 | 0.2763 | | 0.098 | 106.7 | 8750 | 1.4887 | 0.2923 | | 0.0984 | 109.75 | 9000 | 1.3779 | 0.2923 | | 0.0916 | 112.8 | 9250 | 1.4868 | 0.2604 | | 0.0881 | 115.85 | 9500 | 1.7991 | 0.2996 | | 0.0846 | 118.9 | 9750 | 1.5845 | 0.2849 | | 0.0861 | 121.95 | 10000 | 1.6684 | 0.2794 | | 0.0806 | 124.99 | 10250 | 1.5774 | 0.3039 | | 0.0822 | 128.05 | 10500 | 1.5928 | 0.2886 | | 0.0788 | 131.1 | 10750 | 1.6158 | 0.2880 | | 0.0704 | 134.15 | 11000 | 1.7679 | 0.2941 | | 0.0721 | 137.19 | 11250 | 1.7055 | 0.2629 | | 0.0723 | 140.24 | 11500 | 1.5473 | 0.2653 | | 0.0676 | 143.29 | 11750 | 1.8963 | 0.2745 | | 0.0665 | 146.34 | 12000 | 1.6367 | 0.2739 | | 0.0618 | 149.39 | 12250 | 1.6757 | 0.2745 | | 0.0595 | 152.44 | 12500 | 1.5900 | 0.2745 | | 0.056 | 155.48 | 12750 | 1.5362 | 0.2794 | | 0.0587 | 158.53 | 13000 | 1.4616 | 0.2684 | | 0.0519 | 161.58 | 13250 | 1.6867 | 0.2549 | | 0.0569 | 164.63 | 13500 | 1.8294 | 0.2574 | | 0.0497 | 167.68 | 13750 | 1.7844 | 0.2868 | | 0.0531 | 170.73 | 14000 | 1.7564 | 0.2770 | | 0.0489 | 173.78 | 14250 | 1.5811 | 0.2629 | | 0.0524 | 176.82 | 14500 | 1.6925 | 0.2684 | | 0.0431 | 179.87 | 14750 | 1.7236 | 0.2653 | | 0.0457 | 182.92 | 15000 | 1.7460 | 0.2512 | | 0.045 | 185.97 | 15250 | 1.8096 | 0.2610 | | 0.0402 | 189.02 | 15500 | 1.8795 | 0.2635 | | 0.0529 | 192.07 | 15750 | 1.8310 | 0.2616 | | 0.0396 | 195.12 | 16000 | 1.8380 | 0.2635 | | 0.0432 | 198.17 | 16250 | 1.8610 | 0.2641 | | d7375c8a2624223cf03196dbf2aeb792 |
apache-2.0 | ['i-dont-know-what-im-doing', 'generated_from_trainer'] | false | Whisper Small sv-SE - Lab 2 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3278 - Wer: 19.7736 | e1b673698aeba1e9b1af09e3d32ff107 |
apache-2.0 | ['i-dont-know-what-im-doing', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP | 48825ff541ec1614b48f8bdc5ed3e2d1 |
apache-2.0 | ['i-dont-know-what-im-doing', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1378 | 1.29 | 1000 | 0.2953 | 21.4165 | | 0.0475 | 2.59 | 2000 | 0.2913 | 20.2495 | | 0.0186 | 3.88 | 3000 | 0.3027 | 19.8193 | | 0.0042 | 5.17 | 4000 | 0.3278 | 19.7736 | | e80dd75f7c987a29fac67d497ab3fba0 |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-base-timit-moaiz_exp2_new This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6849 - Wer: 0.5396 | 1d6ee293db7124453a8e1a1a4bc3189c |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP | 00c7b6fef471e056a9b134e2572bb76d |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.1266 | 13.89 | 500 | 1.0233 | 0.7034 | | 0.5928 | 27.78 | 1000 | 0.6849 | 0.5396 | | 14cca75ac4f14a46cf30daf69208f2c9 |
mit | ['generated_from_trainer'] | false | predict-perception-bert-blame-assassin This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5128 - Rmse: 1.0287 - Rmse Blame::a L'assassino: 1.0287 - Mae: 0.8883 - Mae Blame::a L'assassino: 0.8883 - R2: 0.5883 - R2 Blame::a L'assassino: 0.5883 - Cos: 0.6522 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.5795 - Rsa: nan | e28e830aba482556f8f57bed8e7c80ba |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 | 207b40bde45c1b681b0e92f1d5ab2840 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a L'assassino | Mae | Mae Blame::a L'assassino | R2 | R2 Blame::a L'assassino | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------------------------:|:------:|:------------------------:|:------:|:-----------------------:|:------:|:----:|:----:|:---------:|:---:| | 1.0184 | 1.0 | 15 | 1.2219 | 1.5879 | 1.5879 | 1.4308 | 1.4308 | 0.0191 | 0.0191 | 0.3913 | 0.0 | 0.5 | 0.3781 | nan | | 0.9214 | 2.0 | 30 | 1.0927 | 1.5017 | 1.5017 | 1.3634 | 1.3634 | 0.1227 | 0.1227 | 0.5652 | 0.0 | 0.5 | 0.4512 | nan | | 0.7809 | 3.0 | 45 | 0.8206 | 1.3013 | 1.3013 | 1.1808 | 1.1808 | 0.3412 | 0.3412 | 0.4783 | 0.0 | 0.5 | 0.3819 | nan | | 0.6593 | 4.0 | 60 | 0.5894 | 1.1029 | 1.1029 | 1.0145 | 1.0145 | 0.5268 | 0.5268 | 0.7391 | 0.0 | 0.5 | 0.6408 | nan | | 0.4672 | 5.0 | 75 | 0.4759 | 0.9910 | 0.9910 | 0.8868 | 0.8868 | 0.6180 | 0.6180 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.3356 | 6.0 | 90 | 0.4220 | 0.9332 | 0.9332 | 0.8083 | 0.8083 | 0.6612 | 0.6612 | 0.6522 | 0.0 | 0.5 | 0.4249 | nan | | 0.2782 | 7.0 | 105 | 0.4477 | 0.9612 | 0.9612 | 0.8046 | 0.8046 | 0.6406 | 0.6406 | 0.6522 | 0.0 | 0.5 | 0.6101 | nan | | 0.2075 | 8.0 | 120 | 0.4389 | 0.9518 | 0.9518 | 0.8050 | 0.8050 | 0.6476 | 0.6476 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.1725 | 9.0 | 135 | 0.4832 | 0.9985 | 0.9985 | 0.8356 | 0.8356 | 0.6121 | 0.6121 | 0.7391 | 0.0 | 0.5 | 0.6616 | nan | | 0.1642 | 10.0 | 150 | 0.4368 | 0.9494 | 0.9494 | 0.8060 | 0.8060 | 0.6493 | 0.6493 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.1172 | 11.0 | 165 | 0.4538 | 0.9677 | 0.9677 | 0.8174 | 0.8174 | 0.6357 | 0.6357 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.104 | 12.0 | 180 | 0.4672 | 0.9819 | 0.9819 | 0.8384 | 0.8384 | 0.6249 | 0.6249 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0822 | 13.0 | 195 | 0.4401 | 0.9530 | 0.9530 | 0.8107 | 0.8107 | 0.6467 | 0.6467 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0755 | 14.0 | 210 | 0.4464 | 0.9598 | 0.9598 | 0.8251 | 0.8251 | 0.6416 | 0.6416 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0801 | 15.0 | 225 | 0.4834 | 0.9988 | 0.9988 | 0.8604 | 0.8604 | 0.6119 | 0.6119 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.053 | 16.0 | 240 | 0.4846 | 1.0001 | 1.0001 | 0.8651 | 0.8651 | 0.6109 | 0.6109 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0573 | 17.0 | 255 | 0.4970 | 1.0128 | 1.0128 | 0.8743 | 0.8743 | 0.6010 | 0.6010 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0571 | 18.0 | 270 | 0.4803 | 0.9956 | 0.9956 | 0.8503 | 0.8503 | 0.6144 | 0.6144 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0483 | 19.0 | 285 | 0.4936 | 1.0093 | 1.0093 | 0.8740 | 0.8740 | 0.6037 | 0.6037 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0414 | 20.0 | 300 | 0.5138 | 1.0297 | 1.0297 | 0.8943 | 0.8943 | 0.5875 | 0.5875 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0513 | 21.0 | 315 | 0.5240 | 1.0399 | 1.0399 | 0.9050 | 0.9050 | 0.5793 | 0.5793 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0499 | 22.0 | 330 | 0.5275 | 1.0434 | 1.0434 | 0.9048 | 0.9048 | 0.5765 | 0.5765 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0423 | 23.0 | 345 | 0.5350 | 1.0508 | 1.0508 | 0.8872 | 0.8872 | 0.5705 | 0.5705 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0447 | 24.0 | 360 | 0.4963 | 1.0120 | 1.0120 | 0.8754 | 0.8754 | 0.6016 | 0.6016 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0364 | 25.0 | 375 | 0.5009 | 1.0167 | 1.0167 | 0.8809 | 0.8809 | 0.5979 | 0.5979 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0412 | 26.0 | 390 | 0.5060 | 1.0219 | 1.0219 
| 0.8781 | 0.8781 | 0.5938 | 0.5938 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0297 | 27.0 | 405 | 0.5027 | 1.0185 | 1.0185 | 0.8838 | 0.8838 | 0.5964 | 0.5964 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0416 | 28.0 | 420 | 0.5071 | 1.0230 | 1.0230 | 0.8867 | 0.8867 | 0.5929 | 0.5929 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0327 | 29.0 | 435 | 0.5124 | 1.0283 | 1.0283 | 0.8883 | 0.8883 | 0.5887 | 0.5887 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0383 | 30.0 | 450 | 0.5128 | 1.0287 | 1.0287 | 0.8883 | 0.8883 | 0.5883 | 0.5883 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | c1a0c629bb1b9113018146d1f6fe9b5d |
apache-2.0 | ['automatic-speech-recognition', 'nl'] | false | exp_w2v2t_nl_unispeech-ml_s498 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 73e501bc76b0fdd96d6322ae5ddfba58 |
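A minimal transcription sketch with the HuggingSound tool mentioned above; the hub id is an assumption and the audio paths are placeholders. Input audio should be sampled at 16 kHz.

```python
from huggingsound import SpeechRecognitionModel

# Assumed hub id for the checkpoint above; adjust if it lives under a different namespace.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_nl_unispeech-ml_s498")

audio_paths = ["/path/to/clip1.mp3", "/path/to/clip2.wav"]  # placeholder paths, 16 kHz audio
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```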
apache-2.0 | ['generated_from_trainer'] | false | emotion_trained_1234567 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.9051 - F1: 0.7302 | dbe790aa4d2184f9cb4d17446ebd6536 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.961635072722524e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1234567 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 | 37641cfe4016593955a9df48f77c7859 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 204 | 0.6480 | 0.7231 | | No log | 2.0 | 408 | 0.6114 | 0.7403 | | 0.5045 | 3.0 | 612 | 0.7592 | 0.7311 | | 0.5045 | 4.0 | 816 | 0.9051 | 0.7302 | | 3f251b253e6d94ab94009ce1f0b47420 |
apache-2.0 | ['automatic-speech-recognition', 'de'] | false | exp_w2v2t_de_r-wav2vec2_s460 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 6372d984ab043e72b9d825187f779bf9 |
mit | ['audio', 'music', 'generation', 'tensorflow'] | false | Model provided by: comehu Pretrained musika_sm64_ost model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation. Introduced in [this paper](https://arxiv.org/abs/2208.08706). | 52635da29cf8736c271517b2c9a72fa0 |
mit | ['audio', 'music', 'generation', 'tensorflow'] | false | Model description This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio. The generator has a context window of about 12 seconds of audio. | 80cb400058ec5287b873bed4564bb8c1 |
apache-2.0 | ['generated_from_trainer'] | false | bert-large-uncased-finetuned-docvqa This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.6367 | 0ed6f5582b3994c8a4a25f819e1bdb72 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 250500 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 | 0a8595fadfe5ace39573437cb019451c |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 2.5228 | 0.05 | 1000 | 2.6645 | | 2.4909 | 0.1 | 2000 | 2.8985 | | 2.1679 | 0.16 | 3000 | 2.3551 | | 1.9451 | 0.21 | 4000 | 2.2226 | | 1.6814 | 0.26 | 5000 | 2.1590 | | 1.8868 | 0.31 | 6000 | 2.6197 | | 1.6618 | 0.36 | 7000 | 2.3632 | | 1.8313 | 0.41 | 8000 | 2.4519 | | 1.7017 | 0.47 | 9000 | 2.2682 | | 1.8169 | 0.52 | 10000 | 2.4486 | | 1.7074 | 0.57 | 11000 | 2.3862 | | 1.7674 | 0.62 | 12000 | 2.1801 | | 1.8134 | 0.67 | 13000 | 2.3032 | | 1.8334 | 0.73 | 14000 | 2.4205 | | 1.6819 | 0.78 | 15000 | 2.2398 | | 1.5846 | 0.83 | 16000 | 2.3834 | | 1.6758 | 0.88 | 17000 | 1.9683 | | 1.6303 | 0.93 | 18000 | 2.3297 | | 1.5652 | 0.98 | 19000 | 2.0581 | | 1.3045 | 1.04 | 20000 | 2.4950 | | 1.2393 | 1.09 | 21000 | 2.6622 | | 1.1526 | 1.14 | 22000 | 2.3749 | | 1.2631 | 1.19 | 23000 | 2.3915 | | 1.1846 | 1.24 | 24000 | 2.2592 | | 1.2731 | 1.3 | 25000 | 2.4239 | | 1.3057 | 1.35 | 26000 | 2.2920 | | 1.134 | 1.4 | 27000 | 2.3107 | | 1.2017 | 1.45 | 28000 | 2.4271 | | 1.2202 | 1.5 | 29000 | 2.1814 | | 1.2179 | 1.56 | 30000 | 2.3365 | | 1.2359 | 1.61 | 31000 | 2.1256 | | 1.1964 | 1.66 | 32000 | 2.1720 | | 1.269 | 1.71 | 33000 | 2.4363 | | 1.1812 | 1.76 | 34000 | 2.2372 | | 1.2187 | 1.81 | 35000 | 2.2318 | | 1.1805 | 1.87 | 36000 | 2.3693 | | 1.1458 | 1.92 | 37000 | 2.5128 | | 1.1958 | 1.97 | 38000 | 2.1311 | | 0.8924 | 2.02 | 39000 | 2.4635 | | 0.869 | 2.07 | 40000 | 2.8231 | | 0.8333 | 2.13 | 41000 | 2.6762 | | 0.9194 | 2.18 | 42000 | 2.4588 | | 0.8089 | 2.23 | 43000 | 2.6443 | | 0.8612 | 2.28 | 44000 | 2.4300 | | 0.7981 | 2.33 | 45000 | 2.7418 | | 0.9765 | 2.38 | 46000 | 2.6543 | | 0.8646 | 2.44 | 47000 | 2.5990 | | 1.0316 | 2.49 | 48000 | 2.4625 | | 0.9862 | 2.54 | 49000 | 2.4691 | | 1.027 | 2.59 | 50000 | 2.4156 | | 0.9412 | 2.64 | 51000 | 2.4204 | | 0.9353 | 2.7 | 52000 | 2.4933 | | 0.9509 | 2.75 | 53000 | 2.4708 | | 0.9351 | 2.8 | 54000 | 2.5351 | | 0.9968 | 2.85 | 55000 | 2.2506 | | 1.025 | 2.9 | 56000 | 2.6317 | | 1.627 | 2.95 | 57000 | 2.7843 | | 0.9294 | 3.01 | 58000 | 2.9396 | | 0.6043 | 3.06 | 59000 | 3.1560 | | 0.7903 | 3.11 | 60000 | 2.8330 | | 0.7373 | 3.16 | 61000 | 2.9422 | | 0.6499 | 3.21 | 62000 | 3.0948 | | 0.6411 | 3.27 | 63000 | 2.7900 | | 0.625 | 3.32 | 64000 | 2.5268 | | 0.6264 | 3.37 | 65000 | 2.8701 | | 0.6143 | 3.42 | 66000 | 3.2544 | | 0.6286 | 3.47 | 67000 | 2.6208 | | 0.739 | 3.53 | 68000 | 2.8107 | | 0.5981 | 3.58 | 69000 | 2.8073 | | 0.6502 | 3.63 | 70000 | 2.6293 | | 0.6548 | 3.68 | 71000 | 2.9501 | | 0.7243 | 3.73 | 72000 | 2.7917 | | 0.598 | 3.78 | 73000 | 2.9341 | | 0.6159 | 3.84 | 74000 | 2.7629 | | 0.5905 | 3.89 | 75000 | 2.6441 | | 0.6393 | 3.94 | 76000 | 2.6660 | | 0.677 | 3.99 | 77000 | 2.7616 | | 0.3281 | 4.04 | 78000 | 3.6873 | | 0.4524 | 4.1 | 79000 | 3.3441 | | 0.3994 | 4.15 | 80000 | 3.3129 | | 0.4686 | 4.2 | 81000 | 3.1813 | | 0.5293 | 4.25 | 82000 | 2.9088 | | 0.3961 | 4.3 | 83000 | 3.0765 | | 0.4406 | 4.35 | 84000 | 3.1254 | | 0.401 | 4.41 | 85000 | 3.2415 | | 0.4594 | 4.46 | 86000 | 3.0691 | | 0.4523 | 4.51 | 87000 | 3.0493 | | 0.4719 | 4.56 | 88000 | 3.1352 | | 0.4895 | 4.61 | 89000 | 2.8991 | | 0.423 | 4.67 | 90000 | 3.1738 | | 0.3984 | 4.72 | 91000 | 3.1862 | | 0.4206 | 4.77 | 92000 | 3.1213 | | 0.4587 | 4.82 | 93000 | 3.0030 | | 0.381 | 4.87 | 94000 | 3.3218 | | 0.4138 | 4.92 | 95000 | 3.1529 | | 0.4003 | 4.98 | 96000 | 3.1375 | | 0.2098 | 5.03 | 97000 | 3.7443 | | 0.2334 | 5.08 | 
98000 | 3.7359 | | 0.2534 | 5.13 | 99000 | 3.7814 | | 0.3067 | 5.18 | 100000 | 3.7128 | | 0.2363 | 5.24 | 101000 | 3.6091 | | 0.2652 | 5.29 | 102000 | 3.4015 | | 0.3311 | 5.34 | 103000 | 3.4793 | | 0.2344 | 5.39 | 104000 | 3.6792 | | 0.2741 | 5.44 | 105000 | 3.5385 | | 0.2896 | 5.5 | 106000 | 3.8118 | | 0.2071 | 5.55 | 107000 | 3.8690 | | 0.3023 | 5.6 | 108000 | 3.7087 | | 0.3299 | 5.65 | 109000 | 3.4925 | | 0.1943 | 5.7 | 110000 | 3.6739 | | 0.2488 | 5.75 | 111000 | 3.7614 | | 0.3138 | 5.81 | 112000 | 3.5156 | | 0.2555 | 5.86 | 113000 | 3.6056 | | 0.2918 | 5.91 | 114000 | 3.6533 | | 0.2751 | 5.96 | 115000 | 3.6367 | | 84f58a0529466564395f1ceedd46bc34 |
apache-2.0 | ['translation'] | false | opus-mt-ig-fi * source languages: ig * target languages: fi * OPUS readme: [ig-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ig-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ig-fi/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-fi/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-fi/opus-2020-01-09.eval.txt) | b6ec9e830e95932c9c4e491c3e11e517 |
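The row above lists only training artifacts, so here is a minimal translation sketch following the same MarianMT usage pattern shown earlier in this document; the hub id and the Igbo sample sentence are assumptions.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ig-fi"  # assumed hub id for this OPUS-MT release
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Kedu ka ị mere?"]  # Igbo: "How are you?" (sample sentence, assumed)
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
```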
apache-2.0 | ['generated_from_trainer'] | false | mbert-profane-final This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4464 - Accuracy: 0.8983 - Precision: 0.8135 - Recall: 0.8120 - F1: 0.8128 | f89cbd1492332fbebc71aa40f1bf45f6 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 | 1a1b032852afe9a796d9280207fe6e65 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 1.0 | 296 | 0.2313 | 0.9154 | 0.8687 | 0.8010 | 0.8294 | | 0.3077 | 2.0 | 592 | 0.2223 | 0.9125 | 0.8473 | 0.8205 | 0.8330 | | 0.3077 | 3.0 | 888 | 0.2137 | 0.9259 | 0.8784 | 0.8379 | 0.8563 | | 0.2102 | 4.0 | 1184 | 0.2334 | 0.9163 | 0.8483 | 0.8417 | 0.8449 | | 0.2102 | 5.0 | 1480 | 0.2737 | 0.9068 | 0.8305 | 0.8242 | 0.8273 | | 0.1533 | 6.0 | 1776 | 0.3214 | 0.8964 | 0.8034 | 0.8510 | 0.8239 | | 0.1092 | 7.0 | 2072 | 0.3409 | 0.9002 | 0.8115 | 0.8414 | 0.8252 | | 0.1092 | 8.0 | 2368 | 0.3849 | 0.9049 | 0.8322 | 0.8066 | 0.8185 | | 0.0775 | 9.0 | 2664 | 0.4408 | 0.8983 | 0.8113 | 0.8215 | 0.8162 | | 0.0775 | 10.0 | 2960 | 0.4464 | 0.8983 | 0.8135 | 0.8120 | 0.8128 | | a7d001c412e8348421a4d0e5cde870de |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | restaurant_test_model This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the test_data dataset. It achieves the following results on the evaluation set: - Loss: 0.5435 - Wer: 78.5714 | 4eb360714f92f3beeff621b783628739 |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 40 - mixed_precision_training: Native AMP | e9bfd1d5aecb3a64d680a3f00cf705df |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 10.0 | 10 | 2.2425 | 7.1429 | | No log | 20.0 | 20 | 0.6651 | 0.0 | | 2.4375 | 30.0 | 30 | 0.5776 | 35.7143 | | 2.4375 | 40.0 | 40 | 0.5435 | 78.5714 | | eaa073c375497fa89a66c355eda8f0e9 |
apache-2.0 | ['generated_from_trainer'] | false | my_awesome_ko_en_model This model is a fine-tuned version of [KETI-AIR/ke-t5-small](https://huggingface.co/KETI-AIR/ke-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Bleu: 0.0 - Gen Len: 19.0 | 9b5193fdeaa53a00e8c7beb907d73f8b |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP | 8876408f0ac77ea00bd81e35c878ee87 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:----:|:-------:| | No log | 1.0 | 67 | nan | 0.0 | 19.0 | | No log | 2.0 | 134 | nan | 0.0 | 19.0 | | 6c5ef2eb65b8fdd4f46fa53786a160d8 |