Dataset schema:

| Column | Type |
| --- | --- |
| license | string (2–30 chars) |
| tags | string (2–513 chars) |
| is_nc | bool (1 class) |
| readme_section | string (201–597k chars) |
| hash | string (32 chars) |
apache-2.0
[]
false
Introduction

Recent years have witnessed the rise and success of pre-training techniques in visually-rich document understanding. However, most existing methods lack the systematic mining and utilization of layout-centered knowledge, leading to sub-optimal performance. In this paper, we propose ERNIE-Layout, a novel document pre-training solution with layout knowledge enhancement in the whole workflow, to learn better representations that combine the features from text, layout, and image. Specifically, we first rearrange input sequences in the serialization stage, and then present a correlative pre-training task, reading order prediction, to learn the proper reading order of documents. To improve the layout awareness of the model, we integrate a spatial-aware disentangled attention into the multi-modal transformer and a replaced regions prediction task into the pre-training phase. Experimental results show that ERNIE-Layout achieves superior performance on various downstream tasks, setting new state-of-the-art on key information extraction, document image classification, and document question answering datasets. More details: https://arxiv.org/abs/2210.06155
4b7b9614f5c0d2beace2fb7cda24e0da
apache-2.0
[]
false
Citation Info

```text
@article{ernie2.0,
  title = {ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding},
  author = {Peng, Qiming and Pan, Yinxu and Wang, Wenjin and Luo, Bin and Zhang, Zhenyu and Huang, Zhengjie and Hu, Teng and Yin, Weichong and Chen, Yongfeng and Zhang, Yin and Feng, Shikun and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
  journal = {arXiv preprint arXiv:2210.06155},
  year = {2022},
}
```
9cffa53164b80ec67e933f3f44e83766
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.3179
- Accuracy: 0.8733
- F1: 0.8742
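A minimal inference sketch with the `transformers` pipeline; the repository id below is a placeholder, since the card does not state the full hub path of this checkpoint:

```python
from transformers import pipeline

# Placeholder hub path; substitute this model's actual repository id.
classifier = pipeline("sentiment-analysis", model="your-username/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was surprisingly good."))
```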
07abdf17e37fbdf1cb05e6810b5defc0
apache-2.0
['generated_from_trainer']
false
bert-uncased-massive-intent-classification

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the massive dataset. It achieves the following results on the evaluation set:
- Loss: 0.8396
- Accuracy: 0.8854
d733139bdb42fc6842a57a682666a49d
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4984 | 1.0 | 720 | 0.6402 | 0.8495 |
| 0.4376 | 2.0 | 1440 | 0.5394 | 0.8731 |
| 0.2318 | 3.0 | 2160 | 0.5903 | 0.8760 |
| 0.1414 | 4.0 | 2880 | 0.6221 | 0.8805 |
| 0.087 | 5.0 | 3600 | 0.7072 | 0.8819 |
| 0.0622 | 6.0 | 4320 | 0.7121 | 0.8819 |
| 0.036 | 7.0 | 5040 | 0.7750 | 0.8805 |
| 0.0234 | 8.0 | 5760 | 0.7767 | 0.8834 |
| 0.0157 | 9.0 | 6480 | 0.8243 | 0.8805 |
| 0.0122 | 10.0 | 7200 | 0.8198 | 0.8839 |
| 0.0092 | 11.0 | 7920 | 0.8105 | 0.8849 |
| 0.0047 | 12.0 | 8640 | 0.8561 | 0.8844 |
| 0.0038 | 13.0 | 9360 | 0.8367 | 0.8815 |
| 0.0029 | 14.0 | 10080 | 0.8396 | 0.8854 |
| 0.0014 | 15.0 | 10800 | 0.8410 | 0.8849 |
34f6e1e79f78eef210935ed2adc73d8d
apache-2.0
['generated_from_trainer']
false
edos-2023-baseline-bert-base-uncased-label_vector

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.5258
- F1: 0.2606
1b7d404078ccfa56825eeb370a38c536
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1324 | 1.18 | 100 | 1.9573 | 0.0997 |
| 1.8322 | 2.35 | 200 | 1.8104 | 0.1286 |
| 1.6653 | 3.53 | 300 | 1.7238 | 0.1577 |
| 1.5292 | 4.71 | 400 | 1.6735 | 0.1655 |
| 1.423 | 5.88 | 500 | 1.5987 | 0.1916 |
| 1.2936 | 7.06 | 600 | 1.5628 | 0.2359 |
| 1.2256 | 8.24 | 700 | 1.5492 | 0.2496 |
| 1.1385 | 9.41 | 800 | 1.5388 | 0.2618 |
| 1.1138 | 10.59 | 900 | 1.5233 | 0.2678 |
| 1.0599 | 11.76 | 1000 | 1.5258 | 0.2606 |
91eb69cbbbc1c5abb7dcdf51b6c822ad
apache-2.0
['generated_from_trainer']
false
stack-overflow-open-status-classifier-pt

This model is a fine-tuned version of [reubenjohn/stack-overflow-open-status-classifier-pt](https://huggingface.co/reubenjohn/stack-overflow-open-status-classifier-pt) on an unspecified dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.9448
- eval_runtime: 3.554
- eval_samples_per_second: 28.137
- eval_steps_per_second: 0.563
- epoch: 0.01
- step: 60
641a3e76810349a46fe46e826e25e85f
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 1
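These values map directly onto `transformers.TrainingArguments`; a minimal sketch of the equivalent configuration (the output directory name is an assumption, and the Adam betas/epsilon above are the library defaults):

```python
from transformers import TrainingArguments

# Equivalent Trainer configuration for the hyperparameters listed above.
# "./outputs" is a placeholder output directory.
training_args = TrainingArguments(
    output_dir="./outputs",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=5,
    num_train_epochs=1,
)
```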
de00c92a95b8bf9ef0fe9ff639c1e732
cc-by-4.0
['question generation']
false
Model Card of `research-backup/bart-large-subjqa-vanilla-tripadvisor-qg`

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for the question generation task on [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: tripadvisor) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
f8bfee9b330c1efd742c20eaa82179eb
cc-by-4.0
['question generation']
false
Overview

- **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)
- **Language:** en
- **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (tripadvisor)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
994b3fdf4cca2d3f138ac7dc80751037
cc-by-4.0
['question generation']
false
- With [`lmqg`](https://github.com/asahi417/lm-question-generation)

```python
from lmqg import TransformersQG

# initialize model (standard lmqg usage for this checkpoint)
model = TransformersQG(language="en", model="research-backup/bart-large-subjqa-vanilla-tripadvisor-qg")

# model prediction
questions = model.generate_q(
    list_context="William Turner was an English painter who specialised in watercolour landscapes",
    list_answer="William Turner",
)
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/bart-large-subjqa-vanilla-tripadvisor-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
aed0be014a7ebb87a707834e7da33998
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-large-subjqa-vanilla-tripadvisor-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json)

| | Score | Type | Dataset |
|:-----------|--------:|:------------|:-----------------------------------------------------------------|
| BERTScore | 81.75 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_1 | 3.06 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_2 | 1.22 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_3 | 0.31 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_4 | 0 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| METEOR | 7.89 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| MoverScore | 49.6 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| ROUGE_L | 5.99 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
c5f240291b4af76ed8dc2f1a4959e80e
cc-by-4.0
['question generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_subjqa
- dataset_name: tripadvisor
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: facebook/bart-large
- max_length: 512
- max_length_output: 32
- epoch: 1
- batch: 8
- lr: 1e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-large-subjqa-vanilla-tripadvisor-qg/raw/main/trainer_config.json).
790825048e51060f39779e115e0182d0
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'whisper-event']
false
Whisper Medium Danish (CV11 + FLEURS)

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 (da) and google/fleurs (da_dk) datasets. It achieves the following results on the evaluation set:
- Loss: 0.5814
- Wer: 13.7086
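A minimal transcription sketch with the `transformers` ASR pipeline; the repository id and audio file name below are assumptions, since the card names the base model but not this checkpoint's hub path:

```python
from transformers import pipeline

# Placeholder repository id; replace with this fine-tuned checkpoint's actual path.
asr = pipeline("automatic-speech-recognition", model="your-username/whisper-medium-da")
print(asr("sample_danish.wav")["text"])
```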
25cf7846227230a10cc0c5853bda900d
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'whisper-event']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
e75e78e871fd13c68699919c8e1608a9
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'whisper-event']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0265 | 3.14 | 1000 | 0.3690 | 14.7607 |
| 0.0063 | 6.29 | 2000 | 0.4342 | 14.0926 |
| 0.0016 | 9.43 | 3000 | 0.4847 | 14.3609 |
| 0.002 | 12.58 | 4000 | 0.4919 | 14.1715 |
| 0.0013 | 15.72 | 5000 | 0.5114 | 14.2294 |
| 0.0014 | 18.87 | 6000 | 0.5197 | 13.9137 |
| 0.0003 | 22.01 | 7000 | 0.5422 | 14.1978 |
| 0.0001 | 25.16 | 8000 | 0.5659 | 13.8716 |
| 0.0001 | 28.3 | 9000 | 0.5772 | 13.7296 |
| 0.0001 | 31.45 | 10000 | 0.5814 | 13.7086 |
23ba42d341420dcf34572b1e7ef2e1dd
apache-2.0
['dialogue-summarization']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50.0
- label_smoothing_factor: 0.1
4671404f525fbf34630baacada9675c5
apache-2.0
['dialogue-summarization']
false
Results on Test Set

- predict_gen_len = 329.2
- predict_rouge1 = **48.7673**
- predict_rouge2 = **18.1832**
- predict_rougeL = **26.1713**
- predict_rougeLsum = **46.8434**
- predict_samples = 20
- predict_samples_per_second = 1.098
- predict_steps_per_second = 0.274
10429d15dc311477eec7d0919b26af94
apache-2.0
['generated_from_keras_callback']
false
philschmid/vit-base-patch16-224-in21k-euroSat

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0218
- Train Accuracy: 0.9990
- Train Top-3-accuracy: 1.0000
- Validation Loss: 0.0440
- Validation Accuracy: 0.9906
- Validation Top-3-accuracy: 1.0
- Epoch: 5
48ce6fbc93a479ef06978f48103bcd8e
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 3585, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
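The serialized optimizer above is `AdamWeightDecay` with a linear `PolynomialDecay` schedule; a sketch of how such an optimizer is typically built with `transformers.create_optimizer` under those settings (this is an illustration of the config, not the card's actual training script):

```python
from transformers import create_optimizer

# AdamWeightDecay with a linear decay from 3e-5 to 0 over 3585 steps,
# matching the serialized config above (weight_decay_rate=0.01, no warmup).
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=3585,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```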
2dea9521fc39532c182b67c659f0853c
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.4692 | 0.9471 | 0.9878 | 0.1455 | 0.9861 | 0.9998 | 1 |
| 0.0998 | 0.9888 | 0.9996 | 0.0821 | 0.9864 | 0.9995 | 2 |
| 0.0517 | 0.9939 | 0.9999 | 0.0617 | 0.9871 | 1.0 | 3 |
| 0.0309 | 0.9971 | 0.9999 | 0.0524 | 0.9878 | 0.9998 | 4 |
| 0.0218 | 0.9990 | 1.0000 | 0.0440 | 0.9906 | 1.0 | 5 |
e618d15f07a04a017712a344f3309083
apache-2.0
['generated_from_trainer']
false
distilbert_add_GLUE_Experiment_logit_kd_stsb_192

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set:
- Loss: 1.1348
- Pearson: nan
- Spearmanr: nan
- Combined Score: nan
93ab52a72cf97b7114fda4cf14c225c1
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 3.4305 | 1.0 | 23 | 2.1402 | -0.0344 | -0.0359 | -0.0352 |
| 2.3785 | 2.0 | 46 | 1.6911 | nan | nan | nan |
| 1.8497 | 3.0 | 69 | 1.3624 | -0.0028 | -0.0046 | -0.0037 |
| 1.455 | 4.0 | 92 | 1.1653 | nan | nan | nan |
| 1.1878 | 5.0 | 115 | 1.1348 | nan | nan | nan |
| 1.0926 | 6.0 | 138 | 1.1581 | nan | nan | nan |
| 1.0833 | 7.0 | 161 | 1.1832 | nan | nan | nan |
| 1.0904 | 8.0 | 184 | 1.2266 | 0.0782 | 0.0759 | 0.0771 |
| 1.0833 | 9.0 | 207 | 1.1724 | 0.0826 | 0.0744 | 0.0785 |
| 1.0805 | 10.0 | 230 | 1.1530 | 0.0798 | 0.0761 | 0.0779 |
dc4dedbcb909e6aa7c233bd4dac749ae
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
AltDiffusion

| 名称 Name | 任务 Task | 语言 Language(s) | 模型 Model | Github |
|:----------:|:----:|:-------------------:|:----:|:------:|
| AltDiffusion-m9 | 多模态 Multimodal | Multilingual | Stable Diffusion | [FlagAI](https://github.com/FlagAI-Open/FlagAI) |
9d7fe10a5d033465d54b4534e13a905d
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
Gradio We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run AltDiffusion-m9: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/akhaliq/AltDiffusion-m9)
943562beb89b40ccd8c5c897edd6db48
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
模型信息 Model Information

我们使用 [AltCLIP-m9](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md)，基于 [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion) 训练了双语 Diffusion 模型，训练数据来自 [WuDao数据集](https://data.baai.ac.cn/details/WuDaoCorporaText) 和 [LAION](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus)。我们的版本在多语言对齐方面表现非常出色，是目前市面上开源的最强多语言版本，保留了原版 Stable Diffusion 的大部分能力，并且在某些例子上有着比原版模型更出色的能力。AltDiffusion-m9 模型由名为 AltCLIP-m9 的多语言 CLIP 模型支持，该模型也可在本项目中访问。您可以阅读 [此教程](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) 了解更多信息。

We used [AltCLIP-m9](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) and trained a bilingual Diffusion model based on [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion), with training data from the [WuDao dataset](https://data.baai.ac.cn/details/WuDaoCorporaText) and [LAION](https://huggingface.co/datasets/laion/laion2B-en). Our model performs very well on multilingual alignment; it is the strongest open-source multilingual version on the market today, retains most of the capabilities of the original Stable Diffusion, and in some cases performs even better than the original model. The AltDiffusion-m9 model is backed by a multilingual CLIP model named AltCLIP-m9, which is also accessible in FlagAI. You can read [this tutorial](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) for more information.
37923e4974ee3a94dfb71a259172bac7
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
引用 Citation

关于AltCLIP-m9，我们已经推出了相关报告，有更多细节可以查阅，如对您的工作有帮助，欢迎引用。

We have released a technical report on AltCLIP-m9 with more details. If you find this work helpful, please consider citing:

```
@article{https://doi.org/10.48550/arxiv.2211.06679,
  doi = {10.48550/ARXIV.2211.06679},
  url = {https://arxiv.org/abs/2211.06679},
  author = {Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
255c801873b44b5b1b2078595b572089
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
模型权重 Model Weights

第一次运行AltDiffusion-m9模型时会自动从huggingface下载如下权重。

The following weights are automatically downloaded from HF when the AltDiffusion-m9 model is run for the first time:

| 模型名称 Model name | 大小 Size | 描述 Description |
|------------------------------|---------|-------------------------------------------------------|
| StableDiffusionSafetyChecker | 1.13G | 图片的安全检查器; Safety checker for image |
| AltDiffusion-m9 | 8.0G | support English(En), Chinese(Zh), Spanish(Es), French(Fr), Russian(Ru), Japanese(Ja), Korean(Ko), Arabic(Ar) and Italian(It) |
| AltCLIP-m9 | 3.22G | support English(En), Chinese(Zh), Spanish(Es), French(Fr), Russian(Ru), Japanese(Ja), Korean(Ko), Arabic(Ar) and Italian(It) |
419627680ed668cabfa46fe23f9dfc79
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
我们的 diffusers 示例已放到 Colab 上（[https://colab.research.google.com/drive/1htPovT5YNutl2i31mIYrOzlIgGLm06IX#scrollTo=1TrIQp9N1Bnm](https://colab.research.google.com/drive/1htPovT5YNutl2i31mIYrOzlIgGLm06IX#scrollTo=1TrIQp9N1Bnm)），欢迎使用。您可以在 [此处](https://huggingface.co/docs/diffusers/main/en/api/pipelines/alt_diffusion) 查看文档页面。以下示例将使用 fast DPM 调度程序生成图像，在 V100 上耗时大约为 2 秒。
5d02c217d4e88704caee281f567f0f28
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
You can run our diffusers example in Colab: [https://colab.research.google.com/drive/1htPovT5YNutl2i31mIYrOzlIgGLm06IX#scrollTo=1TrIQp9N1Bnm](https://colab.research.google.com/drive/1htPovT5YNutl2i31mIYrOzlIgGLm06IX#scrollTo=1TrIQp9N1Bnm). You can see the documentation page [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/alt_diffusion). The following example will use the fast DPM scheduler to generate an image in ca. 2 seconds on a V100.

First you should install the diffusers main branch and some dependencies:

```
pip install git+https://github.com/huggingface/diffusers.git torch transformers accelerate sentencepiece
```

then you can run the following example:

```python
from diffusers import AltDiffusionPipeline, DPMSolverMultistepScheduler
import torch

pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

prompt = "黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图"
```
c0b452ee0baba0a94ffea2ecb705806b
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
```python
prompt = "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("./alt.png")
```

![alt](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/hub/alt.png)
688f4bb07d67e6c7c1a8ad787c1d442f
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
Transformers Example

```python
import os
from typing import Optional

import torch
import torch.nn as nn
import transformers
from transformers import BertPreTrainedModel, BertModel, BertConfig, XLMRobertaModel
from transformers.models.clip.modeling_clip import CLIPPreTrainedModel
from transformers.models.xlm_roberta.tokenization_xlm_roberta import XLMRobertaTokenizer
from transformers.models.xlm_roberta.configuration_xlm_roberta import XLMRobertaConfig
from transformers.activations import ACT2FN
from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
from diffusers import StableDiffusionPipeline


class RobertaSeriesConfig(XLMRobertaConfig):
    def __init__(self, pad_token_id=1, bos_token_id=0, eos_token_id=2, project_dim=768, pooler_fn='cls', learn_encoder=False, **kwargs):
        super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
        self.project_dim = project_dim
        self.pooler_fn = pooler_fn
```
90b7cf7a50564d4884b7888a76c74ccc
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
```python
        self.learn_encoder = learn_encoder


class RobertaSeriesModelWithTransformation(BertPreTrainedModel):
    _keys_to_ignore_on_load_unexpected = [r"pooler"]
    _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"]
    base_model_prefix = 'roberta'
    config_class = XLMRobertaConfig

    def __init__(self, config):
        super().__init__(config)
        self.roberta = XLMRobertaModel(config)
        self.transformation = nn.Linear(config.hidden_size, config.project_dim)
        self.post_init()

    def get_text_embeds(self, bert_embeds, clip_embeds):
        return self.merge_head(torch.cat((bert_embeds, clip_embeds)))

    def set_tokenizer(self, tokenizer):
        self.tokenizer = tokenizer

    def forward(self, input_ids: Optional[torch.Tensor] = None):
        attention_mask = (input_ids != self.tokenizer.pad_token_id).to(torch.int64)
        outputs = self.base_model(
            input_ids=input_ids,
            attention_mask=attention_mask,
        )
        projection_state = self.transformation(outputs.last_hidden_state)
        return (projection_state,)


model_path_encoder = "BAAI/RobertaSeriesModelWithTransformation"
model_path_diffusion = "BAAI/AltDiffusion-m9"
device = "cuda"
seed = 12345

tokenizer = XLMRobertaTokenizer.from_pretrained(model_path_encoder, use_auth_token=True)
tokenizer.model_max_length = 77

text_encoder = RobertaSeriesModelWithTransformation.from_pretrained(model_path_encoder, use_auth_token=True)
text_encoder.set_tokenizer(tokenizer)
print("text encoder loaded")

pipe = StableDiffusionPipeline.from_pretrained(
    model_path_diffusion,
    tokenizer=tokenizer,
    text_encoder=text_encoder,
    use_auth_token=True,
)
print("diffusion pipeline loaded")
pipe = pipe.to(device)

prompt = "Thirty years old lee evans as a sad 19th century postman. detailed, soft focus, candle light, interesting lights, realistic, oil canvas, character concept art by munkácsy mihály, csók istván, john everett millais, henry meynell rheam, and da vinci"

with torch.no_grad():
    image = pipe(prompt, guidance_scale=7.5).images[0]
image.save("3.png")
```

您可以在 `predict_generate_images` 函数里通过改变参数来调整设置，具体信息如下:

More parameters of `predict_generate_images` for you to adjust are listed below:

| 参数名 Parameter | 类型 Type | 描述 Description |
|--------------------------------|------------|-------------------------------------------------------|
| prompt | str | 提示文本; The prompt text |
| out_path | str | 输出路径; The output path to save images |
| n_samples | int | 输出图片数量; Number of images to be generated |
| skip_grid | bool | 如果为True, 会将所有图片拼接在一起,输出一张新的图片; If set to true, the image gridding step will be skipped |
| ddim_step | int | DDIM模型的步数; Number of steps in the DDIM model |
| plms | bool | 如果为True, 则会使用plms模型; If set to true, the PLMS sampler will be applied instead of the DDIM sampler |
| scale | float | 这个值决定了文本在多大程度上影响生成的图片,值越大影响力越强; This value determines how strongly the prompt influences the generated images |
| H | int | 图片的高度; Height of image |
| W | int | 图片的宽度; Width of image |
| C | int | 图片的channel数; Number of channels of generated images |
| seed | int | 随机种子; Random seed number |

注意：模型推理要求一张至少10G以上的GPU。

Note that model inference requires a GPU with at least 10 GB of memory.
e516591b57c44d530f1f8afcdf4d09e3
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
prompt:dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap
d51cbec53f07c5767714cf185d7009cc
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
Ours: ![image](https://raw.githubusercontent.com/BAAI-OpenPlatform/test_open/main/dog.png) 注: 此处长图生成技术由右脑科技(RightBrain AI)提供。 Note: The long image generation technology here is provided by Right Brain Technology.
d1f2b90c0f8682b45ebad17eb8058759
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'multilingual', 'English(En)', 'Chinese(Zh)', 'Spanish(Es)', 'French(Fr)', 'Russian(Ru)', 'Japanese(Ja)', 'Korean(Ko)', 'Arabic(Ar)', 'Italian(It)', 'diffusers']
false
许可/License

该模型通过 [CreativeML Open RAIL-M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) 获得许可。作者对您生成的输出不主张任何权利，您可以自由使用它们并对它们的使用负责，不得违反本许可中的规定。该许可证禁止您分享任何违反任何法律、对他人造成伤害、传播任何可能造成伤害的个人信息、传播错误信息和针对弱势群体的任何内容。您可以出于商业目的修改和使用模型，但必须包含相同使用限制的副本。有关限制的完整列表，请[阅读许可证](https://huggingface.co/spaces/CompVis/stable-diffusion-license)。

The model is licensed with a [CreativeML Open RAIL-M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license). The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produces harm to a person, disseminates personal information that would be meant for harm, spreads misinformation, or targets vulnerable groups. You can modify and use the model for commercial purposes, but a copy of the same use restrictions must be included. For the full list of restrictions please [read the license](https://huggingface.co/spaces/CompVis/stable-diffusion-license).
98483acac2356e603c8dbdf45f1e683f
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-fi-to-en

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset. It achieves the following results on the evaluation set:
- Loss: 3.3598
- Bleu: 1.618
- Gen Len: 17.3223
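A minimal generation sketch; the repository id is a placeholder, and the `translate Finnish to English:` task prefix is an assumption based on T5's WMT prompt convention, not something the card states:

```python
from transformers import pipeline

# Placeholder repository id for this fine-tuned checkpoint.
translator = pipeline("text2text-generation", model="your-username/t5-small-finetuned-fi-to-en")
# The task prefix below is assumed; check the training script for the exact prompt format.
print(translator("translate Finnish to English: Hyvää huomenta!")[0]["generated_text"])
```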
84d5dc2991d87829602b622d7fbcc8d4
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 3.3627 | 1.0 | 6250 | 3.5122 | 1.2882 | 17.1803 |
| 3.2162 | 2.0 | 12500 | 3.4442 | 1.4329 | 17.2617 |
| 3.1304 | 3.0 | 18750 | 3.3872 | 1.4862 | 17.296 |
| 3.0832 | 4.0 | 25000 | 3.3648 | 1.5795 | 17.3047 |
| 3.0623 | 5.0 | 31250 | 3.3598 | 1.618 | 17.3223 |
695f1ff63fe16befc72aaeaa60791ca6
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'pt', 'robust-speech-event']
false
sew-tiny-portuguese-cv8

This model is a fine-tuned version of [lgris/sew-tiny-pt](https://huggingface.co/lgris/sew-tiny-pt) on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 0.4082
- Wer: 0.3053
d908aae5f19ddddc146a8355c5e0c924
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'pt', 'robust-speech-event']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
3fda5e1039f035a97403866f3a7de380
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'pt', 'robust-speech-event']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 1.93 | 1000 | 2.9134 | 0.9767 |
| 2.9224 | 3.86 | 2000 | 2.8405 | 0.9789 |
| 2.9224 | 5.79 | 3000 | 2.8094 | 0.9800 |
| 2.8531 | 7.72 | 4000 | 2.7439 | 0.9891 |
| 2.8531 | 9.65 | 5000 | 2.7057 | 1.0159 |
| 2.7721 | 11.58 | 6000 | 2.7235 | 1.0709 |
| 2.7721 | 13.51 | 7000 | 2.5931 | 1.1035 |
| 2.6566 | 15.44 | 8000 | 2.2171 | 0.9884 |
| 2.6566 | 17.37 | 9000 | 1.2399 | 0.8081 |
| 1.9558 | 19.31 | 10000 | 0.9045 | 0.6353 |
| 1.9558 | 21.24 | 11000 | 0.7705 | 0.5533 |
| 1.4987 | 23.17 | 12000 | 0.7068 | 0.5165 |
| 1.4987 | 25.1 | 13000 | 0.6641 | 0.4718 |
| 1.3811 | 27.03 | 14000 | 0.6043 | 0.4470 |
| 1.3811 | 28.96 | 15000 | 0.5532 | 0.4268 |
| 1.2897 | 30.89 | 16000 | 0.5371 | 0.4101 |
| 1.2897 | 32.82 | 17000 | 0.5924 | 0.4150 |
| 1.225 | 34.75 | 18000 | 0.4949 | 0.3894 |
| 1.225 | 36.68 | 19000 | 0.5591 | 0.4045 |
| 1.193 | 38.61 | 20000 | 0.4927 | 0.3731 |
| 1.193 | 40.54 | 21000 | 0.4922 | 0.3712 |
| 1.1482 | 42.47 | 22000 | 0.4799 | 0.3662 |
| 1.1482 | 44.4 | 23000 | 0.4846 | 0.3648 |
| 1.1201 | 46.33 | 24000 | 0.4770 | 0.3623 |
| 1.1201 | 48.26 | 25000 | 0.4530 | 0.3426 |
| 1.0892 | 50.19 | 26000 | 0.4523 | 0.3527 |
| 1.0892 | 52.12 | 27000 | 0.4573 | 0.3443 |
| 1.0583 | 54.05 | 28000 | 0.4488 | 0.3353 |
| 1.0583 | 55.98 | 29000 | 0.4295 | 0.3285 |
| 1.0319 | 57.92 | 30000 | 0.4321 | 0.3220 |
| 1.0319 | 59.85 | 31000 | 0.4244 | 0.3236 |
| 1.0076 | 61.78 | 32000 | 0.4197 | 0.3201 |
| 1.0076 | 63.71 | 33000 | 0.4230 | 0.3208 |
| 0.9851 | 65.64 | 34000 | 0.4090 | 0.3127 |
| 0.9851 | 67.57 | 35000 | 0.4088 | 0.3133 |
| 0.9695 | 69.5 | 36000 | 0.4123 | 0.3088 |
| 0.9695 | 71.43 | 37000 | 0.4017 | 0.3090 |
| 0.9514 | 73.36 | 38000 | 0.4184 | 0.3086 |
| 0.9514 | 75.29 | 39000 | 0.4075 | 0.3043 |
| 0.944 | 77.22 | 40000 | 0.4082 | 0.3053 |
c4706c8316f2503cbdaf6d9c62d02d33
apache-2.0
['generated_from_keras_callback']
false
DistBERT_ideology This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
c20d68a4fa57ea9f6195069a5868a3e7
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4120, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
5f360ec7b030b5965437b319718d134f
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-eli5

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset. It achieves the following results on the evaluation set:
- Loss: 3.5993
- Rouge1: 15.1689
- Rouge2: 2.1762
- Rougel: 12.7542
- Rougelsum: 14.0214
- Gen Len: 18.9988
e403cec95f90ab0b2d79eedcbc202fd6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.8011 | 1.0 | 17040 | 3.5993 | 15.1689 | 2.1762 | 12.7542 | 14.0214 | 18.9988 |
e874e80184160d42e38082d061b587b6
apache-2.0
['audio', 'speech', 'wav2vec2', 'pt', 'Russian-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch']
false
Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using Common Voice 7.0 and M-AILABS plus data augmentation

[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Russian using Common Voice 7.0 and M-AILABS, plus a data augmentation method based on TTS and voice conversion.
3df2ed2be9bc3b63684ff6ed8bb1b505
apache-2.0
['audio', 'speech', 'wav2vec2', 'pt', 'Russian-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch']
false
Use this model

```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC

tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian")
```
e331e4a7f1e4adfb5c78b968d4930706
apache-2.0
['audio', 'speech', 'wav2vec2', 'pt', 'Russian-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch']
false
Example test with Common Voice Dataset

```python
import re

import torchaudio
from datasets import load_dataset

# Assumed to be defined as in the full model card's preprocessing/evaluation code:
# `chars_to_ignore_regex`, `map_to_pred`, and the `wer` metric.
dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-7.0-2021-07-21")

resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```

```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
23dbf4c5f72a228e974c38511840b9f8
apache-2.0
['generated_from_trainer']
false
distilled-mt5-small-b0.04

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set:
- Loss: 2.8124
- Bleu: 7.5994
- Gen Len: 44.6753
cfd4ec7267717267fbe65f45c1c173b5
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased_fold_3_ternary_v1

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.8908
- F1: 0.7879
57b7ac0dab5ceeb4e148f96a2e79296f
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.5873 | 0.7636 |
| 0.5479 | 2.0 | 578 | 0.5788 | 0.7697 |
| 0.5479 | 3.0 | 867 | 0.6286 | 0.7770 |
| 0.2412 | 4.0 | 1156 | 0.8845 | 0.7661 |
| 0.2412 | 5.0 | 1445 | 0.9894 | 0.7818 |
| 0.1191 | 6.0 | 1734 | 1.0856 | 0.7842 |
| 0.0543 | 7.0 | 2023 | 1.2852 | 0.7830 |
| 0.0543 | 8.0 | 2312 | 1.4295 | 0.7673 |
| 0.0223 | 9.0 | 2601 | 1.4716 | 0.7806 |
| 0.0223 | 10.0 | 2890 | 1.6007 | 0.7636 |
| 0.0122 | 11.0 | 3179 | 1.6744 | 0.7673 |
| 0.0122 | 12.0 | 3468 | 1.6954 | 0.7685 |
| 0.0129 | 13.0 | 3757 | 1.7273 | 0.7733 |
| 0.0057 | 14.0 | 4046 | 1.7114 | 0.7758 |
| 0.0057 | 15.0 | 4335 | 1.7480 | 0.7733 |
| 0.0045 | 16.0 | 4624 | 1.8322 | 0.7830 |
| 0.0045 | 17.0 | 4913 | 1.7448 | 0.7830 |
| 0.0047 | 18.0 | 5202 | 1.8126 | 0.7782 |
| 0.0047 | 19.0 | 5491 | 1.9021 | 0.7673 |
| 0.0018 | 20.0 | 5780 | 1.9011 | 0.7830 |
| 0.0026 | 21.0 | 6069 | 1.8771 | 0.7806 |
| 0.0026 | 22.0 | 6358 | 1.8634 | 0.7806 |
| 0.0012 | 23.0 | 6647 | 1.8926 | 0.7830 |
| 0.0012 | 24.0 | 6936 | 1.8922 | 0.7855 |
| 0.0005 | 25.0 | 7225 | 1.8908 | 0.7879 |
d79d748def11278d4180d8060254dd79
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-gradient-clinic

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2601
a5506b2a3294d9c3fa386b50fa9312c4
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 36
- eval_batch_size: 36
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
90159d0132e22ae66189d51d244c3be9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 24 | 0.8576 |
| No log | 2.0 | 48 | 0.3439 |
| No log | 3.0 | 72 | 0.2807 |
| No log | 4.0 | 96 | 0.2653 |
| No log | 5.0 | 120 | 0.2601 |
2dfe9c670a292f0f0f0826700b01d4ba
apache-2.0
['generated_from_trainer']
false
hf_fine_tune_hello_world

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset. It achieves the following results on the evaluation set:
- Loss: 1.6084
- Accuracy: 0.205
fd96ed4ebee15b434b1cddf70254a009
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.6245 | 0.22 |
| No log | 2.0 | 250 | 1.6120 | 0.205 |
| No log | 3.0 | 375 | 1.6084 | 0.205 |
f606deadc9530bcece23208459def93c
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.7784
- Matthews Correlation: 0.5499
3002104a58a632cae001c3f96efcf3c9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5248 | 1.0 | 535 | 0.5367 | 0.4142 |
| 0.3488 | 2.0 | 1070 | 0.5116 | 0.5083 |
| 0.2343 | 3.0 | 1605 | 0.5575 | 0.5485 |
| 0.1766 | 4.0 | 2140 | 0.7784 | 0.5499 |
| 0.1238 | 5.0 | 2675 | 0.8351 | 0.5487 |
cf193ae7b7586f2c0b0b100249977459
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2t_es_vp-100k_s468 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
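Since the card says the model was fine-tuned with HuggingSound, a minimal transcription sketch with that tool; the hub namespace and audio file name below are assumptions, as the card does not spell out the full repository id:

```python
from huggingsound import SpeechRecognitionModel

# Assumed hub path; replace with this checkpoint's actual repository id.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_es_vp-100k_s468")

# Input audio must be sampled at 16 kHz, as noted above.
transcriptions = model.transcribe(["sample_16khz_es.wav"])
print(transcriptions[0]["transcription"])
```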
b755c98825213e1e7b0a2add4cb19567
apache-2.0
['bert', 'pytorch', 'zh', 'ner']
false
BertSpan for Chinese Named Entity Recognition (bertspan4ner) Model

Chinese NER model `bertspan4ner-base-chinese`, evaluated on the PEOPLE (People's Daily) test data. The overall performance of BertSpan on the PEOPLE **test** set:

| | Accuracy | Recall | F1 |
| ------------ | ------------------ | ------------------ | ------------------ |
| BertSpan | 0.9610 | 0.9600 | 0.9605 |

This reaches SOTA level on the PEOPLE test set.
0f6dda0b0d8207a4b8d0b811b35c0d5a
apache-2.0
['bert', 'pytorch', 'zh', 'ner']
false
Usage

This model is released in the NER project [nerpy](https://github.com/shibing624/nerpy), which supports the bertspan model and can be invoked as follows:

```python
>>> from nerpy import NERModel
>>> model = NERModel("bertspan", "shibing624/bertspan4ner-base-chinese")
>>> predictions, raw_outputs, entities = model.predict(["常建良,男,1963年出生,工科学士,高级工程师"], split_on_space=False)
entities: [('常建良', 'PER'), ('1963年', 'TIME')]
```

Model files:

```
bertspan4ner-base-chinese
├── config.json
├── model_args.json
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.txt
```
cd75546ac3ce1d9ff0009b0d9de6e182
apache-2.0
['bert', 'pytorch', 'zh', 'ner']
false
Chinese NER Datasets

| Dataset | Corpus | Download | Size |
| :------- | :--------- | :---------: | :---------: |
| **`CNER Chinese NER dataset`** | CNER (120k characters) | [CNER github](https://github.com/shibing624/nerpy/tree/main/examples/data/cner) | 1.1MB |
| **`PEOPLE Chinese NER dataset`** | People's Daily corpus (2M characters) | [PEOPLE github](https://github.com/shibing624/nerpy/tree/main/examples/data/people) | 12.8MB |

Data format of the CNER Chinese NER dataset:

```text
美 B-LOC
国 I-LOC
的 O
华 B-PER
莱 I-PER
士 I-PER

我 O
跟 O
他 O
```

To train bertspan4ner, see [https://github.com/shibing624/nerpy/tree/main/examples](https://github.com/shibing624/nerpy/tree/main/examples).
c6f6e45917e5ad5213441b5ce648c7b5
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Citrinet', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
This model transcribes speech in lower case English alphabet along with spaces and apostrophes. It is a "large" version of Citrinet-CTC (around 140M parameters); see the model architecture section for details.
3b8e19aa0fa37798907675bd1337e51c
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Citrinet', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Transcribing many audio files

```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_en_citrinet_1024_ls" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
d7485ea7bd3aea83e40f7f6ab05aa8b6
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Citrinet', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Performance

The list of the available models in this collection is shown in the following table. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding.

| Version | Tokenizer | Vocabulary Size | LS test-other | LS test-clean |
|---------|---------------------------|-----------------|---------------|---------------|
| 1.0.0 | SentencePiece Unigram [2] | 256 | 6.3 | 2.5 |
d0ee8d6de3bdfb096d6dbae9ee8804cf
cc-by-sa-4.0
['spacy', 'token-classification']
false
mk_core_news_md

Macedonian pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer.

| Feature | Description |
| --- | --- |
| **Name** | `mk_core_news_md` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `morphologizer`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | 274587 keys, 20000 unique vectors (300 dimensions) |
| **Sources** | [Macedonian Corpus](https://blog.netcetera.com/macedonian-spacy-f3c85484777f) (Damjan Zlatinov, Melanija Gerasimovska, Borijan Georgievski, Marija Todosovska)<br />[spaCy lookups data](https://github.com/explosion/spacy-lookups-data) (Explosion)<br />[Explosion fastText Vectors (cbow, OSCAR Common Crawl + Wikipedia)](https://spacy.io) (Explosion) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
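A minimal usage sketch, assuming the pipeline package has already been installed (e.g. from the release wheel); the sample sentence is illustrative:

```python
import spacy

# Load the installed Macedonian pipeline and run the full component stack.
nlp = spacy.load("mk_core_news_md")
doc = nlp("Скопје е главниот град на Македонија.")
print([(token.text, token.pos_) for token in doc])   # part-of-speech tags
print([(ent.text, ent.label_) for ent in doc.ents])  # named entities
```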
42fba7dda554e7325a6bd0777efe0710
cc-by-sa-4.0
['spacy', 'token-classification']
false
Label Scheme

<details>
<summary>View label scheme (54 labels for 3 components)</summary>

| Component | Labels |
| --- | --- |
| **`morphologizer`** | `POS=PROPN`, `POS=AUX`, `POS=ADJ`, `POS=NOUN`, `POS=ADP`, `POS=PUNCT`, `POS=CONJ`, `POS=NUM`, `POS=VERB`, `POS=PRON`, `POS=ADV`, `POS=SCONJ`, `POS=PART`, `POS=SYM`, `_`, `POS=SPACE`, `POS=X`, `POS=INTJ` |
| **`parser`** | `ROOT`, `advmod`, `att`, `aux`, `cc`, `dep`, `det`, `dobj`, `iobj`, `neg`, `nsubj`, `pobj`, `poss`, `pozm`, `pozv`, `prep`, `punct`, `relcl` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |

</details>
74b96641eda33bbb926ba7a30d6b13f9
cc-by-sa-4.0
['spacy', 'token-classification']
false
Accuracy

| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 100.00 |
| `TOKEN_P` | 100.00 |
| `TOKEN_R` | 100.00 |
| `TOKEN_F` | 100.00 |
| `SENTS_P` | 80.00 |
| `SENTS_R` | 67.53 |
| `SENTS_F` | 73.24 |
| `DEP_UAS` | 67.71 |
| `DEP_LAS` | 52.01 |
| `ENTS_P` | 74.72 |
| `ENTS_R` | 74.47 |
| `ENTS_F` | 74.60 |
| `POS_ACC` | 92.61 |
806a6b5230af3adabf15b41633a8e1cf
apache-2.0
['generated_from_trainer']
false
bert-small-finetuned-ner-to-multilabel-xglue-ner

This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.0616
6f3ca7ac858d6133413c252e7df52f10
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 20
ba533c3c12e1941d177a850afb340712
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2168 | 0.28 | 500 | 0.1212 |
| 0.1067 | 0.57 | 1000 | 0.0865 |
| 0.0878 | 0.85 | 1500 | 0.0710 |
| 0.0667 | 1.14 | 2000 | 0.0670 |
| 0.0529 | 1.42 | 2500 | 0.0614 |
| 0.0516 | 1.71 | 3000 | 0.0577 |
| 0.0469 | 1.99 | 3500 | 0.0608 |
| 0.033 | 2.28 | 4000 | 0.0592 |
| 0.0317 | 2.56 | 4500 | 0.0616 |
8d0dfa86de2b3866a5f1ef6900d25511
cc-by-4.0
['translation', 'opus-mt-tc']
false
Model Details

Neural machine translation model for translating from Italic languages (itc) to Arabic (ar).

This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2022-08-09
- **License:** CC-BY-4.0
- **Language(s):**
  - Source Language(s): cat fra glg ita lat_Latn por ron spa
  - Target Language(s): ara
  - Language Pair(s): cat-ara fra-ara glg-ara ita-ara por-ara ron-ara spa-ara
  - Valid Target Language Labels: >>ajp<< >>apc<< >>ara<< >>arq<< >>ary<< >>arz<<
- **Original Model**: [opusTCv20210807_transformer-big_2022-08-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-ara/opusTCv20210807_transformer-big_2022-08-09.zip)
- **Resources for more information:**
  - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
  - More information about released models for this language pair: [OPUS-MT itc-ara README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-ara/README.md)
  - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
  - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)

This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>ara<<`.
324c8e4899efab806d1f489ab5f22757
cc-by-4.0
['translation', 'opus-mt-tc']
false
How to Get Started With the Model

A short example code:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    ">>ary<< Entiendo.",
    ">>arq<< Por favor entiende mi posición."
]

model_name = "pytorch-models/opus-mt-tc-big-itc-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
```
bc04fe02f1fa55b3a1f2427c4f7121cb
cc-by-4.0
['translation', 'opus-mt-tc']
false
Expected output (second input sentence):

```
من فضلك افهم موقفي.
```

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-itc-ar")
print(pipe(">>ary<< Entiendo."))
```
1a3c3025c655f1b4023bf6c28243ab0e
cc-by-4.0
['translation', 'opus-mt-tc']
false
Training

- **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-08-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-ara/opusTCv20210807_transformer-big_2022-08-09.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
aefe4f627bde78a6d3a9a377f885332c
cc-by-4.0
['translation', 'opus-mt-tc']
false
Evaluation

* test set translations: [opusTCv20210807_transformer-big_2022-08-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-ara/opusTCv20210807_transformer-big_2022-08-09.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-08-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-ara/opusTCv20210807_transformer-big_2022-08-09.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)

The per-benchmark scores are listed in the table below.
3ce0fdfda6a3d8e8319186c31fac6015
cc-by-4.0
['translation', 'opus-mt-tc']
false
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|------|-------|--------|
| fra-ara | tatoeba-test-v2021-08-07 | 0.46463 | 18.9 | 1569 | 7956 |
| ita-ara | tatoeba-test-v2021-08-07 | 0.53797 | 25.7 | 235 | 1161 |
| spa-ara | tatoeba-test-v2021-08-07 | 0.55520 | 26.6 | 1511 | 7547 |
| cat-ara | flores101-devtest | 0.52029 | 18.9 | 1012 | 21357 |
| fra-ara | flores101-devtest | 0.52573 | 19.5 | 1012 | 21357 |
| glg-ara | flores101-devtest | 0.51181 | 19.2 | 1012 | 21357 |
| ita-ara | flores101-devtest | 0.49401 | 15.0 | 1012 | 21357 |
| por-ara | flores101-devtest | 0.53356 | 20.2 | 1012 | 21357 |
| ron-ara | flores101-devtest | 0.51849 | 18.4 | 1012 | 21357 |
| spa-ara | flores101-devtest | 0.47872 | 14.3 | 1012 | 21357 |
c2d06ad5d23c028fefdd09390ba90109
apache-2.0
['generated_from_trainer']
false
finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865
ee3a5db5c12581a103f1923b70f69716
apache-2.0
[]
false
Model description **CAMeLBERT-DA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-DA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model. For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
fc74a0138854fa07ad58cdc2d5943464
apache-2.0
[]
false
How to use

To use the model with a transformers pipeline:

```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf')
>>> text = 'شلونك ؟ شخبارك ؟'
>>> pos(text)
[{'entity': 'noun', 'score': 0.84596395, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4},
 {'entity': 'prep', 'score': 0.7230489, 'index': 2, 'word': '
536b309b769ed65dc182133ca86cca23
apache-2.0
[]
false
ك', 'start': 4, 'end': 5},
 {'entity': 'punc', 'score': 0.99996364, 'index': 3, 'word': '؟', 'start': 6, 'end': 7},
 {'entity': 'noun', 'score': 0.9990874, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9},
 {'entity': 'noun', 'score': 0.99985224, 'index': 5, 'word': '
e973a3c4f48930c7140e6a4a6fba2c02
apache-2.0
[]
false
ك', 'start': 13, 'end': 14},
 {'entity': 'punc', 'score': 0.9999683, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}]
```

*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
f88795b030dfb58d70aee1e94bacab91
apache-2.0
['generated_from_keras_callback']
false
ytsai25/bert-finetuned-ner-ADR

This model is a fine-tuned version of [ytsai25/bert-finetuned-ner](https://huggingface.co/ytsai25/bert-finetuned-ner) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0347
- Validation Loss: 0.0804
- Epoch: 2
268203313d5b1e14db9cbeb6c56a6f36
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 669, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
0f9f44b97796de9c734590478895e89e
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1307 | 0.0799 | 0 |
| 0.0579 | 0.0758 | 1 |
| 0.0347 | 0.0804 | 2 |
17a2c56e940e4ec5ae41200c2eeb18f5
apache-2.0
['generated_from_trainer']
false
finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.4064
- Accuracy: 0.8289
- F1: 0.8901
ddcd84411e8f2ac2c2c2b1bf0b958c4c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4163 | 0.8085 | 0.8780 |
| No log | 2.0 | 390 | 0.4098 | 0.8268 | 0.8878 |
| 0.312 | 3.0 | 585 | 0.5892 | 0.8244 | 0.8861 |
| 0.312 | 4.0 | 780 | 0.7580 | 0.8232 | 0.8845 |
| 0.312 | 5.0 | 975 | 0.9028 | 0.8183 | 0.8824 |
972d0287a7b0ab2c7cc413848b046ee1
mit
['pyannote', 'pyannote-audio', 'pyannote-audio-pipeline', 'audio', 'voice', 'speech', 'speaker', 'speaker-diarization', 'speaker-change-detection', 'voice-activity-detection', 'overlapped-speech-detection']
false
Accuracy This pipeline is benchmarked on a growing collection of datasets. Processing is fully automatic: * no manual voice activity detection (as is sometimes the case in the literature) * no manual number of speakers (though it is possible to provide it to the pipeline) * no fine-tuning of the internal models nor tuning of the pipeline hyper-parameters to each dataset ... with the least forgiving diarization error rate (DER) setup (named *"Full"* in [this paper](https://doi.org/10.1016/j.csl.2021.101254)): * no forgiveness collar * evaluation of overlapped speech | Benchmark | [DER%](. "Diarization error rate") | [FA%](. "False alarm rate") | [Miss%](. "Missed detection rate") | [Conf%](. "Speaker confusion rate") | Expected output | File-level evaluation | | ---------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------- | --------------------------- | ---------------------------------- | ----------------------------------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------ | | [AISHELL-4](http://www.openslr.org/111/) | 14.61 | 3.31 | 4.35 | 6.95 | [RTTM](reproducible_research/AISHELL.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/AISHELL.SpeakerDiarization.Full.test.eval) | | [AMI *Mix-Headset*](https://groups.inf.ed.ac.uk/ami/corpus/) [*only_words*](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 18.21 | 3.28 | 11.07 | 3.87 | [RTTM](reproducible_research/2022.07/AMI.SpeakerDiarization.only_words.test.rttm) | [eval](reproducible_research/2022.07/AMI.SpeakerDiarization.only_words.test.eval) | | [AMI *Array1-01*](https://groups.inf.ed.ac.uk/ami/corpus/) [*only_words*](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 29.00 | 2.71 | 21.61 | 4.68 | [RTTM](reproducible_research/2022.07/AMI-SDM.SpeakerDiarization.only_words.test.rttm) | [eval](reproducible_research/2022.07/AMI-SDM.SpeakerDiarization.only_words.test.eval) | | [CALLHOME](https://catalog.ldc.upenn.edu/LDC2001S97) [*Part2*](https://github.com/BUTSpeechFIT/CALLHOME_sublists/issues/1) | 30.24 | 3.71 | 16.86 | 9.66 | [RTTM](reproducible_research/2022.07/CALLHOME.SpeakerDiarization.CALLHOME.test.rttm) | [eval](reproducible_research/2022.07/CALLHOME.SpeakerDiarization.CALLHOME.test.eval) | | [DIHARD 3 *Full*](https://arxiv.org/abs/2012.01477) | 20.99 | 4.25 | 10.74 | 6.00 | [RTTM](reproducible_research/2022.07/DIHARD.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/2022.07/DIHARD.SpeakerDiarization.Full.test.eval) | | [REPERE *Phase 2*](https://islrn.org/resources/360-758-359-485-0/) | 12.62 | 1.55 | 3.30 | 7.76 | [RTTM](reproducible_research/2022.07/REPERE.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/2022.07/REPERE.SpeakerDiarization.Full.test.eval) | | [VoxConverse *v0.0.2*](https://github.com/joonson/voxconverse) | 12.76 | 3.45 | 3.85 | 5.46 | [RTTM](reproducible_research/2022.07/VoxConverse.SpeakerDiarization.VoxConverse.test.rttm) | [eval](reproducible_research/2022.07/VoxConverse.SpeakerDiarization.VoxConverse.test.eval) |
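For reference, the *"Full"* setup above maps directly onto `pyannote.metrics`; a sketch of how such numbers are computed from the linked RTTM files (the file paths are placeholders):

```python
from pyannote.database.util import load_rttm
from pyannote.metrics.diarization import DiarizationErrorRate

# "Full" setup: zero forgiveness collar, overlapped speech is evaluated
metric = DiarizationErrorRate(collar=0.0, skip_overlap=False)

reference = load_rttm("reference.rttm")    # placeholder paths; use the
hypothesis = load_rttm("hypothesis.rttm")  # expected-output RTTMs above

for uri, ref in reference.items():
    metric(ref, hypothesis[uri])           # accumulates per-file components

print(f"DER = {abs(metric) * 100:.2f}%")   # aggregate over the collection
```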
d8545a57dccd8ac73afd018d7fb0667b
apache-2.0
['generated_from_trainer']
false
Tagged_Uni_100v7_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v7_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.5083 - Precision: 0.2364 - Recall: 0.1162 - F1: 0.1559 - Accuracy: 0.8209
f5552b3f6c39c7133ee8531d29a6ffa6
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 26 | 0.5987 | 0.0582 | 0.0029 | 0.0054 | 0.7847 | | No log | 2.0 | 52 | 0.5016 | 0.2218 | 0.1002 | 0.1380 | 0.8192 | | No log | 3.0 | 78 | 0.5083 | 0.2364 | 0.1162 | 0.1559 | 0.8209 |
da17a33e71f2bd1f62c03c02068207d9
cc-by-sa-4.0
['spacy', 'token-classification']
false
da_core_news_lg Danish pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner, attribute_ruler. | Feature | Description | | --- | --- | | **Name** | `da_core_news_lg` | | **Version** | `3.5.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` | | **Components** | `tok2vec`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` | | **Vectors** | 500000 keys, 500000 unique vectors (300 dimensions) | | **Sources** | [UD Danish DDT v2.8](https://github.com/UniversalDependencies/UD_Danish-DDT) (Johannsen, Anders; Martínez Alonso, Héctor; Plank, Barbara)<br />[DaNE](https://github.com/alexandrainst/danlp/blob/master/docs/datasets.md
f43b59a507a77e0d6ef1d648bc7ad55c
cc-by-sa-4.0
['spacy', 'token-classification']
false
danish-dependency-treebank-dane) (Rasmus Hvingelby, Amalie B. Pauli, Maria Barrett, Christina Rosted, Lasse M. Lidegaard, Anders Søgaard)<br />[Explosion fastText Vectors (cbow, OSCAR Common Crawl + Wikipedia)](https://spacy.io) (Explosion) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) |
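Once the package is installed (e.g. `python -m spacy download da_core_news_lg`), the pipeline is used like any other spaCy model; the Danish sentence below is only an illustration:

```python
import spacy

nlp = spacy.load("da_core_news_lg")
doc = nlp("Anders bor i København og arbejder hos Explosion.")

# morphology, dependencies and lemmas come from the trained components
for token in doc:
    print(token.text, token.pos_, token.dep_, token.lemma_)

# named entities from the ner component
print([(ent.text, ent.label_) for ent in doc.ents])
```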
f6c150a5445f8958c2e5203c6396b914
cc-by-sa-4.0
['spacy', 'token-classification']
false
Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.89 | | `TOKEN_P` | 99.78 | | `TOKEN_R` | 99.75 | | `TOKEN_F` | 99.76 | | `POS_ACC` | 96.66 | | `MORPH_ACC` | 95.74 | | `MORPH_MICRO_P` | 97.43 | | `MORPH_MICRO_R` | 96.75 | | `MORPH_MICRO_F` | 97.09 | | `SENTS_P` | 89.09 | | `SENTS_R` | 88.30 | | `SENTS_F` | 88.69 | | `DEP_UAS` | 82.25 | | `DEP_LAS` | 78.29 | | `LEMMA_ACC` | 94.84 | | `TAG_ACC` | 96.66 | | `ENTS_P` | 80.04 | | `ENTS_R` | 81.88 | | `ENTS_F` | 80.95 |
85a6d883ac9ebe34d57f6d4746be3c6a
apache-2.0
['generated_from_trainer']
false
t5-small-herblabels This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4823 - Rouge1: 3.0759 - Rouge2: 1.0495 - Rougel: 3.0758 - Rougelsum: 3.0431 - Gen Len: 18.9716
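The card documents neither the task nor the input format (the name hints at generating or normalizing herbarium label text, and the reported Gen Len of ~19 suggests short outputs), so the calling convention below is only a guess; the model path and input string are placeholders:

```python
from transformers import pipeline

# "path/to/t5-small-herblabels" is a placeholder for the actual checkpoint;
# no task prefix is assumed because none is documented
generator = pipeline("text2text-generation", model="path/to/t5-small-herblabels")
print(generator("<scanned herbarium label text>", max_length=20))
```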
5daf22102e38b4d319a00f611b009bae
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 264 | 1.6010 | 2.4276 | 0.5658 | 2.3546 | 2.3099 | 18.9091 | | 2.5052 | 2.0 | 528 | 1.0237 | 2.9016 | 0.3395 | 2.8221 | 2.783 | 18.9673 | | 2.5052 | 3.0 | 792 | 0.7793 | 2.962 | 0.3091 | 2.9375 | 2.8786 | 18.9588 | | 1.1552 | 4.0 | 1056 | 0.6530 | 2.98 | 0.4375 | 2.9584 | 2.8711 | 18.9588 | | 1.1552 | 5.0 | 1320 | 0.5863 | 3.0023 | 0.5882 | 2.987 | 2.9155 | 18.9588 | | 0.8659 | 6.0 | 1584 | 0.5428 | 3.0576 | 0.8019 | 3.0494 | 2.9989 | 18.9716 | | 0.8659 | 7.0 | 1848 | 0.5145 | 3.0808 | 0.9476 | 3.0719 | 3.0237 | 18.9716 | | 0.747 | 8.0 | 2112 | 0.4962 | 3.0748 | 1.0032 | 3.0683 | 3.0359 | 18.9716 | | 0.747 | 9.0 | 2376 | 0.4856 | 3.0702 | 1.0196 | 3.0665 | 3.0328 | 18.9716 | | 0.6987 | 10.0 | 2640 | 0.4823 | 3.0759 | 1.0495 | 3.0758 | 3.0431 | 18.9716 |
3405af088945542fea5866aa559d3cce
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s458 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
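Since the model was fine-tuned with HuggingSound, that library also offers the most direct inference path. In this sketch the Hub namespace is an assumption (the card gives only the model name) and the audio paths are placeholders; inputs should be 16kHz speech:

```python
from huggingsound import SpeechRecognitionModel

# namespace assumed from the HuggingSound author; adjust to the real Hub id
model = SpeechRecognitionModel(
    "jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s458"
)

audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholders
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```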
55a3222429ec0ef0c56ffafa19d03636
apache-2.0
['bert', 'mnli', 'ax', 'glue', 'torchdistill']
false
`bert-base-uncased` fine-tuned on the MNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb). The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the full training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mnli/ce/bert_base_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
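For inference, the checkpoint behaves like any sequence-pair classifier over the three MNLI labels. A sketch, with the model id as a placeholder (the card does not state where the fine-tuned weights are published) and an illustrative premise/hypothesis pair:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "path/to/bert-base-uncased-mnli"  # placeholder for the real Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# MNLI label order varies between checkpoints; read it from the config
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
```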
3d3b10112bb3d5c870fa4c744e8e2db1
apache-2.0
['image-classification', 'generated_from_trainer']
false
vit-base-food101-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.5493 - Accuracy: 0.8539
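A minimal classification sketch; the repo id is a placeholder (the card omits the namespace) and the image path is illustrative. The pipeline handles resizing to the 224x224 input the ViT backbone expects:

```python
from transformers import pipeline

# "path/to/vit-base-food101-demo-v5" is a placeholder for the actual Hub id
classifier = pipeline("image-classification", model="path/to/vit-base-food101-demo-v5")

# accepts a local path, URL, or PIL.Image; returns scored Food-101 classes
print(classifier("my_lunch.jpg", top_k=3))
```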
7656cee38218e86ed7d1019c9d4eeaa9
apache-2.0
['image-classification', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.657 | 1.0 | 4735 | 0.9732 | 0.7459 | | 0.9869 | 2.0 | 9470 | 0.7987 | 0.7884 | | 0.71 | 3.0 | 14205 | 0.6364 | 0.8311 | | 0.4961 | 4.0 | 18940 | 0.5595 | 0.8487 |
aa57e5c1c7e7a747ed586b89525cb1d8