Dataset columns: license (string, 2–30 chars) · tags (string, 2–513 chars) · is_nc (bool, 1 class) · readme_section (string, 201–597k chars) · hash (string, 32 chars)
apache-2.0
['generated_from_trainer']
false
# bert-base-uncased-ner-conll2003

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0602
- Precision: 0.9342
- Recall: 0.9536
- F1: 0.9438
- Accuracy: 0.9870
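As a sanity check, F1 is the harmonic mean of precision and recall, so the reported numbers should be mutually consistent. A quick sketch using the values from the card above:

```python
# F1 as the harmonic mean of precision and recall.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

precision, recall = 0.9342, 0.9536
print(round(f1_score(precision, recall), 4))  # 0.9438, matching the reported F1
```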
b9a7ea899da4da463c8f5e4c784f73ae
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0871 | 1.0 | 1756 | 0.0728 | 0.9138 | 0.9275 | 0.9206 | 0.9811 |
| 0.0331 | 2.0 | 3512 | 0.0591 | 0.9311 | 0.9514 | 0.9411 | 0.9866 |
| 0.0173 | 3.0 | 5268 | 0.0602 | 0.9342 | 0.9536 | 0.9438 | 0.9870 |
5f73bd5f3c7b0cf8348d31f8b29fd19e
gpl-3.0
['object-detection', 'computer-vision', 'yolor', 'yolov4']
false
## Yolor Inference

```python
from yolor.helpers import Yolor

model = Yolor(
    cfg='yolor/cfg/yolor_w6.cfg',
    weights='kadirnar/yolor-w6',
    imgsz=640,
    device='cuda:0'
)

model.classes = None
model.conf = 0.25
model.iou_ = 0.45
model.show = False
model.save = True

model.predict('yolor/data/highway.jpg')
```
871206a75e9ed1331ed1c577927d9b0b
creativeml-openrail-m
['text-to-image']
false
# Asoon Dreambooth SD Model

Dreambooth model trained by AlekseyCalvin with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v2-1-768 base model. You can run your new concept via `diffusers` with this [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

To generate custom images of my primary public self, one known as A.C.T. SOON®, use "asoon" or "asoon person" in your Stable Diffusion prompt (implemented via this model only). Checkpoints herein were trained based on SD 2.1.

Sample pictures:

![asoon 12](https://huggingface.co/AlekseyCalvin/asoon-dreambooth-sd-model/resolve/main/concept_images/asoon_%2812%29.jpg)
![asoon 13](https://huggingface.co/AlekseyCalvin/asoon-dreambooth-sd-model/resolve/main/concept_images/asoon_%2813%29.jpg)
![asoon 14](https://huggingface.co/AlekseyCalvin/asoon-dreambooth-sd-model/resolve/main/concept_images/asoon_%2814%29.jpg)
![asoon 15](https://huggingface.co/AlekseyCalvin/asoon-dreambooth-sd-model/resolve/main/concept_images/asoon_%2815%29.jpg)
![asoon 11](https://huggingface.co/AlekseyCalvin/asoon-dreambooth-sd-model/resolve/main/concept_images/asoon_%2811%29.jpg)
![asoon 16](https://huggingface.co/AlekseyCalvin/asoon-dreambooth-sd-model/resolve/main/concept_images/asoon_%2816%29.jpg)
![asoon 17](https://huggingface.co/AlekseyCalvin/asoon-dreambooth-sd-model/resolve/main/concept_images/asoon_%2817%29.jpg)
![asoon 18](https://huggingface.co/AlekseyCalvin/asoon-dreambooth-sd-model/resolve/main/concept_images/asoon_%2818%29.jpg)
![asoon 19](https://huggingface.co/AlekseyCalvin/asoon-dreambooth-sd-model/resolve/main/concept_images/asoon_%2819%29.jpg)
![asoon 20](https://huggingface.co/AlekseyCalvin/asoon-dreambooth-sd-model/resolve/main/concept_images/asoon_%2820%29.jpg)
d99cf76e6d60c7cd16b8d3bb2f2daaab
mit
['generated_from_trainer']
false
# bart-large-cnn-summarizer_03

This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.0999
- Rouge1: 51.6222
- Rouge2: 33.428
- Rougel: 40.2093
- Rougelsum: 47.7154
- Gen Len: 102.7962
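The ROUGE scores above measure n-gram overlap between generated and reference summaries. For illustration, here is a minimal sketch of ROUGE-1 (unigram overlap F-measure); it is a simplification of the official scorer, which also applies stemming and bootstrap aggregation:

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    # Clipped unigram overlap between candidate and reference summaries.
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(cand[w], ref[w]) for w in cand)
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f("the cat sat", "the cat sat on the mat"), 4))  # 0.6667
```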
d188a32e3f81594bf0596267e9f90041
mit
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 0.9348 | 1.0 | 17166 | 0.9969 | 51.0763 | 32.9497 | 39.6851 | 47.0744 | 99.664 |
| 0.7335 | 2.0 | 34332 | 1.0019 | 51.8002 | 33.8081 | 40.5887 | 47.9445 | 99.7884 |
| 0.471 | 3.0 | 51498 | 1.0999 | 51.6222 | 33.428 | 40.2093 | 47.7154 | 102.7962 |
2b559121004873f373884316cc4a1965
apache-2.0
['italian', 'sequence-to-sequence', 'question-generation', 'squad_it', 'text2text-generation']
false
# mT5 Small for Question Generation 💭 🇮🇹

This repository contains the checkpoint for the [mT5 Small](https://huggingface.co/google/mt5-small) model fine-tuned on question generation on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).

A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
48b415caa4dc03f55ab7a2c42cc1dc7f
apache-2.0
['italian', 'sequence-to-sequence', 'question-generation', 'squad_it', 'text2text-generation']
false
## Using the model

Model checkpoints are available for usage in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model='it5/mt5-small-question-generation')
qg("Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una \"grande pestilenza nell' aria\". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola \"peste\" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia")
>>> [{"generated_text": "Per chi è stato redatto il referto medico?"}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/mt5-small-question-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-small-question-generation")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
25702271b8929ce86144b6d0b36c69f1
apache-2.0
['generated_from_keras_callback']
false
# krm/mt5-small-OrangeSum-Summarizer

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 3.7771
- Validation Loss: 2.5727
- Epoch: 7
239f9e73fa47ec1bbffc48e93eb67f9f
apache-2.0
['generated_from_keras_callback']
false
## Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 5000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
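With `power: 1.0`, the PolynomialDecay schedule above is simply a linear ramp from the initial learning rate to 0 over `decay_steps`. A standalone sketch of the schedule (a re-implementation for illustration, not the Keras class itself):

```python
def polynomial_decay(step, initial_lr=5.6e-05, decay_steps=5000,
                     end_lr=0.0, power=1.0):
    # Keras-style polynomial decay; the step is clamped at decay_steps.
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))     # 5.6e-05
print(polynomial_decay(2500))  # 2.8e-05 (halfway through the linear ramp)
print(polynomial_decay(5000))  # 0.0
```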
49ea8e38831d71ca58ef3931c03fa040
apache-2.0
['generated_from_keras_callback']
false
## Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.8346 | 3.2199 | 0 |
| 5.1020 | 2.8619 | 1 |
| 4.5632 | 2.7564 | 2 |
| 4.2564 | 2.6726 | 3 |
| 4.0501 | 2.6300 | 4 |
| 3.9185 | 2.5930 | 5 |
| 3.8209 | 2.5792 | 6 |
| 3.7771 | 2.5727 | 7 |
667b8e4e935a701ef04078719faac836
apache-2.0
['t5', 'seq2seq']
false
# t5-eff-large-8l-dutch-english-cased

A [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) sequence-to-sequence model pre-trained from scratch on [cleaned Dutch 🇳🇱🇧🇪 mC4 and cleaned English 🇬🇧 C4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned).

This **t5 eff** model has **334M** parameters. It was pre-trained with a masked language modeling (denoising token span corruption) objective on the dataset `mc4_nl_cleaned`, config `large_en_nl`, for **1** epoch and a duration of **3d 23h**, with a sequence length of **512**, batch size **128** and **851850** total steps (**56B** tokens). Pre-training evaluation loss and accuracy are **1.15** and **0.74**. Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation.

* Pre-trained T5 models need to be fine-tuned before they can be used for downstream tasks; therefore the inference widget on the right has been turned off.
* For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application!

Please refer to the original T5 paper and the Scale Efficiently paper for more information about the T5 architecture and configs, though it must be noted that this model (t5-eff-large-8l-dutch-english-cased) is unrelated to these projects and is not an 'official' checkpoint.

* **[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)** by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*.
* **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
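The span-corruption objective mentioned above replaces contiguous token spans with sentinel tokens and trains the model to reconstruct them. A toy illustration with fixed spans (sentinel names follow T5's `<extra_id_N>` convention; this is not the actual preprocessing code, which samples spans randomly):

```python
def span_corrupt(tokens, spans):
    # spans: sorted, non-overlapping (start, end) index pairs to mask out.
    inp, tgt, cursor = [], [], 0
    for sid, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{sid}>"
        inp.extend(tokens[cursor:start])  # keep text before the span
        inp.append(sentinel)              # replace the span with a sentinel
        tgt.append(sentinel)              # target: sentinel + the masked tokens
        tgt.extend(tokens[start:end])
        cursor = end
    inp.extend(tokens[cursor:])
    return inp, tgt

tokens = "de kat zat op de mat".split()
inp, tgt = span_corrupt(tokens, [(1, 2), (4, 6)])
print(inp)  # ['de', '<extra_id_0>', 'zat', 'op', '<extra_id_1>']
print(tgt)  # ['<extra_id_0>', 'kat', '<extra_id_1>', 'de', 'mat']
```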
b369f6748d462d366f2a2fc9f3380e2d
mit
['generated_from_trainer']
false
# luganda-ner-v1

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the lg-ner dataset. It achieves the following results on the evaluation set:
- Loss: 1.0530
- Precision: 0.2902
- Recall: 0.2772
- F1: 0.2835
- Accuracy: 0.7298
aa5ddea1c32649c3d4e52eadd63f5ea5
mit
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 25 | 1.2878 | 0.0 | 0.0 | 0.0 | 0.7271 |
| No log | 2.0 | 50 | 1.2373 | 0.0 | 0.0 | 0.0 | 0.7271 |
| No log | 3.0 | 75 | 1.2309 | 0.3542 | 0.1683 | 0.2282 | 0.7244 |
| No log | 4.0 | 100 | 1.1505 | 0.2712 | 0.2376 | 0.2533 | 0.7183 |
| No log | 5.0 | 125 | 1.1360 | 0.2579 | 0.2426 | 0.25 | 0.7170 |
| No log | 6.0 | 150 | 1.0932 | 0.3108 | 0.2277 | 0.2629 | 0.7338 |
| No log | 7.0 | 175 | 1.0761 | 0.2989 | 0.2574 | 0.2766 | 0.7298 |
| No log | 8.0 | 200 | 1.0645 | 0.2805 | 0.3069 | 0.2931 | 0.7244 |
| No log | 9.0 | 225 | 1.0577 | 0.3022 | 0.2723 | 0.2865 | 0.7325 |
| No log | 10.0 | 250 | 1.0530 | 0.2902 | 0.2772 | 0.2835 | 0.7298 |
3cd16e5a0a8883c1c8c3b0ed6bd27eab
apache-2.0
[]
false
|‑lm|✔|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||
f833e58a4ece4b4660377b7017d9a7d1
apache-2.0
[]
false
# DistilBERT base cased distilled SQuAD

> Note: This model is a clone of [`distilbert-base-cased-distilled-squad`](https://huggingface.co/distilbert-base-cased-distilled-squad) for internal testing.

This model is a fine-tuned checkpoint of [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased), fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1. This model reaches an F1 score of 87.1 on the dev set (for comparison, the BERT bert-base-cased version reaches an F1 score of 88.7).

Using the question answering `Evaluator` from `evaluate` gives:

```
{'exact_match': 79.54588457899716,
 'f1': 86.81181300991533,
 'latency_in_seconds': 0.008683730778997168,
 'samples_per_second': 115.15787689073015,
 'total_time_in_seconds': 91.78703433400005}
```

which is roughly consistent with the official score.
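The `exact_match` and `f1` numbers follow the SQuAD evaluation convention: exact match checks string equality after normalization, while F1 is a token-level overlap between predicted and gold answer spans. A simplified sketch of the token F1 (the official script additionally lowercases and strips articles and punctuation):

```python
from collections import Counter

def squad_token_f1(prediction: str, ground_truth: str) -> float:
    # Token-level F1 between a predicted and a gold answer span.
    pred, gold = prediction.split(), ground_truth.split()
    common = Counter(pred) & Counter(gold)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred)
    recall = num_same / len(gold)
    return 2 * precision * recall / (precision + recall)

print(round(squad_token_f1("the Eiffel Tower", "Eiffel Tower"), 4))  # 0.8
```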
437d2399ceddc2bc2bfc3b35c65b5ff7
apache-2.0
['generated_from_trainer']
false
# bert-base-uncased-finetuned-Pisa

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.1132
dd3cdd1147ccfb6281c808cbeb5a62a3
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 1.4146 |
| No log | 2.0 | 18 | 1.1013 |
| No log | 3.0 | 27 | 1.1237 |
e8055a4f96ec70eb3f2e3e1463caa19c
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
# Test Wav2Vec2 with Egyptian Arabic

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Egyptian Arabic using the [arabicspeech.org MGB-3](https://arabicspeech.org/mgb3-asr/) corpus. When using this model, make sure that your speech input is sampled at 16kHz.
198c2bc596082a733b22570b42c55ebc
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

dataset = load_dataset("arabic_speech_corpus", split="test")

processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec_test")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec_test")

# resample the 48 kHz source audio to the 16 kHz expected by the model
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
6dc02699b0217939534e60fee980d64c
cc-by-4.0
['espnet', 'audio', 'diarization']
false
## Demo: How to use in ESPnet2

```bash
cd espnet
git checkout 0cabe65afd362122e77b04e2e967986a91de0fd8
pip install -e .
cd egs2/callhome/diar1
./run.sh --skip_data_prep false --skip_train true --download_model YushiUeda/callhome_adapt_real
```

<!-- Generated by scripts/utils/show_diar_result.sh -->
1b7dfc98b931497b29a682a584dca55a
cc-by-4.0
['espnet', 'audio', 'diarization']
false
## Environments

- date: `Mon Jun 20 10:30:23 EDT 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 202205`
- pytorch version: `pytorch 1.9.1+cu102`
- Git hash: `fc62b1ce3e50c5ef8a2ac8cedb0d92ac41df54ca`
- Commit date: `Thu Jun 9 16:29:52 2022 +0900`
d357b0f817ef699cf9203669fd5d1f1b
cc-by-4.0
['espnet', 'audio', 'diarization']
false
## DER

diarized_callhome2_spkall

|threshold_median_collar|DER|
|---|---|
|result_th0.3_med11_collar0.25|22.29|
|result_th0.3_med1_collar0.25|23.27|
|result_th0.4_med11_collar0.25|19.85|
|result_th0.4_med1_collar0.25|20.80|
|result_th0.5_med11_collar0.25|19.26|
|result_th0.5_med1_collar0.25|20.18|
|result_th0.6_med11_collar0.25|20.24|
|result_th0.6_med1_collar0.25|21.08|
|result_th0.7_med11_collar0.25|22.38|
|result_th0.7_med1_collar0.25|23.17|
4ca889f324e765a1cead250346a0d0e4
cc-by-4.0
['espnet', 'audio', 'diarization']
false
## DIAR config

<details><summary>expand</summary>

```
config: conf/tuning/train_diar_eda_adapt.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/diar_train_diar_eda_adapt_real_lr0001
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
-   - valid
    - acc
    - max
-   - train
    - acc
    - max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 16
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- exp/diar_train_diar_eda_adapt_simu/latest.pth
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 1
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: sorted
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
-   - dump/raw/callhome1_spkall/wav.scp
    - speech
    - sound
-   - dump/raw/callhome1_spkall/espnet_rttm
    - spk_labels
    - rttm
valid_data_path_and_name_and_type:
-   - dump/raw/callhome2_spkall/wav.scp
    - speech
    - sound
-   - dump/raw/callhome2_spkall/espnet_rttm
    - spk_labels
    - rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
    lr: 0.001
scheduler: null
scheduler_conf: {}
num_spk: 7
init: null
input_size: null
model_conf:
    attractor_weight: 1.0
use_preprocessor: true
frontend: default
frontend_conf:
    fs: 8k
    hop_length: 128
specaug: specaug
specaug_conf:
    apply_time_warp: false
    apply_freq_mask: true
    freq_mask_width_range:
    - 0
    - 30
    num_freq_mask: 2
    apply_time_mask: true
    time_mask_width_range:
    - 0
    - 40
    num_time_mask: 2
normalize: global_mvn
normalize_conf:
    stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
    input_layer: conv2d
    num_blocks: 4
    linear_units: 512
    dropout_rate: 0.1
    output_size: 256
    attention_heads: 4
    attention_dropout_rate: 0.1
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf:
    win_length: 1024
    hop_length: 512
attractor: rnn
attractor_conf:
    unit: 256
    layer: 1
    dropout: 0.0
attractor_grad: false
required:
- output_dir
version: '202204'
distributed: false
```

</details>
de4c5f56de80aee29cef8bd194724ce5
apache-2.0
['generated_from_trainer']
false
# distilbert-base-uncased-distilled-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set:
- Loss: 0.1004
- Accuracy: 0.9432
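The "distilled" in the name refers to knowledge distillation: the student is trained to match a teacher's softened output distribution. The card does not include the training code, so here is a dependency-free sketch of the soft-target loss (temperature-scaled KL divergence, scaled by T² as in Hinton et al.):

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# Identical logits give zero loss; diverging logits give a positive loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))       # 0.0
print(distillation_loss([0.0, 0.0, 0.0], [2.0, 1.0, 0.1]) > 0)   # True
```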
b1db864a38c6c7185d020d5e969af54e
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9044 | 1.0 | 318 | 0.5748 | 0.7390 |
| 0.4491 | 2.0 | 636 | 0.2876 | 0.88 |
| 0.2538 | 3.0 | 954 | 0.1813 | 0.9229 |
| 0.1765 | 4.0 | 1272 | 0.1388 | 0.9294 |
| 0.1422 | 5.0 | 1590 | 0.1214 | 0.9345 |
| 0.1243 | 6.0 | 1908 | 0.1114 | 0.9406 |
| 0.1138 | 7.0 | 2226 | 0.1066 | 0.94 |
| 0.1076 | 8.0 | 2544 | 0.1030 | 0.9423 |
| 0.104 | 9.0 | 2862 | 0.1010 | 0.9419 |
| 0.1019 | 10.0 | 3180 | 0.1004 | 0.9432 |
7bbb5137879a1e0baa1f9253e2070710
apache-2.0
['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'roberta', 'pytorch']
false
## Model description

![SikuBERT](https://raw.githubusercontent.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing/main/appendix/sikubert.png)

Digital humanities research needs the support of large-scale corpora and high-performance natural language processing tools for ancient Chinese. Pre-trained language models have greatly improved the accuracy of text mining in English and modern Chinese texts, but there is an urgent need for a pre-trained model dedicated to the automatic processing of ancient texts. Using the verified, high-quality "Siku Quanshu" full-text corpus as the training set and the BERT deep language model architecture as the basis, we constructed the SikuBERT and SikuRoBERTa pre-trained language models for intelligent processing tasks on ancient Chinese.
78093a1cfa4f3c20c2a0fa7de67e7031
apache-2.0
['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'roberta', 'pytorch']
false
## How to use

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikuroberta")
model = AutoModel.from_pretrained("SIKU-BERT/sikuroberta")
```
19d159242c75d7d25697bf34db215e7f
apache-2.0
['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'roberta', 'pytorch']
false
## About Us

We are from Nanjing Agricultural University.

> Created by SIKU-BERT [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing)
c99f90a7a7dbef462a88eb419e66e445
mit
['generated_from_trainer']
false
# xlm-roberta-base-misogyny-sexism-out-of-sample-test-opt-bal

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.2811
- Accuracy: 0.6022
- F1: 0.5689
- Precision: 0.5624
- Recall: 0.5756
- Mae: 0.3978
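Note that the reported Mae is exactly 1 − Accuracy (0.3978 = 1 − 0.6022): with binary 0/1 labels, mean absolute error and error rate coincide. A quick check on toy predictions:

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    # For 0/1 labels, |t - p| is 1 exactly on misclassified examples,
    # so MAE equals the error rate.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 0]
print(accuracy(y_true, y_pred))  # 0.6
print(mae(y_true, y_pred))       # 0.4
```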
ae230e58775b4e23908282fddbd29071
mit
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.4434 | 1.0 | 1100 | 0.8792 | 0.5752 | 0.4897 | 0.5414 | 0.4469 | 0.4248 |
| 0.3592 | 2.0 | 2200 | 1.0511 | 0.5882 | 0.4597 | 0.5723 | 0.3841 | 0.4118 |
| 0.3351 | 3.0 | 3300 | 0.8862 | 0.5639 | 0.5437 | 0.5199 | 0.5698 | 0.4361 |
| 0.2649 | 4.0 | 4400 | 1.5065 | 0.5931 | 0.5467 | 0.5556 | 0.5381 | 0.4069 |
| 0.2252 | 5.0 | 5500 | 1.2637 | 0.5766 | 0.6084 | 0.5261 | 0.7212 | 0.4234 |
| 0.2234 | 6.0 | 6600 | 1.6854 | 0.5832 | 0.5419 | 0.5432 | 0.5405 | 0.4168 |
| 0.2288 | 7.0 | 7700 | 1.7353 | 0.5985 | 0.5917 | 0.5517 | 0.6380 | 0.4015 |
| 0.2008 | 8.0 | 8800 | 1.8444 | 0.6152 | 0.5693 | 0.5814 | 0.5577 | 0.3848 |
| 0.1765 | 9.0 | 9900 | 2.4235 | 0.5833 | 0.5508 | 0.5417 | 0.5601 | 0.4167 |
| 0.2334 | 10.0 | 11000 | 2.0034 | 0.6002 | 0.5635 | 0.5611 | 0.5659 | 0.3998 |
| 0.1561 | 11.0 | 12100 | 2.3651 | 0.5897 | 0.5772 | 0.5445 | 0.6142 | 0.4103 |
| 0.1759 | 12.0 | 13200 | 2.8745 | 0.5855 | 0.5742 | 0.5402 | 0.6128 | 0.4145 |
| 0.1306 | 13.0 | 14300 | 2.7506 | 0.5904 | 0.5830 | 0.5442 | 0.6278 | 0.4096 |
| 0.1443 | 14.0 | 15400 | 2.7292 | 0.6061 | 0.5549 | 0.5725 | 0.5383 | 0.3939 |
| 0.1124 | 15.0 | 16500 | 2.6693 | 0.6119 | 0.5744 | 0.5745 | 0.5742 | 0.3881 |
| 0.0886 | 16.0 | 17600 | 2.8332 | 0.6052 | 0.5708 | 0.5661 | 0.5756 | 0.3948 |
| 0.078 | 17.0 | 18700 | 3.0623 | 0.6054 | 0.5693 | 0.5668 | 0.5718 | 0.3946 |
| 0.0955 | 18.0 | 19800 | 3.1543 | 0.5965 | 0.5725 | 0.5538 | 0.5925 | 0.4035 |
| 0.0689 | 19.0 | 20900 | 3.3443 | 0.5971 | 0.5763 | 0.5537 | 0.6009 | 0.4029 |
| 0.0669 | 20.0 | 22000 | 3.2811 | 0.6022 | 0.5689 | 0.5624 | 0.5756 | 0.3978 |
3199c428ae0c3e517478e3cfd0118400
apache-2.0
['generated_from_trainer']
false
# swin-tiny-patch4-window7-224-finetuned-skin-cancer

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
e7e54218cd1a763318b11f1d449ac002
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
# `kan-bayashi/libritts_xvector_vits`

♻️ Imported from https://zenodo.org/record/5521416/

This model was trained by kan-bayashi using the libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
748f8e12c437aa5e3e8e21330cba7f9a
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
# Tron Legacy Diffusion

This is a fine-tuned Stable Diffusion model (based on v1.5) trained on screenshots from the film **_Tron: Legacy (2010)_**.

Use the token **_trnlgcy_** in your prompts to use the style.

_Download the ckpt file from the "files and versions" tab into the Stable Diffusion models folder of your web-ui of choice._

**Characters rendered with this model:**

![Character Samples](https://huggingface.co/dallinmackay/Tron-Legacy-diffusion/resolve/main/trnlgcy-preview.jpg)

_prompt and settings used: **[person] in the style of trnlgcy** | **Steps: 25, Sampler: Euler a, CFG scale: 7.5**_

**Landscapes/scenes rendered with this model:**

![Landscape Samples](https://huggingface.co/dallinmackay/Tron-Legacy-diffusion/resolve/main/trnlgcy-preview2.jpg)

_prompt and settings used: **city landscape in the style of trnlgcy** | **Steps: 25, Sampler: Euler a, CFG scale: 7.5**_

This model was trained with Dreambooth training by TheLastBen, using 30 images at 3000 steps.
76e9f410325ae4e19579a14fa4d5a3a0
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)

[![Become A Patreon](https://badgen.net/badge/become/a%20patron/F96854)](https://www.patreon.com/dallinmackay)
68f0ebdd4946ae7f44fbf3cdff7bf28f
cc-by-4.0
['generated_from_trainer']
false
# danish-roberta-botxo-danish-finetuned-hatespeech

This model is for a university project and is uploaded for sharing between students. It was trained on a Danish hate-speech-labeled training set. Feel free to use it, but as of now, we don't promise any good results ;-)

This model is a fine-tuned version of [flax-community/roberta-base-danish](https://huggingface.co/flax-community/roberta-base-danish) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2849
e2c81784ebc229488b13d9ad0d196442
cc-by-4.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 315 | 0.3074 |
| 0.3016 | 2.0 | 630 | 0.3152 |
| 0.3016 | 3.0 | 945 | 0.2849 |
c396df6f263342d4064f8c8295a8fcbe
cc-by-4.0
['japanese', 'ainu']
false
## Examples

| input | output |
|---|---|
|こんにちは|イランカラプテ|
|ありがとうございます|イヤイライケレ|
|熊は神ですか|キムンカムイアナクカムイネヤ?|
|熊は怖いのか|キムンカムイアナクアシトマプネヤ?|
|フクロウは鳥です|イソサンケカムイアナクチカプネ|
|分かりません!|ケラムシカレ!|
|勉強した?|ヤイホノッカエキプネヤ?|
|してないです|クキカソモキ|
|さようなら|アプンノオカヤン|
4647ab20a12438ce6a98cd0abf90c5d6
cc-by-4.0
['japanese', 'ainu']
false
## License

Shield: [![CC BY 4.0][cc-by-shield]][cc-by]

This work is licensed under a [Creative Commons Attribution 4.0 International License][cc-by].

[![CC BY 4.0][cc-by-image]][cc-by]

[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
66754c3934c455640aa7047382c6023e
apache-2.0
['generated_from_trainer']
false
# results

This model is a fine-tuned version of [hfl/chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5193
- F1: 0.9546
65ed472067707d0b2f086875888712a9
apache-2.0
['generated_from_trainer']
false
## Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
85bdeaaa14b667416064bf1194d655cb
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3803 | 1.0 | 1792 | 0.5110 | 0.9546 |
| 0.4129 | 2.0 | 3584 | 0.5256 | 0.9546 |
| 0.4804 | 3.0 | 5376 | 0.5305 | 0.9546 |
| 0.6571 | 4.0 | 7168 | 0.5583 | 0.9546 |
| 0.6605 | 5.0 | 8960 | 0.5193 | 0.9546 |
efad2202197a5594c891cfaceeca9d49
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'speech-emotion-recognition']
false
## Prediction

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor

import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/wav2vec2-xlsr-greek-speech-emotion-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate

# Wav2Vec2ForSpeechClassification is a custom classification head defined in
# the model repository, not a class shipped with the transformers library.
model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```

```python
def speech_file_to_array_fn(path, sampling_rate):
    speech_array, _sampling_rate = torchaudio.load(path)
    resampler = torchaudio.transforms.Resample(_sampling_rate)
    speech = resampler(speech_array).squeeze().numpy()
    return speech


def predict(path, sampling_rate):
    speech = speech_file_to_array_fn(path, sampling_rate)
    inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
    inputs = {key: inputs[key].to(device) for key in inputs}

    with torch.no_grad():
        logits = model(**inputs).logits

    scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
    outputs = [{"Emotion": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"}
               for i, score in enumerate(scores)]
    return outputs
```

```python
path = "/path/to/disgust.wav"
outputs = predict(path, sampling_rate)
```

```bash
[
  {'Emotion': 'anger', 'Score': '0.0%'},
  {'Emotion': 'disgust', 'Score': '99.2%'},
  {'Emotion': 'fear', 'Score': '0.1%'},
  {'Emotion': 'happiness', 'Score': '0.3%'},
  {'Emotion': 'sadness', 'Score': '0.5%'}
]
```
ff4972cc084944807f59d72ee5943e76
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'speech-emotion-recognition']
false
## Evaluation

The following table summarizes the scores obtained by the model overall and for each class.

| Emotions | precision | recall | f1-score | accuracy |
|-----------|-----------|--------|----------|----------|
| anger | 0.92 | 1.00 | 0.96 | |
| disgust | 0.85 | 0.96 | 0.90 | |
| fear | 0.88 | 0.88 | 0.88 | |
| happiness | 0.94 | 0.71 | 0.81 | |
| sadness | 0.96 | 1.00 | 0.98 | |
| Overall | | | | 0.91 |
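As a sanity check on the table above, the macro average of the per-class f1 scores rounds to the same 0.91 reported as the overall score (a derived figure, not an official one):

```python
# Per-class f1 scores copied from the evaluation table above.
f1_per_class = {
    "anger": 0.96, "disgust": 0.90, "fear": 0.88,
    "happiness": 0.81, "sadness": 0.98,
}
macro_f1 = sum(f1_per_class.values()) / len(f1_per_class)
print(round(macro_f1, 2))  # 0.91
```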
5bca26535b1fc8befbe5d312348c5994
apache-2.0
['distilbert', 'needmining']
false
# Finetuned-Distilbert-needmining (uncased)

This model is a finetuned version of the [Distilbert base model](https://huggingface.co/distilbert-base-uncased). It was trained to predict need-containing sentences in Amazon product reviews.
351c7c7d08f6faffbbb04ea4f35954d1
apache-2.0
['distilbert', 'needmining']
false
Intended uses & limitations You can use this model to identify sentences that contain customer needs in user-generated content. This can act as a filtering process to remove uninformative content for market research.
09474094df99c4b4026262a23f0debbb
apache-2.0
['distilbert', 'needmining']
false
How to use You can use this model directly with a pipeline for text classification: ```python >>> from transformers import pipeline >>> classifier = pipeline("text-classification", model="svenstahlmann/finetuned-distilbert-needmining") >>> classifier("the plasic feels super cheap.") [{'label': 'contains need', 'score': 0.9397542476654053}] ```
a8f261c9ff252440f7e8ccc4edff9515
apache-2.0
['distilbert', 'needmining']
false
Training procedure For the training, we used [Population Based Training (PBT)](https://www.deepmind.com/blog/population-based-training-of-neural-networks) and optimized for f1 score on a validation set of 1600 sentences.
42e7e8d4480e55d88496675936c2c824
other
['generated_from_trainer']
false
6.7b-dalio-book-handwritten-io-cosine-6e-6 This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0586 - Accuracy: 0.3412
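Since the evaluation loss is a mean token-level cross-entropy, its exponential gives the model's perplexity — a quick check, assuming the reported loss of 2.0586:

```python
import math

eval_loss = 2.0586  # reported validation loss (mean token cross-entropy)
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # → 7.83
```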
e15bb3fe038af290432478198d8c3591
other
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 1.0
b0ed1ab502184c7ec0f74d16bca3c0fe
other
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6377 | 0.11 | 6 | 2.4688 | 0.3016 | | 2.5046 | 0.21 | 12 | 2.3848 | 0.3096 | | 2.4755 | 0.32 | 18 | 2.3223 | 0.3156 | | 2.459 | 0.43 | 24 | 2.2715 | 0.3201 | | 2.3602 | 0.54 | 30 | 2.2246 | 0.3243 | | 2.3829 | 0.64 | 36 | 2.1895 | 0.3275 | | 2.3188 | 0.75 | 42 | 2.1465 | 0.3315 | | 2.2895 | 0.86 | 48 | 2.1035 | 0.3365 | | 2.3062 | 0.96 | 54 | 2.0586 | 0.3412 |
5adffb3072bb572f3f6127d943cc80e4
openrail
[]
false
Model Card for LoRA Based on a 48-image dataset scraped from Danbooru and tagged with the WD1.4 Tagger. Trained for 30 epochs (7200 steps); works best with models and merges based on Anything v3. Not particularly prone to NSFW, as the training dataset was somewhat balanced, but is capable of it. Outfits tend toward Ruby's default colors of red and black unless specified otherwise, especially all kinds of dresses. I also recommend using the Latent upscaler with medium (0.4-0.5) denoise, as it can fix some small inconsistencies like wrong eye color.
58089b26dcf59822ff613621c9f5f9de
openrail
[]
false
Model Description Version 1.0 is currently the only one available and is the least prone to straying from the prompt (a white dress stays white), though it may be slightly inaccurate when depicting Ruby. The best weights seem to be in the range of 0.6 to 0.7, and for best results I recommend adding tags like "grey eyes, red hair, multicolored hair". Higher weights can sometimes lead to facial artifacts and/or weird anatomy. - **Developed by:** DarkSolus - **Model type:** LoRA - **Finetuned from model:** Anything v3
3e5677f845205ec252ab9077454d46ef
openrail
[]
false
How to Get Started with the Model Download the preferred version of the LoRA from the repo. Install Additional Networks extension: 1) via Auto1111's extension manager 2) via GitHub: https://github.com/kohya-ss/sd-webui-additional-networks Reload the UI, and place your downloaded LoRA into: .\stable-diffusion-webui\extensions\sd-webui-additional-networks\models\lora
b7e7d1bf06fa2324f46c0682dc23842a
apache-2.0
['generated_from_trainer']
false
t5-small-med-term-conditional-masking This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6808 - Rouge2 Precision: 0.6855 - Rouge2 Recall: 0.486 - Rouge2 Fmeasure: 0.5507
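Note that the reported F-measure (0.5507) is slightly below the harmonic mean of the reported mean precision and recall (≈0.569); this is expected when Rouge scores are computed per example and then averaged. A sketch of the harmonic-mean relationship, using the aggregate values above:

```python
precision, recall = 0.6855, 0.486  # Rouge2 mean precision/recall from above

# Harmonic mean of the aggregate precision and recall
f = 2 * precision * recall / (precision + recall)
print(round(f, 3))  # → 0.569, vs. the reported per-example average of 0.5507
```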
a9abf355b14fefd86bd6893460300244
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:| | 0.9303 | 1.0 | 15827 | 0.8262 | 0.6603 | 0.4698 | 0.5318 | | 0.8677 | 2.0 | 31654 | 0.7679 | 0.6695 | 0.4762 | 0.539 | | 0.8315 | 3.0 | 47481 | 0.7393 | 0.6741 | 0.4783 | 0.5418 | | 0.7999 | 4.0 | 63308 | 0.7194 | 0.6774 | 0.4811 | 0.5448 | | 0.7746 | 5.0 | 79135 | 0.7059 | 0.6804 | 0.4815 | 0.5459 | | 0.7785 | 6.0 | 94962 | 0.6958 | 0.6827 | 0.4841 | 0.5485 | | 0.7592 | 7.0 | 110789 | 0.6893 | 0.6841 | 0.4849 | 0.5494 | | 0.745 | 8.0 | 126616 | 0.6849 | 0.6846 | 0.4852 | 0.5498 | | 0.7443 | 9.0 | 142443 | 0.6818 | 0.6854 | 0.4865 | 0.551 | | 0.7417 | 10.0 | 158270 | 0.6808 | 0.6855 | 0.486 | 0.5507 |
3fe64a1cf92c22e6f7acf95d67b40118
cc-by-4.0
['question generation']
false
Model Card of `research-backup/t5-base-squadshifts-vanilla-nyt-qg` This model is fine-tuned version of [t5-base](https://huggingface.co/t5-base) for question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: nyt) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
c481805a1be8eebe3dcbfe6b95637fda
cc-by-4.0
['question generation']
false
Overview - **Language model:** [t5-base](https://huggingface.co/t5-base) - **Language:** en - **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (nyt) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
c7286c29667e6bda1e84ba56a42e9846
cc-by-4.0
['question generation']
false
Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation) ```python from lmqg import TransformersQG model = TransformersQG(language="en", model="research-backup/t5-base-squadshifts-vanilla-nyt-qg") questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "research-backup/t5-base-squadshifts-vanilla-nyt-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
e0016542caf84dbdbd92781554ee1938
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-base-squadshifts-vanilla-nyt-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json) | | Score | Type | Dataset | |:-----------|--------:|:-------|:---------------------------------------------------------------------------| | BERTScore | 91.86 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_1 | 22.57 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_2 | 14.54 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_3 | 10.04 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_4 | 7.23 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | METEOR | 22.29 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | MoverScore | 62.68 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | ROUGE_L | 22.67 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
93ecbfe93522fa49c6fbf0aa7ac21808
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squadshifts - dataset_name: nyt - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: t5-base - max_length: 512 - max_length_output: 32 - epoch: 8 - batch: 8 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-base-squadshifts-vanilla-nyt-qg/raw/main/trainer_config.json).
c353b795e31f83f3d350ca19e0f738e2
apache-2.0
['part-of-speech', 'token-classification']
false
XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Marathi This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
a006ede30b070143d44f2d0c7996ddb1
apache-2.0
['part-of-speech', 'token-classification']
false
Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-mr") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-mr") ```
2a5b3e58d04835a7381515deaf422a73
cc-by-sa-4.0
['long-documents']
false
Model description This is a Hierarchical Attention Transformer (HAT) model as presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529). The model has been warm-started re-using the weights of RoBERTa (Liu et al., 2019), BUT has not undergone any continued pre-training. It supports sequences of length up to 4,096. HAT uses hierarchical attention, which is a combination of segment-wise and cross-segment attention operations. You can think of segments as paragraphs or sentences. Note: If you wish to use a fully pre-trained HAT model, you have to use [kiddothe2b/adhoc-hat-base-4096](https://huggingface.co/kiddothe2b/adhoc-hat-base-4096).
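The hierarchical attention can be pictured as splitting the long input into fixed-size segments, attending within each segment, and then attending across segment representations. A minimal sketch of the segmentation step, assuming a hypothetical segment length of 128 tokens (see the paper for the actual configuration):

```python
MAX_LEN = 4096   # maximum supported sequence length
SEG_LEN = 128    # hypothetical segment length, for illustration only

tokens = list(range(MAX_LEN))  # stand-in for token ids

# Segment-wise view: attention is first computed inside each segment ...
segments = [tokens[i:i + SEG_LEN] for i in range(0, len(tokens), SEG_LEN)]
# ... and cross-segment attention then operates over one representative per segment.
print(len(segments))  # → 32 segments of 128 tokens each
```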
3be6da664134b818bc196a9464bd220a
cc-by-sa-4.0
['long-documents']
false
Intended uses & limitations The model is intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=hierarchical-transformer) to look for other versions of HAT, or fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification, or question answering.
29784ce6acd4a6dfdadb99baaccd496a
cc-by-sa-4.0
['long-documents']
false
How to use You can fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/adhoc-hierarchical-transformer-base-4096", trust_remote_code=True) doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/adhoc-hierarchical-transformer-base-4096", trust_remote_code=True) ``` Note: If you wish to use a fully pre-trained HAT model, you have to use [kiddothe2b/hierarchical-transformer-base-4096](https://huggingface.co/kiddothe2b/hierarchical-transformer-base-4096).
d27b4b71eed9937fb3657ccdacd1874e
apache-2.0
['generated_from_trainer']
false
resnet-50-FV2-finetuned-memes This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9263 - Accuracy: 0.6453 - Precision: 0.5728 - Recall: 0.6453 - F1: 0.5964
0418f25a658f93349920c83705a38ac9
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.5763 | 0.99 | 20 | 1.5575 | 0.4281 | 0.2966 | 0.4281 | 0.2669 | | 1.4761 | 1.99 | 40 | 1.4424 | 0.4343 | 0.1886 | 0.4343 | 0.2630 | | 1.3563 | 2.99 | 60 | 1.3240 | 0.4343 | 0.1886 | 0.4343 | 0.2630 | | 1.2824 | 3.99 | 80 | 1.2636 | 0.4389 | 0.3097 | 0.4389 | 0.2734 | | 1.2315 | 4.99 | 100 | 1.2119 | 0.4529 | 0.3236 | 0.4529 | 0.3042 | | 1.1956 | 5.99 | 120 | 1.1764 | 0.4900 | 0.3731 | 0.4900 | 0.3692 | | 1.1452 | 6.99 | 140 | 1.1424 | 0.5147 | 0.3963 | 0.5147 | 0.4090 | | 1.1076 | 7.99 | 160 | 1.1190 | 0.5371 | 0.4121 | 0.5371 | 0.4392 | | 1.0679 | 8.99 | 180 | 1.0825 | 0.5719 | 0.4465 | 0.5719 | 0.4831 | | 1.0432 | 9.99 | 200 | 1.0482 | 0.5750 | 0.5404 | 0.5750 | 0.4930 | | 0.9903 | 10.99 | 220 | 1.0275 | 0.5958 | 0.5459 | 0.5958 | 0.5241 | | 0.9675 | 11.99 | 240 | 1.0145 | 0.6051 | 0.5350 | 0.6051 | 0.5379 | | 0.9335 | 12.99 | 260 | 0.9860 | 0.6175 | 0.5537 | 0.6175 | 0.5527 | | 0.9157 | 13.99 | 280 | 0.9683 | 0.6105 | 0.5386 | 0.6105 | 0.5504 | | 0.8901 | 14.99 | 300 | 0.9558 | 0.6352 | 0.5686 | 0.6352 | 0.5833 | | 0.8722 | 15.99 | 320 | 0.9382 | 0.6345 | 0.5657 | 0.6345 | 0.5807 | | 0.854 | 16.99 | 340 | 0.9322 | 0.6376 | 0.5623 | 0.6376 | 0.5856 | | 0.8494 | 17.99 | 360 | 0.9287 | 0.6422 | 0.6675 | 0.6422 | 0.5918 | | 0.8652 | 18.99 | 380 | 0.9212 | 0.6399 | 0.5640 | 0.6399 | 0.5863 | | 0.846 | 19.99 | 400 | 0.9263 | 0.6453 | 0.5728 | 0.6453 | 0.5964 |
7682f92b9a19806ec7eb2c7f58b5454f
apache-2.0
['summarization', 'generated_from_trainer']
false
bart-base-finetuned-summarization-cnn-ver1.2 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 2.2476 - Bertscore-mean-precision: 0.8904 - Bertscore-mean-recall: 0.8611 - Bertscore-mean-f1: 0.8753 - Bertscore-median-precision: 0.8891 - Bertscore-median-recall: 0.8600 - Bertscore-median-f1: 0.8741
e12db6418c4a490ca043ec1701b4a61c
apache-2.0
['summarization', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
9287973ca7c221cc61373051a8dc1e91
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Bertscore-mean-precision | Bertscore-mean-recall | Bertscore-mean-f1 | Bertscore-median-precision | Bertscore-median-recall | Bertscore-median-f1 | |:-------------:|:-----:|:-----:|:---------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:| | 2.3305 | 1.0 | 5742 | 2.2125 | 0.8845 | 0.8587 | 0.8713 | 0.8840 | 0.8577 | 0.8706 | | 1.7751 | 2.0 | 11484 | 2.2028 | 0.8910 | 0.8616 | 0.8759 | 0.8903 | 0.8603 | 0.8744 | | 1.4564 | 3.0 | 17226 | 2.2476 | 0.8904 | 0.8611 | 0.8753 | 0.8891 | 0.8600 | 0.8741 |
30f86d63c6e48aaa9977229e3708661a
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0587 - Precision: 0.9333 - Recall: 0.9515 - F1: 0.9423 - Accuracy: 0.9871
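As a sanity check, the reported F1 is (up to rounding) the harmonic mean of the reported precision and recall:

```python
precision, recall = 0.9333, 0.9515  # evaluation scores from above

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.9423, matching the reported F1
```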
39d6ef4517572e76c5d242b581015092
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.086 | 1.0 | 1756 | 0.0634 | 0.9186 | 0.9364 | 0.9274 | 0.9829 | | 0.0372 | 2.0 | 3512 | 0.0598 | 0.9328 | 0.9478 | 0.9402 | 0.9860 | | 0.0217 | 3.0 | 5268 | 0.0587 | 0.9333 | 0.9515 | 0.9423 | 0.9871 |
eb5d1f41499b3d847b0973f9c12985ea
apache-2.0
['lexical normalization']
false
Fine-tuned ByT5-small for MultiLexNorm (Croatian version) ![model image](https://github.com/ufal/multilexnorm2021/raw/master/img/overall.png) This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages. Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
73ba140ea7d6fcd1d36f7136d52e6f5a
mit
[]
false
Description A fine-tuned regression model that assigns a functioning level to Dutch sentences describing exercise tolerance functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about exercise tolerance functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
25d02cdb45689fd46b129342487cccd0
mit
[]
false
Functioning levels Level | Meaning ---|--- 5 | MET > 6. Can tolerate jogging, hard exercises, running, climbing stairs fast, sports. 4 | 4 ≤ MET ≤ 6. Can tolerate walking / cycling at a brisk pace, considerable effort (e.g. cycling from 16 km/h), heavy housework. 3 | 3 ≤ MET < 4. Can tolerate walking / cycling at a normal pace, gardening, exercises without equipment. 2 | 2 ≤ MET < 3. Can tolerate walking at a slow to moderate pace, grocery shopping, light housework. 1 | 1 ≤ MET < 2. Can tolerate sitting activities. 0 | 0 ≤ MET < 1. Can physically tolerate only recumbent activities. The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.
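The table can be read as a step function from a MET value to a functioning level — a sketch, with the boundary cases taken directly from the table (note that MET = 6 falls in level 4, since level 5 requires MET strictly greater than 6):

```python
def met_to_level(met: float) -> int:
    """Map a MET (metabolic equivalent) value to the 0-5 functioning level."""
    if met > 6:
        return 5   # jogging, hard exercises, sports
    if met >= 4:
        return 4   # brisk walking/cycling, heavy housework
    if met >= 3:
        return 3   # normal-pace walking/cycling, gardening
    if met >= 2:
        return 2   # slow-to-moderate walking, light housework
    if met >= 1:
        return 1   # sitting activities
    return 0       # recumbent activities only

print(met_to_level(6.0), met_to_level(6.5))  # → 4 5
```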
46cbfe2cbbeb514bbfa4c81a2f94c7ff
mit
[]
false
How to use To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library: ``` import numpy as np from simpletransformers.classification import ClassificationModel model = ClassificationModel( 'roberta', 'CLTL/icf-levels-ins', use_cuda=False, ) example = 'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona' _, raw_outputs = model.predict([example]) predictions = np.squeeze(raw_outputs) ``` The prediction on the example is: ``` 3.13 ``` The raw outputs look like this: ``` [[3.1300993]] ```
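Since the regressor can produce values slightly outside the 0-5 scale, a simple post-processing step is to round and clip the raw output to the nearest valid level — a sketch (this rounding rule is an assumption for illustration, not part of the released model):

```python
def to_level(raw_output: float) -> int:
    """Round a raw regression output and clip it to the 0-5 level scale."""
    return int(max(0, min(5, round(raw_output))))

print(to_level(3.1300993), to_level(5.2))  # → 3 5
```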
57d7d5cd3442572c0280b49fdef63e3d
mit
[]
false
Evaluation results The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). | | Sentence-level | Note-level |---|---|--- mean absolute error | 0.69 | 0.61 mean squared error | 0.80 | 0.64 root mean squared error | 0.89 | 0.80
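As the table suggests, the root mean squared error is simply the square root of the mean squared error — a quick consistency check on the reported values:

```python
import math

mse = {"sentence": 0.80, "note": 0.64}  # mean squared error from the table

for unit, value in mse.items():
    print(unit, round(math.sqrt(value), 2))  # sentence → 0.89, note → 0.8
```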
240edb40579afbebb099278fccecc400
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xlsr-53-Total_2e-4_2 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2733 - Wer: 0.2116
841179107c85286ae71f33b55201c1f0
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.2741 | 0.1 | 200 | 2.9070 | 0.9707 | | 2.034 | 0.2 | 400 | 0.7240 | 0.6798 | | 1.0037 | 0.3 | 600 | 0.5651 | 0.5368 | | 0.8834 | 0.4 | 800 | 0.4709 | 0.4669 | | 0.7973 | 0.5 | 1000 | 0.4305 | 0.4261 | | 0.7489 | 0.6 | 1200 | 0.4017 | 0.3763 | | 0.7507 | 0.7 | 1400 | 0.3662 | 0.3481 | | 0.7108 | 0.8 | 1600 | 0.3604 | 0.3513 | | 0.7151 | 0.9 | 1800 | 0.3563 | 0.3406 | | 0.6755 | 1.0 | 2000 | 0.3365 | 0.3210 | | 0.6038 | 1.1 | 2200 | 0.3394 | 0.3053 | | 0.6109 | 1.2 | 2400 | 0.3179 | 0.2844 | | 0.5999 | 1.3 | 2600 | 0.3166 | 0.2773 | | 0.6291 | 1.4 | 2800 | 0.3134 | 0.2733 | | 0.626 | 1.5 | 3000 | 0.3060 | 0.2690 | | 0.6188 | 1.6 | 3200 | 0.3038 | 0.2644 | | 0.5757 | 1.7 | 3400 | 0.3015 | 0.2566 | | 0.5943 | 1.8 | 3600 | 0.2925 | 0.2494 | | 0.6043 | 1.9 | 3800 | 0.2858 | 0.2491 | | 0.5874 | 2.0 | 4000 | 0.2874 | 0.2452 | | 0.5263 | 2.1 | 4200 | 0.2800 | 0.2364 | | 0.5282 | 2.2 | 4400 | 0.2848 | 0.2387 | | 0.4953 | 2.3 | 4600 | 0.2793 | 0.2360 | | 0.5428 | 2.4 | 4800 | 0.2863 | 0.2414 | | 0.5618 | 2.5 | 5000 | 0.2788 | 0.2350 | | 0.5395 | 2.6 | 5200 | 0.2765 | 0.2325 | | 0.5178 | 2.7 | 5400 | 0.2787 | 0.2351 | | 0.5264 | 2.8 | 5600 | 0.2755 | 0.2312 | | 0.5222 | 2.9 | 5800 | 0.2692 | 0.2258 | | 0.5184 | 3.0 | 6000 | 0.2681 | 0.2242 | | 0.4826 | 3.1 | 6200 | 0.2736 | 0.2224 | | 0.479 | 3.2 | 6400 | 0.2896 | 0.2353 | | 0.4938 | 3.3 | 6600 | 0.2744 | 0.2252 | | 0.4772 | 3.4 | 6800 | 0.2735 | 0.2242 | | 0.4831 | 3.5 | 7000 | 0.2721 | 0.2225 | | 0.4869 | 3.6 | 7200 | 0.2710 | 0.2194 | | 0.4515 | 3.7 | 7400 | 0.2692 | 0.2196 | | 0.4732 | 3.8 | 7600 | 0.2729 | 0.2269 | | 0.4683 | 3.9 | 7800 | 0.2713 | 0.2211 | | 0.4674 | 4.0 | 8000 | 0.2642 | 0.2116 | | 0.4239 | 4.1 | 8200 | 0.2773 | 0.2176 | | 0.4306 | 4.2 | 8400 | 0.2779 | 0.2191 | | 0.441 | 4.3 | 8600 | 0.2758 | 0.2136 | | 0.4343 | 4.4 | 8800 | 0.2797 | 0.2203 | | 0.4059 | 4.5 | 9000 | 
0.2763 | 0.2159 | | 0.4399 | 4.6 | 9200 | 0.2755 | 0.2123 | | 0.4131 | 4.7 | 9400 | 0.2741 | 0.2124 | | 0.4331 | 4.8 | 9600 | 0.2728 | 0.2101 | | 0.4288 | 4.9 | 9800 | 0.2730 | 0.2110 | | 0.4341 | 5.0 | 10000 | 0.2733 | 0.2116 |
9269bbb01d7cda96c0bd88a78c5b2e7f
mit
['keyphrase-extraction']
false
🔑 Keyphrase Extraction Model: KBIR-KPCrowd Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time ⏳. Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text.
48ba9167928337a07190ae6f654f4f4d
mit
['keyphrase-extraction']
false
📓 Model Description This model uses [KBIR](https://huggingface.co/bloomberg/KBIR) as its base model and fine-tunes it on the [KPCrowd dataset](https://huggingface.co/datasets/midas/kpcrowd). KBIR or Keyphrase Boundary Infilling with Replacement is a pre-trained model which utilizes a multi-task learning setup for optimizing a combined loss of Masked Language Modeling (MLM), Keyphrase Boundary Infilling (KBI) and Keyphrase Replacement Classification (KRC). You can find more information about the architecture in this [paper](https://arxiv.org/abs/2112.08547). Keyphrase extraction models are transformer models fine-tuned as a token classification problem where each word in the document is classified as being part of a keyphrase or not. | Label | Description | | ----- | ------------------------------- | | B-KEY | At the beginning of a keyphrase | | I-KEY | Inside a keyphrase | | O | Outside a keyphrase | Kulkarni, Mayank, Debanjan Mahata, Ravneet Arora, and Rajarshi Bhowmik. "Learning Rich Representation of Keyphrases from Text." arXiv preprint arXiv:2112.08547 (2021). Sahrawat, Dhruva, Debanjan Mahata, Haimin Zhang, Mayank Kulkarni, Agniv Sharma, Rakesh Gosangi, Amanda Stent, Yaman Kumar, Rajiv Ratn Shah, and Roger Zimmermann. "Keyphrase extraction as sequence labeling using contextualized embeddings." In European Conference on Information Retrieval, pp. 328-335. Springer, Cham, 2020.
137f21f21151c68ef64533a4ea793a26
mit
['keyphrase-extraction']
false
🛑 Limitations * This keyphrase extraction model is very dataset-specific. It's not recommended to use this model for other domains, but you are free to test it out. * Only works for English documents. * The model tends to extract a large number of keyphrases, as the training data is annotated with many keyphrases per document.
af1785cf86fc2eb97b1d2f65a5c56ded
mit
['keyphrase-extraction']
false
Define keyphrase extraction pipeline ```python from transformers import TokenClassificationPipeline, AutoModelForTokenClassification, AutoTokenizer from transformers.pipelines import AggregationStrategy import numpy as np class KeyphraseExtractionPipeline(TokenClassificationPipeline): def __init__(self, model, *args, **kwargs): super().__init__( model=AutoModelForTokenClassification.from_pretrained(model), tokenizer=AutoTokenizer.from_pretrained(model), *args, **kwargs ) def postprocess(self, model_outputs): results = super().postprocess( model_outputs=model_outputs, aggregation_strategy=AggregationStrategy.SIMPLE, ) return np.unique([result.get("word").strip() for result in results]) ``` ```python
8271ec0b0611e4b129d05ed8e6a91b76
mit
['keyphrase-extraction']
false
Inference ```python text = """ Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time. Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text. """.replace("\n", " ") keyphrases = extractor(text) print(keyphrases) ``` ```
a93be4a58f57b02a319c835996c12006
mit
['keyphrase-extraction']
false
Output ``` ['Artificial Intelligence' 'Classical' 'Keyphrase' 'Keyphrase extraction' 'classical' 'content' 'context' 'disadvantage' 'document' 'documents' 'extract' 'extraction' 'extraction process' 'frequency' 'human' 'humans' 'important' 'keyphrases' 'learning' 'linguistic' 'long-term' 'machine learning' 'meaning' 'methods' 'neural approaches' 'occurrence' 'process' 'quickly' 'semantic' 'statistical' 'technique' 'text' 'text analysis' 'understand' 'widely' 'words' 'work'] ```
628aae91a40203a4ac76acc2053485cf
mit
['keyphrase-extraction']
false
📚 Training Dataset [KPCrowd](https://huggingface.co/datasets/midas/kpcrowd) is a broadcast news transcription dataset consisting of 500 English broadcast news stories from 10 different categories (art and culture, business, crime, fashion, health, politics us, politics world, science, sports, technology), with 50 documents per category. The dataset was annotated by multiple annotators who were required to look at the same news story and assign a set of keyphrases from the text itself. You can find more information in the [paper](https://arxiv.org/abs/1306.4606).
92c0deea25cce59b0919df78dc297c0f
mit
['keyphrase-extraction']
false
Dataset parameters ```python dataset_full_name = "midas/kpcrowd" dataset_subset = "raw" dataset_document_column = "document" dataset_biotags_column = "doc_bio_tags" def preprocess_function(all_samples_per_split): tokenized_samples = tokenizer.batch_encode_plus( all_samples_per_split[dataset_document_column], padding="max_length", truncation=True, is_split_into_words=True, max_length=max_length, ) total_adjusted_labels = [] for k in range(0, len(tokenized_samples["input_ids"])): prev_wid = -1 word_ids_list = tokenized_samples.word_ids(batch_index=k) existing_label_ids = all_samples_per_split[dataset_biotags_column][k] i = -1 adjusted_label_ids = [] for wid in word_ids_list: if wid is None: adjusted_label_ids.append(lbl2idx["O"]) elif wid != prev_wid: i = i + 1 adjusted_label_ids.append(lbl2idx[existing_label_ids[i]]) prev_wid = wid else: adjusted_label_ids.append( lbl2idx[ f"{'I' if existing_label_ids[i] == 'B' else existing_label_ids[i]}" ] ) total_adjusted_labels.append(adjusted_label_ids) tokenized_samples["labels"] = total_adjusted_labels return tokenized_samples ```
467f95ab1cc63e31b2879ac596e4225e
mit
['keyphrase-extraction']
false
📝 Evaluation results Traditional evaluation methods are precision, recall and F1-score @k,m, where k stands for the first k predicted keyphrases and m for the average number of predicted keyphrases. The model achieves the following results on the Inspec test set: | Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M | |:-----------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:| | Inspec Test Set | 0.47 | 0.07 | 0.12 | 0.46 | 0.13 | 0.20 | 0.37 | 0.33 | 0.33 |
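A minimal sketch of how precision, recall and F1 @k can be computed for a single document, assuming a toy gold set and a ranked list of predictions (the keyphrases below are illustrative only):

```python
def p_r_f_at_k(predicted, gold, k):
    """Precision/recall/F1 over the first k predicted keyphrases."""
    top_k = predicted[:k]
    true_positives = len(set(top_k) & set(gold))
    p = true_positives / len(top_k)
    r = true_positives / len(gold)
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f

gold = {"keyphrase extraction", "deep learning", "text analysis"}
predicted = ["keyphrase extraction", "machine learning", "text analysis",
             "frequency", "deep learning"]
print(p_r_f_at_k(predicted, gold, 5))  # → (0.6, 1.0, 0.75)
```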
5cee6ac402c81f6457cfc1c2927af21c
mit
[]
false
Model description **bert-base-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: locations (LOC), organizations (ORG), persons (PER) and miscellaneous (MISC). Specifically, this model is a *bert-base-cased* model that was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset. If you'd like to use a larger BERT-large model fine-tuned on the same dataset, a [**bert-large-NER**](https://huggingface.co/dslim/bert-large-NER/) version is also available.
f48f6327d8023c8f682bcf8cef2277da
mit
[]
false
Eval results metric|dev|test -|-|- f1 |95.1 |91.3 precision |95.0 |90.7 recall |95.3 |91.9 The test metrics are a little lower than the official Google BERT results, which encoded document context and experimented with CRF. More on replicating the original results [here](https://github.com/google-research/bert/issues/223).
a24d6b00aed9b9960da522f99c94a73d
other
['generated_from_trainer']
false
nvidia-segformer-b0-finetuned-ade-512-512-finetuned-ISIC17 This model is a fine-tuned version of [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1948 - Mean Iou: 0.8064 - Mean Accuracy: 0.8726 - Overall Accuracy: 0.9381 - Per Category Iou: [0.6841604127643356, 0.9285439643646547] - Per Category Accuracy: [0.7721651141608432, 0.9729809595315688]
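The headline segmentation metrics are simply the averages of the per-category values — a quick check using the reported per-category IoU and accuracy:

```python
per_category_iou = [0.6841604127643356, 0.9285439643646547]
per_category_acc = [0.7721651141608432, 0.9729809595315688]

# Mean IoU / mean accuracy are unweighted averages over the two categories
mean_iou = sum(per_category_iou) / len(per_category_iou)
mean_acc = sum(per_category_acc) / len(per_category_acc)
print(round(mean_iou, 4), round(mean_acc, 4))  # → 0.8064 0.8726
```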
dcfd12e6948ea7ea78f0911b33a3c08d
other
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
a73446e59f616035353e4d048d000bce
other
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------:|:-----------------------------------------:|
| 0.481 | 0.16 | 10 | 0.4235 | 0.6191 | 0.6970 | 0.8761 | [0.3719409076673884, 0.8662862424406493] | [0.42270204900152314, 0.9713331864930521] |
| 0.4147 | 0.32 | 20 | 0.3894 | 0.7067 | 0.8502 | 0.8853 | [0.5464942438498753, 0.8668431573745645] | [0.7965579529885418, 0.9038859083170013] |
| 0.356 | 0.48 | 30 | 0.3148 | 0.7467 | 0.8513 | 0.9107 | [0.5963581593534901, 0.897077797385972] | [0.7603709174964982, 0.9422313184595918] |
| 0.3039 | 0.63 | 40 | 0.3024 | 0.7620 | 0.8671 | 0.9162 | [0.6211722830632663, 0.9028139512386881] | [0.7918407335685692, 0.9422883932404167] |
| 0.2545 | 0.79 | 50 | 0.2849 | 0.7766 | 0.8898 | 0.9201 | [0.6468577863419183, 0.9063792530493855] | [0.8432862096150755, 0.9362151542385662] |
| 0.2635 | 0.95 | 60 | 0.2504 | 0.7828 | 0.8644 | 0.9279 | [0.6487213857926865, 0.9168129696986418] | [0.7671470887645524, 0.9616549114054705] |
| 0.2175 | 1.11 | 70 | 0.2497 | 0.7849 | 0.8682 | 0.9283 | [0.6526705030304356, 0.9171225024239068] | [0.7762677096648272, 0.9602225755678137] |
| 0.2025 | 1.27 | 80 | 0.2400 | 0.7840 | 0.8632 | 0.9288 | [0.6501844204669202, 0.9178944798865282] | [0.7627291445016801, 0.9636411137781736] |
| 0.2035 | 1.43 | 90 | 0.2288 | 0.7931 | 0.8749 | 0.9313 | [0.6657367286733036, 0.9203778068784213] | [0.7885027822639286, 0.9612655167036179] |
| 0.2488 | 1.59 | 100 | 0.2110 | 0.7978 | 0.8719 | 0.9341 | [0.6717638717220313, 0.923859975121704] | [0.7766611302038285, 0.9672003292652145] |
| 0.1954 | 1.75 | 110 | 0.2067 | 0.7962 | 0.8597 | 0.9354 | [0.666599427783381, 0.9258672754383861] | [0.7436428904928473, 0.9757231213956472] |
| 0.1806 | 1.9 | 120 | 0.2047 | 0.7926 | 0.8525 | 0.9349 | [0.6596059897565958, 0.925563006736469] | [0.726197674685608, 0.9787940661520825] |
| 0.161 | 2.06 | 130 | 0.2047 | 0.7903 | 0.8505 | 0.9342 | [0.6558737849234609, 0.9247714617107691] | [0.7223974159771602, 0.9786951901233297] |
| 0.1736 | 2.22 | 140 | 0.2023 | 0.7948 | 0.8588 | 0.9349 | [0.6643652721485811, 0.9252950591002775] | [0.742124317828686, 0.9754152391272543] |
| 0.1947 | 2.38 | 150 | 0.2077 | 0.7985 | 0.8656 | 0.9355 | [0.6712414223331253, 0.9257326708494226] | [0.7585178608332249, 0.9726888331181641] |
| 0.1464 | 2.54 | 160 | 0.1960 | 0.8030 | 0.8680 | 0.9373 | [0.678274892507806, 0.9276935390097538] | [0.7620104248788739, 0.9740685958478499] |
| 0.1644 | 2.7 | 170 | 0.1964 | 0.8064 | 0.8751 | 0.9377 | [0.6847175060674714, 0.9279857318627613] | [0.7791196258677832, 0.9710404169835255] |
| 0.1803 | 2.86 | 180 | 0.1948 | 0.8064 | 0.8726 | 0.9381 | [0.6841604127643356, 0.9285439643646547] | [0.7721651141608432, 0.9729809595315688] |
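The Mean Iou column is simply the unweighted average of the two per-category IoU values. A minimal sanity check against the final row (step 180), not part of the original training code:

```python
# Mean IoU is the unweighted average of the per-category IoU values.
# Values taken from the final row (step 180) of the table.
per_category_iou = [0.6841604127643356, 0.9285439643646547]
mean_iou = sum(per_category_iou) / len(per_category_iou)
print(round(mean_iou, 4))  # matches the reported 0.8064
```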
2c704e8b8bafd0ba3e5beb4bd1dc097e
other
[]
false
DistilROBERTA fine-tuned for bias detection This model is based on [distilroberta-base](https://huggingface.co/distilroberta-base) pretrained weights, with a classification head fine-tuned to classify text into 2 categories (neutral, biased).
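At inference time the classification head emits one logit per class, and the predicted label is the softmax argmax. A minimal sketch of that mapping; the logit values and the label order below are illustrative assumptions, not taken from the model:

```python
import math

def classify(logits, labels=("neutral", "biased")):
    # Softmax over the two class logits, then pick the argmax.
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical logits for one input sentence.
label, confidence = classify([-1.2, 2.3])
print(label)  # "biased", since its logit is larger
```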
64817a0de47f5f97deaef6f6e0c269cd
other
[]
false
Training data The dataset used to fine-tune the model is [wikirev-bias](https://huggingface.co/datasets/valurank/wikirev-bias), extracted from English Wikipedia revisions; see https://github.com/rpryzant/neutralizing-bias for details on the WNC wiki edits corpus.
d175b7c93b981e1eb0534cc0636af73e
apache-2.0
['generated_from_trainer']
false
data-augmentation-whitenoise-timit-2310 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5916 - Wer: 0.3408
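The Wer metric reported above is word error rate: the word-level edit distance between the hypothesis and the reference, divided by the number of reference words. A minimal reference implementation as a sketch, not the evaluation code actually used for this model:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("she had your dark suit", "she had dark suit"))  # 0.2 (one deletion)
```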
cd9407439bcce52fbcbdf7c0b4b89ced
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6731 | 0.67 | 500 | 2.7553 | 1.0 |
| 1.0656 | 1.34 | 1000 | 0.5963 | 0.5297 |
| 0.5065 | 2.01 | 1500 | 0.4898 | 0.4654 |
| 0.3212 | 2.68 | 2000 | 0.4265 | 0.4331 |
| 0.2492 | 3.35 | 2500 | 0.4020 | 0.4073 |
| 0.2116 | 4.02 | 3000 | 0.4152 | 0.3935 |
| 0.1719 | 4.69 | 3500 | 0.4258 | 0.3858 |
| 0.1544 | 5.36 | 4000 | 0.4542 | 0.3818 |
| 0.1474 | 6.03 | 4500 | 0.4612 | 0.3821 |
| 0.1248 | 6.7 | 5000 | 0.4813 | 0.3749 |
| 0.1148 | 7.37 | 5500 | 0.5131 | 0.3772 |
| 0.1145 | 8.04 | 6000 | 0.5383 | 0.3714 |
| 0.0986 | 8.71 | 6500 | 0.5288 | 0.3777 |
| 0.091 | 9.38 | 7000 | 0.5071 | 0.3869 |
| 0.0789 | 10.05 | 7500 | 0.5256 | 0.3819 |
| 0.0747 | 10.72 | 8000 | 0.5287 | 0.3711 |
| 0.0687 | 11.39 | 8500 | 0.5179 | 0.3754 |
| 0.072 | 12.06 | 9000 | 0.7438 | 0.3702 |
| 0.0646 | 12.73 | 9500 | 0.5293 | 0.3777 |
| 0.0621 | 13.4 | 10000 | 0.5536 | 0.3692 |
| 0.0587 | 14.08 | 10500 | 0.5214 | 0.3712 |
| 0.0538 | 14.75 | 11000 | 0.4853 | 0.3694 |
| 0.0614 | 15.42 | 11500 | 0.5439 | 0.3637 |
| 0.0493 | 16.09 | 12000 | 0.5087 | 0.3649 |
| 0.0441 | 16.76 | 12500 | 0.5736 | 0.3621 |
| 0.038 | 17.43 | 13000 | 0.7295 | 0.3650 |
| 0.0397 | 18.1 | 13500 | 0.5722 | 0.3586 |
| 0.0357 | 18.77 | 14000 | 0.5701 | 0.3616 |
| 0.0349 | 19.44 | 14500 | 0.5661 | 0.3599 |
| 0.0318 | 20.11 | 15000 | 0.5346 | 0.3572 |
| 0.0288 | 20.78 | 15500 | 0.6972 | 0.3597 |
| 0.0331 | 21.45 | 16000 | 0.5288 | 0.3576 |
| 0.0304 | 22.12 | 16500 | 0.5813 | 0.3551 |
| 0.0268 | 22.79 | 17000 | 0.5439 | 0.3557 |
| 0.0255 | 23.46 | 17500 | 0.5790 | 0.3531 |
| 0.0244 | 24.13 | 18000 | 0.5794 | 0.3493 |
| 0.0335 | 24.8 | 18500 | 0.5943 | 0.3515 |
| 0.026 | 25.47 | 19000 | 0.5737 | 0.3462 |
| 0.0199 | 26.14 | 19500 | 0.5794 | 0.3469 |
| 0.0213 | 26.81 | 20000 | 0.5955 | 0.3448 |
| 0.0199 | 27.48 | 20500 | 0.5927 | 0.3407 |
| 0.0143 | 28.15 | 21000 | 0.5975 | 0.3415 |
| 0.0167 | 28.82 | 21500 | 0.5835 | 0.3411 |
| 0.0141 | 29.49 | 22000 | 0.5916 | 0.3408 |
78355aefbc616ec630156e22b75f4c48
apache-2.0
['automatic-speech-recognition', 'th']
false
exp_w2v2t_th_wavlm_s108 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
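Because the model expects 16 kHz input, audio recorded at another rate has to be resampled first. A minimal sketch using scipy; the 48 kHz source rate and the synthetic tone standing in for real speech are illustrative assumptions:

```python
import numpy as np
from scipy.signal import resample_poly

src_rate, target_rate = 48_000, 16_000

# One second of a synthetic 440 Hz tone standing in for real speech.
t = np.arange(src_rate) / src_rate
audio = np.sin(2 * np.pi * 440 * t).astype(np.float32)

# Polyphase resampling by the ratio target/src (16000/48000 = 1/3).
resampled = resample_poly(audio, target_rate, src_rate)
print(len(resampled))  # 16000 samples, i.e. one second at 16 kHz
```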
446b9cce9573ad75c451684c4c981ca2
mit
['generated_from_trainer']
false
roberta-base-finetuned-cola This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.9395 - Matthews Correlation: 0.6295
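Matthews correlation, the evaluation metric used for CoLA, is computed from the four confusion-matrix counts. A minimal sketch of the formula; the counts below are made up for illustration and are not results from this model:

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical counts on a toy validation split.
print(round(matthews_corrcoef(tp=40, tn=45, fp=5, fn=10), 4))
```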
06d8e06f8c3e8539e4772bde559505e4