license stringlengths 2 30 | tags stringlengths 2 513 | is_nc bool 1 class | readme_section stringlengths 201 597k | hash stringlengths 32 32 |
|---|---|---|---|---|
apache-2.0 | ['automatic-speech-recognition', 'en'] | false | exp_w2v2r_en_vp-100k_gender_male-8_female-2_s859 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | e69451dc2fa503c0c4bdbf403a8ac34e |
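A minimal inference sketch for this row's model, assuming the HuggingSound API from the linked repository; the full hub id (`jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-8_female-2_s859`) and the audio file name are assumptions, since the excerpt only gives the model name:
```python
from huggingsound import SpeechRecognitionModel

# Assumed hub id; the row only states the model name.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-8_female-2_s859")

# As the card notes, input audio must be sampled at 16 kHz.
transcriptions = model.transcribe(["sample.wav"])
print(transcriptions[0]["transcription"])
```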
apache-2.0 | ['whisper', 'generated_from_trainer'] | false | Whisper Small Turkish This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 tr dataset. It achieves the following results on the evaluation set: - Loss: 0.2799 - Wer: 17.2753 - Cer: 4.5335 | 5b76279c957fce7dfa8fca32d4b3a4b2 |
apache-2.0 | ['whisper', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:| | 0.1044 | 1.07 | 1000 | 0.2777 | 18.4046 | 4.8810 | | 0.0469 | 3.02 | 2000 | 0.2799 | 17.2753 | 4.5335 | | 0.014 | 4.09 | 3000 | 0.3202 | 18.0800 | 4.9039 | | 0.0039 | 6.04 | 4000 | 0.3326 | 18.2964 | 5.0192 | | 0.0022 | 7.11 | 5000 | 0.3453 | 18.0307 | 4.9470 | | db3030036a60cbac90feda8df4d02b48 |
mit | ['generated_from_trainer'] | false | predict-perception-bertino-focus-object This model is a fine-tuned version of [indigo-ai/BERTino](https://huggingface.co/indigo-ai/BERTino) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2766 - R2: 0.5460 | 310bef0e9fa6b56609c1ee47e75b1e7c |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 47 | 189b38413e3abfc77fb8b259d75f7296 |
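Cards like this one report hyperparameters without the training script; a minimal sketch of how they would map onto Hugging Face `TrainingArguments` (illustrative only — model and dataset wiring are omitted because the card does not include them):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    learning_rate=1e-4,               # learning_rate: 0.0001
    per_device_train_batch_size=20,   # train_batch_size: 20
    per_device_eval_batch_size=8,     # eval_batch_size: 8
    seed=1996,
    adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=47,
)
```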
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | R2 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.4798 | 1.0 | 14 | 0.4519 | 0.2581 | | 0.2481 | 2.0 | 28 | 0.3042 | 0.5007 | | 0.12 | 3.0 | 42 | 0.3746 | 0.3851 | | 0.0969 | 4.0 | 56 | 0.3186 | 0.4770 | | 0.0907 | 5.0 | 70 | 0.3727 | 0.3882 | | 0.0673 | 6.0 | 84 | 0.2847 | 0.5327 | | 0.0457 | 7.0 | 98 | 0.3141 | 0.4844 | | 0.0431 | 8.0 | 112 | 0.3369 | 0.4470 | | 0.028 | 9.0 | 126 | 0.3039 | 0.5012 | | 0.0244 | 10.0 | 140 | 0.2964 | 0.5135 | | 0.0201 | 11.0 | 154 | 0.3072 | 0.4958 | | 0.0153 | 12.0 | 168 | 0.3049 | 0.4995 | | 0.0155 | 13.0 | 182 | 0.2924 | 0.5201 | | 0.015 | 14.0 | 196 | 0.2585 | 0.5757 | | 0.0181 | 15.0 | 210 | 0.3258 | 0.4652 | | 0.0136 | 16.0 | 224 | 0.3142 | 0.4842 | | 0.0105 | 17.0 | 238 | 0.2536 | 0.5837 | | 0.0104 | 18.0 | 252 | 0.2407 | 0.6050 | | 0.0107 | 19.0 | 266 | 0.2727 | 0.5524 | | 0.0084 | 20.0 | 280 | 0.3117 | 0.4883 | | 0.0102 | 21.0 | 294 | 0.2999 | 0.5078 | | 0.0074 | 22.0 | 308 | 0.3018 | 0.5047 | | 0.0068 | 23.0 | 322 | 0.2826 | 0.5361 | | 0.0054 | 24.0 | 336 | 0.2804 | 0.5398 | | 0.0044 | 25.0 | 350 | 0.2912 | 0.5220 | | 0.0048 | 26.0 | 364 | 0.2813 | 0.5382 | | 0.005 | 27.0 | 378 | 0.2933 | 0.5186 | | 0.0046 | 28.0 | 392 | 0.2820 | 0.5371 | | 0.004 | 29.0 | 406 | 0.2717 | 0.5541 | | 0.0054 | 30.0 | 420 | 0.2717 | 0.5540 | | 0.0042 | 31.0 | 434 | 0.2699 | 0.5570 | | 0.0033 | 32.0 | 448 | 0.2630 | 0.5684 | | 0.0038 | 33.0 | 462 | 0.2578 | 0.5767 | | 0.0032 | 34.0 | 476 | 0.2687 | 0.5589 | | 0.004 | 35.0 | 490 | 0.2737 | 0.5507 | | 0.0031 | 36.0 | 504 | 0.2753 | 0.5481 | | 0.0037 | 37.0 | 518 | 0.2819 | 0.5373 | | 0.0034 | 38.0 | 532 | 0.2759 | 0.5471 | | 0.0034 | 39.0 | 546 | 0.2835 | 0.5347 | | 0.0029 | 40.0 | 560 | 0.2814 | 0.5381 | | 0.0033 | 41.0 | 574 | 0.2801 | 0.5403 | | 0.0025 | 42.0 | 588 | 0.2759 | 0.5472 | | 0.0029 | 43.0 | 602 | 0.2790 | 0.5421 | | 0.0028 | 44.0 | 616 | 0.2801 | 0.5401 | | 0.003 | 45.0 | 630 | 0.2772 | 0.5451 | | 0.0028 | 46.0 | 644 | 0.2764 | 0.5463 | | 0.0026 | 47.0 | 658 | 0.2766 | 0.5460 | | 09e4f8f0d21580e600aad8171497f9c2 |
mit | [] | false | kaleido on Stable Diffusion This is the `<kaleido>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: (concept preview images omitted) | 07e02bf43d8cb63ffd5cc9b32c3e9572 |
apache-2.0 | ['automatic-speech-recognition', 'pt'] | false | exp_w2v2t_pt_vp-100k_s69 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 6e6f30ceda38cdef26bb207ffeecff5f |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 | 74c400e38ae75f6a9e7d72f023325e89 |
apache-2.0 | ['translation'] | false | opus-mt-fi-hil * source languages: fi * target languages: hil * OPUS readme: [fi-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-hil/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-hil/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-hil/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-hil/opus-2020-01-24.eval.txt) | 1d70e9dd2987051f4b0d9cd134ffcfff |
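A minimal usage sketch for this OPUS-MT card, assuming the standard Marian interface in `transformers`; the hub id `Helsinki-NLP/opus-mt-fi-hil` is an assumption, since the card only links the original OPUS weights:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-hil"  # assumed hub id
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Hyvää huomenta!"], return_tensors="pt")  # Finnish input
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```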
mit | ['zero-shot-classification', 'sentence-similarity', 'nli'] | false | DistilCamemBERT-NLI =================== We present DistilCamemBERT-NLI, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the Natural Language Inference (NLI) task for the French language, also known as recognizing textual entailment (RTE). The model is trained on the XNLI dataset, which consists of determining whether a premise entails, contradicts, or neither entails nor contradicts a hypothesis. This model is close to [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli), which is based on the [CamemBERT](https://huggingface.co/camembert-base) model. The problem with CamemBERT-based models is scaling, for example in the production phase: inference cost can be a technological issue, especially for a cross-encoding task like this one. To counteract this effect, we propose this model, which, thanks to DistilCamemBERT, halves the inference time with the same power consumption. Dataset ------- The XNLI dataset from [FLUE](https://huggingface.co/datasets/flue) comprises 392,702 premise-hypothesis pairs for training and 5,010 pairs for testing. The goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B?), a classification task (given two sentences, predict one of three labels). Sentence A is called the *premise* and sentence B the *hypothesis*; the model is then trained to estimate: $$P(premise=c\in\{contradiction, entailment, neutral\}\vert hypothesis)$$ Evaluation results ------------------ | **class** | **precision (%)** | **f1-score (%)** | **support** | | :----------------: | :---------------: | :--------------: | :---------: | | **global** | 77.70 | 77.45 | 5,010 | | **contradiction** | 78.00 | 79.54 | 1,670 | | **entailment** | 82.90 | 78.87 | 1,670 | | **neutral** | 72.18 | 74.04 | 1,670 | Benchmark --------- We compare the [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) model to two other models working on the French language. The first, [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli), is based on the well-known [CamemBERT](https://huggingface.co/camembert-base), the French RoBERTa model; the second, [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli), on [mDeBERTav3](https://huggingface.co/microsoft/mdeberta-v3-base), a multilingual model. To compare the performances, the metrics of accuracy and [MCC (Matthews Correlation Coefficient)](https://en.wikipedia.org/wiki/Phi_coefficient) were used. Mean inference time was measured on an **AMD Ryzen 5 4500U @ 2.3GHz with 6 cores**. | **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** | | :--------------: | :-----------: | :--------------: | :------------: | | [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **51.35** | 77.45 | 66.24 | | [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 105.0 | 81.72 | 72.67 | | [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 299.18 | **83.43** | **75.15** | Zero-shot classification ------------------------ The main advantage of such a model is the ability to create a zero-shot classifier, allowing text classification without training. 
This task can be summarized by: $$P(hypothesis=i\in\mathcal{C}|premise)=\frac{e^{P(premise=entailment\vert hypothesis=i)}}{\sum_{j\in\mathcal{C}}e^{P(premise=entailment\vert hypothesis=j)}}$$ For this part, we use two datasets. The first, [allocine](https://huggingface.co/datasets/allocine), is used to train sentiment analysis models; it comprises two classes, "positif" and "négatif", for movie reviews. Here we use "Ce commentaire est {}." as the hypothesis template and "positif" and "négatif" as candidate labels. | **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** | | :--------------: | :-----------: | :--------------: | :------------: | | [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **195.54** | 80.59 | 63.71 | | [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 378.39 | **86.37** | **73.74** | | [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 520.58 | 84.97 | 70.05 | The second, [mlsum](https://huggingface.co/datasets/mlsum), is used to train summarization models. For this purpose, we aggregate sub-topics and select a few of them, using the article summaries to predict their topics. In this case, the hypothesis template used is "C'est un article traitant de {}." and the candidate labels are "économie", "politique", "sport" and "science". | **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** | | :--------------: | :-----------: | :--------------: | :------------: | | [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **217.77** | **79.30** | **70.55** | | [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 448.27 | 70.7 | 64.10 | | [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 591.34 | 64.45 | 58.67 | How to use DistilCamemBERT-NLI ------------------------------ ```python from transformers import pipeline classifier = pipeline( task='zero-shot-classification', model="cmarkea/distilcamembert-base-nli", tokenizer="cmarkea/distilcamembert-base-nli" ) result = classifier( sequences="Le style très cinéphile de Quentin Tarantino " "se reconnaît entre autres par sa narration postmoderne " "et non linéaire, ses dialogues travaillés souvent " "émaillés de références à la culture populaire, et ses " "scènes hautement esthétiques mais d'une violence " "extrême, inspirées de films d'exploitation, d'arts " "martiaux ou de western spaghetti.", candidate_labels="cinéma, technologie, littérature, politique", hypothesis_template="Ce texte parle de {}." ) result {"labels": ["cinéma", "littérature", "technologie", "politique"], "scores": [0.7164115309715271, 0.12878799438476562, 0.1092301607131958, 0.0455702543258667]} ``` | e80e4fd17edb566bd9cb5366847cc714 |
mit | ['zero-shot-classification', 'sentence-similarity', 'nli'] | false | Optimum + ONNX ```python from optimum.onnxruntime import ORTModelForSequenceClassification from transformers import AutoTokenizer, pipeline HUB_MODEL = "cmarkea/distilcamembert-base-nli" tokenizer = AutoTokenizer.from_pretrained(HUB_MODEL) model = ORTModelForSequenceClassification.from_pretrained(HUB_MODEL) onnx_qa = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer) ``` | bbeb7b3c772b3ef9f574685bf32b541e |
mit | ['zero-shot-classification', 'sentence-similarity', 'nli'] | false | Quantized ONNX model ```python quantized_model = ORTModelForSequenceClassification.from_pretrained( HUB_MODEL, file_name="model_quantized.onnx" ) ``` Citation -------- ```bibtex @inproceedings{delestre:hal-03674695, TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}}, AUTHOR = {Delestre, Cyrile and Amar, Abibatou}, URL = {https://hal.archives-ouvertes.fr/hal-03674695}, BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}}, ADDRESS = {Vannes, France}, YEAR = {2022}, MONTH = Jul, KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation}, PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf}, HAL_ID = {hal-03674695}, HAL_VERSION = {v1}, } ``` | faadb7ed0d4220b3416d176fe58ed638 |
apache-2.0 | ['automatic-speech-recognition', 'CTC', 'Attention', 'Transformer', 'pytorch', 'speechbrain', 'hf-asr-leaderboard'] | false | Transformer for LibriSpeech (with Transformer LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on LibriSpeech (EN) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test clean WER | Test other WER | GPUs | |:-------------:|:--------------:|:--------------:|:--------:| | 24-03-22 | 2.27 | 5.53 | 4xV100 32GB | | 3712c2dbc587a5fdd61e1c342b7cb18b |
apache-2.0 | ['automatic-speech-recognition', 'CTC', 'Attention', 'Transformer', 'pytorch', 'speechbrain', 'hf-asr-leaderboard'] | false | Pipeline description This ASR system is composed of 3 different but linked blocks: - A tokenizer (unigram) that transforms words into subword units, trained on the training transcriptions of LibriSpeech. - A neural language model (Transformer LM) trained on the full 10M-word dataset. - An acoustic model made of a transformer encoder and a joint decoder with CTC + transformer. Hence, the decoding also incorporates the CTC probabilities. The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed. | 6e4f1a8cb81eae8d047be60a8b2fc85a |
apache-2.0 | ['automatic-speech-recognition', 'CTC', 'Attention', 'Transformer', 'pytorch', 'speechbrain', 'hf-asr-leaderboard'] | false | Transcribing your own audio files (in English) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-transformer-transformerlm-librispeech", savedir="pretrained_models/asr-transformer-transformerlm-librispeech") asr_model.transcribe_file("speechbrain/asr-transformer-transformerlm-librispeech/example.wav") ``` | 09bb17ebb8a4b4104362399262421b21 |
apache-2.0 | ['automatic-speech-recognition', 'CTC', 'Attention', 'Transformer', 'pytorch', 'speechbrain', 'hf-asr-leaderboard'] | false | Parallel Inference on a Batch Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model. | 8361f97461edfc039fa15a16b315b676 |
apache-2.0 | ['automatic-speech-recognition', 'CTC', 'Attention', 'Transformer', 'pytorch', 'speechbrain', 'hf-asr-leaderboard'] | false | Training The model was trained with SpeechBrain (Commit hash: 'f73fcc35'). To train it from scratch, follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/LibriSpeech/ASR/transformer python train.py hparams/transformer.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1Nv1OLbHLqVeShyZ8LY9gjhYGE1DBFzFf?usp=sharing). | 43b6f3dd992aca29be66222474206ff4 |
apache-2.0 | ['automatic-speech-recognition', 'CTC', 'Attention', 'Transformer', 'pytorch', 'speechbrain', 'hf-asr-leaderboard'] | false | **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` | 1e3fdbd6e47dbe994dea1bd530cf78fe |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal'] | false | DreamBooth model for the terrier concept trained by bobber on the bobber/Terrier-images dataset. This is a Stable Diffusion model fine-tuned on the terrier concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of terrier dog** This model was created as part of the DreamBooth Hackathon 🔥. My daughter helped me select 18 images of Terriers from petfind. I hope you enjoy it. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! | 3c8ebbcf96ad17b1c2abf8965718539f |
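A minimal generation sketch for this DreamBooth model, using the standard diffusers text-to-image pipeline; the hub id `bobber/terrier` is a placeholder, since the excerpt does not state the repository name:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "bobber/terrier",  # placeholder hub id
    torch_dtype=torch.float16,
).to("cuda")

# Build prompts around the instance prompt "a photo of terrier dog".
image = pipe("a photo of terrier dog swimming in the pool").images[0]
image.save("terrier.png")
```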
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal'] | false | Examples <table> <tr> <td>Generated Image of "a photo of terrier dog <br>in space suit walking in the mars"</td> <td>Generated Image of "a photo of terrier dog <br>in the background of chinese new year"</td> <td>Generated Image of "a photo of terrier dog <br>swimming in the pool"</td> </tr> <tr> <td align="center"><img src="https://i.imgur.com/YW483rm.jpg" style="height:200px"> </td> <td align="center"><img src="https://i.imgur.com/4m5Fv86.jpg" style="height:200px"> </td> <td align="center"><img src="https://i.imgur.com/ZCdapRU.jpg" style="height:200px"> </td> </tr> <tr> <td>Generated Image of "a photo of terrier dog <br>walking in Paris by Van Gogh"</td> <td>Generated Image of "a photo of terrier dog <br>with The Great Wave by Katsushika Hokusai"</td> <td>Generated Image of "a photo of terrier dog <br>by Leonardo da Vinci"</td> </tr> <tr> <td align="center"><img src="https://i.imgur.com/uzYLctu.jpg" style="height:200px"> </td> <td align="center"><img src="https://i.imgur.com/9wxxyD4.jpg" style="height:200px"> </td> <td align="center"><img src="https://i.imgur.com/xufDxxD.jpg" style="height:200px"> </td> </tr> </table> | 5cd899510b160e597b9f8006f4c1a0f5 |
cc-by-sa-4.0 | ['finance'] | false | ELECTRA small Japanese finance discriminator This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language. The code for the pretraining is available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0). | bb161b21aaa288fa0f9fdd84e32cee25 |
cc-by-sa-4.0 | ['finance'] | false | Training Data The models are trained on the Japanese version of Wikipedia; the training corpus is generated from the Wikipedia dump file as of June 1, 2021. The Wikipedia corpus file is 2.9GB, consisting of approximately 20M sentences. The financial corpus consists of 2 corpora: - Summaries of financial results from October 9, 2012, to December 31, 2020 - Securities reports from February 8, 2018, to December 31, 2020 The financial corpus file is 5.2GB, consisting of approximately 27M sentences. | 386416badef2bfb3fe903a94150a06d9 |
cc-by-sa-4.0 | ['finance'] | false | Training The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 128 tokens per instance, 128 instances per batch, and 1M training steps. | c5a7ee508502f9a9ced955d5ba7e9a6e |
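A minimal sketch of querying an ELECTRA discriminator's replaced-token-detection head via `transformers`; the hub id below is a placeholder, since the excerpt does not name the repository:
```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_id = "namespace/electra-small-japanese-fin-discriminator"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ElectraForPreTraining.from_pretrained(model_id)

inputs = tokenizer("これはテストです。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one replaced-token score per token
print(torch.sigmoid(logits))        # probability each token was replaced
```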
apache-2.0 | ['generated_from_trainer'] | false | Tagged_One_50v7_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v7_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.6441 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.7785 | 6f6fda9dfc08f36ae7dc4d468ebef7db |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 13 | 0.7609 | 0.0 | 0.0 | 0.0 | 0.7783 | | No log | 2.0 | 26 | 0.6742 | 0.0 | 0.0 | 0.0 | 0.7783 | | No log | 3.0 | 39 | 0.6441 | 0.0 | 0.0 | 0.0 | 0.7785 | | 245c08233cba73bd443e08d2aa9e8786 |
cc-by-4.0 | ['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard', 'Riva'] | false | This model was trained on a composite dataset comprising over 1500 hours of French speech. It is a non-autoregressive "large" variant of Conformer, with around 120 million parameters. See the [model architecture]( | a598681511eb65044e916f97e1f7f61f |
cc-by-4.0 | ['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard', 'Riva'] | false | Usage The model is available for use in the NeMo toolkit [3] and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset. To train, fine-tune or play with the model, you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version. ``` pip install nemo_toolkit['all'] ``` | 1064279c0339705af049d045bc5ce00c |
cc-by-4.0 | ['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard', 'Riva'] | false | Transcribing using Python First, let's get a sample ``` wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav ``` Then simply do: ``` asr_model.transcribe(['2086-149220-0033.wav']) ``` | f7aaeef797528be7560cb1cec6b3d20a |
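The snippet above calls `asr_model` without showing how it is created; a minimal sketch, assuming the standard NeMo ASR API and the model name that appears in the next row:
```python
import nemo.collections.asr as nemo_asr

# Load the pretrained Conformer-CTC checkpoint by name.
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(
    model_name="nvidia/stt_fr_conformer_ctc_large"
)
print(asr_model.transcribe(["2086-149220-0033.wav"]))
```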
cc-by-4.0 | ['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard', 'Riva'] | false | Transcribing many audio files ```shell python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="nvidia/stt_fr_conformer_ctc_large" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" ``` | d7e7fad450871322d413cdb1ee0834f5 |
cc-by-4.0 | ['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard', 'Riva'] | false | Model Architecture The Conformer-CTC model is a non-autoregressive variant of the Conformer model [1] for Automatic Speech Recognition, which uses CTC loss/decoding instead of the Transducer. You may find more detail on this model here: [Conformer-CTC Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html | f13420de332e2a73e017c70d4e45fdf1 |
cc-by-4.0 | ['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard', 'Riva'] | false | Training The NeMo toolkit [3] was used to train the models for several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_ctc_bpe.yaml). The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py). The checkpoint of the language model used for rescoring can be found [here](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_fr_conformer_ctc_large). You may find more info on how to train and use language models for ASR models here: [ASR Language Modeling](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html) | c9426eb5c0413cf9275febdd6b9915f7 |
cc-by-4.0 | ['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard', 'Riva'] | false | Datasets All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising over a thousand hours of French speech: - MozillaCommonVoice 7.0 - 356 hours - Multilingual LibriSpeech - 1036 hours - VoxPopuli - 182 hours Both models use the same dataset, except for a preprocessing step that strips hyphens from the data for the secondary model's training. | 955b6921dc70f13a0c94a73caa11f12d |
cc-by-4.0 | ['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard', 'Riva'] | false | Performance The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER). Since this model is trained on multiple domains and a much larger corpus, it will generally perform better at transcribing audio in general. The latest model obtains the following greedy scores on these evaluation datasets: - 8.35 % on MCV7.0 dev - 9.63 % on MCV7.0 test - 5.88 % on MLS dev - 4.91 % on MLS test With 128-beam search and a 4-gram KenLM model: - 7.95 % on MCV7.0 dev - 9.16 % on MCV7.0 test - 5.57 % on MLS dev - 4.66 % on MLS test Note that these evaluation datasets have been filtered and preprocessed to contain only French alphabet characters and are stripped of punctuation other than hyphens and apostrophes. | 82564f6f8d3af619a47c81217b79967f |
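WER, the metric quoted above, is the word-level edit distance between hypothesis and reference divided by the number of reference words; a minimal sketch using the `jiwer` library (an assumption — the card does not say which tool was used):
```python
from jiwer import wer  # pip install jiwer

reference = "le chat est sur le tapis"
hypothesis = "le chat est sur tapis"

# One deletion over six reference words -> WER ≈ 0.167
print(wer(reference, hypothesis))
```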
cc-by-4.0 | ['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard', 'Riva'] | false | Limitations Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse for accented speech. Further, since portions of the training set contain text from both before and after the 1990 orthographic reform, regularity of punctuation may vary between the two styles. For downstream tasks requiring more consistency, fine-tuning or downstream processing may be required. If exact orthography is not necessary, then using the secondary model is advised. | 84bbd1f39206cab8ac7e73af364247ef |
cc-by-4.0 | ['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard', 'Riva'] | false | Deployment with NVIDIA Riva For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded. Additionally, Riva provides: * World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours * Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization * Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support Check out [Riva live demo](https://developer.nvidia.com/riva | 2e95434d459e176f7abed7bed71d8b18 |
cc-by-4.0 | ['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard', 'Riva'] | false | References - [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100) - [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece) - [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) | a92b8b9c448222ef76fd9db0a6e48a8d |
apache-2.0 | ['generated_from_trainer'] | false | bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2337 | cf7e8360c8e240ea3706eaadb67100f3 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 | 3f4d9efdf99f1ed2fe1585db261523b4 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3389 | 1.0 | 73 | 1.7400 | | 1.8014 | 2.0 | 146 | 1.4690 | | 1.634 | 3.0 | 219 | 1.4783 | | 1.5461 | 4.0 | 292 | 1.3912 | | 1.4706 | 5.0 | 365 | 1.3109 | | 1.4161 | 6.0 | 438 | 1.3405 | | 1.3664 | 7.0 | 511 | 1.3459 | | 1.332 | 8.0 | 584 | 1.2745 | | 1.3029 | 9.0 | 657 | 1.2633 | | 1.2871 | 10.0 | 730 | 1.2336 | | 1.2807 | 11.0 | 803 | 1.2966 | | 1.2569 | 12.0 | 876 | 1.1508 | | 1.2392 | 13.0 | 949 | 1.2530 | | 1.237 | 14.0 | 1022 | 1.2485 | | 1.2169 | 15.0 | 1095 | 1.2592 | | 1.2272 | 16.0 | 1168 | 1.2337 | | 669f20fb44f9c133f0af004ae1bd7198 |
apache-2.0 | ['generated_from_trainer'] | false | small-mlm-squad-plain_text This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0085 | 754dc2aa42665860efdd813e2ac78975 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.9733 | 0.4 | 500 | 2.9009 | | 2.6978 | 0.8 | 1000 | 2.9560 | | 2.5783 | 1.2 | 1500 | 2.9081 | | 2.4382 | 1.6 | 2000 | 3.0085 | | 89e617d96af5a2d2bd377c0827dbbee1 |
mit | ['object-detection', 'computer-vision', 'sort', 'tracker', 'ocsort'] | false | Model Description [Sort](https://arxiv.org/abs/1602.00763): A simple online and realtime tracking algorithm for 2D multiple object tracking in video sequences. <img src="https://raw.githubusercontent.com/noahcao/OC_SORT/master/assets/teaser.png" width="600"/> | 7c8b505045c6e37b853880db149624e7 |
mit | ['object-detection', 'computer-vision', 'sort', 'tracker', 'ocsort'] | false | BibTeX Entry and Citation Info ``` @inproceedings{Bewley2016_sort, author={Bewley, Alex and Ge, Zongyuan and Ott, Lionel and Ramos, Fabio and Upcroft, Ben}, booktitle={2016 IEEE International Conference on Image Processing (ICIP)}, title={Simple online and realtime tracking}, year={2016}, pages={3464-3468}, keywords={Benchmark testing;Complexity theory;Detectors;Kalman filters;Target tracking;Visualization;Computer Vision;Data Association;Detection;Multiple Object Tracking}, doi={10.1109/ICIP.2016.7533003} } ``` | d774c0abec7d9e5358c882b15c6e7bda |
apache-2.0 | ['exbert'] | false | ALBERT Base v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. | 877f3b5ec84988e1f7d424e2880a2b23 |
apache-2.0 | ['exbert'] | false | Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers. This is the first version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 768 hidden dimension - 12 attention heads - 11M parameters | 1b5084d904038e79c8c82588d2156f45 |
apache-2.0 | ['exbert'] | false | How to use You can use this model directly with a pipeline for masked language modeling: In tf_transformers ```python from tf_transformers.models import AlbertModel from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1') model = AlbertModel.from_pretrained("albert-base-v1") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] inputs_tf["input_type_ids"] = inputs["token_type_ids"] inputs_tf["input_mask"] = inputs["attention_mask"] outputs_tf = model(inputs_tf) ``` This bias will also affect all fine-tuned versions of this model. | 323473a3396a65da5f6c043a3ad67662 |
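The row above mentions the masked-language-modeling pipeline but only shows feature extraction; a minimal fill-mask sketch with the standard `transformers` pipeline (ALBERT uses `[MASK]` as its mask token):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="albert-base-v1")
for candidate in unmasker("Hello I'm a [MASK] model."):
    print(candidate["token_str"], round(candidate["score"], 4))
```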
apache-2.0 | ['exbert'] | false | BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=albert-base-v1"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a> | e24ca2c18b4e12cec591a7402c018899 |
cc-by-4.0 | ['espnet', 'audio', 'audio-to-audio'] | false | Demo: How to use in ESPnet2 ```bash cd espnet pip install -e . cd egs2/chime4/enh1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw ``` | ae68dd534095198b803126578b329cdd |
cc-by-4.0 | ['espnet', 'audio', 'audio-to-audio'] | false | ENH config <details><summary>expand</summary> ``` config: conf/tuning/train_enh_beamformer_mvdr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/enh_train_enh_beamformer_mvdr_raw ngpu: 1 seed: 0 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 35841 dist_launcher: null multiprocessing_distributed: true cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 70 patience: 4 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - si_snr - max - - valid - loss - min keep_nbest_models: 1 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null unused_parameters: false use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null pretrain_path: null init_param: [] freeze_param: [] num_iters_per_epoch: null batch_size: 8 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/enh_stats_16k/train/speech_mix_shape - exp/enh_stats_16k/train/speech_ref1_shape - exp/enh_stats_16k/train/noise_ref1_shape valid_shape_file: - exp/enh_stats_16k/valid/speech_mix_shape - exp/enh_stats_16k/valid/speech_ref1_shape - exp/enh_stats_16k/valid/noise_ref1_shape batch_type: folded valid_batch_type: null fold_length: - 80000 - 80000 - 80000 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/tr05_simu_isolated_6ch_track/wav.scp - speech_mix - sound - - dump/raw/tr05_simu_isolated_6ch_track/spk1.scp - speech_ref1 - sound - - dump/raw/tr05_simu_isolated_6ch_track/noise1.scp - noise_ref1 - sound valid_data_path_and_name_and_type: - - dump/raw/dt05_simu_isolated_6ch_track/wav.scp - speech_mix - sound - - dump/raw/dt05_simu_isolated_6ch_track/spk1.scp - speech_ref1 - sound - - dump/raw/dt05_simu_isolated_6ch_track/noise1.scp - noise_ref1 - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 eps: 1.0e-08 weight_decay: 0 scheduler: reducelronplateau scheduler_conf: mode: min factor: 0.5 patience: 1 init: xavier_uniform model_conf: loss_type: mask_mse mask_type: PSM^2 use_preprocessor: false encoder: stft encoder_conf: n_fft: 512 hop_length: 128 separator: wpe_beamformer separator_conf: num_spk: 1 loss_type: mask_mse use_wpe: false wnet_type: blstmp wlayers: 3 wunits: 300 wprojs: 320 wdropout_rate: 0.0 taps: 5 delay: 3 use_dnn_mask_for_wpe: true use_beamformer: true bnet_type: blstmp blayers: 3 bunits: 512 bprojs: 512 badim: 320 ref_channel: 3 use_noise_mask: true beamformer_type: mvdr_souden bdropout_rate: 0.0 decoder: stft decoder_conf: n_fft: 512 hop_length: 128 required: - output_dir version: 0.9.7 distributed: true ``` </details> | 469680fa9fc71aff93e16524283d89d2 |
cc-by-4.0 | ['espnet', 'audio', 'audio-to-audio'] | false | Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{li2021espnetse, title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration}, author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji}, booktitle={Proc. IEEE Spoken Language Technology Workshop (SLT)}, pages={785--792}, year={2021}, } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{li2021espnetse, title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration}, author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji}, year={2020}, eprint={2011.03706}, archivePrefix={arXiv}, primaryClass={eess.AS} } ``` | 8767633273fd223add29e331824d3979 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2131 - Accuracy: 0.9265 - F1: 0.9269 | 79c51e4065ea55d839ecef8b04ad44ec |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8031 | 1.0 | 250 | 0.2973 | 0.9125 | 0.9110 | | 0.2418 | 2.0 | 500 | 0.2131 | 0.9265 | 0.9269 | | b0f033de51b56b30920711e61d0df2dc |
apache-2.0 | ['generated_from_trainer'] | false | dark-bert-finetuned-ner1 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0833 - Precision: 0.9337 - Recall: 0.9487 - F1: 0.9411 - Accuracy: 0.9861 | 29c7f785409d9e78c352d8e557f84217 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0358 | 1.0 | 1756 | 0.0780 | 0.9283 | 0.9409 | 0.9346 | 0.9844 | | 0.0172 | 2.0 | 3512 | 0.0708 | 0.9375 | 0.9488 | 0.9431 | 0.9860 | | 0.0056 | 3.0 | 5268 | 0.0833 | 0.9337 | 0.9487 | 0.9411 | 0.9861 | | 9753c52aac86f589fc435a86e0144b83 |
apache-2.0 | ['generated_from_trainer'] | false | swin-base-patch4-window7-224-in22k-finetuned This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0021 - Accuracy: 0.9993 | 2dd1b8c05b6d5a51051937e71707e746 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 | ee3ca27a711daa9b204506565651f1d9 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0253 | 1.0 | 889 | 0.0060 | 0.9980 | | 0.0134 | 2.0 | 1778 | 0.0031 | 0.9989 | | 0.0118 | 3.0 | 2667 | 0.0021 | 0.9993 | | 0e98eed0fe73e2dd0a0944c549b9d1f2 |
apache-2.0 | ['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1800k'] | false | MultiBERTs, Intermediate Checkpoint - Seed 2, Step 1800k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model | dd82a926952c5302e6a71d94e398c424 |
apache-2.0 | ['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1800k'] | false | How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_1800k') model = TFBertModel.from_pretrained("google/multiberts-seed_2-step_1800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_1800k') model = BertModel.from_pretrained("google/multiberts-seed_2-step_1800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | a567afde0a0b93b92c6eea9f925216f0 |
apache-2.0 | ['generated_from_trainer'] | false | distilroberta-base-OLID-MLM This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0021 | 4509429cffc6dcec19a1a0c598f579c0 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 398 | 0.0143 | | 1.0511 | 2.0 | 796 | 0.0031 | | 0.0256 | 3.0 | 1194 | 0.0021 | | e734e79c4d2f1c69189530734d938d6f |
apache-2.0 | ['generated_from_trainer'] | false | bert-base-uncased-finetuned-swag This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset. It achieves the following results on the evaluation set: - Loss: 0.6045 - Accuracy: 0.7960 | 52166d3e88e4a7421bb75bcad6b9accc |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 | 2a8bdb7d5d609b1751c85e7016dc8bfa |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7494 | 1.0 | 4597 | 0.5942 | 0.7716 | | 0.3499 | 2.0 | 9194 | 0.6045 | 0.7960 | | 974b04198f6516369f7c02d1a1367748 |
mit | ['timelms', 'twitter'] | false | Twitter September 2021 (RoBERTa-base, 120M) This is a RoBERTa-base model trained on 119.66M tweets until the end of September 2021. More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829). Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms). For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms | ce422d4a84d939686180fe0133c0356e |
mit | ['timelms', 'twitter'] | false | Preprocess Text Replace usernames and links with placeholders: "@user" and "http". If you're interested in retaining verified users, which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data). ```python def preprocess(text): preprocessed_text = [] for t in text.split(): if len(t) > 1: t = '@user' if t[0] == '@' and t.count('@') == 1 else t t = 'http' if t.startswith('http') else t preprocessed_text.append(t) return ' '.join(preprocessed_text) ``` | 14d59a6746326837a25f1451aa9509f9 |
mit | ['timelms', 'twitter'] | false | Example Masked Language Model ```python from transformers import pipeline, AutoTokenizer MODEL = "cardiffnlp/twitter-roberta-base-sep2021" fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL) tokenizer = AutoTokenizer.from_pretrained(MODEL) def pprint(candidates, n): for i in range(n): token = tokenizer.decode(candidates[i]['token']) score = candidates[i]['score'] print("%d) %.5f %s" % (i+1, score, token)) texts = [ "So glad I'm <mask> vaccinated.", "I keep forgetting to bring a <mask>.", "Looking forward to watching <mask> Game tonight!", ] for text in texts: t = preprocess(text) print(f"{'-'*30}\n{t}") candidates = fill_mask(t) pprint(candidates, 5) ``` Output: ``` ------------------------------ So glad I'm <mask> vaccinated. 1) 0.39329 fully 2) 0.26694 getting 3) 0.17438 not 4) 0.03422 still 5) 0.01845 all ------------------------------ I keep forgetting to bring a <mask>. 1) 0.06773 mask 2) 0.04548 book 3) 0.03826 charger 4) 0.03506 backpack 5) 0.02997 bag ------------------------------ Looking forward to watching <mask> Game tonight! 1) 0.63009 the 2) 0.16154 The 3) 0.02110 this 4) 0.01903 End 5) 0.00810 Championship ``` | c1c8b6c40e57c133144016d92feb534f |
mit | ['timelms', 'twitter'] | false | Example Tweet Embeddings ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel import numpy as np from scipy.spatial.distance import cosine from collections import Counter def get_embedding(text): ``` | ae2b2509bd4e27ed1057c78e9664e2ab |
mit | ['timelms', 'twitter'] | false | ```python # naive approach for demonstration text = preprocess(text) encoded_input = tokenizer(text, return_tensors='pt') features = model(**encoded_input) features = features[0].detach().cpu().numpy() return np.mean(features[0], axis=0) MODEL = "cardiffnlp/twitter-roberta-base-sep2021" tokenizer = AutoTokenizer.from_pretrained(MODEL) model = AutoModel.from_pretrained(MODEL) query = "The book was awesome" tweets = ["I just ordered fried chicken 🐣", "The movie was great", "What time is the next game?", "Just finished reading 'Embeddings in NLP'"] sims = Counter() for tweet in tweets: sim = 1 - cosine(get_embedding(query), get_embedding(tweet)) sims[tweet] = sim print('Most similar to: ', query) print(f"{'-'*30}") for idx, (tweet, sim) in enumerate(sims.most_common()): print("%d) %.5f %s" % (idx+1, sim, tweet)) ``` Output: ``` Most similar to: The book was awesome ------------------------------ 1) 0.99022 The movie was great 2) 0.96274 Just finished reading 'Embeddings in NLP' 3) 0.96006 I just ordered fried chicken 🐣 4) 0.95725 What time is the next game? ``` | eaf44cc7d57f7e886007315915f31a14 |
mit | ['timelms', 'twitter'] | false | Example Feature Extraction ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel import numpy as np MODEL = "cardiffnlp/twitter-roberta-base-sep2021" tokenizer = AutoTokenizer.from_pretrained(MODEL) text = "Good night 😊" text = preprocess(text) ``` | d74ac01b6a1e5792ca3224943ca10fd7 |
mit | ['timelms', 'twitter'] | false | Pytorch ```python model = AutoModel.from_pretrained(MODEL) encoded_input = tokenizer(text, return_tensors='pt') features = model(**encoded_input) features = features[0].detach().cpu().numpy() features_mean = np.mean(features[0], axis=0) ``` | 815859b01b8b56b646567fead98c9ee7 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | Stable Diffusion v1-3 Model Card Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with D🧨iffusers blog](https://huggingface.co/blog/stable_diffusion). The **Stable-Diffusion-v1-3** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2) checkpoint and subsequently fine-tuned on 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). For more information, please refer to [Training]( | 9936ffac0b101b8ec28cf12fda3ae5b4 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | training). These weights are intended to be used with the D🧨iffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [come here](https://huggingface.co/CompVis/stable-diffusion-v-1-3-original). | a6a2a6a4756d9d6004fca93a1bd955ad |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | Examples We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion. ```bash pip install --upgrade diffusers transformers scipy ``` Running the pipeline with the default PNDM scheduler: ```python import torch from torch import autocast from diffusers import StableDiffusionPipeline model_id = "CompVis/stable-diffusion-v1-3" device = "cuda" pipe = StableDiffusionPipeline.from_pretrained(model_id) pipe = pipe.to(device) prompt = "a photo of an astronaut riding a horse on mars" with autocast("cuda"): image = pipe(prompt, guidance_scale=7.5)["sample"][0] image.save("astronaut_rides_horse.png") ``` **Note**: If you are limited by GPU memory and have less than 10GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above. You can do so by telling diffusers to expect the weights to be in float16 precision: ```py import torch pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to(device) prompt = "a photo of an astronaut riding a horse on mars" with autocast("cuda"): image = pipe(prompt, guidance_scale=7.5)["sample"][0] image.save("astronaut_rides_horse.png") ``` To swap out the noise scheduler, pass it to `from_pretrained`: ```python from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler model_id = "CompVis/stable-diffusion-v1-3" ``` | 55a4a2197ebc0c673bd02c2378ba2e81 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | ```python # Use the K-LMS scheduler here instead scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000) pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" with autocast("cuda"): image = pipe(prompt, guidance_scale=7.5)["sample"][0] image.save("astronaut_rides_horse.png") ``` | 373fb8b62830277c005d9e4f0153e51d |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | Training Procedure Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training:
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (see the sketch below).
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.

We currently provide four checkpoints, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, an estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`; the watermark estimate is from the LAION-5B metadata, and the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [**`stable-diffusion-v1-4`**](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
| 5f75d9a779ac55cbb62a69cd1e62b0db |
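To make the shape bookkeeping above concrete, here is a minimal sketch; the helper name and function are ours and purely illustrative, with only the f = 8 downsampling factor and channel counts taken from the card:

```python
# Illustrative only: latent-shape arithmetic for the f=8 autoencoder described above.
def latent_shape(height, width, f=8, channels=4):
    """Map an H x W x 3 image to its H/f x W/f x 4 latent shape."""
    assert height % f == 0 and width % f == 0, "dimensions must be divisible by f"
    return (height // f, width // f, channels)

print(latent_shape(512, 512))  # -> (64, 64, 4) at the 512x512 training resolution
print(latent_shape(256, 256))  # -> (32, 32, 4) at the v1-1 first-stage resolution
```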
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | Training details
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations:** 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
| 97acab22a7a41897c7efb78a81fe8446 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of the checkpoints (comparison figure omitted). Evaluated using 50 PLMS steps and 10,000 random prompts from the COCO2017 validation set at `512x512` resolution. Not optimized for FID scores. | bad7af116386d66a0befa55e25b35b00 |
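As a rough sketch of how that evaluation grid maps onto the pipeline call, assuming the same era of the diffusers API as the examples above (`guidance_scale` and `num_inference_steps` are standard pipeline arguments):

```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-3")
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
# Sweep the guidance scales used in the evaluation, at 50 sampling steps each.
for scale in (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0):
    with autocast("cuda"):
        image = pipe(prompt, guidance_scale=scale, num_inference_steps=50)["sample"][0]
    image.save(f"astronaut_cfg_{scale}.png")
```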
apache-2.0 | ['vision', 'image-classification'] | false | RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls). Disclaimer: The team releasing RegNet did not write a model card for this model, so this model card has been written by the Hugging Face team. | 343940fd9be60303bc2647c12b309f96 |
apache-2.0 | ['vision', 'image-classification'] | false | Model description The authors design search spaces to perform Neural Architecture Search (NAS). They start from a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space. | e2ecbc837177d2ba6cd0950dfa4a33f4 |
apache-2.0 | ['vision', 'image-classification'] | false | Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. | 3bccff046b9f85892aca9a85583b07cb |
apache-2.0 | ['vision', 'image-classification'] | false | How to use Here is how to use this model:

```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")

>>> inputs = feature_extractor(image, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits
```
(continued in the next row)
| 092806f2040e515ce185ddefbba0087b |
apache-2.0 | ['vision', 'image-classification'] | false |
```python
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
| bea4ca041e0d4215122e04b737ccbe23 |
apache-2.0 | ['automatic-speech-recognition', 'common_voice', 'fr', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event'] | false | wav2vec2-cls-r-300m-fr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - FR dataset. It achieves the following results on the evaluation set: - Loss: 0.6521 - Wer: 0.4330 | 473b899f73d8d4cce724f5540e5e2b81 |
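A minimal transcription sketch for a checkpoint like this, using the standard `transformers` ASR pipeline; the repository id below is a placeholder inferred from the model name, since the card does not state the full path:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual checkpoint path for this card.
asr = pipeline("automatic-speech-recognition", model="<user>/wav2vec2-cls-r-300m-fr")
result = asr("speech_fr_16khz.wav")  # input audio should be sampled at 16 kHz
print(result["text"])
```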
apache-2.0 | ['automatic-speech-recognition', 'common_voice', 'fr', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 - mixed_precision_training: Native AMP | 74c02f26aa5f2d97fc70ab759032e129 |
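For reference, the recipe above corresponds roughly to the following `TrainingArguments`; this is a sketch, the field mapping is assumed, and "Native AMP" becomes `fp16=True`:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2-cls-r-300m-fr",
    learning_rate=3e-04,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
    fp16=True,  # Native AMP mixed-precision training
)
```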
apache-2.0 | ['automatic-speech-recognition', 'common_voice', 'fr', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.6773 | 0.8 | 500 | 1.3907 | 0.9864 | | 0.9526 | 1.6 | 1000 | 0.7760 | 0.6448 | | 0.6418 | 2.4 | 1500 | 0.7605 | 0.6194 | | 0.5028 | 3.2 | 2000 | 0.6516 | 0.5322 | | 0.4133 | 4.0 | 2500 | 0.6303 | 0.5097 | | 0.3285 | 4.8 | 3000 | 0.6422 | 0.5062 | | 0.2764 | 5.6 | 3500 | 0.5936 | 0.4748 | | 0.2361 | 6.4 | 4000 | 0.6486 | 0.4683 | | 0.2049 | 7.2 | 4500 | 0.6321 | 0.4532 | | 0.176 | 8.0 | 5000 | 0.6230 | 0.4482 | | 0.1393 | 8.8 | 5500 | 0.6595 | 0.4403 | | 0.1141 | 9.6 | 6000 | 0.6552 | 0.4348 | | e014d12f5d5d862bb14089e2c8663a47 |
apache-2.0 | ['generated_from_trainer'] | false | tiny-mlm-glue-mrpc-custom-tokenizer-expand-vocab This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 4.4922 | 4d4f0993db3583ce2228e6a32ec6f1a3 |
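Since this is a masked-language-model fine-tune, a fill-mask sketch would look like the following; the repo id is a placeholder, as the card does not give the full path:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual checkpoint path for this card.
fill = pipeline("fill-mask", model="<user>/tiny-mlm-glue-mrpc-custom-tokenizer-expand-vocab")
for candidate in fill("The two companies signed the [MASK] yesterday."):
    print(candidate["token_str"], candidate["score"])
```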
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.1957 | 1.09 | 500 | 5.5172 | | 5.5021 | 2.18 | 1000 | 5.1265 | | 5.2379 | 3.27 | 1500 | 5.0413 | | 5.1491 | 4.36 | 2000 | 4.9136 | | 5.014 | 5.45 | 2500 | 4.8558 | | 4.9507 | 6.54 | 3000 | 4.7338 | | 4.7924 | 7.63 | 3500 | 4.6922 | | 4.7739 | 8.71 | 4000 | 4.6100 | | 4.6749 | 9.8 | 4500 | 4.6575 | | 4.6135 | 10.89 | 5000 | 4.4922 | | c529d8903b793f07d5c5b505280c22a6 |
mit | ['generated_from_keras_callback'] | false | amitjohn007/second-mobil-bert-finetuned-squad This model is a fine-tuned version of [csarron/mobilebert-uncased-squad-v2](https://huggingface.co/csarron/mobilebert-uncased-squad-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4587 - Epoch: 2 | acb5d041cc38936db800a1ac31c4834e |
mit | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16599, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 | 464ebae3933ce546aef2dbf7969c61ad |
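The serialized optimizer config above deserializes to roughly this construction; a sketch, where `AdamWeightDecay` is the TF optimizer that ships with `transformers`, and `PolynomialDecay` with `power=1.0` is a linear schedule:

```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear decay from 2e-05 to 0.0 over 16,599 steps, matching the config above.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05, decay_steps=16599, end_learning_rate=0.0,
    power=1.0, cycle=False,
)
optimizer = AdamWeightDecay(
    learning_rate=schedule, weight_decay_rate=0.01,
    beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False,
)
```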
apache-2.0 | [] | false | A Chinese MRC model built on Chinese PERT-large **Please use `BertForQuestionAnswering` to load this model!** This is a Chinese machine reading comprehension (MRC) model built on PERT-large and fine-tuned on a mixture of Chinese MRC datasets. PERT is a pre-trained model based on the permuted language model (PerLM), which learns text semantics in a self-supervised manner without introducing mask tokens such as [MASK]. It yields competitive results in tasks such as reading comprehension and sequence labeling. Results on Chinese MRC datasets (EM/F1; we report the checkpoint with the best AVG score):

| | CMRC 2018 Dev | DRCD Dev | SQuAD-Zen Dev (Answerable) | AVG |
| :-------: | :-----------: | :-------: | :------------------------: | :-------: |
| PERT-large | 73.5/90.8 | 91.2/95.7 | 63.0/79.3 | 75.9/88.6 |

Please visit our GitHub repo for more information: https://github.com/ymcui/PERT

You may also be interested in:
- Chinese Minority Languages CINO: https://github.com/ymcui/Chinese-Minority-PLM
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology
| 949a96ef0454ecbaa6595092a8d49b32 |
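Following the card's instruction to load with `BertForQuestionAnswering`, a minimal sketch; the repo id is a placeholder and the example question/context are ours:

```python
from transformers import BertTokenizer, BertForQuestionAnswering, pipeline

# Placeholder repo id; substitute the actual checkpoint path for this card.
name = "<user>/chinese-pert-large-mrc"
qa = pipeline("question-answering",
              model=BertForQuestionAnswering.from_pretrained(name),
              tokenizer=BertTokenizer.from_pretrained(name))
print(qa(question="《哈利·波特》的作者是谁?",
         context="《哈利·波特》是英国作家J·K·罗琳创作的奇幻小说系列。"))
```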
cc-by-4.0 | ['generated_from_trainer'] | false | hing-roberta-CM-run-5 This model is a fine-tuned version of [l3cube-pune/hing-roberta](https://huggingface.co/l3cube-pune/hing-roberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6447 - Accuracy: 0.7525 - Precision: 0.7030 - Recall: 0.7120 - F1: 0.7064 | d474502de146e8d7ba7e784b826d5f33 |
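A loading sketch for this sequence-classification fine-tune; the repo id is a placeholder, and the label names are not given in the card, so `id2label` below is whatever the checkpoint carries:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id; substitute the actual checkpoint path for this card.
name = "<user>/hing-roberta-CM-run-5"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("yeh movie bahut achhi thi", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])
```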
cc-by-4.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 | 1982653a578d2c6e9d1b418d473cabda |
cc-by-4.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.9492 | 1.0 | 497 | 0.7476 | 0.6157 | 0.6060 | 0.6070 | 0.5171 | | 0.7013 | 2.0 | 994 | 0.7093 | 0.6982 | 0.6716 | 0.6864 | 0.6663 | | 0.4871 | 3.0 | 1491 | 0.8294 | 0.7284 | 0.6714 | 0.6867 | 0.6723 | | 0.3838 | 4.0 | 1988 | 1.1275 | 0.7505 | 0.6969 | 0.7025 | 0.6994 | | 0.254 | 5.0 | 2485 | 1.3831 | 0.7264 | 0.6781 | 0.6975 | 0.6850 | | 0.1765 | 6.0 | 2982 | 2.0625 | 0.7384 | 0.7068 | 0.6948 | 0.6896 | | 0.1127 | 7.0 | 3479 | 1.9691 | 0.7425 | 0.6925 | 0.7065 | 0.6982 | | 0.0757 | 8.0 | 3976 | 2.3871 | 0.7425 | 0.7183 | 0.6926 | 0.6924 | | 0.0572 | 9.0 | 4473 | 2.4037 | 0.7344 | 0.6916 | 0.6929 | 0.6882 | | 0.0458 | 10.0 | 4970 | 2.3062 | 0.7586 | 0.7174 | 0.7219 | 0.7164 | | 0.0405 | 11.0 | 5467 | 2.5591 | 0.7445 | 0.6925 | 0.6964 | 0.6942 | | 0.0292 | 12.0 | 5964 | 2.5215 | 0.7384 | 0.6875 | 0.6998 | 0.6917 | | 0.0264 | 13.0 | 6461 | 2.7551 | 0.7586 | 0.7122 | 0.7035 | 0.7037 | | 0.0299 | 14.0 | 6958 | 2.6536 | 0.7465 | 0.7114 | 0.7088 | 0.7035 | | 0.0208 | 15.0 | 7455 | 2.5190 | 0.7505 | 0.6989 | 0.7083 | 0.7030 | | 0.0263 | 16.0 | 7952 | 2.7092 | 0.7485 | 0.7076 | 0.6998 | 0.6962 | | 0.0077 | 17.0 | 8449 | 2.5933 | 0.7525 | 0.7042 | 0.7143 | 0.7081 | | 0.009 | 18.0 | 8946 | 2.5831 | 0.7485 | 0.6991 | 0.7152 | 0.7050 | | 0.0108 | 19.0 | 9443 | 2.6360 | 0.7545 | 0.7050 | 0.7167 | 0.7098 | | 0.0077 | 20.0 | 9940 | 2.6447 | 0.7525 | 0.7030 | 0.7120 | 0.7064 | | ef963ab23f7e2b3051ab752a14156080 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1080 | d4f2285f431f187a5530d64dc27b3ca1 |
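A question-answering sketch for this checkpoint; the repo id is a placeholder, as the card does not state the full path:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual checkpoint path for this card.
qa = pipeline("question-answering", model="<user>/distilbert-base-uncased-finetuned-squad")
print(qa(question="What is the capital of France?",
         context="Paris is the capital and most populous city of France."))
```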
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 | 8e016d18c8fab886d6ea7b88310f0b28 |
apache-2.0 | ['chinese', 'token-classification', 'pos', 'dependency-parsing'] | false | Model Description This is a DeBERTa(V2) model pre-trained on Chinese texts (both simplified and traditional) for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [deberta-large-chinese-erlangshen-upos](https://huggingface.co/KoichiYasuoka/deberta-large-chinese-erlangshen-upos). | d3e13e3fe664c4b2ca89ee88063e6a33 |
apache-2.0 | ['chinese', 'token-classification', 'pos', 'dependency-parsing'] | false | (fragment: the tail of the `UDgoeswith.__call__` method from the model card's full example)
```
    u="# text = "+text+"\n"
    v=[(s,e) for s,e in w["offset_mapping"] if s<e]
    for i,(s,e) in enumerate(v,1):
      q=self.model.config.id2label[p[i,h[i]]].split("|")
      u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"

nlp=UDgoeswith("KoichiYasuoka/deberta-large-chinese-erlangshen-ud-goeswith")
print(nlp("我把这本书看完了"))
```
with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/). Or without ufal.chu-liu-edmonds:
```
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/deberta-large-chinese-erlangshen-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("我把这本书看完了"))
```
| 21f284f69b1b15b7a347a8496b7c9fd9 |
apache-2.0 | ['automatic-speech-recognition', 'es'] | false | exp_w2v2r_es_xls-r_age_teens-10_sixties-0_s900 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 8cc8b9d367fbd4a93507292c91d6c2d3 |
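Since the card says the model was fine-tuned with HuggingSound, transcription via that tool would look like this sketch; the repo id is inferred from the experiment name above and may differ:

```python
from huggingsound import SpeechRecognitionModel

# Repo id inferred from the experiment name above; verify before use.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_es_xls-r_age_teens-10_sixties-0_s900")
audio_paths = ["sample1_16khz.wav"]  # input audio must be sampled at 16 kHz
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```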