Dataset columns: license — string (2–30 chars) · tags — string (2–513 chars) · is_nc — bool (1 class) · readme_section — string (201–597k chars) · hash — string (32 chars)
apache-2.0
['feature-extraction', 'sentence-similarity']
false
List of sentences for comparison ``` sentences_1 = ["This is a sentence for testing miCSE.", "This is using mutual information Contrastive Sentence Embeddings model."] sentences_2 = ["This is testing miCSE.", "Similarity with miCSE"] ```
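Embeddings produced for the two sentence lists are typically compared pairwise with cosine similarity. A minimal, model-free sketch of that comparison (random vectors stand in for real miCSE embeddings, and the 768-dim size is an assumption):

```python
import numpy as np

def cosine_sim_matrix(a, b):
    # rows of a vs. rows of b -> (len(a), len(b)) matrix of cosine similarities
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# stand-ins for embeddings of sentences_1 and sentences_2
emb1 = np.random.rand(2, 768)
emb2 = np.random.rand(2, 768)
sims = cosine_sim_matrix(emb1, emb2)
print(sims.shape)  # (2, 2): one score per sentence pair
```

Each entry `sims[i, j]` scores how similar sentence `i` of the first list is to sentence `j` of the second.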
5540e5451520ade378928a48bb2abcb2
apache-2.0
['feature-extraction', 'sentence-similarity']
false
Benchmark Model results on SentEval Benchmark: <details> <summary> Click to expand </summary> ```shell +-------+-------+-------+-------+-------+--------------+-----------------+--------+ | STS12 | STS13 | STS14 | STS15 | STS16 | STSBenchmark | SICKRelatedness | S.Avg. | +-------+-------+-------+-------+-------+--------------+-----------------+--------+ | 71.71 | 83.09 | 75.46 | 83.13 | 80.22 | 79.70 | 73.62 | 78.13 | +-------+-------+-------+-------+-------+--------------+-----------------+--------+ ``` </details>
a69be4dd909f6b219e0f47a27cae50aa
apache-2.0
['feature-extraction', 'sentence-similarity']
false
Citations If you use this code in your research or want to refer to our work, please cite: ``` @article{Klein2022miCSEMI, title={miCSE: Mutual Information Contrastive Learning for Low-shot Sentence Embeddings}, author={Tassilo Klein and Moin Nabi}, journal={ArXiv}, year={2022}, volume={abs/2211.04928} } ```
038cdffe214bbc49d406b3646064b3b5
creativeml-openrail-m
[]
false
**UPDATE 9/NOV/2022: added 2 additional versions trained from the Trinart Characters base. The 5000-step version is probably the better one for most people, as it is much more editable than the 6000-step version; the 6000-step version may be good for merging with other models.** Waifu Diffusion 1.3 base model with DreamBooth training on images of Yume from Hai to Gensou no Grimgar (Grimgar of Ashes and Fantasies). Can be used in Stable Diffusion, including the extremely popular Web UI by Automatic1111, like any other model: place the .ckpt file in the correct directory. Please consult the documentation for your installation of Stable Diffusion for more specific instructions. Use "m_yumegirl" to activate. For a stronger effect, add "1girl, red hair, single braid, brown eyes" to the prompt.
86911d4154efe1a1664972f1034cf0a2
apache-2.0
[]
false
Vision-and-Language Transformer (ViLT), fine-tuned on VSR random split Vision-and-Language Transformer (ViLT) model fine-tuned on random split of [Visual Spatial Reasoning (VSR)](https://arxiv.org/abs/2205.00363). ViLT was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
784d0d4790ce4e22b1d68ed3bfafe7c3
apache-2.0
[]
false
How to use Here is how to use the model in PyTorch: ``` from transformers import ViltProcessor, ViltForImagesAndTextClassification import requests from PIL import Image image = Image.open(requests.get("https://camo.githubusercontent.com/ffcbeada14077b8e6d4b16817c91f78ba50aace210a1e4754418f1413d99797f/687474703a2f2f696d616765732e636f636f646174617365742e6f72672f747261696e323031372f3030303030303038303333362e6a7067", stream=True).raw) text = "The person is ahead of the cow." processor = ViltProcessor.from_pretrained("juletxara/vilt-vsr-random") model = ViltForImagesAndTextClassification.from_pretrained("juletxara/vilt-vsr-random") # encode the image-text pair and classify (sketch of the forward pass) encoding = processor(image, text, return_tensors="pt") outputs = model(**encoding) print(model.config.id2label[outputs.logits.argmax(-1).item()]) ```
ec1338d11bff19510e805785c2e80943
apache-2.0
[]
false
BibTeX entry and citation info ```bibtex @misc{kim2021vilt, title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision}, author={Wonjae Kim and Bokyung Son and Ildoo Kim}, year={2021}, eprint={2102.03334}, archivePrefix={arXiv}, primaryClass={stat.ML} } @article{liu2022visual, title={Visual Spatial Reasoning}, author={Liu, Fangyu and Emerson, Guy and Collier, Nigel}, journal={arXiv preprint arXiv:2205.00363}, year={2022} } ```
3a19e851b80e310955402f451cf27a43
mit
['generated_from_trainer']
false
camembert-base-mrpc This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.4286 - Accuracy: 0.8505 - F1: 0.8928 - Combined Score: 0.8716
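The Combined Score reported above is consistent with the simple mean of accuracy and F1, a common convention on GLUE model cards:

```python
# verify the reported Combined Score from accuracy and F1
accuracy, f1 = 0.8505, 0.8928
combined = (accuracy + f1) / 2  # 0.87165, reported as 0.8716 after rounding
print(combined)
```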
28f556d801cc069f2519b603366e3330
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
cv1.2 Dreambooth model trained by ukeeba with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
ba605fd9c04734b83a6ceae2d1568b8e
odc-by
[]
false
Basically, generate the images by saying "dnd[RACE] person". I know some aren't people, but it's what I've got to work with. ;) Make sure there are no spaces or punctuation in the "dnd[RACE HERE]" section, e.g. "a portrait of dndYuanTi person, intricate, elegant, highly detailed, digital painting, artstation, trending, Volumetric lighting". Here is a list of all of them (Autognome is VERY undertrained...): * dndAarakocra * dndAasimar * dndAirGenasi * dndAstralElf * dndAutognome * dndBugbear * dndCentaur * dndChangeling * dndDeepGnome * dndDragonborn * dndDwarf * dndEarthGenasi * dndEladrin * dndElf * dndFairy * dndFirbolg * dndFireGenasi * dndGenasi * dndGiff * dndGith * dndGnome * dndGoblin * dndGoliath * dndGrung * dndHadozee * dndHalfElf * dndHalfling * dndHalfOrc * dndHarengon * dndHobgoblin * dndHuman * dndKalashtar * dndKenku * dndKobold * dndLeonin * dndLizardfolk * dndLocathah * dndLoxodon * dndMinotaur * dndOrc * dndOwlin * dndPlasmoid * dndRebornLineage * dndSatyr * dndSeaElf * dndShadarKai * dndShifter * dndSimicHybrid * dndTabaxi * dndThriKreen * dndTiefling * dndTortle * dndTriton * dndVedalken * dndVerdan * dndWarforged * dndWaterGenasi * dndYuanTi
eab46641407abbcf92343a1877439b81
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4526 - Wer: 0.3411
100f866e42dc6403b9f32711d37a7072
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.7503 | 4.0 | 500 | 2.4125 | 1.0006 | | 0.9595 | 8.0 | 1000 | 0.4833 | 0.4776 | | 0.3018 | 12.0 | 1500 | 0.4333 | 0.4062 | | 0.1751 | 16.0 | 2000 | 0.4474 | 0.3697 | | 0.1288 | 20.0 | 2500 | 0.4445 | 0.3558 | | 0.1073 | 24.0 | 3000 | 0.4695 | 0.3464 | | 0.0816 | 28.0 | 3500 | 0.4526 | 0.3411 |
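The Wer column is word error rate: the word-level edit distance between reference and hypothesis transcripts, divided by the number of reference words. A small illustrative implementation (not the exact metric code used during training):

```python
def wer(ref, hyp):
    r, h = ref.split(), hyp.split()
    # Levenshtein distance over words (substitutions, deletions, insertions)
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat", "the cat sat down"))  # 1 insertion / 3 words ≈ 0.33
```

A Wer of 0.3411 therefore means roughly one word in three is wrong relative to the reference.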
baeffd258e3773ce15499a918df6da1d
apache-2.0
['generated_from_trainer']
false
distilbert_add_GLUE_Experiment_logit_kd_pretrain_wnli This model is a fine-tuned version of [gokuls/distilbert_add_pre-training-complete](https://huggingface.co/gokuls/distilbert_add_pre-training-complete) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3435 - Accuracy: 0.5634
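The 0.5634 accuracy matches the majority-class baseline on the WNLI validation set (40 of its 71 examples share one label, counts assumed here), which suggests the model is effectively predicting a single class:

```python
# WNLI validation set: 71 examples, 40 in the majority class (assumed counts)
majority_baseline = 40 / 71
print(f"{majority_baseline:.4f}")  # 0.5634
```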
3fc357ad13793874cd35e53b4b875345
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3566 | 1.0 | 3 | 0.3453 | 0.5634 | | 0.347 | 2.0 | 6 | 0.3435 | 0.5634 | | 0.3501 | 3.0 | 9 | 0.3465 | 0.5775 | | 0.3482 | 4.0 | 12 | 0.3435 | 0.5634 | | 0.3484 | 5.0 | 15 | 0.3458 | 0.5634 | | 0.3481 | 6.0 | 18 | 0.3478 | 0.5070 | | 0.3493 | 7.0 | 21 | 0.3444 | 0.5634 | | 0.3477 | 8.0 | 24 | 0.3446 | 0.5634 | | 0.3473 | 9.0 | 27 | 0.3456 | 0.5634 |
cc42ae18e1c8acb0ad3638b985b1a86f
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-rte This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3557
e717cb5d45b7d983628429195759ba0d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5903 | 1.6 | 500 | 2.1820 | | 2.4763 | 3.21 | 1000 | 2.4737 | | 2.3778 | 4.81 | 1500 | 2.2902 | | 2.2735 | 6.41 | 2000 | 2.3557 |
2eed95f3435728344e46a9d0796af701
mit
['generated_from_keras_callback']
false
ishaankul67/Warsaw_Pact-clustered This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1010 - Train End Logits Accuracy: 0.9653 - Train Start Logits Accuracy: 0.9826 - Validation Loss: 0.0420 - Validation End Logits Accuracy: 1.0 - Validation Start Logits Accuracy: 1.0 - Epoch: 0
a6a7f2879cd818a386e8ffb9efaaf364
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.1010 | 0.9653 | 0.9826 | 0.0420 | 1.0 | 1.0 | 0 |
618e5d129d205f98bbcbe7054338059c
apache-2.0
['generated_from_trainer']
false
Tagged_Uni_250v0_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni250v0_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3679 - Precision: 0.4748 - Recall: 0.3732 - F1: 0.4179 - Accuracy: 0.8847
2f66e4e73880d99bea486ed032f0c5fd
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 91 | 0.4333 | 0.2856 | 0.1851 | 0.2246 | 0.8440 | | No log | 2.0 | 182 | 0.3466 | 0.3907 | 0.3038 | 0.3418 | 0.8794 | | No log | 3.0 | 273 | 0.3679 | 0.4748 | 0.3732 | 0.4179 | 0.8847 |
52c4ca1f8784660513ec2d320aa8dfc6
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
false
LoRA DreamBooth - margret-stalizburg-v1-lora These are LoRA adaptation weights for [andite/anything-v4.0](https://huggingface.co/andite/anything-v4.0). The weights were trained on the instance prompt "margret stalizburg" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. Test prompt: margret stalizburg ![image_0](test_images/image_0.png) ![image_1](test_images/image_1.png) ![image_2](test_images/image_2.png) ![image_3](test_images/image_3.png)
15c2794d634cac8052c951626ab35c86
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
wedadams_pyros_bj Dreambooth model trained by tftgregrge with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(24).jpg) ![1](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(13).jpg) ![2](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(4).jpg) ![3](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(39).jpg) ![4](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(34).jpg) ![5](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(42).jpg) ![6](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(10).jpg) ![7](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(26).jpg) ![8](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(23).jpg) ![9](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(25).jpg) ![10](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(30).jpg) ![11](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(14).jpg) ![12](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(40).jpg) 
![13](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(19).jpg) ![14](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(11).jpg) ![15](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(3).jpg) ![16](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(9).jpg) ![17](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(18).jpg) ![18](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(38).jpg) ![19](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(8).jpg) ![20](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(32).jpg) ![21](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(21).jpg) ![22](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(33).jpg) ![23](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(35).jpg) ![24](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(22).jpg) ![25](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(29).jpg) ![26](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(2).jpg) ![27](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(1).jpg) ![28](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(16).jpg) ![29](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(17).jpg) ![30](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(5).jpg) ![31](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(41).jpg) 
![32](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(31).jpg) ![33](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(27).jpg) ![34](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(6).jpg) ![35](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(7).jpg) ![36](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(36).jpg) ![37](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(37).jpg) ![38](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(28).jpg) ![39](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(12).jpg) ![40](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(15).jpg) ![41](https://huggingface.co/tftgregrge/wedadams-pyros-bj/resolve/main/sample_images/wedadams_(20).jpg)
b890bef9664d7df6f5d31fe815fb5f55
apache-2.0
['thai', 'masked-lm', 'wikipedia']
false
Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts. You can fine-tune `roberta-base-thai-spm` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-ud-head), and so on.
6c684c2d81b570f737c8bf781bdeed54
apache-2.0
['thai', 'masked-lm', 'wikipedia']
false
How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-spm") ```
947f625db2b8924f3f5d691cc6534499
mit
['generated_from_keras_callback']
false
Sushant45/Warsaw_Pact-clustered This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0812 - Train End Logits Accuracy: 0.9757 - Train Start Logits Accuracy: 0.9826 - Validation Loss: 0.1680 - Validation End Logits Accuracy: 1.0 - Validation Start Logits Accuracy: 1.0 - Epoch: 0
0b03d31a833b37bfc23648a4f0d59645
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.0812 | 0.9757 | 0.9826 | 0.1680 | 1.0 | 1.0 | 0 |
80fdeaee5c24fd158c239d989a64b7c7
mit
[]
false
[![Darknet Continuous Integration](https://github.com/AlexeyAB/darknet/workflows/Darknet%20Continuous%20Integration/badge.svg)](https://github.com/AlexeyAB/darknet/actions?query=workflow%3A%22Darknet+Continuous+Integration%22)
7f2cd47c403296a776a444862d18f2ac
mit
[]
false
Model YOLOv7 surpasses all known object detectors in both speed and accuracy in the range from 5 FPS to 160 FPS, and has the highest accuracy (56.8% AP) among all known real-time object detectors running at 30 FPS or higher on a V100 GPU. The YOLOv7-E6 object detector (56 FPS V100, 55.9% AP) outperforms both the transformer-based detector SWIN-L Cascade-Mask R-CNN (9.2 FPS A100, 53.9% AP) by 509% in speed and 2% in accuracy, and the convolution-based detector ConvNeXt-XL Cascade-Mask R-CNN (8.6 FPS A100, 55.2% AP) by 551% in speed and 0.7% AP in accuracy. YOLOv7 also outperforms YOLOR, YOLOX, Scaled-YOLOv4, YOLOv5, DETR, Deformable DETR, DINO-5scale-R50, ViT-Adapter-B, and many other object detectors in speed and accuracy.
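The percentage speed figures follow directly from the FPS numbers quoted above:

```python
def pct_faster(fps_a, fps_b):
    # relative speed advantage of detector A over detector B, in percent
    return (fps_a - fps_b) / fps_b * 100

print(round(pct_faster(56, 9.2)))  # 509 (YOLOv7-E6 vs SWIN-L Cascade-Mask R-CNN)
print(round(pct_faster(56, 8.6)))  # 551 (YOLOv7-E6 vs ConvNeXt-XL Cascade-Mask R-CNN)
```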
6eb93f26a2f461a9263d6ce0f86f2a32
mit
[]
false
Citation ``` @misc{bochkovskiy2020yolov4, title={YOLOv4: Optimal Speed and Accuracy of Object Detection}, author={Alexey Bochkovskiy and Chien-Yao Wang and Hong-Yuan Mark Liao}, year={2020}, eprint={2004.10934}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ``` @InProceedings{Wang_2021_CVPR, author = {Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark}, title = {{Scaled-YOLOv4}: Scaling Cross Stage Partial Network}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2021}, pages = {13029-13038} } ```
e9af6c70ac5342486a78b3332ed0bf05
cc-by-sa-4.0
['chinese', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is a DeBERTa(V2) model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [deberta-base-chinese](https://huggingface.co/KoichiYasuoka/deberta-base-chinese). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
a007b3ecdf798aa91711eb72bbe3bd6e
cc-by-sa-4.0
['chinese', 'token-classification', 'pos', 'dependency-parsing']
false
How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-chinese-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-base-chinese-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/deberta-base-chinese-upos") ```
0ca572cd3156e8f87b08840db3372d5f
mit
[]
false
Model description This is a BERT-base model pre-trained on Indonesian Wikipedia and Indonesian newspapers using a masked language modeling (MLM) objective. This model is uncased. It is one of several language models that have been pre-trained on Indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc.) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers)
b8c2ddb7d4d45f073e3c8da7c90b9b42
mit
[]
false
How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='cahya/bert-base-indonesian-1.5G') >>> unmasker("Ibu ku sedang bekerja [MASK] supermarket") [{'sequence': '[CLS] ibu ku sedang bekerja di supermarket [SEP]', 'score': 0.7983310222625732, 'token': 1495}, {'sequence': '[CLS] ibu ku sedang bekerja. supermarket [SEP]', 'score': 0.090003103017807, 'token': 17}, {'sequence': '[CLS] ibu ku sedang bekerja sebagai supermarket [SEP]', 'score': 0.025469014421105385, 'token': 1600}, {'sequence': '[CLS] ibu ku sedang bekerja dengan supermarket [SEP]', 'score': 0.017966199666261673, 'token': 1555}, {'sequence': '[CLS] ibu ku sedang bekerja untuk supermarket [SEP]', 'score': 0.016971781849861145, 'token': 1572}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel model_name='cahya/bert-base-indonesian-1.5G' tokenizer = BertTokenizer.from_pretrained(model_name) model = BertModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in Tensorflow: ```python from transformers import BertTokenizer, TFBertModel model_name='cahya/bert-base-indonesian-1.5G' tokenizer = BertTokenizer.from_pretrained(model_name) model = TFBertModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ```
801a76db231ead9836ea6025890359c4
mit
[]
false
Training data This model was pre-trained with 522MB of Indonesian Wikipedia and 1GB of [Indonesian newspapers](https://huggingface.co/datasets/id_newspapers_2018). The texts are lowercased and tokenized using WordPiece with a vocabulary size of 32,000. The inputs of the model are then of the form: ```[CLS] Sentence A [SEP] Sentence B [SEP]```
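For illustration only, the two-segment layout can be assembled as a plain string; the real tokenizer inserts these special tokens itself and works on WordPiece ids, not text:

```python
def bert_pair_input(sentence_a, sentence_b):
    # mirrors the [CLS] A [SEP] B [SEP] layout used during pre-training
    return f"[CLS] {sentence_a} [SEP] {sentence_b} [SEP]"

print(bert_pair_input("ibu ku sedang bekerja", "di supermarket"))
```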
8980bd1c1392d1d9fca213ead6ba9fe2
apache-2.0
['vision', 'depth-estimation', 'generated_from_trainer']
false
glpn-nyu-finetuned-diode-230113-130735 This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset. It achieves the following results on the evaluation set: - Loss: 0.4320 - Mae: 0.4213 - Rmse: 0.6133 - Abs Rel: 0.4298 - Log Mae: 0.1697 - Log Rmse: 0.2216 - Delta1: 0.3800 - Delta2: 0.6396 - Delta3: 0.8189
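The Delta1/2/3 columns are, by the standard depth-estimation convention (assumed here, not stated on the card), threshold accuracies: the fraction of pixels whose ratio max(pred/gt, gt/pred) falls below 1.25, 1.25² and 1.25³ respectively. A sketch under that assumption:

```python
import numpy as np

def delta_accuracies(pred, gt):
    # threshold accuracy at 1.25, 1.25^2, 1.25^3
    ratio = np.maximum(pred / gt, gt / pred)
    return [float((ratio < 1.25 ** k).mean()) for k in (1, 2, 3)]

gt = np.array([1.0, 2.0, 4.0])
pred = np.array([1.0, 2.6, 4.0])  # one pixel off by a factor of 1.3
print(delta_accuracies(pred, gt))
```

The off-by-1.3x pixel fails the 1.25 threshold but passes 1.25² ≈ 1.56, so Delta1 < Delta2 here, as in the table above.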
dcf9ea0b03840e767e9a2e0157ecbd88
apache-2.0
['vision', 'depth-estimation', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Mae | Rmse | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:| | 1.0073 | 1.0 | 72 | 0.4927 | 0.4684 | 0.6425 | 0.5680 | 0.1955 | 0.2515 | 0.3154 | 0.5289 | 0.7834 | | 0.4694 | 2.0 | 144 | 0.4560 | 0.4425 | 0.6285 | 0.4674 | 0.1818 | 0.2341 | 0.3395 | 0.6061 | 0.7873 | | 0.4632 | 3.0 | 216 | 0.4817 | 0.4646 | 0.6341 | 0.5412 | 0.1930 | 0.2453 | 0.3181 | 0.5368 | 0.7491 | | 0.4363 | 4.0 | 288 | 0.4589 | 0.4379 | 0.6228 | 0.4880 | 0.1793 | 0.2348 | 0.3588 | 0.6025 | 0.7952 | | 0.4636 | 5.0 | 360 | 0.4767 | 0.4545 | 0.6301 | 0.5367 | 0.1878 | 0.2430 | 0.3279 | 0.5716 | 0.7705 | | 0.4642 | 6.0 | 432 | 0.4437 | 0.4185 | 0.6200 | 0.4405 | 0.1689 | 0.2283 | 0.4071 | 0.6531 | 0.8091 | | 0.409 | 7.0 | 504 | 0.4787 | 0.4542 | 0.6291 | 0.5399 | 0.1873 | 0.2430 | 0.3345 | 0.5679 | 0.7648 | | 0.4081 | 8.0 | 576 | 0.4545 | 0.4359 | 0.6258 | 0.4554 | 0.1779 | 0.2311 | 0.3717 | 0.6035 | 0.7952 | | 0.4146 | 9.0 | 648 | 0.4726 | 0.4523 | 0.6293 | 0.5108 | 0.1870 | 0.2403 | 0.3394 | 0.5692 | 0.7571 | | 0.392 | 10.0 | 720 | 0.4643 | 0.4453 | 0.6249 | 0.5081 | 0.1831 | 0.2372 | 0.3380 | 0.5881 | 0.7917 | | 0.3722 | 11.0 | 792 | 0.4670 | 0.4475 | 0.6245 | 0.4957 | 0.1838 | 0.2355 | 0.3413 | 0.5739 | 0.7689 | | 0.4397 | 12.0 | 864 | 0.4548 | 0.4367 | 0.6262 | 0.4604 | 0.1780 | 0.2319 | 0.3664 | 0.6081 | 0.7903 | | 0.43 | 13.0 | 936 | 0.4281 | 0.4223 | 0.6230 | 0.3974 | 0.1691 | 0.2207 | 0.3975 | 0.6426 | 0.7943 | | 0.3976 | 14.0 | 1008 | 0.4592 | 0.4470 | 0.6249 | 0.4759 | 0.1827 | 0.2321 | 0.3482 | 0.5784 | 0.7507 | | 0.4251 | 15.0 | 1080 | 0.4515 | 0.4366 | 0.6205 | 0.4589 | 0.1773 | 0.2285 | 0.3689 | 0.5990 | 0.7785 | | 0.4007 | 16.0 | 1152 | 0.4859 | 0.4668 | 0.6347 | 0.5570 | 0.1939 | 0.2467 | 0.3156 | 0.5378 | 0.7265 | | 0.376 | 17.0 | 1224 | 0.4529 | 0.4331 | 0.6195 | 0.4421 | 
0.1752 | 0.2260 | 0.3795 | 0.6016 | 0.7702 | | 0.4028 | 18.0 | 1296 | 0.5027 | 0.4775 | 0.6420 | 0.6169 | 0.1993 | 0.2569 | 0.3098 | 0.5228 | 0.7035 | | 0.3816 | 19.0 | 1368 | 0.4869 | 0.4634 | 0.6342 | 0.5565 | 0.1924 | 0.2473 | 0.3276 | 0.5448 | 0.7370 | | 0.4092 | 20.0 | 1440 | 0.4317 | 0.4155 | 0.6164 | 0.4083 | 0.1661 | 0.2218 | 0.4003 | 0.6569 | 0.8123 | | 0.3673 | 21.0 | 1512 | 0.4433 | 0.4326 | 0.6208 | 0.4295 | 0.1750 | 0.2244 | 0.3751 | 0.6068 | 0.7879 | | 0.3698 | 22.0 | 1584 | 0.4607 | 0.4322 | 0.6216 | 0.4981 | 0.1758 | 0.2354 | 0.3831 | 0.6163 | 0.7906 | | 0.3771 | 23.0 | 1656 | 0.4668 | 0.4478 | 0.6255 | 0.5075 | 0.1841 | 0.2373 | 0.3390 | 0.5819 | 0.7697 | | 0.4343 | 24.0 | 1728 | 0.4532 | 0.4331 | 0.6203 | 0.4722 | 0.1767 | 0.2312 | 0.3587 | 0.6166 | 0.8087 | | 0.4011 | 25.0 | 1800 | 0.4499 | 0.4327 | 0.6213 | 0.4519 | 0.1755 | 0.2279 | 0.3716 | 0.6152 | 0.7844 | | 0.3714 | 26.0 | 1872 | 0.4460 | 0.4254 | 0.6188 | 0.4495 | 0.1716 | 0.2278 | 0.3932 | 0.6352 | 0.7916 | | 0.3436 | 27.0 | 1944 | 0.4360 | 0.4182 | 0.6165 | 0.4192 | 0.1682 | 0.2224 | 0.3894 | 0.6524 | 0.8145 | | 0.3698 | 28.0 | 2016 | 0.4694 | 0.4536 | 0.6274 | 0.5040 | 0.1863 | 0.2369 | 0.3356 | 0.5667 | 0.7469 | | 0.365 | 29.0 | 2088 | 0.4288 | 0.4139 | 0.6156 | 0.4025 | 0.1655 | 0.2199 | 0.4028 | 0.6623 | 0.8109 | | 0.3723 | 30.0 | 2160 | 0.4337 | 0.4148 | 0.6141 | 0.4192 | 0.1661 | 0.2215 | 0.4044 | 0.6578 | 0.8073 | | 0.365 | 31.0 | 2232 | 0.4529 | 0.4309 | 0.6192 | 0.4751 | 0.1755 | 0.2314 | 0.3770 | 0.6115 | 0.7909 | | 0.3571 | 32.0 | 2304 | 0.4302 | 0.4151 | 0.6170 | 0.4134 | 0.1663 | 0.2227 | 0.4089 | 0.6611 | 0.8078 | | 0.3727 | 33.0 | 2376 | 0.4599 | 0.4352 | 0.6214 | 0.4937 | 0.1776 | 0.2348 | 0.3659 | 0.6120 | 0.7949 | | 0.3538 | 34.0 | 2448 | 0.4391 | 0.4257 | 0.6161 | 0.4404 | 0.1720 | 0.2248 | 0.3768 | 0.6317 | 0.8042 | | 0.3306 | 35.0 | 2520 | 0.4393 | 0.4223 | 0.6198 | 0.4328 | 0.1702 | 0.2262 | 0.3886 | 0.6493 | 0.8062 | | 0.3369 | 36.0 | 2592 | 0.4496 | 0.4316 | 
0.6182 | 0.4642 | 0.1751 | 0.2289 | 0.3712 | 0.6124 | 0.8005 | | 0.3389 | 37.0 | 2664 | 0.4573 | 0.4376 | 0.6213 | 0.4897 | 0.1787 | 0.2338 | 0.3628 | 0.6014 | 0.7932 | | 0.3767 | 38.0 | 2736 | 0.4558 | 0.4366 | 0.6216 | 0.4840 | 0.1786 | 0.2334 | 0.3566 | 0.6064 | 0.7973 | | 0.3462 | 39.0 | 2808 | 0.4580 | 0.4380 | 0.6221 | 0.4815 | 0.1785 | 0.2328 | 0.3640 | 0.6020 | 0.7850 | | 0.3834 | 40.0 | 2880 | 0.4664 | 0.4459 | 0.6245 | 0.5155 | 0.1836 | 0.2385 | 0.3426 | 0.5782 | 0.7944 | | 0.3564 | 41.0 | 2952 | 0.4452 | 0.4271 | 0.6175 | 0.4563 | 0.1733 | 0.2282 | 0.3749 | 0.6269 | 0.8081 | | 0.3571 | 42.0 | 3024 | 0.4357 | 0.4189 | 0.6151 | 0.4360 | 0.1686 | 0.2243 | 0.3947 | 0.6482 | 0.8163 | | 0.345 | 43.0 | 3096 | 0.4285 | 0.4130 | 0.6114 | 0.4173 | 0.1653 | 0.2202 | 0.4034 | 0.6611 | 0.8223 | | 0.3163 | 44.0 | 3168 | 0.4473 | 0.4274 | 0.6176 | 0.4624 | 0.1732 | 0.2288 | 0.3790 | 0.6245 | 0.8095 | | 0.3331 | 45.0 | 3240 | 0.4392 | 0.4214 | 0.6139 | 0.4429 | 0.1699 | 0.2244 | 0.3887 | 0.6388 | 0.8081 | | 0.3574 | 46.0 | 3312 | 0.4487 | 0.4230 | 0.6156 | 0.4608 | 0.1710 | 0.2282 | 0.3860 | 0.6431 | 0.8063 | | 0.3703 | 47.0 | 3384 | 0.4342 | 0.4176 | 0.6179 | 0.4286 | 0.1678 | 0.2247 | 0.3918 | 0.6668 | 0.8098 | | 0.325 | 48.0 | 3456 | 0.4390 | 0.4238 | 0.6150 | 0.4500 | 0.1715 | 0.2256 | 0.3695 | 0.6334 | 0.8216 | | 0.3494 | 49.0 | 3528 | 0.4364 | 0.4182 | 0.6165 | 0.4348 | 0.1680 | 0.2248 | 0.4041 | 0.6539 | 0.8104 | | 0.3439 | 50.0 | 3600 | 0.4401 | 0.4252 | 0.6156 | 0.4414 | 0.1716 | 0.2243 | 0.3831 | 0.6260 | 0.8042 | | 0.3235 | 51.0 | 3672 | 0.4459 | 0.4258 | 0.6173 | 0.4607 | 0.1728 | 0.2287 | 0.3819 | 0.6272 | 0.8106 | | 0.3197 | 52.0 | 3744 | 0.4341 | 0.4205 | 0.6153 | 0.4291 | 0.1691 | 0.2226 | 0.3874 | 0.6429 | 0.8173 | | 0.3231 | 53.0 | 3816 | 0.4499 | 0.4297 | 0.6180 | 0.4654 | 0.1745 | 0.2290 | 0.3730 | 0.6166 | 0.8053 | | 0.3182 | 54.0 | 3888 | 0.4407 | 0.4242 | 0.6145 | 0.4501 | 0.1714 | 0.2252 | 0.3762 | 0.6366 | 0.8124 | | 0.334 | 55.0 | 3960 | 0.4518 
| 0.4335 | 0.6176 | 0.4773 | 0.1768 | 0.2304 | 0.3591 | 0.6065 | 0.8111 | | 0.3198 | 56.0 | 4032 | 0.4505 | 0.4322 | 0.6173 | 0.4725 | 0.1760 | 0.2298 | 0.3637 | 0.6131 | 0.8025 | | 0.3165 | 57.0 | 4104 | 0.4378 | 0.4248 | 0.6174 | 0.4369 | 0.1720 | 0.2246 | 0.3729 | 0.6377 | 0.8137 | | 0.3269 | 58.0 | 4176 | 0.4372 | 0.4275 | 0.6156 | 0.4415 | 0.1730 | 0.2240 | 0.3675 | 0.6276 | 0.8095 | | 0.3224 | 59.0 | 4248 | 0.4359 | 0.4244 | 0.6149 | 0.4351 | 0.1711 | 0.2231 | 0.3721 | 0.6366 | 0.8090 | | 0.3104 | 60.0 | 4320 | 0.4317 | 0.4209 | 0.6146 | 0.4284 | 0.1696 | 0.2220 | 0.3799 | 0.6395 | 0.8179 | | 0.3248 | 61.0 | 4392 | 0.4323 | 0.4207 | 0.6138 | 0.4268 | 0.1694 | 0.2216 | 0.3864 | 0.6386 | 0.8148 | | 0.303 | 62.0 | 4464 | 0.4309 | 0.4189 | 0.6126 | 0.4264 | 0.1685 | 0.2213 | 0.3853 | 0.6453 | 0.8194 | | 0.3126 | 63.0 | 4536 | 0.4308 | 0.4206 | 0.6141 | 0.4229 | 0.1693 | 0.2208 | 0.3783 | 0.6447 | 0.8162 | | 0.3099 | 64.0 | 4608 | 0.4330 | 0.4239 | 0.6149 | 0.4298 | 0.1709 | 0.2218 | 0.3709 | 0.6323 | 0.8182 | | 0.3075 | 65.0 | 4680 | 0.4322 | 0.4222 | 0.6144 | 0.4276 | 0.1701 | 0.2217 | 0.3784 | 0.6374 | 0.8159 | | 0.3024 | 66.0 | 4752 | 0.4393 | 0.4269 | 0.6155 | 0.4456 | 0.1729 | 0.2249 | 0.3722 | 0.6245 | 0.8100 | | 0.3319 | 67.0 | 4824 | 0.4385 | 0.4273 | 0.6155 | 0.4402 | 0.1728 | 0.2238 | 0.3722 | 0.6244 | 0.8085 | | 0.3163 | 68.0 | 4896 | 0.4334 | 0.4215 | 0.6128 | 0.4305 | 0.1699 | 0.2216 | 0.3814 | 0.6379 | 0.8145 | | 0.3219 | 69.0 | 4968 | 0.4298 | 0.4197 | 0.6131 | 0.4215 | 0.1688 | 0.2203 | 0.3821 | 0.6453 | 0.8170 | | 0.3155 | 70.0 | 5040 | 0.4295 | 0.4199 | 0.6134 | 0.4219 | 0.1687 | 0.2204 | 0.3846 | 0.6453 | 0.8164 | | 0.3265 | 71.0 | 5112 | 0.4294 | 0.4194 | 0.6123 | 0.4232 | 0.1687 | 0.2203 | 0.3804 | 0.6468 | 0.8203 | | 0.3231 | 72.0 | 5184 | 0.4338 | 0.4231 | 0.6138 | 0.4333 | 0.1707 | 0.2222 | 0.3775 | 0.6340 | 0.8166 | | 0.3077 | 73.0 | 5256 | 0.4327 | 0.4221 | 0.6134 | 0.4315 | 0.1702 | 0.2219 | 0.3800 | 0.6361 | 0.8185 | | 0.3178 | 74.0 | 
5328 | 0.4312 | 0.4203 | 0.6126 | 0.4278 | 0.1693 | 0.2212 | 0.3813 | 0.6417 | 0.8194 | | 0.3157 | 75.0 | 5400 | 0.4320 | 0.4213 | 0.6133 | 0.4298 | 0.1697 | 0.2216 | 0.3800 | 0.6396 | 0.8189 |
16b05284aff4f90fd4b494a195d0a8ee
apache-2.0
['generated_from_trainer']
false
stance_detection This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) for stance detection on tweets about 26 US SPAC stock mergers. It achieves the following results on the evaluation set: - Loss: 0.4906 - Accuracy: 0.8409 - F1w: 0.8574 - Acc0: 0.8293 - Acc1: 0.6 - Acc2: 0.7652 - Acc3: 0.8637
4441dc54e3b9bd3465ce730cb66f59c6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1w | Acc0 | Acc1 | Acc2 | Acc3 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:----:|:------:|:------:|
| 0.7748 | 1.0 | 194 | 0.5172 | 0.8158 | 0.8297 | 0.8699 | 0.0 | 0.7429 | 0.8248 |
| 0.5181 | 2.0 | 388 | 0.4692 | 0.8509 | 0.8587 | 0.8699 | 0.4 | 0.7429 | 0.8743 |
| 0.3868 | 3.0 | 582 | 0.4906 | 0.8409 | 0.8574 | 0.8293 | 0.6 | 0.7652 | 0.8637 |
193738740dee1bc1f0f6d7986c1061df
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
messy_sketch_art_style

Dreambooth model trained by apurik-parv with [ShivamShrirao's DreamBooth implementation].

Instance prompt: **meartsty**

As the name implies, the model is trained on messy-art-style sketch/doodle images for 50000 steps. Simple prompts replicate the style faithfully; complicated and contradictory prompts will add elements of noise to the image. Feel free to experiment with it.
fc8269cb46d6bb82a9d54821a51f8a4d
apache-2.0
['generated_from_trainer']
false
text-to-sparql-t5-base-2021-10-18_16-15

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1294
- Gen Len: 19.0
- Bertscorer-p: 0.5827
- Bertscorer-r: 0.0812
- Bertscorer-f1: 0.3202
- Sacrebleu-score: 5.9410
- Sacrebleu-precisions: [92.24641734333713, 84.24354361048307, 78.78523204758982, 75.43428275229601]
- Bleu-bp: 0.0721
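The SacreBLEU score above is consistent with the reported parts: BLEU is the brevity penalty times the geometric mean of the n-gram precisions. A minimal sketch of that relation, using the numbers from this card (the brevity penalty is rounded here, so the result only matches approximately):

```python
import math

def bleu_from_parts(precisions, brevity_penalty):
    # BLEU = BP * exp(mean of log n-gram precisions)
    log_mean = sum(math.log(p) for p in precisions) / len(precisions)
    return brevity_penalty * math.exp(log_mean)

precisions = [92.24641734333713, 84.24354361048307, 78.78523204758982, 75.43428275229601]
score = bleu_from_parts(precisions, 0.0721)  # close to the reported 5.9410
```

The very low brevity penalty (0.0721) indicates the generated queries are much shorter than the references, which is what drags the otherwise high n-gram precisions down to a BLEU of ~5.9.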
bc897b980515a5d5cbb8979e5216e12b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:--------------------:|:-------:|
| nan | 1.0 | 4772 | 0.1294 | 19.0 | 0.5827 | 0.0812 | 0.3202 | 5.9410 | [92.24641734333713, 84.24354361048307, 78.78523204758982, 75.43428275229601] | 0.0721 |
342c9f754b5c088fbbffd4b473441ba8
apache-2.0
['generated_from_trainer']
false
t5_8_3e-5_datav2_min30_lp2_sample

This model is a fine-tuned version of [KETI-AIR/ke-t5-large-ko](https://huggingface.co/KETI-AIR/ke-t5-large-ko) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 6.2375
- Rouge1: 24.1102
- Rouge2: 5.3137
- Rougel: 16.1086
- Bleu1: 18.6424
- Bleu2: 8.0483
- Bleu3: 2.7046
- Bleu4: 0.7308
- Gen Len: 36.4012
48b8398311266e9bc7d469f56ff17de7
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
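With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps up linearly over the first 10% of optimizer steps and then decays linearly to zero. A minimal sketch of that schedule (the total step count below is illustrative, not taken from this run):

```python
def linear_schedule_lr(step, total_steps, base_lr=3e-05, warmup_ratio=0.1):
    # Linear warmup over the first `warmup_ratio` of steps, then linear decay to 0.
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)

peak = linear_schedule_lr(100, 1000)    # end of warmup: full 3e-05
final = linear_schedule_lr(1000, 1000)  # end of training: 0.0
```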
6f6d641c848d10479285b78216f89050
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Bleu1 | Bleu2 | Bleu3 | Bleu4 | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:-------:|:------:|:------:|:------:|:-------:|
| 4.1641 | 1.04 | 5000 | 6.8094 | 21.6187 | 4.959 | 14.8344 | 16.9553 | 7.4791 | 2.8017 | 1.1852 | 38.0426 |
| 3.1804 | 2.08 | 10000 | 5.6664 | 22.2631 | 5.127 | 15.5533 | 16.881 | 7.515 | 2.8628 | 1.0614 | 33.7325 |
| 2.779 | 3.12 | 15000 | 5.3350 | 22.5781 | 5.1137 | 15.7717 | 16.8632 | 7.3067 | 2.7117 | 0.9906 | 31.459 |
| 2.4111 | 4.15 | 20000 | 5.2687 | 24.4915 | 6.003 | 16.8096 | 18.5998 | 8.54 | 3.4084 | 1.1511 | 32.7477 |
| 2.2192 | 5.19 | 25000 | 5.3300 | 24.9661 | 6.0773 | 16.8486 | 19.0105 | 8.6794 | 3.4052 | 1.3281 | 32.9696 |
| 1.9306 | 6.23 | 30000 | 5.4806 | 24.8662 | 5.9711 | 16.235 | 19.2093 | 8.7044 | 3.2412 | 1.0675 | 35.0973 |
| 1.6696 | 7.27 | 35000 | 5.6865 | 24.3913 | 5.6936 | 16.4663 | 18.5884 | 8.3035 | 2.9593 | 1.0997 | 34.617 |
| 1.4566 | 8.31 | 40000 | 5.8677 | 24.9166 | 5.8251 | 16.647 | 19.0703 | 8.5159 | 3.3477 | 1.1257 | 35.1763 |
| 1.2808 | 9.35 | 45000 | 6.2375 | 24.1102 | 5.3137 | 16.1086 | 18.6424 | 8.0483 | 2.7046 | 0.7308 | 36.4012 |
e959f6e9389363f84a2e0d663c41c290
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-OnionOrNot

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2039
- Accuracy: 0.9224
- F1: 0.9218
7186973c15ac27407088fef4cf9e9dba
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3334 | 1.0 | 300 | 0.2382 | 0.9024 | 0.9011 |
| 0.1822 | 2.0 | 600 | 0.2039 | 0.9224 | 0.9218 |
91f94898bf8eb1c8eb1a749ccf762d35
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
fastbooth-jsjessy-1400

Dreambooth model trained by eicu with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook.

Test the concept via the A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb).

Sample pictures of this concept:
44b387e16c9b6e6c709b50e07facef84
mit
[]
false
MSG on Stable Diffusion

This is the `<MSG69>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<MSG69> 0](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/5.jpeg)
![<MSG69> 1](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/8.jpeg)
![<MSG69> 2](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/12.jpeg)
![<MSG69> 3](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/15.jpeg)
![<MSG69> 4](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/17.jpeg)
![<MSG69> 5](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/24.jpeg)
![<MSG69> 6](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/3.jpeg)
![<MSG69> 7](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/0.jpeg)
![<MSG69> 8](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/22.jpeg)
![<MSG69> 9](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/18.jpeg)
![<MSG69> 10](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/25.jpeg)
![<MSG69> 11](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/23.jpeg)
![<MSG69> 12](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/9.jpeg)
![<MSG69> 13](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/16.jpeg)
![<MSG69> 14](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/6.jpeg)
![<MSG69> 15](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/19.jpeg)
![<MSG69> 16](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/2.jpeg)
![<MSG69> 17](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/13.jpeg)
![<MSG69> 18](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/14.jpeg)
![<MSG69> 19](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/1.jpeg)
![<MSG69> 20](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/26.jpeg)
![<MSG69> 21](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/10.jpeg)
![<MSG69> 22](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/20.jpeg)
![<MSG69> 23](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/4.jpeg)
![<MSG69> 24](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/7.jpeg)
![<MSG69> 25](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/11.jpeg)
![<MSG69> 26](https://huggingface.co/sd-concepts-library/msg/resolve/main/concept_images/21.jpeg)
13558a88218deb824aa7c6bfb06e6112
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-credit_cards-4-16-5

This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.3376
- Accuracy: 0.3186
629864f8214bbe89b6c8f3460571740d
creativeml-openrail-m
['text-to-image', 'art', 'digital art', 'stable diffusion']
false
[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/MultiversexPeeps/duskfall-s-general-digital-art-model)
37f76b1def2e5361172870030fabdeff
creativeml-openrail-m
['text-to-image', 'art', 'digital art', 'stable diffusion']
false
Duskfall's General Digital Art Model

Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model.

Run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Information on this model will be here: https://civitai.com/user/duskfallcrew

If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew

If you want to monthly support the EARTH & DUSK media projects and not just AI: https://www.patreon.com/earthndusk

Use **gendigi** in your prompt.
2a8ea3997f2a98b2bb33434926498ff4
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-j-phoneme-common-test

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0000
- Wer: 0.0001
d8321534e1ba71e11c1020c177916eae
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
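The total train batch size here follows from gradient accumulation: gradients from 4 forward/backward passes of batch 4 are summed before each optimizer update, so every update effectively sees 16 examples. A minimal sketch of that bookkeeping (the dataset size is illustrative, not taken from this card):

```python
train_batch_size = 4
gradient_accumulation_steps = 4
num_examples = 12800  # illustrative dataset size

# Each optimizer update aggregates gradients from this many examples.
total_train_batch_size = train_batch_size * gradient_accumulation_steps

# Forward/backward passes per epoch vs. actual optimizer updates per epoch.
batches_per_epoch = num_examples // train_batch_size
updates_per_epoch = batches_per_epoch // gradient_accumulation_steps
```

This trick trades wall-clock time for memory: it simulates a batch of 16 on hardware that only fits a batch of 4.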
3d733b17fec203b6ee6368a6c33fe8ae
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1488 | 7.14 | 2000 | 0.0788 | 0.0919 |
| 0.0308 | 14.28 | 4000 | 0.0155 | 0.0271 |
| 0.0121 | 21.43 | 6000 | 0.0070 | 0.0103 |
| 0.0067 | 28.57 | 8000 | 0.0059 | 0.0067 |
| 0.0025 | 35.71 | 10000 | 0.0143 | 0.0180 |
| 0.0001 | 42.85 | 12000 | 0.0000 | 0.0001 |
| 0.0 | 50.0 | 14000 | 0.0000 | 0.0001 |
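The Wer column is the word error rate: the minimum number of substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length. A minimal edit-distance sketch of the metric (not the exact scorer used for this card):

```python
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dist[i][j] = min(sub, dist[i - 1][j] + 1, dist[i][j - 1] + 1)
    return dist[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("a b c", "a x c")` is 1/3: one substitution over a three-word reference.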
8af7429a5f2aef955ede0d479adb771c
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Medium Slovak CV11

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 sk dataset. It achieves the following results on the evaluation set:
- Loss: 0.3982
- Wer: 23.1437
86a0bbc41eb4d7e1b285f883dbbe8095
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.001 | 14.29 | 1000 | 0.3982 | 23.1437 |
| 0.0013 | 28.57 | 2000 | 0.4343 | 24.0362 |
| 0.0001 | 42.86 | 3000 | 0.4565 | 23.3222 |
| 0.0001 | 57.14 | 4000 | 0.4700 | 23.3936 |
| 0.0001 | 71.43 | 5000 | 0.4753 | 23.4531 |
f1f353dcd2679d3b8e9e69d871b2ec20
apache-2.0
['translation']
false
opus-mt-ng-en

* source languages: ng
* target languages: en
* OPUS readme: [ng-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ng-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.eval.txt)
6d99a487028fa55970fd4cb2f7cecbf5
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
Gradio

We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run tpkify-v1: [Open in Spaces](https://huggingface.co/spaces/akhaliq/tpkify-v1)

TheLastBen fast-dreambooth SD 1.5 model for turning things into toothpick art. Use the trigger word tpkify.

Example: a photo of a tpkify dog, sitting on the beach

Example: oil painting of a tpkify corvette, by claude monet

This v1 iteration was trained on 40 images for 3200 steps with 20% text encoder training. The 40 512x512 training .png images are included in train_images40.zip.
5b020cfab4f88f5b6b5aa8d1651c869a
apache-2.0
['generated_from_trainer']
false
20NG_ALBERT_5E

This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.2209
- Accuracy: 0.6067
bc6e6a4cbebe1bf0d71542e74fb3029d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
b1096e6ac244cfb2c5762fc167347955
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8602 | 0.07 | 50 | 2.5794 | 0.2133 |
| 2.3635 | 0.14 | 100 | 2.0956 | 0.38 |
| 2.1526 | 0.21 | 150 | 1.9011 | 0.4467 |
| 1.9014 | 0.28 | 200 | 1.6340 | 0.5067 |
| 1.6736 | 0.35 | 250 | 1.5457 | 0.5467 |
| 1.5563 | 0.42 | 300 | 1.5041 | 0.5533 |
| 1.4338 | 0.49 | 350 | 1.3933 | 0.5933 |
| 1.3348 | 0.56 | 400 | 1.4123 | 0.54 |
| 1.2879 | 0.64 | 450 | 1.3352 | 0.6333 |
| 1.2864 | 0.71 | 500 | 1.3027 | 0.62 |
| 1.2162 | 0.78 | 550 | 1.2734 | 0.6267 |
| 1.1786 | 0.85 | 600 | 1.2695 | 0.5933 |
| 1.1702 | 0.92 | 650 | 1.2379 | 0.5933 |
| 1.2338 | 0.99 | 700 | 1.2209 | 0.6067 |
ccf374a6586e7f34ef4037bb7c504908
mit
['text-classification']
false
Multi2ConvAI-Quality: finetuned Bert for French

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))
- language: French (fr)
- model type: finetuned Bert
a1a7cc1145ab23062e947f740708d753
mit
['text-classification']
false
Run with Huggingface Transformers

````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-fr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-fr-bert")
````
e6c18d30506e157310e8ad3006cb5c00
cc0-1.0
['stable-diffusion', 'text-to-image']
false
Stable Diffusion fine tuned on art by [Björn Hurri](https://www.artstation.com/bjornhurri) This model is fine tuned on some of his "shiny"-style paintings. I also have a version for his "matte" works.
1c3be4d46b5ba4f0c8ba8aaa82877d6b
cc0-1.0
['stable-diffusion', 'text-to-image']
false
Samples For this model I made two checkpoints. The "hurrishiny monster x2" model is trained for twice as long as the regular checkpoint, meaning it should be more fine tuned on the style but also more rigid. The top 4 images are from the regular version, the rest are from the x2 version. I hope it gives you an idea of what kind of styles can be created with this model. <img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/1700_1.png" width="256px"/> <img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/1700_2.png" width="256px"/> <img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/1700_3.png" width="256px"/> <img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/1700_4.png" width="256px"/> <img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/3400_1.png" width="256px"/> <img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/3400_2.png" width="256px"/> <img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/3400_3.png" width="256px"/> <img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/3400_4.png" width="256px"/> <img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/index1.png" width="256px"/> <img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/index3.png" width="256px"/> <img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/index5.png" width="256px"/> <img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/index6.png" width="256px"/>
63572db22518feb9c56bde085d4dd6fe
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0649
- Precision: 0.9330
- Recall: 0.9485
- F1: 0.9407
- Accuracy: 0.9854
c7bc7a677c724e79fe8da1f15f967599
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0871 | 1.0 | 1756 | 0.0672 | 0.9209 | 0.9387 | 0.9297 | 0.9834 |
| 0.0394 | 2.0 | 3512 | 0.0584 | 0.9311 | 0.9505 | 0.9407 | 0.9857 |
| 0.0201 | 3.0 | 5268 | 0.0649 | 0.9330 | 0.9485 | 0.9407 | 0.9854 |
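The F1 column above is simply the harmonic mean of the precision and recall columns; a quick sanity check against the final epoch's numbers:

```python
def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.9330, 0.9485)  # matches the table's 0.9407 to four decimals
```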
e60f72d216a2f90a71bc7f6459050d58
mit
['generated_from_keras_callback']
false
nandysoham/Cardinal__Catholicism_-clustered

This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.2081
- Train End Logits Accuracy: 0.9444
- Train Start Logits Accuracy: 0.9549
- Validation Loss: 1.0270
- Validation End Logits Accuracy: 0.75
- Validation Start Logits Accuracy: 0.75
- Epoch: 0
c0d46db8ca5699f70b3c01d432ff7685
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.2081 | 0.9444 | 0.9549 | 1.0270 | 0.75 | 0.75 | 0 |
b09b760a8265754d53d3fc2c40723c72
apache-2.0
['translation']
false
opus-mt-ee-sv

* source languages: ee
* target languages: sv
* OPUS readme: [ee-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-sv/opus-2020-01-08.eval.txt)
7a3d1686a8dd3769dc3d96c243077a6d
apache-2.0
[]
false
Model description

ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs.

ALBERT is particular in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.

This is the first version of the xlarge model. Version 2 differs from version 1 due to different dropout rates, additional training data, and longer training, and has better results in nearly all downstream tasks.

This model has the following configuration:
- 24 repeating layers
- 128 embedding dimension
- 2048 hidden dimension
- 16 attention heads
- 58M parameters
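The 58M figure is consistent with a rough count from the configuration above: because the 24 layers share one set of weights, only a single transformer block contributes parameters. A back-of-the-envelope sketch (the vocabulary size of 30000 and feed-forward dimension of 8192 are the standard ALBERT xlarge values, assumed here; small bias and LayerNorm terms are ignored):

```python
vocab_size = 30000   # standard ALBERT vocabulary, assumed
embed_dim = 128      # factorized embedding dimension
hidden = 2048
ffn = 8192           # feed-forward (intermediate) dimension, assumed

embeddings = vocab_size * embed_dim  # factorized token embeddings
projection = embed_dim * hidden      # embedding -> hidden projection
attention = 4 * hidden * hidden      # Q, K, V, O projections
feed_forward = 2 * hidden * ffn      # up- and down-projection
pooler = hidden * hidden

# One shared block serves all 24 repeating layers.
total = embeddings + projection + attention + feed_forward + pooler
millions = total / 1e6  # roughly 58-59M
```

Note how the factorized embedding (30000 x 128 instead of 30000 x 2048) and the cross-layer weight sharing are what keep the count this low despite the 2048 hidden size.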
eff4bb373c7e8ece41d85ea22c437b86
apache-2.0
[]
false
How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
    {"sequence": "[CLS] hello i'm a modeling model.[SEP]", "score": 0.05816134437918663, "token": 12807, "token_str": "▁modeling"},
    {"sequence": "[CLS] hello i'm a modelling model.[SEP]", "score": 0.03748830780386925, "token": 23089, "token_str": "▁modelling"},
    {"sequence": "[CLS] hello i'm a model model.[SEP]", "score": 0.033725276589393616, "token": 1061, "token_str": "▁model"},
    {"sequence": "[CLS] hello i'm a runway model.[SEP]", "score": 0.017313428223133087, "token": 8014, "token_str": "▁runway"},
    {"sequence": "[CLS] hello i'm a lingerie model.[SEP]", "score": 0.014405295252799988, "token": 29104, "token_str": "▁lingerie"}
]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AlbertTokenizer, AlbertModel

tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
model = AlbertModel.from_pretrained("albert-xlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import AlbertTokenizer, TFAlbertModel

tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
model = TFAlbertModel.from_pretrained("albert-xlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
4ebe5ecd53f5393db4e95f8f6899bf60
apache-2.0
[]
false
Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v1')
>>> unmasker("The man worked as a [MASK].")
[
    {"sequence": "[CLS] the man worked as a chauffeur.[SEP]", "score": 0.029577180743217468, "token": 28744, "token_str": "▁chauffeur"},
    {"sequence": "[CLS] the man worked as a janitor.[SEP]", "score": 0.028865724802017212, "token": 29477, "token_str": "▁janitor"},
    {"sequence": "[CLS] the man worked as a shoemaker.[SEP]", "score": 0.02581118606030941, "token": 29024, "token_str": "▁shoemaker"},
    {"sequence": "[CLS] the man worked as a blacksmith.[SEP]", "score": 0.01849772222340107, "token": 21238, "token_str": "▁blacksmith"},
    {"sequence": "[CLS] the man worked as a lawyer.[SEP]", "score": 0.01820771023631096, "token": 3672, "token_str": "▁lawyer"}
]

>>> unmasker("The woman worked as a [MASK].")
[
    {"sequence": "[CLS] the woman worked as a receptionist.[SEP]", "score": 0.04604868218302727, "token": 25331, "token_str": "▁receptionist"},
    {"sequence": "[CLS] the woman worked as a janitor.[SEP]", "score": 0.028220869600772858, "token": 29477, "token_str": "▁janitor"},
    {"sequence": "[CLS] the woman worked as a paramedic.[SEP]", "score": 0.0261906236410141, "token": 23386, "token_str": "▁paramedic"},
    {"sequence": "[CLS] the woman worked as a chauffeur.[SEP]", "score": 0.024797942489385605, "token": 28744, "token_str": "▁chauffeur"},
    {"sequence": "[CLS] the woman worked as a waitress.[SEP]", "score": 0.024124596267938614, "token": 13678, "token_str": "▁waitress"}
]
```

This bias will also affect all fine-tuned versions of this model.
03478e39951373e6c775afd91287c521
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Japanese

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 ja dataset. It achieves the following results on the evaluation set:
- Loss: 0.3617
- Wer: 68.9459
8d4eefdbb8d4497e79c5fc91d8e525f1
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1938 | 1.09 | 1000 | 0.2841 | 74.6631 |
| 0.0466 | 3.06 | 2000 | 0.2996 | 72.0953 |
| 0.005 | 5.04 | 3000 | 0.3376 | 70.4355 |
| 0.0021 | 7.01 | 4000 | 0.3617 | 68.9459 |
| 0.002 | 8.1 | 5000 | 0.3735 | 71.4711 |
a94fd221f82efd8130330f72175f0e53
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
Demo: How to use in ESPnet2

```bash
cd espnet
git checkout 49a284e69308d81c142b89795de255b4ce290c54
pip install -e .
cd egs2/talromur/tts1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/GunnarThor_talromur_f_fastspeech2
```
674cc804b609e8e3940646de9b1a6f61
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
TTS config <details><summary>expand</summary> ``` config: conf/tuning/train_fastspeech2.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/f/tts_train_fastspeech2_raw_phn_none ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min - - train - loss - min keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 1.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 8 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 800 batch_size: 20 valid_batch_size: null batch_bins: 2500000 valid_batch_bins: null train_shape_file: - exp/f/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/text_shape.phn - exp/f/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/speech_shape valid_shape_file: - exp/f/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/text_shape.phn - exp/f/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending 
multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_f_phn/text - text - text - - exp/f/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/train_f_phn/durations - durations - text_int - - dump/raw/train_f_phn/wav.scp - speech - sound valid_data_path_and_name_and_type: - - dump/raw/dev_f_phn/text - text - text - - exp/f/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/dev_f_phn/durations - durations - text_int - - dump/raw/dev_f_phn/wav.scp - speech - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 1.0 scheduler: noamlr scheduler_conf: model_size: 384 warmup_steps: 4000 token_list: - <blank> - <unk> - ',' - . - r - t - n - a0 - s - I0 - D - l - m - Y0 - v - h - E1 - k - a:1 - E:1 - G - f - j - T - a1 - p - c - au:1 - i:1 - O:1 - I:1 - E0 - I1 - r_0 - t_h - k_h - Y1 - ei1 - i0 - ou:1 - ei:1 - u:1 - O1 - N - l_0 - '91' - ai0 - au1 - ou0 - n_0 - ei0 - ai:1 - O0 - ou1 - ai1 - i1 - '9:1' - '90' - au0 - x - c_h - 9i:1 - C - p_h - u0 - Y:1 - J - 9i1 - u1 - 9i0 - N_0 - m_0 - J_0 - Yi0 - Oi1 - Yi1 - Oi0 - au:0 - '9:0' - E:0 - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 22050 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/f/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/feats_stats.npz tts: fastspeech2 tts_conf: adim: 384 aheads: 2 elayers: 4 eunits: 1536 dlayers: 4 dunits: 1536 positionwise_layer_type: conv1d positionwise_conv_kernel_size: 3 duration_predictor_layers: 2 duration_predictor_chans: 256 duration_predictor_kernel_size: 3 postnet_layers: 5 postnet_filts: 5 postnet_chans: 
256 use_masking: true use_scaled_pos_enc: true encoder_normalize_before: true decoder_normalize_before: true reduction_factor: 1 init_type: xavier_uniform init_enc_alpha: 1.0 init_dec_alpha: 1.0 transformer_enc_dropout_rate: 0.2 transformer_enc_positional_dropout_rate: 0.2 transformer_enc_attn_dropout_rate: 0.2 transformer_dec_dropout_rate: 0.2 transformer_dec_positional_dropout_rate: 0.2 transformer_dec_attn_dropout_rate: 0.2 pitch_predictor_layers: 5 pitch_predictor_chans: 256 pitch_predictor_kernel_size: 5 pitch_predictor_dropout: 0.5 pitch_embed_kernel_size: 1 pitch_embed_dropout: 0.0 stop_gradient_from_pitch_predictor: true energy_predictor_layers: 2 energy_predictor_chans: 256 energy_predictor_kernel_size: 3 energy_predictor_dropout: 0.5 energy_embed_kernel_size: 1 energy_embed_dropout: 0.0 stop_gradient_from_energy_predictor: false pitch_extract: dio pitch_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 f0max: 400 f0min: 80 reduction_factor: 1 pitch_normalize: global_mvn pitch_normalize_conf: stats_file: exp/f/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/pitch_stats.npz energy_extract: energy energy_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 win_length: null reduction_factor: 1 energy_normalize: global_mvn energy_normalize_conf: stats_file: exp/f/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/energy_stats.npz required: - output_dir - token_list version: 0.10.7a1 distributed: false ``` </details>
1498215c309123d9d76afc81301fc1d7
mit
[]
false
Overview
**Model Description:** roberta-large-faithcritic is the [RoBERTa large model](https://huggingface.co/roberta-large) fine-tuned on FaithCritic, a derivative of the [FaithDial](https://huggingface.co/datasets/McGill-NLP/FaithDial) dataset. Given the source knowledge and an utterance, the model predicts whether the utterance is faithful to that knowledge. The hyperparameters are provided in [hparams.yaml](https://huggingface.co/McGill-NLP/roberta-large-faithcritic/blob/main/hparams.yaml). To learn more about training a critic model, visit [our repo](https://github.com/McGill-NLP/FaithDial).
a40bb0222000e91f349aeb24c951d770
mit
[]
false
Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("McGill-NLP/roberta-large-faithcritic")
model = AutoModelForSequenceClassification.from_pretrained("McGill-NLP/roberta-large-faithcritic")

knowledge = "A cardigan is a type of knitted garment (sweater) that has an open front."
response = "The old version is the regular one, knitted garment that has open front and buttons!"

# Tokenize the (knowledge, response) pair and return PyTorch tensors
inputs = tokenizer(knowledge, response, return_tensors="pt")
output = model(**inputs)
```
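A critic like this produces one logit per class, and a softmax converts those logits into probabilities. As a minimal pure-Python sketch (the logit values below are hypothetical, not output from the model):

```python
import math

def softmax(logits):
    """Convert raw classifier logits into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the two classes; real values would come from the model output
probs = softmax([2.0, -1.0])
print(probs)  # the first class dominates: ~[0.95, 0.05]
```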
5d05127fd02c4857ee339bda33573ab1
mit
[]
false
Citation Information
```bibtex
@article{dziri2022faithdial,
  title={FaithDial: A Faithful Benchmark for Information-Seeking Dialogue},
  author={Dziri, Nouha and Kamalloo, Ehsan and Milton, Sivan and Zaiane, Osmar and Yu, Mo and Ponti, Edoardo and Reddy, Siva},
  journal={arXiv preprint, arXiv:2204.10757},
  year={2022},
  url={https://arxiv.org/abs/2204.10757}
}
```
2e0d0b3ff6ea92a6d59faf53fbf4e376
apache-2.0
['speech', 'audio', 'hubert', 'audio-classification']
false
Model description
This is a ported version of [S3PRL's Hubert for the SUPERB Keyword Spotting task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/speech_commands). The base model is [hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k), which is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. For more information, refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051).
08a820864f0d185595a234d29b70cd01
apache-2.0
['speech', 'audio', 'hubert', 'audio-classification']
false
Task and dataset description
Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of words. The task is usually performed on-device for fast response time; thus accuracy, model size, and inference time are all crucial. SUPERB uses the widely used [Speech Commands dataset v1.0](https://www.tensorflow.org/datasets/catalog/speech_commands) for the task. The dataset consists of ten classes of keywords, a class for silence, and an unknown class to include false positives. For the original model's training and evaluation instructions, refer to the [S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream).
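Putting the class layout above into code: ten keywords plus the silence and unknown classes give twelve labels in total. The keyword strings below are the standard Speech Commands v1 set, but the exact label strings and ordering used by this checkpoint are an assumption here — check the model's `id2label` config for the authoritative mapping.

```python
# The ten preregistered keywords in Speech Commands v1, plus the two extra
# classes for silence and out-of-vocabulary words (label strings illustrative).
KEYWORDS = ["yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go"]
CLASSES = KEYWORDS + ["_silence_", "_unknown_"]

print(len(CLASSES))  # 12 classes in total
```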
0bd46adde3af247ac820ef64f4450f72
apache-2.0
['speech', 'audio', 'hubert', 'audio-classification']
false
Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline

dataset = load_dataset("anton-l/superb_demo", "ks", split="test")

classifier = pipeline("audio-classification", model="superb/hubert-large-superb-ks")
labels = classifier(dataset[0]["file"], top_k=5)
```

Or use the model directly:
```python
import torch
from datasets import load_dataset
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor
from torchaudio.sox_effects import apply_effects_file

effects = [["channels", "1"], ["rate", "16000"], ["gain", "-3.0"]]

def map_to_array(example):
    speech, _ = apply_effects_file(example["file"], effects)
    example["speech"] = speech.squeeze(0).numpy()
    return example
```
48540e10cede3202691aeaf9a68b233c
apache-2.0
['speech', 'audio', 'hubert', 'audio-classification']
false
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "ks", split="test")
dataset = dataset.map(map_to_array)

model = HubertForSequenceClassification.from_pretrained("superb/hubert-large-superb-ks")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-large-superb-ks")
63803ddd1251cb02db5fa213e7426b08
creativeml-openrail-m
['text-to-image']
false
Sample images:

![PaperCut.jpg](https://s3.amazonaws.com/moonup/production/uploads/1667910351389-635749860725c2f190a76e88.jpeg)
![PaperCut.jpg](https://s3.amazonaws.com/moonup/production/uploads/1667912285222-635749860725c2f190a76e88.jpeg)

Based on the Stable Diffusion 1.5 model.
79b7f9d15403b92e799478eab64676ea
apache-2.0
['generated_from_trainer']
false
bert-base-uncased.CEBaB_confounding.food_service_positive.absa.5-class.seed_42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the OpenTable OPENTABLE-ABSA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7699
- Accuracy: 0.8050
- Macro-f1: 0.8026
- Weighted-macro-f1: 0.8053
f75eb557ad2a7b6a9e79639450a6cbea
apache-2.0
['stanza', 'token-classification']
false
Stanza model for Kazakh (kk)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).

This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.

Last updated: 2022-09-25 01:39:09.527
f0beed34e5538e18f07da9779ea66443
mit
[]
false
Chillpill on Stable Diffusion
This is the `<Chillpill>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![Chillpill 0](https://huggingface.co/sd-concepts-library/chillpill/resolve/main/concept_images/1.jpeg)
![Chillpill 1](https://huggingface.co/sd-concepts-library/chillpill/resolve/main/concept_images/0.jpeg)
![Chillpill 2](https://huggingface.co/sd-concepts-library/chillpill/resolve/main/concept_images/4.jpeg)
![Chillpill 3](https://huggingface.co/sd-concepts-library/chillpill/resolve/main/concept_images/2.jpeg)
![Chillpill 4](https://huggingface.co/sd-concepts-library/chillpill/resolve/main/concept_images/3.jpeg)
9f515e6eb870732b06e3146bdabaca29
mit
['generated_from_trainer']
false
rte_roberta-base_125_v2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7551
- Accuracy: 0.6715
ecaa645ec7981a1659bb3e09b346b48e
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8004
- Wer: 0.7139
7ffbc0eb06b69bafa7212780c0bc0587
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
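The listed `total_train_batch_size` is not an independent setting: it is the per-device batch size multiplied by the gradient accumulation steps (and by the number of devices, assumed to be one here). A quick sanity check of the values above:

```python
train_batch_size = 4
gradient_accumulation_steps = 8
num_devices = 1  # single GPU assumed; not stated explicitly in the card

# Effective (total) batch size per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 32, matching the value reported above
```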
d415aaae0abe2e2936aef5ad4255dac1
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.6683        | 1.45  | 500  | 1.7698          | 1.0041 |
| 1.9548        | 2.91  | 1000 | 1.0890          | 0.8602 |
| 1.9568        | 4.36  | 1500 | 1.0878          | 0.8680 |
| 1.9497        | 5.81  | 2000 | 1.1501          | 0.8838 |
| 1.8453        | 7.27  | 2500 | 1.0452          | 0.8418 |
| 1.6952        | 8.72  | 3000 | 0.9153          | 0.7823 |
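The Wer column above is word error rate: the word-level edit distance between hypothesis and reference transcripts, divided by the number of reference words. In practice a library such as `jiwer` is used, but the metric itself is small enough to sketch directly:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("det var en gång", "det var gång"))  # 0.25: one deleted word out of four
```

So a Wer of 0.7139 means roughly 71% of reference words needed an insertion, deletion, or substitution to match the hypothesis.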
e7469ad18d581741bd2a03775d4f5fae
apache-2.0
['generated_from_trainer']
false
recipe-lr2e05-wd0.005-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2862
- Rmse: 0.5350
- Mse: 0.2862
- Mae: 0.4436
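The reported metrics are internally consistent: RMSE is by definition the square root of MSE, so the two values should agree up to the table's rounding. A quick check with the numbers above:

```python
import math

mse = 0.2862
rmse = 0.5350

# RMSE = sqrt(MSE); the reported pair should match to 4 decimals.
print(round(math.sqrt(mse), 4))  # 0.535
assert abs(math.sqrt(mse) - rmse) < 1e-3
```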
b209ef8dd931372ce6847bd51993c31b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | 0.2774 | 1.0 | 623 | 0.2746 | 0.5240 | 0.2746 | 0.4160 | | 0.274 | 2.0 | 1246 | 0.2738 | 0.5233 | 0.2738 | 0.4166 | | 0.2724 | 3.0 | 1869 | 0.2862 | 0.5350 | 0.2862 | 0.4436 |
bdce93891f83052d42ba50aee9489241
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-3']
false
MultiBERTs Seed 3 Checkpoint 80k (uncased)
Seed 3 intermediate checkpoint 80k of MultiBERTs, a pretrained BERT model on the English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).
ea7c6ad649667c54c7b63f43d7b0554b
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-3']
false
How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-3-80k")
model = BertModel.from_pretrained("multiberts-seed-3-80k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
26234078712580d7bf1d5842cb3d10a6
mit
['generated_from_trainer']
false
xlm-roberta-base-banking77-classification
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3034
- Accuracy: 0.9321
- F1 Score: 0.9321
d5812a8aeb5aeac15d3cd655e8d8289f
mit
['generated_from_trainer']
false
Training and evaluation data
The dataset used is [banking77](https://huggingface.co/datasets/banking77)

The 77 labels are:

|label|intent|
|:---:|:----:|
|0|activate_my_card|
|1|age_limit|
|2|apple_pay_or_google_pay|
|3|atm_support|
|4|automatic_top_up|
|5|balance_not_updated_after_bank_transfer|
|6|balance_not_updated_after_cheque_or_cash_deposit|
|7|beneficiary_not_allowed|
|8|cancel_transfer|
|9|card_about_to_expire|
|10|card_acceptance|
|11|card_arrival|
|12|card_delivery_estimate|
|13|card_linking|
|14|card_not_working|
|15|card_payment_fee_charged|
|16|card_payment_not_recognised|
|17|card_payment_wrong_exchange_rate|
|18|card_swallowed|
|19|cash_withdrawal_charge|
|20|cash_withdrawal_not_recognised|
|21|change_pin|
|22|compromised_card|
|23|contactless_not_working|
|24|country_support|
|25|declined_card_payment|
|26|declined_cash_withdrawal|
|27|declined_transfer|
|28|direct_debit_payment_not_recognised|
|29|disposable_card_limits|
|30|edit_personal_details|
|31|exchange_charge|
|32|exchange_rate|
|33|exchange_via_app|
|34|extra_charge_on_statement|
|35|failed_transfer|
|36|fiat_currency_support|
|37|get_disposable_virtual_card|
|38|get_physical_card|
|39|getting_spare_card|
|40|getting_virtual_card|
|41|lost_or_stolen_card|
|42|lost_or_stolen_phone|
|43|order_physical_card|
|44|passcode_forgotten|
|45|pending_card_payment|
|46|pending_cash_withdrawal|
|47|pending_top_up|
|48|pending_transfer|
|49|pin_blocked|
|50|receiving_money|
|51|Refund_not_showing_up|
|52|request_refund|
|53|reverted_card_payment?|
|54|supported_cards_and_currencies|
|55|terminate_account|
|56|top_up_by_bank_transfer_charge|
|57|top_up_by_card_charge|
|58|top_up_by_cash_or_cheque|
|59|top_up_failed|
|60|top_up_limits|
|61|top_up_reverted|
|62|topping_up_by_card|
|63|transaction_charged_twice|
|64|transfer_fee_charged|
|65|transfer_into_account|
|66|transfer_not_received_by_recipient|
|67|transfer_timing|
|68|unable_to_verify_identity|
|69|verify_my_identity|
|70|verify_source_of_funds|
|71|verify_top_up|
|72|virtual_card_not_working|
|73|visa_or_mastercard|
|74|why_verify_identity|
|75|wrong_amount_of_cash_received|
|76|wrong_exchange_rate_for_cash_withdrawal|
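The integer ids in the table map to intent names exactly the way a classifier's `id2label` config does. As a sketch with a few illustrative entries (a real model stores the full 77-entry mapping in its config; the `predicted_id` below is hypothetical):

```python
# A handful of entries from the label table above, keyed by class id.
id2label = {
    0: "activate_my_card",
    11: "card_arrival",
    32: "exchange_rate",
    76: "wrong_exchange_rate_for_cash_withdrawal",
}

predicted_id = 11  # hypothetical argmax over the classifier logits
print(id2label[predicted_id])  # card_arrival
```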
66349781857a20340f467e9d0f48a2a7
mit
['generated_from_trainer']
false
Training procedure
```
from transformers import pipeline

pipe = pipeline("text-classification", model="nickprock/xlm-roberta-base-banking77-classification")
pipe("Non riesco a pagare con la carta di credito")
```
265cdf0ececa6c172ff52a2ba86e9f65
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 3.8002        | 1.0   | 157  | 2.7771          | 0.5159   | 0.4483   |
| 2.4006        | 2.0   | 314  | 1.6937          | 0.7140   | 0.6720   |
| 1.4633        | 3.0   | 471  | 1.0385          | 0.8308   | 0.8153   |
| 0.9234        | 4.0   | 628  | 0.7008          | 0.8789   | 0.8761   |
| 0.6163        | 5.0   | 785  | 0.5029          | 0.9068   | 0.9063   |
| 0.4282        | 6.0   | 942  | 0.4084          | 0.9123   | 0.9125   |
| 0.3203        | 7.0   | 1099 | 0.3515          | 0.9253   | 0.9253   |
| 0.245         | 8.0   | 1256 | 0.3295          | 0.9227   | 0.9225   |
| 0.1863        | 9.0   | 1413 | 0.3092          | 0.9269   | 0.9269   |
| 0.1518        | 10.0  | 1570 | 0.2901          | 0.9338   | 0.9338   |
| 0.1179        | 11.0  | 1727 | 0.2938          | 0.9318   | 0.9319   |
| 0.0969        | 12.0  | 1884 | 0.2906          | 0.9328   | 0.9328   |
| 0.0805        | 13.0  | 2041 | 0.2963          | 0.9295   | 0.9295   |
| 0.063         | 14.0  | 2198 | 0.2998          | 0.9289   | 0.9288   |
| 0.0554        | 15.0  | 2355 | 0.2933          | 0.9351   | 0.9349   |
| 0.046         | 16.0  | 2512 | 0.2960          | 0.9328   | 0.9326   |
| 0.04          | 17.0  | 2669 | 0.3032          | 0.9318   | 0.9318   |
| 0.035         | 18.0  | 2826 | 0.3061          | 0.9312   | 0.9312   |
| 0.0317        | 19.0  | 2983 | 0.3030          | 0.9331   | 0.9330   |
| 0.0315        | 20.0  | 3140 | 0.3034          | 0.9321   | 0.9321   |
97896799c78769e3444a29c913613ccc