license: string (length 2–30)
tags: string (length 2–513)
is_nc: bool (1 class)
readme_section: string (length 201–597k)
hash: string (length 32)
apache-2.0
[]
false
Pre-trained baseline model
- Pre-trained model: [BERTweet](https://github.com/VinAIResearch/BERTweet)
  - Trained with the RoBERTa pre-training procedure
  - 850M general English tweets (Jan 2012 to Aug 2019)
  - 23M COVID-19 English tweets
  - Model size: >134M parameters
- Further training
  - Pre-training on recent COVID-19/vaccine tweets, followed by fine-tuning for fact classification
770e0f49d2e86e9431787c3559a6838d
apache-2.0
[]
false
1) Pre-training language model
- The model was pre-trained on COVID-19/vaccine-related tweets with a masked language modeling (MLM) objective, starting from BERTweet.
- The following datasets of English tweets were used:
  - Tweets with trending
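The MLM objective mentioned above can be sketched in a few lines. This is a simplified illustration only: the 15% masking rate and `<mask>` token are the standard RoBERTa-style defaults and are assumed here, and the full BERT/RoBERTa recipe additionally keeps or randomizes some selected tokens rather than always masking them.

```python
import random

def mask_tokens(tokens, mask_token="<mask>", mask_prob=0.15, seed=42):
    """Randomly hide a fraction of tokens; the model is trained to
    predict the original token at each masked position."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(mask_token)
            targets[i] = tok  # gold label the MLM head must recover
        else:
            masked.append(tok)
    return masked, targets
```

During domain-adaptive pre-training, this corruption is applied to each tweet and the cross-entropy loss is computed only at the masked positions.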
a6c2c7639190819546c0e9708a757320
apache-2.0
[]
false
  - CovidVaccine hashtag: 207,000 tweets uploaded from Aug 2020 to Apr 2021 ([kaggle](https://www.kaggle.com/kaushiksuresh147/covidvaccine-tweets))
  - Tweets about all COVID-19 vaccines: 78,000 tweets uploaded from Dec 2020 to May 2021 ([kaggle](https://www.kaggle.com/gpreda/all-covid19-vaccines-tweets))
  - COVID-19 Twitter chatter dataset: 590,000 tweets uploaded from Mar 2021 to May 2021 ([github](https://github.com/thepanacealab/covid19_twitter))
a751f8476c0d54d93431b6f7c2fd7ec1
apache-2.0
[]
false
2) Fine-tuning for fact classification
- A model fine-tuned for the COVID-19/vaccine fact-classification task, starting from the pre-trained language model in (1).
- COVID-19/vaccine-related statements were collected from [Poynter](https://www.poynter.org/ifcn-covid-19-misinformation/) and [Snopes](https://www.snopes.com/) using Selenium, resulting in over 14,000 fact-checked statements from Jan 2020 to May 2021.
- The original labels were consolidated into the following three categories:
  - `False`: includes false, no evidence, manipulated, fake, not true, unproven and unverified
  - `Misleading`: includes misleading, exaggerated, out of context and needs context
  - `True`: includes true and correct
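The label consolidation described above amounts to a fixed many-to-one mapping. The helper below is a hypothetical sketch of that preprocessing step (the function name and normalization are ours, not from the card), using exactly the verdict groupings listed:

```python
# Collapse source fact-check verdicts into the card's three categories.
LABEL_MAP = {
    **dict.fromkeys(["false", "no evidence", "manipulated", "fake",
                     "not true", "unproven", "unverified"], "False"),
    **dict.fromkeys(["misleading", "exaggerated", "out of context",
                     "needs context"], "Misleading"),
    **dict.fromkeys(["true", "correct"], "True"),
}

def consolidate(verdict: str) -> str:
    """Map a raw Poynter/Snopes verdict to False / Misleading / True."""
    return LABEL_MAP[verdict.strip().lower()]
```

For example, a statement rated "Exaggerated" by a fact-checker would be trained on as `Misleading`.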
91d1a7ec17f48b4235b347baa02b6c5b
apache-2.0
[]
false
Contributors
- This model is part of the final team project for the MLDL for DS class at SNU.
- Team BIBI - Vaccinating COVID-NineTweets
- Team members: Ahn, Hyunju; An, Jiyong; An, Seungchan; Jeong, Seokho; Kim, Jungmin; Kim, Sangbeom
- Advisor: Prof. Wen-Syan Li

<a href="https://gsds.snu.ac.kr/"><img src="https://gsds.snu.ac.kr/wp-content/uploads/sites/50/2021/04/GSDS_logo2-e1619068952717.png" width="200" height="80"></a>
ebea189fbb2519c23bb4fe9acb866f34
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased__hate_speech_offensive__train-16-1

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.0424
- Accuracy: 0.5355
bc384baee86a9b84b2f326bbd444a046
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0989 | 1.0 | 10 | 1.1049 | 0.1 |
| 1.0641 | 2.0 | 20 | 1.0768 | 0.3 |
| 0.9742 | 3.0 | 30 | 1.0430 | 0.4 |
| 0.8765 | 4.0 | 40 | 1.0058 | 0.4 |
| 0.6979 | 5.0 | 50 | 0.8488 | 0.7 |
| 0.563 | 6.0 | 60 | 0.7221 | 0.7 |
| 0.4135 | 7.0 | 70 | 0.6587 | 0.8 |
| 0.2509 | 8.0 | 80 | 0.5577 | 0.7 |
| 0.0943 | 9.0 | 90 | 0.5840 | 0.7 |
| 0.0541 | 10.0 | 100 | 0.6959 | 0.7 |
| 0.0362 | 11.0 | 110 | 0.6884 | 0.6 |
| 0.0254 | 12.0 | 120 | 0.9263 | 0.6 |
| 0.0184 | 13.0 | 130 | 0.7992 | 0.6 |
| 0.0172 | 14.0 | 140 | 0.7351 | 0.6 |
| 0.0131 | 15.0 | 150 | 0.7664 | 0.6 |
| 0.0117 | 16.0 | 160 | 0.8262 | 0.6 |
| 0.0101 | 17.0 | 170 | 0.8839 | 0.6 |
| 0.0089 | 18.0 | 180 | 0.9018 | 0.6 |
a002217a3888369d10441e14f58ecd9d
mit
['generated_from_trainer']
false
mdeberta_all

This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2148

| Entity type | Precision | Recall | F1 | Number |
|:-----------:|:---------:|:------:|:--:|:------:|
| Aerospacemanufacturer | 0.7073 | 0.8406 | 0.7682 | 138 |
| Anatomicalstructure | 0.6762 | 0.7269 | 0.7006 | 227 |
| Artwork | 0.5802 | 0.5802 | 0.5802 | 131 |
| Artist | 0.7565 | 0.7938 | 0.7747 | 1722 |
| Athlete | 0.7195 | 0.7636 | 0.7409 | 719 |
| Carmanufacturer | 0.6806 | 0.8176 | 0.7429 | 159 |
| Cleric | 0.6867 | 0.5124 | 0.5869 | 201 |
| Clothing | 0.5797 | 0.625 | 0.6015 | 128 |
| Disease | 0.6262 | 0.6768 | 0.6505 | 198 |
| Drink | 0.7296 | 0.8112 | 0.7682 | 143 |
| Facility | 0.6439 | 0.7203 | 0.6800 | 497 |
| Food | 0.6786 | 0.5327 | 0.5969 | 214 |
| Humansettlement | 0.8594 | 0.8792 | 0.8692 | 1689 |
| Medicalprocedure | 0.6545 | 0.7606 | 0.7036 | 142 |
| Medication/vaccine | 0.7183 | 0.765 | 0.7409 | 200 |
| Musicalgrp | 0.7132 | 0.7688 | 0.7400 | 372 |
| Musicalwork | 0.7513 | 0.7052 | 0.7275 | 407 |
| Org | 0.6335 | 0.6117 | 0.6224 | 667 |
| Otherloc | 0.7514 | 0.6205 | 0.6797 | 224 |
| Otherper | 0.4558 | 0.5821 | 0.5112 | 859 |
| Otherprod | 0.6076 | 0.5543 | 0.5797 | 433 |
| Politician | 0.6228 | 0.4793 | 0.5417 | 603 |
| Privatecorp | 0.7159 | 0.4884 | 0.5806 | 129 |
| Publiccorp | 0.56 | 0.6914 | 0.6188 | 243 |
| Scientist | 0.4545 | 0.4497 | 0.4521 | 189 |
| Software | 0.7159 | 0.8046 | 0.7577 | 307 |
| Sportsgrp | 0.7845 | 0.8701 | 0.8251 | 385 |
| Sportsmanager | 0.6667 | 0.5361 | 0.5943 | 194 |
| Station | 0.7406 | 0.8093 | 0.7734 | 194 |
| Symptom | 0.6316 | 0.5581 | 0.5926 | 129 |
| Vehicle | 0.5514 | 0.6505 | 0.5969 | 206 |
| Visualwork | 0.7538 | 0.7951 | 0.7739 | 693 |
| Writtenwork | 0.6913 | 0.6803 | 0.6858 | 563 |

- Overall Precision: 0.6928
- Overall Recall: 0.7142
- Overall F1: 0.7033
- Overall Accuracy: 0.9355
2a4d553bc53ef1d17833a36d8a54f2fa
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
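The `lr_scheduler_type: linear` entry above means the learning rate decays linearly from its initial value to zero over training. A minimal sketch, assuming zero warmup steps (the card does not list a warmup setting):

```python
def linear_lr(step, total_steps, base_lr=1e-5, warmup_steps=0):
    """Learning rate at a given optimizer step under a linear schedule:
    ramp up during warmup (if any), then decay linearly to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)
```

At the midpoint of training this yields half the initial learning rate, and it reaches zero exactly at the final step.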
b8a821a64bfe42e1eef12a14f69dd055
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Aerospacemanufacturer Precision | Aerospacemanufacturer Recall | Aerospacemanufacturer F1 | Aerospacemanufacturer Number | Anatomicalstructure Precision | Anatomicalstructure Recall | Anatomicalstructure F1 | Anatomicalstructure Number | Artwork Precision | Artwork Recall | Artwork F1 | Artwork Number | Artist Precision | Artist Recall | Artist F1 | Artist Number | Athlete Precision | Athlete Recall | Athlete F1 | Athlete Number | Carmanufacturer Precision | Carmanufacturer Recall | Carmanufacturer F1 | Carmanufacturer Number | Cleric Precision | Cleric Recall | Cleric F1 | Cleric Number | Clothing Precision | Clothing Recall | Clothing F1 | Clothing Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Drink Precision | Drink Recall | Drink F1 | Drink Number | Facility Precision | Facility Recall | Facility F1 | Facility Number | Food Precision | Food Recall | Food F1 | Food Number | Humansettlement Precision | Humansettlement Recall | Humansettlement F1 | Humansettlement Number | Medicalprocedure Precision | Medicalprocedure Recall | Medicalprocedure F1 | Medicalprocedure Number | Medication/vaccine Precision | Medication/vaccine Recall | Medication/vaccine F1 | Medication/vaccine Number | Musicalgrp Precision | Musicalgrp Recall | Musicalgrp F1 | Musicalgrp Number | Musicalwork Precision | Musicalwork Recall | Musicalwork F1 | Musicalwork Number | Org Precision | Org Recall | Org F1 | Org Number | Otherloc Precision | Otherloc Recall | Otherloc F1 | Otherloc Number | Otherper Precision | Otherper Recall | Otherper F1 | Otherper Number | Otherprod Precision | Otherprod Recall | Otherprod F1 | Otherprod Number | Politician Precision | Politician Recall | Politician F1 | Politician Number | Privatecorp Precision | Privatecorp Recall | Privatecorp F1 | Privatecorp Number | Publiccorp Precision | Publiccorp Recall | Publiccorp F1 | Publiccorp Number | Scientist Precision | 
Scientist Recall | Scientist F1 | Scientist Number | Software Precision | Software Recall | Software F1 | Software Number | Sportsgrp Precision | Sportsgrp Recall | Sportsgrp F1 | Sportsgrp Number | Sportsmanager Precision | Sportsmanager Recall | Sportsmanager F1 | Sportsmanager Number | Station Precision | Station Recall | Station F1 | Station Number | Symptom Precision | Symptom Recall | Symptom F1 | Symptom Number | Vehicle Precision | Vehicle Recall | Vehicle F1 | Vehicle Number | Visualwork Precision | Visualwork Recall | Visualwork F1 | Visualwork Number | Writtenwork Precision | Writtenwork Recall | Writtenwork F1 | Writtenwork Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:------:|:---------------:|:-------------------------------:|:----------------------------:|:------------------------:|:----------------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:-----------------:|:--------------:|:----------:|:--------------:|:----------------:|:-------------:|:---------:|:-------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-------------------------:|:----------------------:|:------------------:|:----------------------:|:----------------:|:-------------:|:---------:|:-------------:|:------------------:|:---------------:|:-----------:|:---------------:|:-----------------:|:--------------:|:----------:|:--------------:|:---------------:|:------------:|:--------:|:------------:|:------------------:|:---------------:|:-----------:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-------------------------:|:----------------------:|:------------------:|:----------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:--------------------
-----:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:-------------:|:----------:|:------:|:----------:|:------------------:|:---------------:|:-----------:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:------------------:|:---------------:|:-----------:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------:|:--------------:|:----------:|:--------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.2881 | 1.0 | 21353 | 0.2534 | 0.5191 | 0.6884 | 0.5919 | 138 | 0.5693 | 0.6872 | 0.6228 | 227 | 0.4366 | 0.4733 | 0.4542 | 131 | 0.7109 | 0.8055 | 0.7552 | 1722 | 0.6782 | 0.6801 | 0.6792 | 719 | 0.6552 | 0.7170 | 0.6847 | 159 | 0.4874 | 0.4826 | 0.485 | 201 | 0.4639 | 0.6016 | 0.5238 | 128 | 0.4799 | 0.7222 | 0.5766 | 198 | 0.55 | 0.6923 | 0.6130 | 143 | 0.5305 | 0.6640 | 0.5898 | 497 | 0.4042 | 0.7196 | 0.5176 | 214 | 0.8164 | 0.8792 | 0.8466 | 1689 | 0.5799 | 0.6901 | 0.6302 | 142 | 0.5945 | 0.755 | 0.6652 | 200 | 0.7445 | 0.6425 | 0.6898 | 372 | 0.6667 | 0.6634 
| 0.6650 | 407 | 0.5585 | 0.5652 | 0.5618 | 667 | 0.6724 | 0.5223 | 0.5879 | 224 | 0.3835 | 0.4773 | 0.4253 | 859 | 0.5237 | 0.4596 | 0.4895 | 433 | 0.5176 | 0.4643 | 0.4895 | 603 | 0.4688 | 0.1163 | 0.1863 | 129 | 0.4207 | 0.6008 | 0.4949 | 243 | 0.4099 | 0.3492 | 0.3771 | 189 | 0.6686 | 0.7362 | 0.7008 | 307 | 0.7688 | 0.7948 | 0.7816 | 385 | 0.5136 | 0.5825 | 0.5459 | 194 | 0.6667 | 0.8041 | 0.7290 | 194 | 0.5588 | 0.1473 | 0.2331 | 129 | 0.4910 | 0.5291 | 0.5093 | 206 | 0.6662 | 0.7489 | 0.7052 | 693 | 0.6180 | 0.5861 | 0.6016 | 563 | 0.6136 | 0.6640 | 0.6378 | 0.9220 | | 0.2195 | 2.0 | 42706 | 0.2288 | 0.6409 | 0.8406 | 0.7273 | 138 | 0.6908 | 0.6300 | 0.6590 | 227 | 0.5625 | 0.5496 | 0.5560 | 131 | 0.7336 | 0.8124 | 0.7710 | 1722 | 0.6466 | 0.8067 | 0.7178 | 719 | 0.6095 | 0.8050 | 0.6938 | 159 | 0.5848 | 0.4975 | 0.5376 | 201 | 0.5260 | 0.6328 | 0.5745 | 128 | 0.5427 | 0.6414 | 0.5880 | 198 | 0.7042 | 0.6993 | 0.7018 | 143 | 0.6157 | 0.7123 | 0.6604 | 497 | 0.6140 | 0.4907 | 0.5455 | 214 | 0.8454 | 0.8745 | 0.8597 | 1689 | 0.6125 | 0.6901 | 0.6490 | 142 | 0.6898 | 0.745 | 0.7163 | 200 | 0.7299 | 0.7339 | 0.7319 | 372 | 0.6901 | 0.7224 | 0.7059 | 407 | 0.6224 | 0.5907 | 0.6062 | 667 | 0.7312 | 0.6071 | 0.6634 | 224 | 0.4851 | 0.4750 | 0.4800 | 859 | 0.5994 | 0.4804 | 0.5333 | 433 | 0.5675 | 0.5091 | 0.5367 | 603 | 0.6905 | 0.4496 | 0.5446 | 129 | 0.5516 | 0.6379 | 0.5916 | 243 | 0.4570 | 0.3651 | 0.4059 | 189 | 0.7508 | 0.7459 | 0.7484 | 307 | 0.7833 | 0.8260 | 0.8040 | 385 | 0.7071 | 0.5103 | 0.5928 | 194 | 0.6920 | 0.7990 | 0.7416 | 194 | 0.4590 | 0.4341 | 0.4462 | 129 | 0.4717 | 0.7282 | 0.5725 | 206 | 0.7275 | 0.7821 | 0.7538 | 693 | 0.6618 | 0.6430 | 0.6523 | 563 | 0.6711 | 0.6946 | 0.6827 | 0.9317 | | 0.1965 | 3.0 | 64059 | 0.2148 | 0.7073 | 0.8406 | 0.7682 | 138 | 0.6762 | 0.7269 | 0.7006 | 227 | 0.5802 | 0.5802 | 0.5802 | 131 | 0.7565 | 0.7938 | 0.7747 | 1722 | 0.7195 | 0.7636 | 0.7409 | 719 | 0.6806 | 0.8176 | 0.7429 | 159 | 0.6867 | 0.5124 | 0.5869 
| 201 | 0.5797 | 0.625 | 0.6015 | 128 | 0.6262 | 0.6768 | 0.6505 | 198 | 0.7296 | 0.8112 | 0.7682 | 143 | 0.6439 | 0.7203 | 0.6800 | 497 | 0.6786 | 0.5327 | 0.5969 | 214 | 0.8594 | 0.8792 | 0.8692 | 1689 | 0.6545 | 0.7606 | 0.7036 | 142 | 0.7183 | 0.765 | 0.7409 | 200 | 0.7132 | 0.7688 | 0.7400 | 372 | 0.7513 | 0.7052 | 0.7275 | 407 | 0.6335 | 0.6117 | 0.6224 | 667 | 0.7514 | 0.6205 | 0.6797 | 224 | 0.4558 | 0.5821 | 0.5112 | 859 | 0.6076 | 0.5543 | 0.5797 | 433 | 0.6228 | 0.4793 | 0.5417 | 603 | 0.7159 | 0.4884 | 0.5806 | 129 | 0.56 | 0.6914 | 0.6188 | 243 | 0.4545 | 0.4497 | 0.4521 | 189 | 0.7159 | 0.8046 | 0.7577 | 307 | 0.7845 | 0.8701 | 0.8251 | 385 | 0.6667 | 0.5361 | 0.5943 | 194 | 0.7406 | 0.8093 | 0.7734 | 194 | 0.6316 | 0.5581 | 0.5926 | 129 | 0.5514 | 0.6505 | 0.5969 | 206 | 0.7538 | 0.7951 | 0.7739 | 693 | 0.6913 | 0.6803 | 0.6858 | 563 | 0.6928 | 0.7142 | 0.7033 | 0.9355 | | 0.1665 | 4.0 | 85412 | 0.2193 | 0.7917 | 0.8261 | 0.8085 | 138 | 0.7069 | 0.7225 | 0.7146 | 227 | 0.5867 | 0.6718 | 0.6263 | 131 | 0.7710 | 0.7938 | 0.7823 | 1722 | 0.6962 | 0.7650 | 0.7290 | 719 | 0.7904 | 0.8302 | 0.8098 | 159 | 0.6221 | 0.5323 | 0.5737 | 201 | 0.5743 | 0.6641 | 0.6159 | 128 | 0.5966 | 0.7172 | 0.6514 | 198 | 0.7914 | 0.7692 | 0.7801 | 143 | 0.6395 | 0.7103 | 0.6730 | 497 | 0.6422 | 0.6121 | 0.6268 | 214 | 0.8338 | 0.9059 | 0.8683 | 1689 | 0.6711 | 0.7183 | 0.6939 | 142 | 0.7635 | 0.775 | 0.7692 | 200 | 0.7669 | 0.7608 | 0.7638 | 372 | 0.6872 | 0.7936 | 0.7366 | 407 | 0.7100 | 0.5982 | 0.6493 | 667 | 0.7181 | 0.7277 | 0.7228 | 224 | 0.4765 | 0.5553 | 0.5129 | 859 | 0.6225 | 0.5866 | 0.6040 | 433 | 0.6078 | 0.5191 | 0.5599 | 603 | 0.7222 | 0.5039 | 0.5936 | 129 | 0.6065 | 0.7737 | 0.6799 | 243 | 0.4783 | 0.5238 | 0.5 | 189 | 0.7313 | 0.7980 | 0.7632 | 307 | 0.8401 | 0.8597 | 0.8498 | 385 | 0.6058 | 0.6495 | 0.6269 | 194 | 0.7512 | 0.7938 | 0.7719 | 194 | 0.6983 | 0.6279 | 0.6612 | 129 | 0.5804 | 0.7184 | 0.6421 | 206 | 0.7571 | 0.8052 | 0.7804 | 693 | 0.6916 | 
0.6892 | 0.6904 | 563 | 0.7012 | 0.7309 | 0.7158 | 0.9375 | | 0.1314 | 5.0 | 106765 | 0.2272 | 0.7707 | 0.8768 | 0.8203 | 138 | 0.7137 | 0.7577 | 0.7350 | 227 | 0.6058 | 0.6336 | 0.6194 | 131 | 0.7229 | 0.8513 | 0.7819 | 1722 | 0.7361 | 0.7761 | 0.7556 | 719 | 0.6839 | 0.8302 | 0.7500 | 159 | 0.5845 | 0.6020 | 0.5931 | 201 | 0.6148 | 0.6484 | 0.6312 | 128 | 0.6121 | 0.7172 | 0.6605 | 198 | 0.6970 | 0.8042 | 0.7468 | 143 | 0.6438 | 0.6982 | 0.6699 | 497 | 0.6197 | 0.6776 | 0.6473 | 214 | 0.8390 | 0.8887 | 0.8631 | 1689 | 0.7333 | 0.6972 | 0.7148 | 142 | 0.7443 | 0.815 | 0.7780 | 200 | 0.7217 | 0.7876 | 0.7532 | 372 | 0.7113 | 0.7568 | 0.7333 | 407 | 0.6682 | 0.6522 | 0.6601 | 667 | 0.7136 | 0.7009 | 0.7072 | 224 | 0.5351 | 0.4796 | 0.5058 | 859 | 0.5930 | 0.6259 | 0.6090 | 433 | 0.6112 | 0.5240 | 0.5643 | 603 | 0.7767 | 0.6202 | 0.6897 | 129 | 0.6254 | 0.7284 | 0.6730 | 243 | 0.4815 | 0.4815 | 0.4815 | 189 | 0.7654 | 0.8078 | 0.7861 | 307 | 0.7611 | 0.8935 | 0.8220 | 385 | 0.6667 | 0.6082 | 0.6361 | 194 | 0.7828 | 0.7990 | 0.7908 | 194 | 0.6692 | 0.6899 | 0.6794 | 129 | 0.5983 | 0.6942 | 0.6427 | 206 | 0.7584 | 0.8153 | 0.7858 | 693 | 0.6740 | 0.7052 | 0.6892 | 563 | 0.7006 | 0.7401 | 0.7198 | 0.9378 | | 0.1224 | 6.0 | 128118 | 0.2275 | 0.8286 | 0.8406 | 0.8345 | 138 | 0.6898 | 0.7445 | 0.7161 | 227 | 0.6013 | 0.7252 | 0.6574 | 131 | 0.7574 | 0.8415 | 0.7972 | 1722 | 0.7400 | 0.7483 | 0.7441 | 719 | 0.8084 | 0.8491 | 0.8282 | 159 | 0.7055 | 0.5721 | 0.6319 | 201 | 0.6061 | 0.625 | 0.6154 | 128 | 0.7090 | 0.6768 | 0.6925 | 198 | 0.7868 | 0.7483 | 0.7670 | 143 | 0.6454 | 0.7545 | 0.6957 | 497 | 0.6287 | 0.6963 | 0.6608 | 214 | 0.8548 | 0.8851 | 0.8697 | 1689 | 0.7669 | 0.7183 | 0.7418 | 142 | 0.75 | 0.825 | 0.7857 | 200 | 0.7130 | 0.8280 | 0.7662 | 372 | 0.6848 | 0.8059 | 0.7404 | 407 | 0.7112 | 0.6462 | 0.6771 | 667 | 0.7879 | 0.6964 | 0.7393 | 224 | 0.5378 | 0.5378 | 0.5378 | 859 | 0.6554 | 0.5797 | 0.6152 | 433 | 0.5946 | 0.5887 | 0.5917 | 603 | 0.8131 | 0.6744 | 
0.7373 | 129 | 0.6483 | 0.7737 | 0.7054 | 243 | 0.5537 | 0.5185 | 0.5355 | 189 | 0.7704 | 0.7980 | 0.784 | 307 | 0.8415 | 0.8961 | 0.8679 | 385 | 0.6566 | 0.6701 | 0.6633 | 194 | 0.7879 | 0.8041 | 0.7959 | 194 | 0.6159 | 0.7829 | 0.6894 | 129 | 0.5887 | 0.7087 | 0.6432 | 206 | 0.7864 | 0.8023 | 0.7943 | 693 | 0.7388 | 0.6732 | 0.7045 | 563 | 0.7221 | 0.7475 | 0.7346 | 0.9406 | | 0.0964 | 7.0 | 149471 | 0.2456 | 0.7947 | 0.8696 | 0.8304 | 138 | 0.7107 | 0.7577 | 0.7335 | 227 | 0.6522 | 0.6870 | 0.6691 | 131 | 0.7780 | 0.8182 | 0.7976 | 1722 | 0.7546 | 0.7483 | 0.7514 | 719 | 0.7870 | 0.8365 | 0.8110 | 159 | 0.6020 | 0.6020 | 0.6020 | 201 | 0.58 | 0.6797 | 0.6259 | 128 | 0.6129 | 0.7677 | 0.6816 | 198 | 0.7468 | 0.8252 | 0.7841 | 143 | 0.6642 | 0.7284 | 0.6948 | 497 | 0.6840 | 0.6776 | 0.6808 | 214 | 0.8586 | 0.8810 | 0.8697 | 1689 | 0.7836 | 0.7394 | 0.7609 | 142 | 0.7082 | 0.825 | 0.7621 | 200 | 0.7731 | 0.7876 | 0.7803 | 372 | 0.7606 | 0.7494 | 0.7550 | 407 | 0.6726 | 0.6837 | 0.6781 | 667 | 0.7581 | 0.7277 | 0.7426 | 224 | 0.5176 | 0.5634 | 0.5396 | 859 | 0.6599 | 0.6005 | 0.6288 | 433 | 0.5938 | 0.5672 | 0.5802 | 603 | 0.8776 | 0.6667 | 0.7577 | 129 | 0.7198 | 0.7613 | 0.74 | 243 | 0.5078 | 0.5185 | 0.5131 | 189 | 0.7933 | 0.7752 | 0.7842 | 307 | 0.8033 | 0.8909 | 0.8448 | 385 | 0.6071 | 0.7010 | 0.6507 | 194 | 0.7429 | 0.8041 | 0.7723 | 194 | 0.7321 | 0.6357 | 0.6805 | 129 | 0.5775 | 0.7233 | 0.6422 | 206 | 0.7858 | 0.7994 | 0.7926 | 693 | 0.6678 | 0.7282 | 0.6967 | 563 | 0.7199 | 0.7475 | 0.7334 | 0.9403 | | 0.0838 | 8.0 | 170824 | 0.2562 | 0.7722 | 0.8841 | 0.8243 | 138 | 0.6929 | 0.7753 | 0.7318 | 227 | 0.6483 | 0.7176 | 0.6812 | 131 | 0.7859 | 0.8101 | 0.7978 | 1722 | 0.7419 | 0.7316 | 0.7367 | 719 | 0.7389 | 0.8365 | 0.7847 | 159 | 0.5797 | 0.5970 | 0.5882 | 201 | 0.5878 | 0.6797 | 0.6304 | 128 | 0.6574 | 0.7172 | 0.6860 | 198 | 0.7597 | 0.8182 | 0.7879 | 143 | 0.7108 | 0.7123 | 0.7116 | 497 | 0.6511 | 0.7150 | 0.6815 | 214 | 0.8791 | 0.8822 | 0.8806 | 
1689 | 0.75 | 0.7606 | 0.7552 | 142 | 0.7594 | 0.805 | 0.7816 | 200 | 0.7842 | 0.8011 | 0.7926 | 372 | 0.7395 | 0.7813 | 0.7599 | 407 | 0.6965 | 0.6777 | 0.6869 | 667 | 0.7179 | 0.75 | 0.7336 | 224 | 0.5081 | 0.5809 | 0.5421 | 859 | 0.6327 | 0.6166 | 0.6246 | 433 | 0.6094 | 0.5821 | 0.5954 | 603 | 0.8776 | 0.6667 | 0.7577 | 129 | 0.7059 | 0.7407 | 0.7229 | 243 | 0.5444 | 0.5185 | 0.5312 | 189 | 0.7722 | 0.7948 | 0.7833 | 307 | 0.8067 | 0.8779 | 0.8408 | 385 | 0.6408 | 0.6804 | 0.6600 | 194 | 0.7546 | 0.8402 | 0.7951 | 194 | 0.6831 | 0.7519 | 0.7159 | 129 | 0.6255 | 0.7136 | 0.6667 | 206 | 0.7392 | 0.8427 | 0.7876 | 693 | 0.7289 | 0.7069 | 0.7178 | 563 | 0.7242 | 0.7514 | 0.7376 | 0.9414 | | 0.0753 | 9.0 | 192177 | 0.2708 | 0.8026 | 0.8841 | 0.8414 | 138 | 0.7054 | 0.8018 | 0.7505 | 227 | 0.6277 | 0.6565 | 0.6418 | 131 | 0.7762 | 0.8380 | 0.8059 | 1722 | 0.7552 | 0.7552 | 0.7552 | 719 | 0.7701 | 0.8428 | 0.8048 | 159 | 0.6610 | 0.5821 | 0.6190 | 201 | 0.5915 | 0.6562 | 0.6222 | 128 | 0.6575 | 0.7273 | 0.6906 | 198 | 0.7887 | 0.7832 | 0.7860 | 143 | 0.7050 | 0.7163 | 0.7106 | 497 | 0.6270 | 0.7383 | 0.6781 | 214 | 0.8441 | 0.8881 | 0.8656 | 1689 | 0.7589 | 0.7535 | 0.7562 | 142 | 0.7125 | 0.855 | 0.7773 | 200 | 0.755 | 0.8118 | 0.7824 | 372 | 0.7512 | 0.8010 | 0.7753 | 407 | 0.6788 | 0.6972 | 0.6879 | 667 | 0.7830 | 0.7411 | 0.7615 | 224 | 0.5155 | 0.5821 | 0.5467 | 859 | 0.6386 | 0.6490 | 0.6438 | 433 | 0.6629 | 0.5804 | 0.6189 | 603 | 0.8598 | 0.7132 | 0.7797 | 129 | 0.6667 | 0.7490 | 0.7054 | 243 | 0.4787 | 0.5344 | 0.505 | 189 | 0.7610 | 0.7883 | 0.7744 | 307 | 0.8285 | 0.8909 | 0.8586 | 385 | 0.7027 | 0.6701 | 0.6860 | 194 | 0.7778 | 0.8299 | 0.8030 | 194 | 0.6923 | 0.7674 | 0.7279 | 129 | 0.6396 | 0.6893 | 0.6636 | 206 | 0.7879 | 0.8095 | 0.7986 | 693 | 0.7110 | 0.7123 | 0.7116 | 563 | 0.7258 | 0.7593 | 0.7422 | 0.9424 | | 0.0574 | 10.0 | 213530 | 0.2862 | 0.8 | 0.8986 | 0.8464 | 138 | 0.7375 | 0.7797 | 0.7580 | 227 | 0.6471 | 0.6718 | 0.6592 | 131 | 0.7831 | 
0.8008 | 0.7918 | 1722 | 0.6997 | 0.7747 | 0.7353 | 719 | 0.7714 | 0.8491 | 0.8084 | 159 | 0.6091 | 0.5970 | 0.6030 | 201 | 0.6357 | 0.6406 | 0.6381 | 128 | 0.7 | 0.7071 | 0.7035 | 198 | 0.7436 | 0.8112 | 0.7759 | 143 | 0.6729 | 0.7243 | 0.6977 | 497 | 0.6830 | 0.7150 | 0.6986 | 214 | 0.8627 | 0.8857 | 0.8741 | 1689 | 0.7483 | 0.7535 | 0.7509 | 142 | 0.7611 | 0.86 | 0.8075 | 200 | 0.7846 | 0.8226 | 0.8031 | 372 | 0.7640 | 0.7715 | 0.7677 | 407 | 0.6921 | 0.6942 | 0.6931 | 667 | 0.7478 | 0.7545 | 0.7511 | 224 | 0.5079 | 0.5960 | 0.5485 | 859 | 0.6457 | 0.6397 | 0.6427 | 433 | 0.6223 | 0.5821 | 0.6015 | 603 | 0.8704 | 0.7287 | 0.7932 | 129 | 0.7041 | 0.7737 | 0.7373 | 243 | 0.5073 | 0.5503 | 0.5279 | 189 | 0.7680 | 0.7980 | 0.7827 | 307 | 0.8658 | 0.8883 | 0.8769 | 385 | 0.7111 | 0.6598 | 0.6845 | 194 | 0.7681 | 0.8196 | 0.7930 | 194 | 0.7197 | 0.7364 | 0.7280 | 129 | 0.6192 | 0.7184 | 0.6652 | 206 | 0.7922 | 0.8196 | 0.8057 | 693 | 0.7206 | 0.7194 | 0.72 | 563 | 0.7282 | 0.7572 | 0.7424 | 0.9424 | | 0.0568 | 11.0 | 234883 | 0.2951 | 0.8026 | 0.8841 | 0.8414 | 138 | 0.7458 | 0.7753 | 0.7603 | 227 | 0.6241 | 0.6718 | 0.6471 | 131 | 0.7737 | 0.8240 | 0.7981 | 1722 | 0.7646 | 0.7455 | 0.7549 | 719 | 0.8121 | 0.8428 | 0.8272 | 159 | 0.6685 | 0.5920 | 0.6280 | 201 | 0.6870 | 0.6172 | 0.6502 | 128 | 0.7150 | 0.6970 | 0.7059 | 198 | 0.7872 | 0.7762 | 0.7817 | 143 | 0.6631 | 0.7485 | 0.7032 | 497 | 0.6842 | 0.6682 | 0.6761 | 214 | 0.8594 | 0.8828 | 0.8709 | 1689 | 0.7863 | 0.7254 | 0.7546 | 142 | 0.7824 | 0.845 | 0.8125 | 200 | 0.7628 | 0.8038 | 0.7827 | 372 | 0.7664 | 0.7740 | 0.7702 | 407 | 0.7232 | 0.6777 | 0.6997 | 667 | 0.7820 | 0.7366 | 0.7586 | 224 | 0.5362 | 0.5949 | 0.5640 | 859 | 0.6306 | 0.6467 | 0.6385 | 433 | 0.6472 | 0.5871 | 0.6157 | 603 | 0.8857 | 0.7209 | 0.7949 | 129 | 0.7138 | 0.7901 | 0.7500 | 243 | 0.5075 | 0.5397 | 0.5231 | 189 | 0.7834 | 0.8013 | 0.7923 | 307 | 0.8561 | 0.8961 | 0.8756 | 385 | 0.6809 | 0.6598 | 0.6702 | 194 | 0.7656 | 0.8247 | 0.7940 | 
194 | 0.6736 | 0.7519 | 0.7106 | 129 | 0.6262 | 0.6505 | 0.6381 | 206 | 0.7892 | 0.8211 | 0.8048 | 693 | 0.7561 | 0.7105 | 0.7326 | 563 | 0.7390 | 0.7548 | 0.7468 | 0.9431 | | 0.0465 | 12.0 | 256236 | 0.3103 | 0.8194 | 0.9203 | 0.8669 | 138 | 0.7031 | 0.7930 | 0.7453 | 227 | 0.5867 | 0.6718 | 0.6263 | 131 | 0.7829 | 0.8211 | 0.8016 | 1722 | 0.7582 | 0.7413 | 0.7496 | 719 | 0.8059 | 0.8616 | 0.8328 | 159 | 0.6648 | 0.5920 | 0.6263 | 201 | 0.6385 | 0.6484 | 0.6434 | 128 | 0.6827 | 0.7172 | 0.6995 | 198 | 0.7778 | 0.8322 | 0.8041 | 143 | 0.6679 | 0.7324 | 0.6987 | 497 | 0.6864 | 0.7056 | 0.6959 | 214 | 0.8473 | 0.8905 | 0.8684 | 1689 | 0.7552 | 0.7606 | 0.7579 | 142 | 0.7362 | 0.865 | 0.7954 | 200 | 0.7487 | 0.8011 | 0.7740 | 372 | 0.7470 | 0.7764 | 0.7614 | 407 | 0.7042 | 0.7031 | 0.7037 | 667 | 0.7435 | 0.7634 | 0.7533 | 224 | 0.5438 | 0.5856 | 0.5639 | 859 | 0.6261 | 0.6536 | 0.6395 | 433 | 0.6442 | 0.6186 | 0.6311 | 603 | 0.8482 | 0.7364 | 0.7884 | 129 | 0.7283 | 0.7613 | 0.7445 | 243 | 0.5075 | 0.5397 | 0.5231 | 189 | 0.7915 | 0.7915 | 0.7915 | 307 | 0.8564 | 0.8987 | 0.8771 | 385 | 0.6211 | 0.7268 | 0.6698 | 194 | 0.7633 | 0.8144 | 0.7880 | 194 | 0.7313 | 0.7597 | 0.7452 | 129 | 0.6450 | 0.7233 | 0.6819 | 206 | 0.7758 | 0.8139 | 0.7944 | 693 | 0.7189 | 0.7176 | 0.7182 | 563 | 0.7312 | 0.7621 | 0.7464 | 0.9428 | | 0.0459 | 13.0 | 277589 | 0.3141 | 0.8267 | 0.8986 | 0.8611 | 138 | 0.7254 | 0.7797 | 0.7516 | 227 | 0.6099 | 0.6565 | 0.6324 | 131 | 0.7929 | 0.8182 | 0.8054 | 1722 | 0.7562 | 0.7677 | 0.7619 | 719 | 0.8084 | 0.8491 | 0.8282 | 159 | 0.6302 | 0.6020 | 0.6158 | 201 | 0.6412 | 0.6562 | 0.6486 | 128 | 0.6931 | 0.7071 | 0.7000 | 198 | 0.7770 | 0.8042 | 0.7904 | 143 | 0.6834 | 0.7384 | 0.7099 | 497 | 0.6967 | 0.6869 | 0.6918 | 214 | 0.8631 | 0.8845 | 0.8737 | 1689 | 0.7939 | 0.7324 | 0.7619 | 142 | 0.7830 | 0.83 | 0.8058 | 200 | 0.7822 | 0.8011 | 0.7915 | 372 | 0.7482 | 0.7740 | 0.7609 | 407 | 0.6982 | 0.6972 | 0.6977 | 667 | 0.7867 | 0.7411 | 0.7632 | 224 | 
0.5323 | 0.5856 | 0.5576 | 859 | 0.6469 | 0.6559 | 0.6514 | 433 | 0.6512 | 0.6036 | 0.6265 | 603 | 0.8611 | 0.7209 | 0.7848 | 129 | 0.7287 | 0.7737 | 0.7505 | 243 | 0.5185 | 0.5185 | 0.5185 | 189 | 0.7910 | 0.8013 | 0.7961 | 307 | 0.8715 | 0.8987 | 0.8849 | 385 | 0.7283 | 0.6907 | 0.7090 | 194 | 0.7512 | 0.7938 | 0.7719 | 194 | 0.7313 | 0.7597 | 0.7452 | 129 | 0.6147 | 0.6893 | 0.6499 | 206 | 0.7947 | 0.8268 | 0.8105 | 693 | 0.7170 | 0.7247 | 0.7208 | 563 | 0.7406 | 0.7588 | 0.7496 | 0.9436 | | 0.0386 | 14.0 | 298942 | 0.3268 | 0.8333 | 0.9058 | 0.8681 | 138 | 0.7092 | 0.7841 | 0.7448 | 227 | 0.6028 | 0.6489 | 0.625 | 131 | 0.7848 | 0.8153 | 0.7998 | 1722 | 0.7701 | 0.7594 | 0.7647 | 719 | 0.8047 | 0.8553 | 0.8293 | 159 | 0.6373 | 0.6119 | 0.6244 | 201 | 0.6204 | 0.6641 | 0.6415 | 128 | 0.6794 | 0.7172 | 0.6978 | 198 | 0.7986 | 0.8042 | 0.8014 | 143 | 0.6691 | 0.7324 | 0.6993 | 497 | 0.7109 | 0.7009 | 0.7059 | 214 | 0.8624 | 0.8834 | 0.8728 | 1689 | 0.7754 | 0.7535 | 0.7643 | 142 | 0.7757 | 0.83 | 0.8019 | 200 | 0.7712 | 0.8065 | 0.7884 | 372 | 0.7621 | 0.7715 | 0.7668 | 407 | 0.6782 | 0.7076 | 0.6926 | 667 | 0.7661 | 0.7455 | 0.7557 | 224 | 0.5417 | 0.5669 | 0.5540 | 859 | 0.6388 | 0.6536 | 0.6461 | 433 | 0.6160 | 0.6119 | 0.6140 | 603 | 0.8889 | 0.7442 | 0.8101 | 129 | 0.7431 | 0.7737 | 0.7581 | 243 | 0.5025 | 0.5397 | 0.5204 | 189 | 0.7915 | 0.7915 | 0.7915 | 307 | 0.8618 | 0.8909 | 0.8761 | 385 | 0.6699 | 0.7113 | 0.6900 | 194 | 0.7621 | 0.8093 | 0.7850 | 194 | 0.7226 | 0.7674 | 0.7444 | 129 | 0.6384 | 0.6942 | 0.6651 | 206 | 0.7872 | 0.8167 | 0.8017 | 693 | 0.7140 | 0.7229 | 0.7184 | 563 | 0.7358 | 0.7585 | 0.7470 | 0.9435 | | 0.037 | 15.0 | 320295 | 0.3308 | 0.8117 | 0.9058 | 0.8562 | 138 | 0.716 | 0.7885 | 0.7505 | 227 | 0.6028 | 0.6489 | 0.625 | 131 | 0.7838 | 0.8252 | 0.8040 | 1722 | 0.7835 | 0.7552 | 0.7691 | 719 | 0.7953 | 0.8553 | 0.8242 | 159 | 0.6630 | 0.5970 | 0.6283 | 201 | 0.6296 | 0.6641 | 0.6464 | 128 | 0.6961 | 0.7172 | 0.7065 | 198 | 0.8 | 
0.8112 | 0.8056 | 143 | 0.6849 | 0.7304 | 0.7069 | 497 | 0.7156 | 0.7056 | 0.7106 | 214 | 0.8631 | 0.8845 | 0.8737 | 1689 | 0.7852 | 0.7465 | 0.7653 | 142 | 0.7602 | 0.84 | 0.7981 | 200 | 0.7601 | 0.8091 | 0.7839 | 372 | 0.7506 | 0.7617 | 0.7561 | 407 | 0.6943 | 0.7151 | 0.7046 | 667 | 0.7767 | 0.7455 | 0.7608 | 224 | 0.5351 | 0.5588 | 0.5467 | 859 | 0.6453 | 0.6513 | 0.6483 | 433 | 0.6277 | 0.6153 | 0.6214 | 603 | 0.8981 | 0.7519 | 0.8186 | 129 | 0.7362 | 0.7695 | 0.7525 | 243 | 0.5178 | 0.5397 | 0.5285 | 189 | 0.7806 | 0.7883 | 0.7844 | 307 | 0.8804 | 0.8987 | 0.8895 | 385 | 0.6863 | 0.7216 | 0.7035 | 194 | 0.7696 | 0.8093 | 0.7889 | 194 | 0.7333 | 0.7674 | 0.7500 | 129 | 0.6471 | 0.6942 | 0.6698 | 206 | 0.7917 | 0.8225 | 0.8068 | 693 | 0.7255 | 0.7229 | 0.7242 | 563 | 0.7404 | 0.7600 | 0.7501 | 0.9437 |
0f8db13cae613bae2f3a31157343da88
mit
['generated_from_trainer']
false
xlmRoberta-for-VietnameseQA

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the UIT-Viquad_v2 dataset. It achieves the following results on the evaluation set:
- Loss: 0.8315
e2930e82edb2262378d17b3f7e1f9343
mit
['generated_from_trainer']
false
Training and evaluation data

Credits to Viet Nguyen (FPTU AI Club) for the training and evaluation data.
- Training data: https://github.com/vietnguyen012/QA_viuit/blob/main/train.json
- Evaluation data: https://github.com/vietnguyen012/QA_viuit/blob/main/trial/trial.json
c0e702c9cddb177293b9410c12971705
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
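The Adam settings listed above (betas=(0.9,0.999), epsilon=1e-08) can be made concrete with a single scalar update step. This is an illustrative sketch of the plain Adam rule with bias correction, omitting weight decay and any framework details:

```python
def adam_step(param, grad, state, lr=2e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter.

    state holds the step count t and the running first/second
    moment estimates m and v."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad * grad
    m_hat = state["m"] / (1 - beta1 ** state["t"])  # bias-corrected mean
    v_hat = state["v"] / (1 - beta2 ** state["t"])  # bias-corrected variance
    return param - lr * m_hat / (v_hat ** 0.5 + eps)
```

On the very first step the bias correction makes the update size approximately `lr` regardless of the gradient's magnitude, which is why Adam is robust to gradient scale early in training.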
55bb74f7a477639d4496fe21d46bed3c
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5701 | 1.0 | 2534 | 1.2220 |
| 1.2942 | 2.0 | 5068 | 0.9698 |
| 1.0693 | 3.0 | 7602 | 0.8315 |
83a91659ef76bb37a44836baa3343d97
mit
['generated_from_trainer']
false
deberta_base_fine_tuned_mind This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3914 - Accuracy: 0.9085
cdee2fab5b573a4eaf76b6e82919c359
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7244 | 1.0 | 3054 | 0.5959 | 0.8013 |
| 0.5036 | 2.0 | 6108 | 0.3817 | 0.8805 |
| 0.3064 | 3.0 | 9162 | 0.3914 | 0.9085 |
9005e362b0d84ab3c10edb6c8132fc89
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-mrpc-from-scratch-custom-tokenizer-expand-vocab This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.1253
9d82bb1aacc3bf472a301e732079fae5
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.7459 | 1.09 | 500 | 6.8361 |
| 6.6663 | 2.18 | 1000 | 6.5166 |
| 6.4828 | 3.27 | 1500 | 6.4653 |
| 6.376 | 4.36 | 2000 | 6.3790 |
| 6.2758 | 5.45 | 2500 | 6.3507 |
| 6.2192 | 6.54 | 3000 | 6.2435 |
| 6.1177 | 7.63 | 3500 | 6.2547 |
| 6.0904 | 8.71 | 4000 | 6.1996 |
| 6.0272 | 9.8 | 4500 | 6.2123 |
| 5.9979 | 10.89 | 5000 | 6.1253 |
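Since this is a masked-language-modelling run, the validation losses above can be read as pseudo-perplexities by exponentiating them. This assumes the reported loss is mean per-token cross-entropy in nats, which the card does not state explicitly:

```python
import math

# Validation losses from the first and last evaluations in the table above
first_eval_loss = 6.8361   # step 500
final_eval_loss = 6.1253   # step 5000

# Pseudo-perplexity = exp(mean cross-entropy loss)
print(round(math.exp(first_eval_loss), 1))  # roughly 931
print(round(math.exp(final_eval_loss), 1))  # roughly 457
```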
ec0a2937c3f964c0cac8aa12c11dad19
apache-2.0
['vision', 'maxim', 'image-to-image']
false
MAXIM pre-trained on REDS for image deblurring MAXIM model pre-trained for image deblurring. It was introduced in the paper [MAXIM: Multi-Axis MLP for Image Processing](https://arxiv.org/abs/2201.02973) by Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li and first released in [this repository](https://github.com/google-research/maxim). Disclaimer: The team releasing MAXIM did not write a model card for this model so this model card has been written by the Hugging Face team.
4fd9ac01ab0eb072ae020896bb304554
apache-2.0
['vision', 'maxim', 'image-to-image']
false
How to use

Here is how to use this model:

```python
from huggingface_hub import from_pretrained_keras
from PIL import Image
import tensorflow as tf
import numpy as np
import requests

url = "https://github.com/sayakpaul/maxim-tf/blob/main/images/Deblurring/input/109fromGOPR1096.MP4.png?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
image = np.array(image)
image = tf.convert_to_tensor(image)
image = tf.image.resize(image, (256, 256))

model = from_pretrained_keras("google/maxim-s3-deblurring-reds")
predictions = model.predict(tf.expand_dims(image, 0))
```

For a more elaborate prediction pipeline, refer to [this Colab Notebook](https://colab.research.google.com/github/sayakpaul/maxim-tf/blob/main/notebooks/inference-dynamic-resize.ipynb).
21c0085498a931cfbd96bc71e31aac53
mit
['generated_from_trainer']
false
bertimbau-base-finetuned-lener-br-finetuned-peticoes-grupo_competencia This model is a fine-tuned version of [Luciano/bertimbau-base-finetuned-lener-br](https://huggingface.co/Luciano/bertimbau-base-finetuned-lener-br) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3234 - Accuracy: 0.9434
926c1ecd8b0129736d26e90415988e65
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.37 | 1.0 | 897 | 0.2100 | 0.9365 |
| 0.1662 | 2.0 | 1794 | 0.2009 | 0.9479 |
| 0.1205 | 3.0 | 2691 | 0.2489 | 0.9423 |
| 0.0855 | 4.0 | 3588 | 0.2918 | 0.9404 |
| 0.0438 | 5.0 | 4485 | 0.3234 | 0.9434 |
77ecc371a431970e6e8d682c92bb793f
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Fantasy Scene on Stable Diffusion via Dreambooth This is the Stable Diffusion model fine-tuned on the Fantasy Scene concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of fantasy_scene**
25953daa422ff5ef12fe2b4c0c3782d3
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Run on [Mirage](https://app.mirageml.com) Run this model and explore text-to-3D on [Mirage](https://app.mirageml.com)! Here is a sample output for this model: ![image 0](https://huggingface.co/MirageML/fantasy-scene/resolve/main/output.png)
4e168e90b26ec02f04c452bef4daf61f
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
- training precision: Mixed Precision
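The `total_train_batch_size` of 256 follows from the micro-batch size and gradient accumulation. A quick sanity check (assuming an IPU replication factor of 1 for training, which the card does not state):

```python
train_batch_size = 4             # per-device micro-batch size
gradient_accumulation_steps = 64

# Effective batch size per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)    # 256, matching the value reported above
```

Note that a `total_eval_batch_size` of 20 with an `eval_batch_size` of 4 would imply a replication factor of 5 at evaluation time; that is an inference, not something the card states.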
f62f741365312ab001a4fe7fcfd549e3
apache-2.0
['summarization', 'generated_from_trainer']
false
mt5-small-test-ged-mlsum_max_target_length_10

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the mlsum dataset. It achieves the following results on the evaluation set:
- Loss: 0.3341
- Rouge1: 74.8229
- Rouge2: 68.1808
- Rougel: 74.8297
- Rougelsum: 74.8414
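ROUGE-1 scores like those above measure unigram overlap between a generated summary and a reference. The reported numbers come from the full ROUGE implementation (with its own tokenisation, stemming and aggregation); as a rough illustration of the metric only, a minimal whitespace-tokenised version:

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Simplified unigram-overlap F1 (no stemming or tokenizer normalisation)."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat lay on the mat")
print(round(score, 4))  # 0.8333: 5 of 6 unigrams overlap on each side
```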
ca1692f5f6b4b650d7a7033ac2094599
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.5565 | 1.0 | 33296 | 0.3827 | 69.9041 | 62.821 | 69.8709 | 69.8924 |
| 0.2636 | 2.0 | 66592 | 0.3552 | 72.0701 | 65.4937 | 72.0787 | 72.091 |
| 0.2309 | 3.0 | 99888 | 0.3525 | 72.5071 | 65.8026 | 72.5132 | 72.512 |
| 0.2109 | 4.0 | 133184 | 0.3346 | 74.0842 | 67.4776 | 74.0887 | 74.0968 |
| 0.1972 | 5.0 | 166480 | 0.3398 | 74.6051 | 68.6024 | 74.6177 | 74.6365 |
| 0.1867 | 6.0 | 199776 | 0.3283 | 74.9022 | 68.2146 | 74.9023 | 74.926 |
| 0.1785 | 7.0 | 233072 | 0.3325 | 74.8631 | 68.2468 | 74.8843 | 74.9026 |
| 0.1725 | 8.0 | 266368 | 0.3341 | 74.8229 | 68.1808 | 74.8297 | 74.8414 |
41f92f82656d29003ead7b81ba82151b
apache-2.0
['translation']
false
opus-mt-fr-ty

* source languages: fr
* target languages: ty
* OPUS readme: [fr-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ty/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.eval.txt)
18429170a92b4bc7a88e63956d4b2dbf
apache-2.0
['translation']
false
nld-ukr

* source group: Dutch
* target group: Ukrainian
* OPUS readme: [nld-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-ukr/README.md)
* source language(s): nld
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.eval.txt)
1ce09b74a71c59144c47f63e4c206143
apache-2.0
['translation']
false
System Info:
- hf_name: nld-ukr
- source_languages: nld
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'uk']
- src_constituents: {'nld'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.test.txt
- src_alpha3: nld
- tgt_alpha3: ukr
- short_pair: nl-uk
- chrF2_score: 0.619
- bleu: 40.8
- brevity_penalty: 0.992
- ref_len: 51674.0
- src_name: Dutch
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: nl
- tgt_alpha2: uk
- prefer_old: False
- long_pair: nld-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
c56f9f7b52d9ea6cf045486a877d8b10
apache-2.0
['generated_from_trainer']
false
bert-base-banking77-pt2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset. It achieves the following results on the evaluation set: - Loss: 0.2982 - F1: 0.9392
7ebd511d61801d1171e607a2f0cc0b9e
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
bdff8cd22d4fb297c1e0448e84955be8
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1486 | 1.0 | 626 | 0.3336 | 0.9223 |
| 0.0934 | 2.0 | 1252 | 0.3148 | 0.9324 |
| 0.0314 | 3.0 | 1878 | 0.2982 | 0.9392 |
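The F1 values reported above combine precision and recall as their harmonic mean. A minimal sketch with hypothetical precision/recall values (for illustration only; the card does not report precision or recall separately):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical values, chosen only to illustrate the formula
print(round(f1(0.95, 0.93), 4))  # 0.9399
```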
99ff7a0441eab80e0446aeebd6cbfa92
apache-2.0
['whisper-event']
false
Whisper Tiny Tatar - Kirill Milintsevich This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.5106 - Wer: 49.2285
8546c269e7da14f5ee8c88e1bc5c1387
apache-2.0
['whisper-event']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4268 | 2.49 | 500 | 0.6232 | 63.6537 |
| 0.2331 | 4.98 | 1000 | 0.5044 | 52.3818 |
| 0.1332 | 7.46 | 1500 | 0.4927 | 50.2300 |
| 0.09 | 9.95 | 2000 | 0.5106 | 49.2285 |
| 0.048 | 12.44 | 2500 | 0.5526 | 49.7806 |
| 0.0346 | 14.93 | 3000 | 0.5850 | 50.0319 |
| 0.0181 | 17.41 | 3500 | 0.6276 | 50.5592 |
| 0.0122 | 19.9 | 4000 | 0.6494 | 50.3327 |
| 0.0086 | 22.39 | 4500 | 0.6737 | 50.6688 |
| 0.0077 | 24.88 | 5000 | 0.6777 | 50.6724 |
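The WER (word error rate) reported above is the word-level Levenshtein distance between the hypothesis and the reference transcript, divided by the reference length (here expressed as a percentage). A self-contained sketch of the metric; in practice a library such as `jiwer` is normally used:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length, as a %."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return 100.0 * dp[-1][-1] / len(ref)

# One substitution (quick -> quack) and one insertion (jumps) over 4 ref words
print(wer("the quick brown fox", "the quack brown fox jumps"))  # 50.0
```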
5b9d5e2e229660e7e3348e196517f556
apache-2.0
['automatic-speech-recognition', 'zh-CN']
false
exp_w2v2t_zh-cn_unispeech-ml_s772 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
2115eb16bf05f5c83a44a308f8511248
apache-2.0
['generated_from_trainer']
false
eval_masked_v4_rte This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.8360 - Accuracy: 0.6209
2d7de098e17b8033a0d3cf419633649b
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3076 - Accuracy: 0.8767 - F1: 0.8771
7f88a7e4faec6cd5bd1acd947158f9df
cc-by-4.0
[]
false
Cour de Cassation semi-automatic *titrage* prediction model Model for the semi-automatic prediction of *titrages* (keyword sequences) from *sommaires* (syntheses of legal cases). The models are similar to the automatic models described in [this paper](https://hal.inria.fr/hal-03663110/file/LREC_2022___CCass_Inria-camera-ready.pdf) and to the model available [here](https://huggingface.co/rbawden/CCASS-pred-titrages-base). If you use this semi-automatic model, please cite our research paper (see [below](#cite)).
2be6af19ec0616a54922a1aafcb0726f
cc-by-4.0
[]
false
Model description The model is a transformer-base model trained on parallel data (sommaires-titrages) provided by the Cour de Cassation. The model was initially trained using the Fairseq toolkit, converted to HuggingFace and then fine-tuned on the original training data to smooth out minor differences that arose during the conversion process. Tokenisation is performed using a SentencePiece model, the BPE strategy and a vocab size of 8000.
408f380957bffb73db9f1a31b8304cf5
cc-by-4.0
[]
false
How to use

Contrary to the [automatic *titrage* prediction model](https://huggingface.co/rbawden/CCASS-pred-titrages-base) (designed to predict the entire sequence), this model is designed to help in the manual production of *titrages* by proposing the next *titre* (keyword) in the sequence, given a *sommaire* and the beginning of the *titrage*. Model input is the *matière* (matter), followed by the *titres* already decided on and then the text of the *sommaire*, all separated by the token `<t>`. Each example should be on a single line. E.g. `bail <t> résiliation <t> causes <t> La recommendation du tribunal selon l'article...` (a fictive example for illustrative purposes, where matter=`bail` and the beginning of the *titrage*=`résiliation <t> causes`). The maximum input length of the model is 1024 tokens (after tokenisation).

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokeniser = AutoTokenizer.from_pretrained("rbawden/CCASS-semi-auto-titrages-base")
model = AutoModelForSeq2SeqLM.from_pretrained("rbawden/CCASS-semi-auto-titrages-base")

matiere_and_titrage_prefix = "matter <t> titre"
sommaire = "full text from the sommaire on a single line"

inputs = tokeniser([matiere_and_titrage_prefix + " <t> " + sommaire], return_tensors='pt')
outputs = model.generate(inputs['input_ids'])
tokeniser.batch_decode(outputs, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```
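The input format described above can be sketched as a small helper; the function name and the example strings are illustrative only:

```python
def build_input(matiere, titres, sommaire):
    """Join the matter, the titres chosen so far and the sommaire with <t>."""
    return " <t> ".join([matiere] + titres + [sommaire])

example = build_input("bail", ["résiliation", "causes"],
                      "La recommendation du tribunal selon l'article...")
print(example)
# bail <t> résiliation <t> causes <t> La recommendation du tribunal selon l'article...
```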
627d0dd33e2d26fcbdd88489ed977e0e
cc-by-4.0
[]
false
Limitations and bias The models' predictions should not be taken as ground-truth *titrages* and the final decision should be the expert's. The model is not constrained to predict *titres* that have previously been seen, so this should be taken into account in the deployment of this model as a *titrage* tool in order to avoid the multiplication of different *titres*.
a1e6a2e9a18518eaffedf3193f5e59d9
cc-by-4.0
[]
false
Training data Training data is provided by the Cour de Cassation (the original source being Jurinet data, but with pseudo-anonymisation applied). For training, we use a total of 159,836 parallel examples (each example is a sommaire-titrage pair). Our development data consists of 1,833 held-out examples.
0f5df177eaf8e2b1cd784d96e515f0d4
cc-by-4.0
[]
false
Preprocessing We use SentencePiece, the BPE strategy and a joint vocabulary of 8000 tokens. This model was converted into the HuggingFace format and integrates a number of normalisation processes (e.g. removing doubled apostrophes and quotes, normalisation of different accent formats, lowercasing).
0095d87d1c2eb245def705058095f4dd
cc-by-4.0
[]
false
Training The model was initially trained using Fairseq until convergence on the development set (according to our customised weighted accuracy measure; please see [the paper](https://hal.inria.fr/hal-03663110/file/LREC_2022___CCass_Inria-camera-ready.pdf) for more details). The model was then converted to HuggingFace and training continued to smooth out inconsistencies introduced during the conversion procedure (incompatibilities in the way the SentencePiece and NMT vocabularies are defined, linked to HuggingFace vocabularies necessarily being the same as the tokeniser vocabulary, a constraint that is not imposed in Fairseq).
1cd791fb855073ff12591ef3d133d406
cc-by-4.0
[]
false
Evaluation results Full results for the initial (automatic) Fairseq models can be found in [the paper](https://hal.inria.fr/hal-03663110/file/LREC_2022___CCass_Inria-camera-ready.pdf). Results on this semi-automatic model coming soon!
d8c7cf61f301ff6e7695cf499dc474ff
cc-by-4.0
[]
false
BibTex entry and citation info <a name="cite"></a>

If you use this work, please cite the following article: Thibault Charmet, Inès Cherichi, Matthieu Allain, Urszula Czerwinska, Amaury Fouret, Benoît Sagot and Rachel Bawden, 2022. [**Complex Labelling and Similarity Prediction in Legal Texts: Automatic Analysis of France’s Court of Cassation Rulings**](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.509.pdf). In Proceedings of the 13th Language Resources and Evaluation Conference, Marseille, France.

```
@inproceedings{charmet-et-al-2022-complex,
  title = {Complex Labelling and Similarity Prediction in Legal Texts: Automatic Analysis of France’s Court of Cassation Rulings},
  author = {Charmet, Thibault and Cherichi, Inès and Allain, Matthieu and Czerwinska, Urszula and Fouret, Amaury and Sagot, Benoît and Bawden, Rachel},
  booktitle = {Proceedings of the 13th Language Resources and Evaluation Conference},
  year = {2022},
  address = {Marseille, France},
  pages = {4754--4766},
  url = {http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.509.pdf}
}
```
c2707eaee88bbfdb970fe5385fa00a8d
mit
['generated_from_trainer']
false
wonderful_engelbart This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the 
tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
6af3316324ce981b40ac35e9f1188040
mit
['generated_from_trainer']
false
Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.00056}, 'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048, 'prefix': '<|aligned|>'}, {'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prefix': '<|aligned|>', 'prompt_before_control': True, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'num_additional_tokens': 2, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'wonderful_engelbart', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
5b2a9ba373f0d3b69563ac8b2cfd5b5d
apache-2.0
['generated_from_trainer']
false
swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1507 - Accuracy: 0.9342
5c5460da8b1e87470158c8d30bf0d6e9
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
99fa0f3eb9ae44774b7eb92085fe08ba
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2891 | 1.0 | 146 | 0.2322 | 0.9068 |
| 0.2609 | 2.0 | 292 | 0.1710 | 0.9227 |
| 0.2417 | 3.0 | 438 | 0.1830 | 0.9251 |
| 0.2406 | 4.0 | 584 | 0.1809 | 0.9198 |
| 0.2113 | 5.0 | 730 | 0.1631 | 0.9289 |
| 0.1812 | 6.0 | 876 | 0.1561 | 0.9308 |
| 0.2082 | 7.0 | 1022 | 0.1507 | 0.9342 |
| 0.1922 | 8.0 | 1168 | 0.1611 | 0.9294 |
| 0.1715 | 9.0 | 1314 | 0.1536 | 0.9308 |
| 0.1675 | 10.0 | 1460 | 0.1609 | 0.9289 |
| 0.194 | 11.0 | 1606 | 0.1499 | 0.9337 |
| 0.1706 | 12.0 | 1752 | 0.1514 | 0.9323 |
ed693e447b4e9084b89be205200cb408
mit
['generated_from_trainer']
false
predict-perception-bertino-cause-object This model is a fine-tuned version of [indigo-ai/BERTino](https://huggingface.co/indigo-ai/BERTino) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0766 - R2: 0.8216
f21e9ea6cffe061949664dc5ca1606e8
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6807 | 1.0 | 14 | 0.4011 | 0.0652 |
| 0.3529 | 2.0 | 28 | 0.2304 | 0.4631 |
| 0.1539 | 3.0 | 42 | 0.0596 | 0.8611 |
| 0.0853 | 4.0 | 56 | 0.1600 | 0.6272 |
| 0.066 | 5.0 | 70 | 0.1596 | 0.6280 |
| 0.0563 | 6.0 | 84 | 0.1146 | 0.7330 |
| 0.0777 | 7.0 | 98 | 0.1010 | 0.7646 |
| 0.0299 | 8.0 | 112 | 0.0897 | 0.7910 |
| 0.0311 | 9.0 | 126 | 0.0832 | 0.8061 |
| 0.0274 | 10.0 | 140 | 0.0988 | 0.7697 |
| 0.0262 | 11.0 | 154 | 0.1048 | 0.7557 |
| 0.0204 | 12.0 | 168 | 0.0615 | 0.8566 |
| 0.0254 | 13.0 | 182 | 0.0742 | 0.8270 |
| 0.0251 | 14.0 | 196 | 0.0923 | 0.7850 |
| 0.0149 | 15.0 | 210 | 0.0663 | 0.8456 |
| 0.0141 | 16.0 | 224 | 0.0755 | 0.8241 |
| 0.0112 | 17.0 | 238 | 0.0905 | 0.7891 |
| 0.0108 | 18.0 | 252 | 0.0834 | 0.8057 |
| 0.0096 | 19.0 | 266 | 0.0823 | 0.8082 |
| 0.0073 | 20.0 | 280 | 0.0825 | 0.8078 |
| 0.0092 | 21.0 | 294 | 0.0869 | 0.7974 |
| 0.0075 | 22.0 | 308 | 0.0744 | 0.8266 |
| 0.0075 | 23.0 | 322 | 0.0825 | 0.8078 |
| 0.0062 | 24.0 | 336 | 0.0797 | 0.8144 |
| 0.0065 | 25.0 | 350 | 0.0793 | 0.8152 |
| 0.007 | 26.0 | 364 | 0.0840 | 0.8043 |
| 0.0067 | 27.0 | 378 | 0.0964 | 0.7753 |
| 0.0064 | 28.0 | 392 | 0.0869 | 0.7976 |
| 0.0063 | 29.0 | 406 | 0.0766 | 0.8215 |
| 0.0057 | 30.0 | 420 | 0.0764 | 0.8219 |
| 0.0057 | 31.0 | 434 | 0.0796 | 0.8145 |
| 0.0054 | 32.0 | 448 | 0.0853 | 0.8012 |
| 0.0044 | 33.0 | 462 | 0.0750 | 0.8253 |
| 0.0072 | 34.0 | 476 | 0.0782 | 0.8179 |
| 0.006 | 35.0 | 490 | 0.0867 | 0.7979 |
| 0.0054 | 36.0 | 504 | 0.0819 | 0.8092 |
| 0.0047 | 37.0 | 518 | 0.0839 | 0.8045 |
| 0.0043 | 38.0 | 532 | 0.0764 | 0.8221 |
| 0.0039 | 39.0 | 546 | 0.0728 | 0.8303 |
| 0.0041 | 40.0 | 560 | 0.0755 | 0.8241 |
| 0.0038 | 41.0 | 574 | 0.0729 | 0.8301 |
| 0.0034 | 42.0 | 588 | 0.0781 | 0.8180 |
| 0.0038 | 43.0 | 602 | 0.0762 | 0.8224 |
| 0.0032 | 44.0 | 616 | 0.0777 | 0.8189 |
| 0.0035 | 45.0 | 630 | 0.0776 | 0.8191 |
| 0.0037 | 46.0 | 644 | 0.0765 | 0.8217 |
| 0.0036 | 47.0 | 658 | 0.0766 | 0.8216 |
3312b912ac446effe945315565ba53ef
apache-2.0
['generated_from_keras_callback']
false
mymodel This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.4016 - Epoch: 2
464ed580599c74d596f78f901268c73b
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
d38169fdfbb457d454d19beb58d08b4a
mit
['roberta-base', 'roberta-base-epoch_41']
false
RoBERTa, Intermediate Checkpoint - Epoch 41 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights from before training) to enable the study of the training dynamics of such models, among other possible use-cases. These models were trained as part of a work that studies how simple statistics of the data, such as co-occurrences, affect model predictions, described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_41.
4afee0911a401a48d8cafb1cf3ad8908
apache-2.0
['generated_from_trainer']
false
distilbart-podimo-data-eval-1-2e

This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 3.7114
- Rouge1: 32.7887
- Rouge2: 6.5245
- Rougel: 16.9089
- Rougelsum: 29.6437
- Gen Len: 141.3408
81508f0a5ffbcd47446343ea14964fe3
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
533408d92eff7ccb1f35ef9e5b3df187
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:|
| 4.2142 | 0.98 | 44 | 3.8082 | 32.7658 | 6.2506 | 16.7953 | 29.6922 | 140.5503 |
| 3.6965 | 1.98 | 88 | 3.7114 | 32.7887 | 6.5245 | 16.9089 | 29.6437 | 141.3408 |
30c20212630533f9c1123707164920ec
apache-2.0
['finnish', 'gpt2']
false
Model page TODO. The model name in my thesis was FinnGPT, but I chose not to pollute the namespace and to leave that kind of name for a more serious attempt at Finnish GPT models. You may call this model whatever you want; example names are Väinö's GPT-FI or simply hatanp/gpt-fi. If you really want, you can also refer to it as FinnGPT, like I did in my thesis.
bd234353c86c2ebe101b6c0e1af95871
apache-2.0
['finnish', 'gpt2']
false
How to use

Example with text generation pipeline:

```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='hatanp/gpt-fi')
>>> generator("Testilauseella voidaan testata tokenisointia. Tämän jatkaminen on luultavasti vaikeaa, mutta",
...           max_length=3, do_sample=True, top_p=0.9, top_k=12, temperature=0.9, num_return_sequences=2)
[{'generated_text': 'Testilauseella voidaan testata tokenisointia. Tämän jatkaminen on luultavasti vaikeaa, mutta ei mahdotonta. \n Jos et ole kiinnostunut tokenis'},
 {'generated_text': 'Testilauseella voidaan testata tokenisointia. Tämän jatkaminen on luultavasti vaikeaa, mutta sen toteuttaminen onnistuu, jos testilaboratorio osaa analysoida'},
 {'generated_text': 'Testilauseella voidaan testata tokenisointia. Tämän jatkaminen on luultavasti vaikeaa, mutta sen testaaminen on silti hyödyllistä. Jos testisuorit'}]
```

Example to generate text manually:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> model = AutoModelForCausalLM.from_pretrained("hatanp/gpt-fi")
>>> tokenizer = AutoTokenizer.from_pretrained("hatanp/gpt-fi")
>>> prompt = "Testilauseella voidaan testata tokenisointia. Tämän jatkaminen on luultavasti vaikeaa, mutta"
>>> inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
>>> prompt_len = len(tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
>>> outputs = model.generate(inputs, max_length=len(inputs[0])+20, do_sample=True, top_p=0.9, top_k=12, temperature=0.9)
>>> text_out = tokenizer.decode(outputs[0])[prompt_len:]
>>> print(text_out)
" on olemassa joitain keinoja, joilla voit testata tokenisointia. Tässä artikkelissa käydään läpi testilauseiden"
```
1aca9529283448148d82dc6a6f1f1b1e
apache-2.0
['generated_from_keras_callback']
false
devansh71/news-sum-dev-ai5

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Epoch: 3
ba68eee96de9145618c0c96d9b87450c
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.05, 'decay_steps': 165000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
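The learning-rate schedule in the optimizer config above is a polynomial decay with power 1.0, i.e. a linear decay from 0.05 to 0.0 over 165,000 steps. As a minimal, framework-independent sketch of what that schedule computes (not Keras's actual implementation):

```python
def polynomial_decay_lr(step, initial_lr=0.05, end_lr=0.0,
                        decay_steps=165_000, power=1.0):
    """Learning rate after `step` optimizer updates (cycle=False)."""
    step = min(step, decay_steps)  # after decay_steps the lr stays at end_lr
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

# With power=1.0 the decay is linear: halfway through training the
# learning rate is half the initial value.
print(polynomial_decay_lr(0))        # 0.05
print(polynomial_decay_lr(82_500))   # 0.025
print(polynomial_decay_lr(165_000))  # 0.0
```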
9cf1248c72dd2749a200e3a187b736c2
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| nan        | nan             | 0     |
| nan        | nan             | 1     |
| nan        | nan             | 2     |
| nan        | nan             | 3     |
164b9bfe2fa12206a0bfd7307774c3db
apache-2.0
['vision', 'image-classification']
false
ResNet-50 v1.5 ResNet model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by He et al. Disclaimer: The team releasing ResNet did not write a model card for this model so this model card has been written by the Hugging Face team.
10d8e3fb2580a92e2146a26f66b3bf0d
apache-2.0
['vision', 'image-classification']
false
How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import AutoFeatureExtractor, ResNetForImageClassification
import torch
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50")

inputs = feature_extractor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# the model predicts one of the 1,000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
96afd73210b963ba5bd7d0ade24d70f6
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
false
wav2vec2-xls-r-300m-ab-CV8

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 0.2105
- Wer: 0.5474
baa94447a24a71b3ea281a47541e4208
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 15
- mixed_precision_training: Native AMP
4ce4dd4ad17550281519f4a652c39330
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.7729        | 0.63  | 500   | 3.0624          | 1.0021 |
| 2.7348        | 1.26  | 1000  | 1.0460          | 0.9815 |
| 1.2756        | 1.9   | 1500  | 0.4618          | 0.8309 |
| 1.0419        | 2.53  | 2000  | 0.3725          | 0.7449 |
| 0.9491        | 3.16  | 2500  | 0.3368          | 0.7345 |
| 0.9006        | 3.79  | 3000  | 0.3014          | 0.6936 |
| 0.8519        | 4.42  | 3500  | 0.2852          | 0.6767 |
| 0.8243        | 5.06  | 4000  | 0.2701          | 0.6504 |
| 0.7902        | 5.69  | 4500  | 0.2641          | 0.6221 |
| 0.7767        | 6.32  | 5000  | 0.2549          | 0.6192 |
| 0.7516        | 6.95  | 5500  | 0.2515          | 0.6179 |
| 0.737         | 7.59  | 6000  | 0.2408          | 0.5963 |
| 0.7217        | 8.22  | 6500  | 0.2429          | 0.6261 |
| 0.7101        | 8.85  | 7000  | 0.2366          | 0.5687 |
| 0.6922        | 9.48  | 7500  | 0.2277          | 0.5680 |
| 0.6866        | 10.11 | 8000  | 0.2242          | 0.5847 |
| 0.6703        | 10.75 | 8500  | 0.2222          | 0.5803 |
| 0.6649        | 11.38 | 9000  | 0.2247          | 0.5765 |
| 0.6513        | 12.01 | 9500  | 0.2182          | 0.5644 |
| 0.6369        | 12.64 | 10000 | 0.2128          | 0.5508 |
| 0.6425        | 13.27 | 10500 | 0.2132          | 0.5514 |
| 0.6399        | 13.91 | 11000 | 0.2116          | 0.5495 |
| 0.6208        | 14.54 | 11500 | 0.2105          | 0.5474 |
846d108ac2442e2509c5cb8126df6cbe
apache-2.0
['generated_from_trainer']
false
miny-bert-aug-sst2-distilled

This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the augmented_glue_sst2 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2643
- Accuracy: 0.9128
86faf408989d486e24f6d5a1fa7ecee0
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
b1d6c8be80aa6c6b102f49429a31c959
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.602         | 1.0   | 6227  | 0.3389          | 0.9186   |
| 0.4195        | 2.0   | 12454 | 0.2989          | 0.9151   |
| 0.3644        | 3.0   | 18681 | 0.2794          | 0.9117   |
| 0.3304        | 4.0   | 24908 | 0.2793          | 0.9106   |
| 0.3066        | 5.0   | 31135 | 0.2659          | 0.9186   |
| 0.2881        | 6.0   | 37362 | 0.2668          | 0.9140   |
| 0.2754        | 7.0   | 43589 | 0.2643          | 0.9128   |
f6e9c676ec88171b555faa9ce398a75f
apache-2.0
['generated_from_trainer']
false
hubert-base-timit-demo-google-colab-ft30ep_v5

This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the timit-asr dataset. It achieves the following results on the evaluation set:
- Loss: 0.4763
- Wer: 0.3322
111b262e4c6e5f977a24a9f3bbd23e30
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.9596        | 0.87  | 500   | 3.1237          | 1.0    |
| 2.5388        | 1.73  | 1000  | 1.1689          | 0.9184 |
| 1.0448        | 2.6   | 1500  | 0.6106          | 0.5878 |
| 0.6793        | 3.46  | 2000  | 0.4912          | 0.5200 |
| 0.5234        | 4.33  | 2500  | 0.4529          | 0.4798 |
| 0.4368        | 5.19  | 3000  | 0.4239          | 0.4543 |
| 0.3839        | 6.06  | 3500  | 0.4326          | 0.4339 |
| 0.3315        | 6.92  | 4000  | 0.4265          | 0.4173 |
| 0.2878        | 7.79  | 4500  | 0.4304          | 0.4068 |
| 0.25          | 8.65  | 5000  | 0.4130          | 0.3940 |
| 0.242         | 9.52  | 5500  | 0.4310          | 0.3938 |
| 0.2182        | 10.38 | 6000  | 0.4204          | 0.3843 |
| 0.2063        | 11.25 | 6500  | 0.4449          | 0.3816 |
| 0.2099        | 12.11 | 7000  | 0.4016          | 0.3681 |
| 0.1795        | 12.98 | 7500  | 0.4027          | 0.3647 |
| 0.1604        | 13.84 | 8000  | 0.4294          | 0.3664 |
| 0.1683        | 14.71 | 8500  | 0.4412          | 0.3661 |
| 0.1452        | 15.57 | 9000  | 0.4484          | 0.3588 |
| 0.1491        | 16.44 | 9500  | 0.4508          | 0.3515 |
| 0.1388        | 17.3  | 10000 | 0.4240          | 0.3518 |
| 0.1399        | 18.17 | 10500 | 0.4605          | 0.3513 |
| 0.1265        | 19.03 | 11000 | 0.4412          | 0.3485 |
| 0.1137        | 19.9  | 11500 | 0.4520          | 0.3467 |
| 0.106         | 20.76 | 12000 | 0.4873          | 0.3426 |
| 0.1243        | 21.63 | 12500 | 0.4456          | 0.3396 |
| 0.1055        | 22.49 | 13000 | 0.4819          | 0.3406 |
| 0.1124        | 23.36 | 13500 | 0.4613          | 0.3391 |
| 0.1064        | 24.22 | 14000 | 0.4842          | 0.3430 |
| 0.0875        | 25.09 | 14500 | 0.4661          | 0.3348 |
| 0.086         | 25.95 | 15000 | 0.4724          | 0.3371 |
| 0.0842        | 26.82 | 15500 | 0.4982          | 0.3381 |
| 0.0834        | 27.68 | 16000 | 0.4856          | 0.3337 |
| 0.0918        | 28.55 | 16500 | 0.4783          | 0.3344 |
| 0.0773        | 29.41 | 17000 | 0.4763          | 0.3322 |
6e85c33aea89df49c42e6a2807ceaa32
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qnli_96

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set:
- Loss: 0.4386
- Accuracy: 0.5578
8a23b20c1cf5e26e4bf18950dcb2c571
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3496        | 1.0   | 16604 | 0.4386          | 0.5578   |
| 0.3031        | 2.0   | 33208 | 0.4636          | 0.5607   |
| 0.281         | 3.0   | 49812 | 0.4565          | 0.5576   |
| 0.2682        | 4.0   | 66416 | 0.4627          | 0.5647   |
| 0.2596        | 5.0   | 83020 | 0.4572          | 0.5768   |
| 0.2533        | 6.0   | 99624 | 0.4660          | 0.5753   |
af6e8973103745f9c9d1fbb2481ee0f5
mit
['generated_from_trainer']
false
camembert-base-finetuned-LineCause

This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
- F1: 1.0
- Recall: 1.0
d55c08ba1ea4bf8bc7050f279a9f74e6
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
e410ebe0652b554b2db5cd3c4ffc741b
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1  | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:------:|
| 0.0428        | 1.0   | 4409 | 0.0002          | 1.0      | 1.0 | 1.0    |
| 0.0009        | 2.0   | 8818 | 0.0001          | 1.0      | 1.0 | 1.0    |
a95572e29e25e0eb8e8dde93e7d36355
apache-2.0
[]
false
Model description Entailer is a text-to-text model trained to create entailment-style explanations for a hypothesis (following the format of [EntailmentBank](https://allenai.org/data/entailmentbank)), as well as verifying both the reasoning and the factuality of the premises. Entailer was built on top of [T5](https://github.com/google-research/text-to-text-transfer-transformer) and comes in two sizes: [entailer-11b](https://huggingface.co/allenai/entailer-11b) and [entailer-large](https://huggingface.co/allenai/entailer-large). See https://github.com/allenai/entailment_bank for more details.
76ef2054ae35ffef894034a625d8d930
apache-2.0
['generated_from_trainer']
false
reddit-bert-text_10

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.5198
c72e1970fba981ff37bd1a1da8ff1831
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9626        | 1.0   | 946  | 2.6163          |
| 2.6934        | 2.0   | 1892 | 2.5612          |
| 2.5971        | 3.0   | 2838 | 2.5023          |
73d68c41046d145a6e2626afe3022de1
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
vit-base-cifar10

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset. It achieves the following results on the evaluation set:
- Loss: 2.3302
- Accuracy: 0.106
45816259273efb4de3d274445145dd70
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
471b570160ae0e1c038492b027285282
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3324        | 1.0   | 664  | 2.3352          | 0.0967   |
| 2.3489        | 2.0   | 1328 | 2.3288          | 0.1049   |
| 2.4899        | 3.0   | 1992 | 2.4473          | 0.0989   |
| 2.479         | 4.0   | 2656 | 2.4894          | 0.1      |
| 2.4179        | 5.0   | 3320 | 2.4404          | 0.0947   |
| 2.3881        | 6.0   | 3984 | 2.3931          | 0.102    |
| 2.3597        | 7.0   | 4648 | 2.3744          | 0.0967   |
| 2.3721        | 8.0   | 5312 | 2.3667          | 0.0935   |
| 2.3456        | 9.0   | 5976 | 2.3495          | 0.1036   |
| 2.3361        | 10.0  | 6640 | 2.3473          | 0.1025   |
d02dac6bee002520df98f0f37b2cdfc9
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2t_en_r-wav2vec2_s863

Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
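Since the model expects 16 kHz input, audio recorded at another sample rate has to be resampled before inference. A minimal sketch using `scipy` (the 48 kHz sine wave below is a stand-in for real recorded audio):

```python
import numpy as np
from scipy.signal import resample_poly

def to_16k(waveform: np.ndarray, orig_sr: int, target_sr: int = 16_000) -> np.ndarray:
    """Resample a 1-D waveform to 16 kHz using polyphase filtering."""
    if orig_sr == target_sr:
        return waveform
    g = int(np.gcd(orig_sr, target_sr))
    return resample_poly(waveform, target_sr // g, orig_sr // g)

# Example: one second of a 440 Hz tone "recorded" at 48 kHz
t = np.linspace(0, 1, 48_000, endpoint=False)
audio_48k = np.sin(2 * np.pi * 440 * t)
audio_16k = to_16k(audio_48k, 48_000)
print(audio_16k.shape)  # (16000,)
```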
e550238547940e33c9c8ca9f6abfd81e
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/nli-mpnet-base-v2

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
744d44eb6bb8afc5fa712e38af240a7f
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/nli-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
61f829ec3018d84aa7c05eaed4de8a6b
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-mpnet-base-v2)
856833cf5c38dcb566e985f7e7e1feb8
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
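The Pooling layer above uses mean pooling over token embeddings (`pooling_mode_mean_tokens: True`). If you use the underlying MPNet model directly with plain `transformers`, the same pooling can be reproduced by averaging token embeddings weighted by the attention mask; a sketch on dummy tensors (no model download involved):

```python
import torch

def mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    summed = (token_embeddings * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)  # avoid division by zero
    return summed / counts

# Dummy batch: 2 sequences, 4 tokens each, hidden size 768 (as in the config above)
token_embeddings = torch.randn(2, 4, 768)
attention_mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])  # trailing padding
sentence_embeddings = mean_pooling(token_embeddings, attention_mask)
print(sentence_embeddings.shape)  # torch.Size([2, 768])
```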
0760815250eb6b291176885660448675
apache-2.0
['Twitter']
false
1. Paper Fajri Koto, Jey Han Lau, and Timothy Baldwin. [_IndoBERTweet: A Pretrained Language Model for Indonesian Twitter with Effective Domain-Specific Vocabulary Initialization_](https://arxiv.org/pdf/2109.04607.pdf). In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (**EMNLP 2021**), Dominican Republic (virtual).
4524f96b288fdf8c369946a72768b21d
apache-2.0
['Twitter']
false
2. About [IndoBERTweet](https://github.com/indolem/IndoBERTweet) is the first large-scale pretrained model for Indonesian Twitter that is trained by extending a monolingually trained Indonesian BERT model with additive domain-specific vocabulary. In this paper, we show that initializing domain-specific vocabulary with average-pooling of BERT subword embeddings is more efficient than pretraining from scratch, and more effective than initializing based on word2vec projections.
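The vocabulary-initialization idea can be sketched with numpy: each new domain-specific word's embedding is set to the average of the embeddings of the subwords the original tokenizer splits it into. The toy embedding table and segmentation below are made-up illustrations, not the model's actual vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 8  # toy embedding size; the real model uses 768

# Existing subword embedding table of the monolingual BERT model (illustrative)
subword_embeddings = {
    "indo": rng.normal(size=hidden),
    "##bert": rng.normal(size=hidden),
    "##tweet": rng.normal(size=hidden),
}

def init_new_token(subwords):
    """Initialize a new vocabulary item as the mean of its subword embeddings."""
    return np.mean([subword_embeddings[s] for s in subwords], axis=0)

# Hypothetical new Twitter-domain token added to the vocabulary
new_embedding = init_new_token(["indo", "##bert", "##tweet"])
print(new_embedding.shape)  # (8,)
```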
4b24309e32b01e79700a16d17f9b7668
apache-2.0
['Twitter']
false
3. Pretraining Data

We crawl Indonesian tweets over a 1-year period using the official Twitter API, from December 2019 to December 2020, with 60 keywords covering 4 main topics: economy, health, education, and government. We obtain a total of **409M word tokens**, two times larger than the training data used to pretrain [IndoBERT](https://aclanthology.org/2020.coling-main.66.pdf). Due to Twitter policy, this pretraining data will not be released to the public.
42f2a8aa476c0dbdf45d7ce02f56dcad
apache-2.0
['Twitter']
false
4. How to use

Load the model and tokenizer (tested with transformers==3.5.1):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("indolem/indobertweet-base-uncased")
model = AutoModel.from_pretrained("indolem/indobertweet-base-uncased")
```

**Preprocessing Steps:**
* lower-case all words
* convert user mentions and URLs into @USER and HTTPURL, respectively
* translate emoticons into text using the [emoji package](https://pypi.org/project/emoji/)
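The first two preprocessing steps can be sketched with plain regular expressions (the emoticon-to-text step additionally needs the emoji package and is omitted here; the exact regexes are an assumption, not the authors' code):

```python
import re

def preprocess_tweet(text: str) -> str:
    """Lower-case, then normalize mentions and URLs as the model expects."""
    text = text.lower()
    text = re.sub(r"https?://\S+|www\.\S+", "HTTPURL", text)
    text = re.sub(r"@\w+", "@USER", text)
    return text

print(preprocess_tweet("Halo @jokowi! Baca ini https://example.com/berita"))
# halo @USER! baca ini HTTPURL
```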
66c34c408afcc4113594a3c45fccddc6
apache-2.0
['Twitter']
false
5. Results over 7 Indonesian Twitter Datasets

<table>
  <col>
  <colgroup span="2"></colgroup>
  <colgroup span="2"></colgroup>
  <tr>
    <th rowspan="2">Models</th>
    <th colspan="2" scope="colgroup">Sentiment</th>
    <th colspan="1" scope="colgroup">Emotion</th>
    <th colspan="2" scope="colgroup">Hate Speech</th>
    <th colspan="2" scope="colgroup">NER</th>
    <th rowspan="2" scope="colgroup">Average</th>
  </tr>
  <tr>
    <th scope="col">IndoLEM</th>
    <th scope="col">SmSA</th>
    <th scope="col">EmoT</th>
    <th scope="col">HS1</th>
    <th scope="col">HS2</th>
    <th scope="col">Formal</th>
    <th scope="col">Informal</th>
  </tr>
  <tr>
    <td scope="row">mBERT</td>
    <td>76.6</td> <td>84.7</td> <td>67.5</td> <td>85.1</td> <td>75.1</td> <td>85.2</td> <td>83.2</td> <td>79.6</td>
  </tr>
  <tr>
    <td scope="row">malayBERT</td>
    <td>82.0</td> <td>84.1</td> <td>74.2</td> <td>85.0</td> <td>81.9</td> <td>81.9</td> <td>81.3</td> <td>81.5</td>
  </tr>
  <tr>
    <td scope="row">IndoBERT (Willie, et al., 2020)</td>
    <td>84.1</td> <td>88.7</td> <td>73.3</td> <td>86.8</td> <td>80.4</td> <td>86.3</td> <td>84.3</td> <td>83.4</td>
  </tr>
  <tr>
    <td scope="row">IndoBERT (Koto, et al., 2020)</td>
    <td>84.1</td> <td>87.9</td> <td>71.0</td> <td>86.4</td> <td>79.3</td> <td>88.0</td> <td><b>86.9</b></td> <td>83.4</td>
  </tr>
  <tr>
    <td scope="row">IndoBERTweet (1M steps from scratch)</td>
    <td>86.2</td> <td>90.4</td> <td>76.0</td> <td><b>88.8</b></td> <td><b>87.5</b></td> <td><b>88.1</b></td> <td>85.4</td> <td>86.1</td>
  </tr>
  <tr>
    <td scope="row">IndoBERT + Voc adaptation + 200k steps</td>
    <td><b>86.6</b></td> <td><b>92.7</b></td> <td><b>79.0</b></td> <td>88.4</td> <td>84.0</td> <td>87.7</td> <td><b>86.9</b></td> <td><b>86.5</b></td>
  </tr>
</table>
d2bd798501a5494f8522197c974c8baf
apache-2.0
['Twitter']
false
Citation

If you use our work, please cite:

```bibtex
@inproceedings{koto2021indobertweet,
  title={IndoBERTweet: A Pretrained Language Model for Indonesian Twitter with Effective Domain-Specific Vocabulary Initialization},
  author={Fajri Koto and Jey Han Lau and Timothy Baldwin},
  booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)},
  year={2021}
}
```
74d19de3c26719131b9f0f034cbc36c3
creativeml-openrail-m
[]
false
VAE

Not required, but recommended: https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main

File structure for AUTOMATIC1111-webui:

|──sd
|----|──stable-diffusion-webui
|----|----|──models
|----|----|----|──VAE
|----|----|----|----|──Put your VAE file here

Merged Models

A list of merged models can be found below in the description of the attached model version.

Capabilities

NSFW Photography (SFW photography is also possible, see "Trigger Words" below)
Photorealistic 3D renders
Emphasis on human anatomy

Limitations

Anything not listed above. This model was created as a baseline for a general-purpose model I'm working on. Stylized images and object images are possible, but require a little finesse to generate.

Trigger Words

This checkpoint does not contain any trigger words. However, placing some tags at the beginning of the prompts can heavily influence the generation. These tags include: "nsfw", "sfw", "erotica", "nudity", "3d render", and "cartoon".

Note: For SFW generation, try adding sfw to your prompt and nsfw to your negative prompt. For NSFW generation, try adding either nsfw, erotica, or nudity to your prompt and sfw to your negative prompt. In general, this is more useful for generating sfw images. The same concept applies to 3d render and cartoon: I recommend leaving both 3d render and cartoon in your negative prompt when generating photographic images.

Basic Prompt Guide

This model heavily revolves around UnstablePhotorealv.5, which means you can use the tagging system for PhotoReal, although I would recommend a combination of the PhotoReal comma system and more natural language prompting.

Guide to prompting with PhotoReal - https://docs.google.com/document/d/1-DDIHVbsYfynTp_rsKLu4b2tSQgxtO5F6pNsNla12k0/edit#heading=h.3znysh7
a2567d120987f7414d213b08133692b1
creativeml-openrail-m
[]
false
Example prompt using commas and natural language:

Positive:

A Professional Full Body Photo, of a beautiful young woman, clothed, standing indoors, Caucasian, toned physique, strawberry red hair, neutral expression

Negative:

I recommend something simple, like: deformed, bad anatomy, disfigured, missing limb, floating limbs, twisted, blurry, fused fingers, long neck, words, logo, text, mutated hands, mutated fingers

Modify as needed. For example, adding 3d render, cartoon to your negative prompt will help generate photographic images. The prompts for this model are fairly flexible; experiment to find out what works best for you.
b38986a2c29c3ec6794145b80ad5894e
other
[]
false
<html>
<body>
<h1>Welcome to the Crying-Chopper Model</h1>
<p>This is a Stable Diffusion 1.4 based model that adds the ability to turn any character you would like into a Crying Chopper meme, as seen in the picture below. This model was trained on about 20 different versions (i.e. characters) of this art style, thanks to the wonderful artists over at the OnePieceCock Discord server. To get the best results, use a prompt like 'NAME as cryingchopper, ...'; make sure to write 'cryingchopper' with no space, because that is how the model was trained.</p>
<img alt="cleanchooper.jpg" src="https://s3.amazonaws.com/moonup/production/uploads/1666321853612-631ba03acf39db4b171a0877.jpeg" title="cleanchooper.jpg">
<a href="https://s3.amazonaws.com/moonup/production/uploads/1666321853612-631ba03acf39db4b171a0877.jpeg" download>Download: Crying-Chopper_model-v1</a>
</body>
</html>
d5fa323b41aa4003bf966fe3698c5edd