repo_id stringlengths 4 110 | author stringlengths 2 27 ⌀ | model_type stringlengths 2 29 ⌀ | files_per_repo int64 2 15.4k | downloads_30d int64 0 19.9M | library stringlengths 2 37 ⌀ | likes int64 0 4.34k | pipeline stringlengths 5 30 ⌀ | pytorch bool 2 classes | tensorflow bool 2 classes | jax bool 2 classes | license stringlengths 2 30 | languages stringlengths 4 1.63k ⌀ | datasets stringlengths 2 2.58k ⌀ | co2 stringclasses 29 values | prs_count int64 0 125 | prs_open int64 0 120 | prs_merged int64 0 15 | prs_closed int64 0 28 | discussions_count int64 0 218 | discussions_open int64 0 148 | discussions_closed int64 0 70 | tags stringlengths 2 513 | has_model_index bool 2 classes | has_metadata bool 1 class | has_text bool 1 class | text_length int64 401 598k | is_nc bool 1 class | readme stringlengths 0 598k | hash stringlengths 32 32 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
lfcc/bert-portuguese-squad | lfcc | bert | 10 | 1 | transformers | 0 | question-answering | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,451 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-portuguese-squad
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9715
## Model description
More information needed
## Intended uses & limitations
More information needed
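The card does not yet include a usage example. A minimal sketch might look like the following; the helper is illustrative, and the example question/context (commented out) are made up for demonstration:

```python
def ask(qa, question, context):
    """Return the extracted answer span from a question-answering pipeline."""
    return qa(question=question, context=context)["answer"]

# Example (downloads the checkpoint on first use):
# from transformers import pipeline
# qa = pipeline("question-answering", model="lfcc/bert-portuguese-squad")
# print(ask(qa, "Onde fica o Porto?", "O Porto é uma cidade no norte de Portugal."))
```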
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.041 | 1.0 | 5578 | 1.1970 |
| 0.8267 | 2.0 | 11156 | 1.2215 |
| 0.586 | 3.0 | 16734 | 1.3191 |
| 0.4251 | 4.0 | 22312 | 1.6129 |
| 0.3045 | 5.0 | 27890 | 1.7907 |
| 0.2432 | 6.0 | 33468 | 1.9715 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
| 63da614a4ceb33ef543be7e6c7cdb22d |
Roy029/mpyt5_e5 | Roy029 | mt5 | 9 | 1 | transformers | 0 | text2text-generation | true | false | false | openrail | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,050 | false | # Model Card for mpyt5_e5
<!-- Provide a quick summary of what the model is/does. [Optional] -->
A model pre-trained not only on natural language but also on Python code.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Python Code (1.05GB)
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- MLM (masked language modeling)
- Python vocabulary (https://huggingface.co/kkuramitsu/mt5-pytoken)
### Preprocessing
mT5 + Python
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- mT5-small (300M parameters)
- max_length = 128
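No usage snippet is given in the card. Assuming the checkpoint loads as a standard mT5 sequence-to-sequence model, a hedged sketch could be the following; the prompt format is a guess, and the commented example is illustrative only:

```python
def complete(model, tokenizer, prompt, max_length=128):
    """Encode a prompt, generate with the card's max_length, and decode."""
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=max_length)
    ids = model.generate(**inputs, max_length=max_length)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

# Example (downloads the checkpoint on first use):
# from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# tokenizer = AutoTokenizer.from_pretrained("Roy029/mpyt5_e5")
# model = AutoModelForSeq2SeqLM.from_pretrained("Roy029/mpyt5_e5")
# print(complete(model, tokenizer, "def add(a, b):"))
```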
# Model Version
- epoch 5: this model
- epoch 10: https://huggingface.co/Roy029/mpyt5_e10
- epoch 15: https://huggingface.co/Roy029/mpyt5_e15
- epoch 20: https://huggingface.co/Roy029/mpyt5_e20 | fa99d5c41632f29d721cb195415483f1 |
mikr/whisper-small-hu-cv11 | mikr | whisper | 17 | 2 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 1,537 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hu-cv11
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5649
- Wer: 30.6374
## Model description
More information needed
## Intended uses & limitations
More information needed
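A usage example is not provided. A minimal sketch using the ASR pipeline might look like this; the audio filename in the commented example is a placeholder:

```python
def transcribe(asr, audio_path):
    """Return the transcript text for a single audio file."""
    return asr(audio_path)["text"]

# Example (downloads the checkpoint on first use):
# from transformers import pipeline
# asr = pipeline("automatic-speech-recognition", model="mikr/whisper-small-hu-cv11")
# print(transcribe(asr, "sample_hu.wav"))
```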
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0182 | 7.01 | 1000 | 0.4546 | 31.4735 |
| 0.0023 | 14.02 | 2000 | 0.5045 | 31.0910 |
| 0.0008 | 22.01 | 3000 | 0.5318 | 30.2816 |
| 0.0006 | 29.02 | 4000 | 0.5585 | 30.5989 |
| 0.0004 | 37.01 | 5000 | 0.5649 | 30.6374 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
| 95ddc9a707c412d270a28c4d96c35ba7 |
pig4431/YELP_ALBERT_5E | pig4431 | albert | 10 | 8 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['yelp_review_full'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 10,800 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# YELP_ALBERT_5E
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1394
- Accuracy: 0.9733
## Model description
More information needed
## Intended uses & limitations
More information needed
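The yelp_review_full labels are star ratings (0-4). A hedged usage sketch follows; the `LABEL_i` naming below is the transformers default and is an assumption about this checkpoint's config:

```python
def to_stars(prediction):
    """Map a prediction like {'label': 'LABEL_3', ...} to a 1-5 star rating."""
    return int(prediction["label"].rsplit("_", 1)[-1]) + 1

# Example (downloads the checkpoint on first use):
# from transformers import pipeline
# clf = pipeline("text-classification", model="pig4431/YELP_ALBERT_5E")
# pred = clf("Great food and friendly staff!")[0]
# print(to_stars(pred), pred["score"])
```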
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4967 | 0.03 | 50 | 0.1667 | 0.9467 |
| 0.3268 | 0.06 | 100 | 0.2106 | 0.9133 |
| 0.3413 | 0.1 | 150 | 0.2107 | 0.9667 |
| 0.3172 | 0.13 | 200 | 0.1906 | 0.94 |
| 0.2804 | 0.16 | 250 | 0.2588 | 0.9 |
| 0.2604 | 0.19 | 300 | 0.2023 | 0.94 |
| 0.2532 | 0.22 | 350 | 0.1263 | 0.9533 |
| 0.2103 | 0.26 | 400 | 0.1233 | 0.96 |
| 0.212 | 0.29 | 450 | 0.2019 | 0.9267 |
| 0.2669 | 0.32 | 500 | 0.1110 | 0.9667 |
| 0.2187 | 0.35 | 550 | 0.1542 | 0.96 |
| 0.2203 | 0.38 | 600 | 0.0879 | 0.9733 |
| 0.2699 | 0.42 | 650 | 0.0971 | 0.9667 |
| 0.2107 | 0.45 | 700 | 0.0863 | 0.9667 |
| 0.2443 | 0.48 | 750 | 0.0823 | 0.9733 |
| 0.1987 | 0.51 | 800 | 0.1207 | 0.9733 |
| 0.2326 | 0.54 | 850 | 0.1368 | 0.9667 |
| 0.1787 | 0.58 | 900 | 0.1027 | 0.9667 |
| 0.2159 | 0.61 | 950 | 0.2443 | 0.9333 |
| 0.1316 | 0.64 | 1000 | 0.2035 | 0.9467 |
| 0.2416 | 0.67 | 1050 | 0.0882 | 0.9733 |
| 0.2008 | 0.7 | 1100 | 0.1709 | 0.9533 |
| 0.2065 | 0.74 | 1150 | 0.1098 | 0.9667 |
| 0.2391 | 0.77 | 1200 | 0.1055 | 0.9667 |
| 0.1533 | 0.8 | 1250 | 0.1997 | 0.94 |
| 0.2016 | 0.83 | 1300 | 0.0899 | 0.96 |
| 0.2016 | 0.86 | 1350 | 0.0957 | 0.9733 |
| 0.2316 | 0.9 | 1400 | 0.0784 | 0.98 |
| 0.1839 | 0.93 | 1450 | 0.0784 | 0.9733 |
| 0.2121 | 0.96 | 1500 | 0.1150 | 0.9733 |
| 0.1307 | 0.99 | 1550 | 0.0969 | 0.9733 |
| 0.1271 | 1.02 | 1600 | 0.2326 | 0.9467 |
| 0.1736 | 1.06 | 1650 | 0.0979 | 0.9667 |
| 0.1357 | 1.09 | 1700 | 0.0862 | 0.98 |
| 0.1871 | 1.12 | 1750 | 0.1419 | 0.9667 |
| 0.1411 | 1.15 | 1800 | 0.1301 | 0.96 |
| 0.1317 | 1.18 | 1850 | 0.1602 | 0.9533 |
| 0.1432 | 1.22 | 1900 | 0.1885 | 0.9533 |
| 0.1793 | 1.25 | 1950 | 0.0776 | 0.9667 |
| 0.1322 | 1.28 | 2000 | 0.0822 | 0.9733 |
| 0.1416 | 1.31 | 2050 | 0.0920 | 0.9733 |
| 0.1524 | 1.34 | 2100 | 0.0673 | 0.98 |
| 0.1338 | 1.38 | 2150 | 0.0602 | 0.98 |
| 0.152 | 1.41 | 2200 | 0.0916 | 0.98 |
| 0.1192 | 1.44 | 2250 | 0.0559 | 0.98 |
| 0.1471 | 1.47 | 2300 | 0.1096 | 0.9667 |
| 0.1267 | 1.5 | 2350 | 0.0695 | 0.9733 |
| 0.1776 | 1.54 | 2400 | 0.1363 | 0.96 |
| 0.1495 | 1.57 | 2450 | 0.0818 | 0.98 |
| 0.1158 | 1.6 | 2500 | 0.1282 | 0.9667 |
| 0.1772 | 1.63 | 2550 | 0.0682 | 0.9733 |
| 0.1187 | 1.66 | 2600 | 0.1032 | 0.9733 |
| 0.136 | 1.7 | 2650 | 0.1071 | 0.9667 |
| 0.1829 | 1.73 | 2700 | 0.0753 | 0.9667 |
| 0.1147 | 1.76 | 2750 | 0.1071 | 0.9733 |
| 0.1174 | 1.79 | 2800 | 0.1441 | 0.9667 |
| 0.0707 | 1.82 | 2850 | 0.1362 | 0.9667 |
| 0.1372 | 1.86 | 2900 | 0.1861 | 0.9533 |
| 0.2108 | 1.89 | 2950 | 0.0770 | 0.9733 |
| 0.2014 | 1.92 | 3000 | 0.1114 | 0.9667 |
| 0.1373 | 1.95 | 3050 | 0.1244 | 0.9667 |
| 0.1242 | 1.98 | 3100 | 0.1220 | 0.96 |
| 0.1267 | 2.02 | 3150 | 0.1139 | 0.9733 |
| 0.1021 | 2.05 | 3200 | 0.2013 | 0.9533 |
| 0.1091 | 2.08 | 3250 | 0.1027 | 0.9733 |
| 0.0648 | 2.11 | 3300 | 0.1464 | 0.9733 |
| 0.1207 | 2.14 | 3350 | 0.1255 | 0.9733 |
| 0.0833 | 2.18 | 3400 | 0.0708 | 0.98 |
| 0.0796 | 2.21 | 3450 | 0.1608 | 0.96 |
| 0.0624 | 2.24 | 3500 | 0.0827 | 0.98 |
| 0.0518 | 2.27 | 3550 | 0.0602 | 0.98 |
| 0.1242 | 2.3 | 3600 | 0.0752 | 0.9733 |
| 0.0422 | 2.34 | 3650 | 0.1000 | 0.9733 |
| 0.0748 | 2.37 | 3700 | 0.1171 | 0.9667 |
| 0.0839 | 2.4 | 3750 | 0.1341 | 0.9667 |
| 0.1033 | 2.43 | 3800 | 0.0744 | 0.98 |
| 0.0567 | 2.46 | 3850 | 0.0869 | 0.98 |
| 0.0756 | 2.5 | 3900 | 0.0745 | 0.98 |
| 0.0768 | 2.53 | 3950 | 0.0895 | 0.9733 |
| 0.0878 | 2.56 | 4000 | 0.0703 | 0.98 |
| 0.1023 | 2.59 | 4050 | 0.0806 | 0.98 |
| 0.0807 | 2.62 | 4100 | 0.0338 | 0.9867 |
| 0.0868 | 2.66 | 4150 | 0.0892 | 0.9667 |
| 0.0648 | 2.69 | 4200 | 0.1637 | 0.9533 |
| 0.0535 | 2.72 | 4250 | 0.1622 | 0.9667 |
| 0.0675 | 2.75 | 4300 | 0.1354 | 0.9733 |
| 0.1121 | 2.78 | 4350 | 0.1440 | 0.9533 |
| 0.0714 | 2.82 | 4400 | 0.1022 | 0.9467 |
| 0.0786 | 2.85 | 4450 | 0.1110 | 0.9733 |
| 0.0822 | 2.88 | 4500 | 0.1218 | 0.9733 |
| 0.1075 | 2.91 | 4550 | 0.1041 | 0.9733 |
| 0.0783 | 2.94 | 4600 | 0.0992 | 0.9733 |
| 0.1059 | 2.98 | 4650 | 0.1187 | 0.9733 |
| 0.067 | 3.01 | 4700 | 0.0931 | 0.9733 |
| 0.0425 | 3.04 | 4750 | 0.1252 | 0.9733 |
| 0.0539 | 3.07 | 4800 | 0.1152 | 0.9733 |
| 0.0419 | 3.1 | 4850 | 0.1534 | 0.9667 |
| 0.0462 | 3.13 | 4900 | 0.1398 | 0.9733 |
| 0.0435 | 3.17 | 4950 | 0.1168 | 0.98 |
| 0.0144 | 3.2 | 5000 | 0.1489 | 0.9667 |
| 0.0367 | 3.23 | 5050 | 0.1293 | 0.9733 |
| 0.0336 | 3.26 | 5100 | 0.1353 | 0.9733 |
| 0.0246 | 3.29 | 5150 | 0.0958 | 0.98 |
| 0.0181 | 3.33 | 5200 | 0.1294 | 0.9733 |
| 0.0357 | 3.36 | 5250 | 0.1209 | 0.9733 |
| 0.0683 | 3.39 | 5300 | 0.1748 | 0.96 |
| 0.0353 | 3.42 | 5350 | 0.2159 | 0.9533 |
| 0.0415 | 3.45 | 5400 | 0.1723 | 0.96 |
| 0.0336 | 3.49 | 5450 | 0.1031 | 0.98 |
| 0.0475 | 3.52 | 5500 | 0.0959 | 0.98 |
| 0.0393 | 3.55 | 5550 | 0.2163 | 0.96 |
| 0.0337 | 3.58 | 5600 | 0.1097 | 0.9733 |
| 0.0415 | 3.61 | 5650 | 0.1365 | 0.98 |
| 0.035 | 3.65 | 5700 | 0.1175 | 0.98 |
| 0.0448 | 3.68 | 5750 | 0.1543 | 0.9667 |
| 0.0445 | 3.71 | 5800 | 0.2005 | 0.96 |
| 0.0211 | 3.74 | 5850 | 0.1179 | 0.98 |
| 0.0198 | 3.77 | 5900 | 0.1298 | 0.9733 |
| 0.026 | 3.81 | 5950 | 0.2167 | 0.9667 |
| 0.0412 | 3.84 | 6000 | 0.1224 | 0.98 |
| 0.0446 | 3.87 | 6050 | 0.0798 | 0.98 |
| 0.0174 | 3.9 | 6100 | 0.0577 | 0.9933 |
| 0.0535 | 3.93 | 6150 | 0.1482 | 0.9667 |
| 0.0495 | 3.97 | 6200 | 0.0862 | 0.98 |
| 0.0267 | 4.0 | 6250 | 0.1190 | 0.98 |
| 0.0087 | 4.03 | 6300 | 0.0747 | 0.98 |
| 0.0102 | 4.06 | 6350 | 0.0753 | 0.9867 |
| 0.0178 | 4.09 | 6400 | 0.1812 | 0.9667 |
| 0.0088 | 4.13 | 6450 | 0.0817 | 0.98 |
| 0.0144 | 4.16 | 6500 | 0.0805 | 0.98 |
| 0.014 | 4.19 | 6550 | 0.0862 | 0.9867 |
| 0.0002 | 4.22 | 6600 | 0.0894 | 0.98 |
| 0.0112 | 4.25 | 6650 | 0.1004 | 0.9733 |
| 0.0054 | 4.29 | 6700 | 0.0832 | 0.9867 |
| 0.0001 | 4.32 | 6750 | 0.0812 | 0.9867 |
| 0.0202 | 4.35 | 6800 | 0.1828 | 0.9667 |
| 0.009 | 4.38 | 6850 | 0.1114 | 0.98 |
| 0.0001 | 4.41 | 6900 | 0.1295 | 0.98 |
| 0.0077 | 4.45 | 6950 | 0.1610 | 0.9733 |
| 0.0082 | 4.48 | 7000 | 0.1787 | 0.9667 |
| 0.0198 | 4.51 | 7050 | 0.1485 | 0.9733 |
| 0.0017 | 4.54 | 7100 | 0.1774 | 0.9733 |
| 0.0115 | 4.57 | 7150 | 0.1567 | 0.9733 |
| 0.0001 | 4.61 | 7200 | 0.1534 | 0.9733 |
| 0.0247 | 4.64 | 7250 | 0.2020 | 0.9667 |
| 0.0059 | 4.67 | 7300 | 0.1918 | 0.9667 |
| 0.0052 | 4.7 | 7350 | 0.1315 | 0.98 |
| 0.0076 | 4.73 | 7400 | 0.1289 | 0.98 |
| 0.0218 | 4.77 | 7450 | 0.1610 | 0.9733 |
| 0.0077 | 4.8 | 7500 | 0.1355 | 0.98 |
| 0.0096 | 4.83 | 7550 | 0.1378 | 0.9733 |
| 0.008 | 4.86 | 7600 | 0.1568 | 0.9733 |
| 0.0103 | 4.89 | 7650 | 0.1388 | 0.9733 |
| 0.0009 | 4.93 | 7700 | 0.1221 | 0.98 |
| 0.0287 | 4.96 | 7750 | 0.1448 | 0.9733 |
| 0.01 | 4.99 | 7800 | 0.1394 | 0.9733 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
| b243c9bf28a38905d47a885fe0b96852 |
mlegls/usv3_usdc_predictor_0 | mlegls | gpt2 | 9 | 2 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,030 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# usv3_usdc_predictor_0
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
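The expected prompt format for this predictor is not documented, so any usage can only be sketched. The helper below is illustrative, and the commented input is a placeholder:

```python
def generate(gen, prompt, max_new_tokens=32):
    """Return one sampled continuation of the prompt."""
    return gen(prompt, max_new_tokens=max_new_tokens, num_return_sequences=1)[0]["generated_text"]

# Example (downloads the checkpoint on first use):
# from transformers import pipeline
# gen = pipeline("text-generation", model="mlegls/usv3_usdc_predictor_0")
# print(generate(gen, "<prompt>"))  # prompt format unknown
```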
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
| 72b993018f7fb35f91816e3af5691874 |
c-x-he/my_awesome_wnut_model | c-x-he | distilbert | 15 | 0 | transformers | 0 | token-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,835 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# c-x-he/my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1304
- Validation Loss: 0.2744
- Train Precision: 0.5429
- Train Recall: 0.4007
- Train F1: 0.4611
- Train Accuracy: 0.9441
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
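Since this is a TensorFlow checkpoint for token classification, a hedged usage sketch might look like the following; the helper and the commented example sentence are illustrative only:

```python
import numpy as np

def tag_tokens(model, tokenizer, text):
    """Return (token, predicted_label) pairs for one sentence (TF checkpoint)."""
    inputs = tokenizer(text, return_tensors="tf")
    scores = np.asarray(model(**inputs).logits)[0]   # shape: (seq_len, num_labels)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return [(tok, model.config.id2label[int(i)])
            for tok, i in zip(tokens, scores.argmax(axis=-1))]

# Example (downloads the checkpoint on first use; requires TensorFlow):
# from transformers import AutoTokenizer, TFAutoModelForTokenClassification
# tokenizer = AutoTokenizer.from_pretrained("c-x-he/my_awesome_wnut_model")
# model = TFAutoModelForTokenClassification.from_pretrained("c-x-he/my_awesome_wnut_model")
# print(tag_tokens(model, tokenizer, "I live in New York"))
```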
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3493 | 0.3035 | 0.4447 | 0.2309 | 0.3039 | 0.9347 | 0 |
| 0.1647 | 0.2772 | 0.5284 | 0.3565 | 0.4257 | 0.9415 | 1 |
| 0.1304 | 0.2744 | 0.5429 | 0.4007 | 0.4611 | 0.9441 | 2 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.10.0
- Datasets 2.9.0
- Tokenizers 0.13.2
| 9ff17fbdd1d3e8b9b70dea1f57983794 |
dannytkn/bert-finetuned-squad | dannytkn | bert | 10 | 3 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 948 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
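A usage example is not provided. One way to sketch extractive QA with this checkpoint is a manual forward pass; note the greedy argmax below is a simplification (it does not guard against an end index before the start index), and the commented question/context are made up:

```python
def extract_answer(model, tokenizer, question, context):
    """Greedy extractive QA: decode the argmax start..end token span."""
    inputs = tokenizer(question, context, return_tensors="pt")
    out = model(**inputs)
    start = int(out.start_logits.argmax())
    end = int(out.end_logits.argmax())
    return tokenizer.decode(inputs["input_ids"][0][start:end + 1])

# Example (downloads the checkpoint on first use):
# from transformers import AutoTokenizer, AutoModelForQuestionAnswering
# tokenizer = AutoTokenizer.from_pretrained("dannytkn/bert-finetuned-squad")
# model = AutoModelForQuestionAnswering.from_pretrained("dannytkn/bert-finetuned-squad")
# print(extract_answer(model, tokenizer, "Who wrote Hamlet?", "Hamlet was written by Shakespeare."))
```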
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.2
- Datasets 1.18.3
- Tokenizers 0.10.3
| 11d1a74a6b86d94bdae62591288fce53 |
d4niel92/distilbert-base-uncased-finetuned-emotion | d4niel92 | distilbert | 12 | 0 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,343 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2259
- Accuracy: 0.924
- F1: 0.9238
## Model description
More information needed
## Intended uses & limitations
More information needed
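The card omits a usage example; a minimal sketch for this emotion classifier might look like the following (the helper and the commented example sentence are illustrative):

```python
def top_emotion(clf, text):
    """Return (label, score) for the most likely emotion class."""
    pred = clf(text)[0]
    return pred["label"], round(pred["score"], 4)

# Example (downloads the checkpoint on first use):
# from transformers import pipeline
# clf = pipeline("text-classification", model="d4niel92/distilbert-base-uncased-finetuned-emotion")
# print(top_emotion(clf, "I can't wait to see you again!"))
```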
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8417 | 1.0 | 250 | 0.3291 | 0.9005 | 0.8962 |
| 0.2551 | 2.0 | 500 | 0.2259 | 0.924 | 0.9238 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 406ba899c219444e946545424bc77a29 |
liaad/srl-en_xlmr-base | liaad | xlm-roberta | 7 | 3 | transformers | 1 | feature-extraction | true | false | false | apache-2.0 | ['multilingual', 'pt', 'en'] | ['PropBank.Br', 'CoNLL-2012'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['xlm-roberta-base', 'semantic role labeling', 'finetuned'] | false | true | true | 4,083 | false |
# XLM-R base fine-tuned on English semantic role labeling
## Model description
This model is the [`xlm-roberta-base`](https://huggingface.co/xlm-roberta-base) fine-tuned on the English CoNLL formatted OntoNotes v5.0 semantic role labeling data. This is part of a project from which resulted the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-en_xlmr-base")
model = AutoModel.from_pretrained("liaad/srl-en_xlmr-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a TensorFlow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to TensorFlow.
- The models were trained only for 5 epochs.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The models were trained on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data. They were tested on the PropBank.Br data set as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 09cd56349a6470246a7af6296de84312 |
thatdramebaazguy/movie-roberta-base | thatdramebaazguy | roberta | 10 | 7 | transformers | 1 | fill-mask | true | true | true | cc-by-4.0 | ['English'] | ['imdb', 'cornell_movie_dialogue', 'polarity_movie_data', '25mlens_movie_data'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'roberta-base', 'masked-language-modeling', 'masked-lm'] | false | true | true | 1,319 | false | # roberta-base for MLM
Objective: to build a RoBERTa-base model for the movie domain by using various movie datasets as plain text for masked language modeling.
This Movie RoBERTa is intended for use in movie-domain applications.
```python
from transformers import pipeline

model_name = "thatdramebaazguy/movie-roberta-base"
fill_mask = pipeline(task="fill-mask", model=model_name, tokenizer=model_name, revision="v1.0")
```
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Fill-Mask
**Training data:** imdb, polarity movie data, cornell_movie_dialogue, 25mlens movie names
**Eval data:** imdb, polarity movie data, cornell_movie_dialogue, 25mlens movie names
**Infrastructure:** 4x Tesla V100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/scripts/shell_scripts/train_movie_roberta.sh)
## Hyperparameters
```
Num examples = 4767233
Num Epochs = 2
Instantaneous batch size per device = 20
Total train batch size (w. parallel, distributed & accumulation) = 80
Gradient Accumulation steps = 1
Total optimization steps = 119182
eval_loss = 1.6153
eval_samples = 20573
perplexity = 5.0296
learning_rate=5e-05
n_gpu = 4
```
## Performance
perplexity = 5.0296
Some of my work:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)
---
| a6a9b3834886565cd570e09f067b23ed |
sayakpaul/distilbert-base-uncased-finetuned-emotion-lr-3e-05-wd-001 | sayakpaul | distilbert | 10 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,394 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-lr-3e-05-wd-001
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2415
- Accuracy: 0.919
- F1: 0.9191
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9356 | 1.0 | 125 | 0.3832 | 0.8895 | 0.8855 |
| 0.2866 | 2.0 | 250 | 0.2415 | 0.919 | 0.9191 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.10.0
- Datasets 2.6.1
- Tokenizers 0.13.1
| d935a2bf1a0b863de14705fde085a237 |
ryusangwon/distilbert-base-uncased-finetuned-emotion | ryusangwon | distilbert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,341 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2254
- Accuracy: 0.925
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3271 | 0.903 | 0.8983 |
| No log | 2.0 | 500 | 0.2254 | 0.925 | 0.9249 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| cdc9cb535b0f3068352302b7de96e867 |
Rolv-Arild/xls-r-300m-npsc-4 | Rolv-Arild | wav2vec2 | 27 | 11 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'NbAiLab/NPSC', 'generated_from_trainer'] | true | true | true | 5,564 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-npsc-4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1957
- Wer: 0.1697
## Model description
More information needed
## Intended uses & limitations
More information needed
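No usage snippet is provided. Unlike an encoder-decoder ASR model, a wav2vec2 checkpoint is decoded with CTC; a hedged sketch (greedy decoding, no language model) might look like this, where `waveform` is assumed to be a 1-D float array at 16 kHz to match the 16K_MP3 data:

```python
def ctc_greedy_transcribe(model, processor, waveform, sampling_rate=16000):
    """Greedy CTC decoding for one utterance (16 kHz mono assumed)."""
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    logits = model(**inputs).logits
    pred_ids = logits.argmax(dim=-1)
    return processor.batch_decode(pred_ids)[0]

# Example (downloads the checkpoint on first use):
# from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
# processor = Wav2Vec2Processor.from_pretrained("Rolv-Arild/xls-r-300m-npsc-4")
# model = Wav2Vec2ForCTC.from_pretrained("Rolv-Arild/xls-r-300m-npsc-4")
# print(ctc_greedy_transcribe(model, processor, waveform))
```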
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.4527 | 0.28 | 250 | 4.0144 | 1.0 |
| 3.1828 | 0.56 | 500 | 3.1369 | 1.0 |
| 2.9927 | 0.85 | 750 | 3.0183 | 1.0 |
| 2.9591 | 1.13 | 1000 | 2.9991 | 1.0 |
| 2.8989 | 1.41 | 1250 | 2.9000 | 1.0000 |
| 2.4286 | 1.69 | 1500 | 1.7688 | 0.9550 |
| 1.6765 | 1.98 | 1750 | 0.6842 | 0.4855 |
| 1.4521 | 2.26 | 2000 | 0.5096 | 0.3736 |
| 1.3589 | 2.54 | 2250 | 0.4479 | 0.3335 |
| 1.3136 | 2.82 | 2500 | 0.4056 | 0.3123 |
| 1.2856 | 3.11 | 2750 | 0.3870 | 0.2987 |
| 1.2283 | 3.39 | 3000 | 0.3646 | 0.2828 |
| 1.2053 | 3.67 | 3250 | 0.3499 | 0.2748 |
| 1.2087 | 3.95 | 3500 | 0.3345 | 0.2603 |
| 1.2002 | 4.24 | 3750 | 0.3320 | 0.2523 |
| 1.1383 | 4.52 | 4000 | 0.3117 | 0.2439 |
| 1.1364 | 4.8 | 4250 | 0.3198 | 0.2383 |
| 1.158 | 5.08 | 4500 | 0.3071 | 0.2342 |
| 1.108 | 5.37 | 4750 | 0.3011 | 0.2314 |
| 1.1025 | 5.65 | 5000 | 0.2875 | 0.2289 |
| 1.0697 | 5.93 | 5250 | 0.2926 | 0.2256 |
| 1.0904 | 6.21 | 5500 | 0.2695 | 0.2245 |
| 1.0802 | 6.5 | 5750 | 0.2602 | 0.2189 |
| 1.0882 | 6.78 | 6000 | 0.2603 | 0.2168 |
| 1.0881 | 7.06 | 6250 | 0.2540 | 0.2293 |
| 1.0378 | 7.34 | 6500 | 0.2614 | 0.2193 |
| 1.0397 | 7.63 | 6750 | 0.2707 | 0.2104 |
| 1.0296 | 7.91 | 7000 | 0.2483 | 0.2119 |
| 1.0249 | 8.19 | 7250 | 0.2483 | 0.2047 |
| 1.013 | 8.47 | 7500 | 0.2487 | 0.2042 |
| 1.0064 | 8.76 | 7750 | 0.2456 | 0.2016 |
| 1.0668 | 9.04 | 8000 | 0.2397 | 0.1995 |
| 1.0129 | 9.32 | 8250 | 0.2374 | 0.1994 |
| 1.0164 | 9.6 | 8500 | 0.2206 | 0.1992 |
| 0.975 | 9.89 | 8750 | 0.2247 | 0.1973 |
| 0.9849 | 10.17 | 9000 | 0.2325 | 0.1953 |
| 0.9826 | 10.45 | 9250 | 0.2301 | 0.1934 |
| 0.9835 | 10.73 | 9500 | 0.2192 | 0.1942 |
| 0.9676 | 11.02 | 9750 | 0.2266 | 0.1913 |
| 0.9627 | 11.3 | 10000 | 0.2193 | 0.1921 |
| 0.976 | 11.58 | 10250 | 0.2309 | 0.1882 |
| 0.969 | 11.86 | 10500 | 0.2268 | 0.1886 |
| 0.9611 | 12.15 | 10750 | 0.2322 | 0.1863 |
| 0.9397 | 12.43 | 11000 | 0.2197 | 0.1844 |
| 0.9601 | 12.71 | 11250 | 0.2211 | 0.1871 |
| 0.9718 | 12.99 | 11500 | 0.2079 | 0.1898 |
| 0.9347 | 13.28 | 11750 | 0.2054 | 0.1843 |
| 0.9377 | 13.56 | 12000 | 0.2031 | 0.1842 |
| 0.934 | 13.84 | 12250 | 0.2059 | 0.1806 |
| 0.9295 | 14.12 | 12500 | 0.2122 | 0.1861 |
| 0.935 | 14.41 | 12750 | 0.2072 | 0.1787 |
| 0.9021 | 14.69 | 13000 | 0.2105 | 0.1781 |
| 0.9193 | 14.97 | 13250 | 0.2035 | 0.1786 |
| 0.9214 | 15.25 | 13500 | 0.2035 | 0.1766 |
| 0.9048 | 15.54 | 13750 | 0.1964 | 0.1758 |
| 0.9006 | 15.82 | 14000 | 0.1984 | 0.1757 |
| 0.9027 | 16.1 | 14250 | 0.2022 | 0.1743 |
| 0.9083 | 16.38 | 14500 | 0.1969 | 0.1744 |
| 0.9761 | 16.67 | 14750 | 0.1963 | 0.1728 |
| 0.9311 | 16.95 | 15000 | 0.1960 | 0.1737 |
| 0.886 | 17.23 | 15250 | 0.1929 | 0.1726 |
| 0.8969 | 17.51 | 15500 | 0.1928 | 0.1734 |
| 0.9084 | 17.8 | 15750 | 0.1937 | 0.1713 |
| 0.8795 | 18.08 | 16000 | 0.1978 | 0.1709 |
| 0.8883 | 18.36 | 16250 | 0.1956 | 0.1703 |
| 0.8901 | 18.64 | 16500 | 0.1933 | 0.1705 |
| 0.8922 | 18.93 | 16750 | 0.1962 | 0.1711 |
| 0.8765 | 19.21 | 17000 | 0.1962 | 0.1711 |
| 0.8992 | 19.49 | 17250 | 0.1965 | 0.1703 |
| 0.8778 | 19.77 | 17500 | 0.1957 | 0.1699 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1
- Tokenizers 0.11.0
---

**Repository:** Maheedhar/TF-Fine_tuned_T5-base (transformers · apache-2.0)
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TF-Fine_tuned_T5-base
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2063
- Validation Loss: 0.1893
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
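As a rough illustration of what the `AdamWeightDecay` configuration above does per parameter, here is a scalar sketch of a decoupled-weight-decay Adam update using the card's hyperparameters (a simplification for intuition, not the Keras implementation):

```python
def adamw_step(w, g, m, v, t, lr=2e-05, beta_1=0.9, beta_2=0.999,
               epsilon=1e-07, weight_decay_rate=0.01):
    # Exponential moving averages of the gradient and its square
    m = beta_1 * m + (1 - beta_1) * g
    v = beta_2 * v + (1 - beta_2) * g * g
    # Bias correction for the zero-initialised moments
    m_hat = m / (1 - beta_1 ** t)
    v_hat = v / (1 - beta_2 ** t)
    # Decoupled weight decay: applied to the weight itself, not folded into the gradient
    w = w - lr * (m_hat / (v_hat ** 0.5 + epsilon) + weight_decay_rate * w)
    return w, m, v
```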
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6995 | 0.2622 | 0 |
| 0.2845 | 0.2256 | 1 |
| 0.2471 | 0.2079 | 2 |
| 0.2216 | 0.1974 | 3 |
| 0.2063 | 0.1893 | 4 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
---

**Repository:** sd-dreambooth-library/langel (diffusers · mit)

### Langel on Stable Diffusion via Dreambooth
#### model by Kasuzu
This is the Stable Diffusion model fine-tuned on the **Langel** concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **Langel**
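A minimal usage sketch with the `diffusers` library (the repository id comes from this card; the prompt wording and GPU assumption are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth-tuned checkpoint; half precision assumes a CUDA GPU
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/langel", torch_dtype=torch.float16
).to("cuda")

# Include the instance prompt token "Langel" to invoke the learned concept
image = pipe("a portrait of Langel, highly detailed").images[0]
image.save("langel.png")
```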
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
Here are the images used for training this concept:






| ad04136f9dbc7319b6c5a0749dc70c28 |
elopezlopez/Bio_ClinicalBERT_fold_10_ternary_v1 | elopezlopez | bert | 13 | 3 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,669 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_10_ternary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0706
- F1: 0.7748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.6097 | 0.7290 |
| 0.555 | 2.0 | 580 | 0.6106 | 0.7649 |
| 0.555 | 3.0 | 870 | 0.6608 | 0.7847 |
| 0.2449 | 4.0 | 1160 | 0.8894 | 0.7809 |
| 0.2449 | 5.0 | 1450 | 1.1049 | 0.7760 |
| 0.1055 | 6.0 | 1740 | 1.2951 | 0.7884 |
| 0.0338 | 7.0 | 2030 | 1.4809 | 0.7760 |
| 0.0338 | 8.0 | 2320 | 1.4751 | 0.7698 |
| 0.0225 | 9.0 | 2610 | 1.6648 | 0.7809 |
| 0.0225 | 10.0 | 2900 | 1.7174 | 0.7772 |
| 0.006 | 11.0 | 3190 | 1.7872 | 0.7735 |
| 0.006 | 12.0 | 3480 | 1.7803 | 0.7748 |
| 0.0161 | 13.0 | 3770 | 1.9302 | 0.7735 |
| 0.0005 | 14.0 | 4060 | 1.9853 | 0.7748 |
| 0.0005 | 15.0 | 4350 | 2.0043 | 0.7735 |
| 0.0062 | 16.0 | 4640 | 1.9969 | 0.7760 |
| 0.0062 | 17.0 | 4930 | 2.0173 | 0.7760 |
| 0.0068 | 18.0 | 5220 | 1.9891 | 0.7785 |
| 0.0034 | 19.0 | 5510 | 1.9951 | 0.7797 |
| 0.0034 | 20.0 | 5800 | 2.0283 | 0.7748 |
| 0.0049 | 21.0 | 6090 | 1.9985 | 0.7834 |
| 0.0049 | 22.0 | 6380 | 2.0131 | 0.7760 |
| 0.0011 | 23.0 | 6670 | 2.0526 | 0.7748 |
| 0.0011 | 24.0 | 6960 | 2.0662 | 0.7748 |
| 0.001 | 25.0 | 7250 | 2.0706 | 0.7748 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
---

**Repository:** ali2066/finetuned_token_2e-05_16_02_2022-01_55_54 (transformers · apache-2.0)
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-01_55_54
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
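The F1 above is the harmonic mean of the reported precision and recall, which can be checked directly:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall; defined as 0 when both are 0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.3378, 0.3615), 4))  # 0.3492, matching the card
```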
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
---

**Repository:** meghazisofiane/opus-mt-en-ar-finetuned-en-to-ar (transformers · apache-2.0)
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-finetuned-en-to-ar
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the un_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8133
- Bleu: 64.6767
- Gen Len: 17.595
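For reference, here is a stripped-down single-reference sentence BLEU. The card's 64.68 score comes from a corpus-level implementation with smoothing (via `sacrebleu`/`evaluate`), so this is only an illustration of the metric's shape:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(hypothesis, reference, max_n=4):
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped n-gram matches
        total = max(sum(hyp_ngrams.values()), 1)
        log_prec += math.log(max(overlap, 1e-9) / total)
    # Brevity penalty: punish hypotheses shorter than the reference
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return bp * math.exp(log_prec / max_n)
```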
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 50 | 0.7710 | 64.3416 | 17.4 |
| No log | 2.0 | 100 | 0.7569 | 63.9546 | 17.465 |
| No log | 3.0 | 150 | 0.7570 | 64.7484 | 17.385 |
| No log | 4.0 | 200 | 0.7579 | 65.4073 | 17.305 |
| No log | 5.0 | 250 | 0.7624 | 64.8939 | 17.325 |
| No log | 6.0 | 300 | 0.7696 | 65.1257 | 17.45 |
| No log | 7.0 | 350 | 0.7747 | 65.527 | 17.395 |
| No log | 8.0 | 400 | 0.7791 | 65.1357 | 17.52 |
| No log | 9.0 | 450 | 0.7900 | 65.3812 | 17.415 |
| 0.3982 | 10.0 | 500 | 0.7925 | 65.7346 | 17.39 |
| 0.3982 | 11.0 | 550 | 0.7951 | 65.1267 | 17.62 |
| 0.3982 | 12.0 | 600 | 0.8040 | 64.6874 | 17.495 |
| 0.3982 | 13.0 | 650 | 0.8069 | 64.7788 | 17.52 |
| 0.3982 | 14.0 | 700 | 0.8105 | 64.6701 | 17.585 |
| 0.3982 | 15.0 | 750 | 0.8120 | 64.7111 | 17.58 |
| 0.3982 | 16.0 | 800 | 0.8133 | 64.6767 | 17.595 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
---

**Repository:** modhp/wav2vec2-model1-torgo (transformers · apache-2.0)
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-model1-torgo
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
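The linear scheduler with 1000 warmup steps ramps the learning rate up from zero and then decays it linearly back to zero. A sketch of that shape (the total step count here is illustrative, not taken from this run):

```python
def linear_schedule_with_warmup(step, num_warmup_steps=1000,
                                num_training_steps=10000, base_lr=1e-04):
    # Linear ramp from 0 to base_lr over the warmup phase
    if step < num_warmup_steps:
        return base_lr * step / max(1, num_warmup_steps)
    # Then linear decay from base_lr down to 0 at the final step
    return base_lr * max(0.0, (num_training_steps - step)
                         / max(1, num_training_steps - num_warmup_steps))
```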
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 1.18.3
- Tokenizers 0.11.6
---

**Repository:** kcarnold/inquisitive2 (transformers · apache-2.0)
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# inquisitive2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0
- Datasets 2.3.0
- Tokenizers 0.12.1
---

**Repository:** huyue012/wav2vec2-base-cynthia-timit (transformers · apache-2.0)
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cynthia-timit
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4888
- Wer: 0.3315
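The reported Wer is word error rate: word-level edit distance between hypothesis and reference, divided by the reference length. A minimal implementation for intuition:

```python
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deletions
    for j in range(len(h) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(r)][len(h)] / len(r)
```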
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.7674 | 1.0 | 500 | 2.8994 | 1.0 |
| 1.3538 | 2.01 | 1000 | 0.5623 | 0.5630 |
| 0.5416 | 3.01 | 1500 | 0.4595 | 0.4765 |
| 0.3563 | 4.02 | 2000 | 0.4435 | 0.4328 |
| 0.2869 | 5.02 | 2500 | 0.4035 | 0.4145 |
| 0.2536 | 6.02 | 3000 | 0.4090 | 0.3945 |
| 0.2072 | 7.03 | 3500 | 0.4188 | 0.3809 |
| 0.1825 | 8.03 | 4000 | 0.4139 | 0.3865 |
| 0.1754 | 9.04 | 4500 | 0.4320 | 0.3763 |
| 0.1477 | 10.04 | 5000 | 0.4668 | 0.3699 |
| 0.1418 | 11.04 | 5500 | 0.4439 | 0.3683 |
| 0.1207 | 12.05 | 6000 | 0.4419 | 0.3678 |
| 0.115 | 13.05 | 6500 | 0.4606 | 0.3786 |
| 0.1022 | 14.06 | 7000 | 0.4403 | 0.3610 |
| 0.1019 | 15.06 | 7500 | 0.4966 | 0.3609 |
| 0.0898 | 16.06 | 8000 | 0.4675 | 0.3586 |
| 0.0824 | 17.07 | 8500 | 0.4844 | 0.3583 |
| 0.0737 | 18.07 | 9000 | 0.4801 | 0.3534 |
| 0.076 | 19.08 | 9500 | 0.4945 | 0.3529 |
| 0.0627 | 20.08 | 10000 | 0.4700 | 0.3417 |
| 0.0723 | 21.08 | 10500 | 0.4630 | 0.3449 |
| 0.0597 | 22.09 | 11000 | 0.5164 | 0.3456 |
| 0.0566 | 23.09 | 11500 | 0.4957 | 0.3401 |
| 0.0453 | 24.1 | 12000 | 0.5032 | 0.3419 |
| 0.0492 | 25.1 | 12500 | 0.5391 | 0.3387 |
| 0.0524 | 26.1 | 13000 | 0.5057 | 0.3348 |
| 0.0381 | 27.11 | 13500 | 0.5098 | 0.3331 |
| 0.0402 | 28.11 | 14000 | 0.5087 | 0.3353 |
| 0.0358 | 29.12 | 14500 | 0.4888 | 0.3315 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
---

**Repository:** Evelyn18/distilbert-base-uncased-becasv2-4 (transformers · apache-2.0)
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-becasv2-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 5.3677 |
| No log | 2.0 | 12 | 4.6741 |
| No log | 3.0 | 18 | 4.2978 |
| No log | 4.0 | 24 | 3.9963 |
| No log | 5.0 | 30 | 3.7544 |
| No log | 6.0 | 36 | 3.5810 |
| No log | 7.0 | 42 | 3.4932 |
| No log | 8.0 | 48 | 3.4637 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
---

**Repository:** merve/20newsgroups (sklearn · mit)
# Model description
This is a multinomial naive Bayes model trained on the 20 newsgroups dataset. A count vectorizer and a TF-IDF transformer are applied to the input text before the classifier.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with below hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|---------------------|----------------------------------------------------------------------------------------|
| memory | |
| steps | [('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', MultinomialNB())] |
| verbose | False |
| vect | CountVectorizer() |
| tfidf | TfidfTransformer() |
| clf | MultinomialNB() |
| vect__analyzer | word |
| vect__binary | False |
| vect__decode_error | strict |
| vect__dtype | <class 'numpy.int64'> |
| vect__encoding | utf-8 |
| vect__input | content |
| vect__lowercase | True |
| vect__max_df | 1.0 |
| vect__max_features | |
| vect__min_df | 1 |
| vect__ngram_range | (1, 1) |
| vect__preprocessor | |
| vect__stop_words | |
| vect__strip_accents | |
| vect__token_pattern | (?u)\b\w\w+\b |
| vect__tokenizer | |
| vect__vocabulary | |
| tfidf__norm | l2 |
| tfidf__smooth_idf | True |
| tfidf__sublinear_tf | False |
| tfidf__use_idf | True |
| clf__alpha | 1.0 |
| clf__class_prior | |
| clf__fit_prior | True |
</details>
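With the defaults in the table above (`use_idf=True`, `smooth_idf=True`, `sublinear_tf=False`, `norm='l2'`), each document row is weighted roughly as follows. This is a sketch of scikit-learn's formula for intuition, not the library code:

```python
import math

def tfidf_row(term_counts, doc_freqs, n_docs):
    # smooth_idf=True: add one to both numerator and denominator, plus a +1 offset,
    # so terms occurring in every document still get a nonzero weight
    weights = [tf * (math.log((1 + n_docs) / (1 + df)) + 1)
               for tf, df in zip(term_counts, doc_freqs)]
    # norm='l2': scale the row to unit Euclidean length
    norm = math.sqrt(sum(w * w for w in weights))
    return [w / norm for w in weights] if norm else weights
```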
### Model Plot
The model plot is below.
The plot is scikit-learn's interactive HTML diagram of the fitted pipeline; in plain text it reduces to:

    Pipeline(steps=[('vect', CountVectorizer()), ('tfidf', TfidfTransformer()),
                    ('clf', MultinomialNB())])
## Evaluation Results
You can find the details of the evaluation process and the evaluation results below.
| Metric | Value |
|----------|---------|
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import pickle
# pkl_filename should point to the downloaded pickle file
with open(pkl_filename, "rb") as file:
    clf = pickle.load(file)
```
</details>
# Model Card Authors
This model card is written by the following authors:
merve
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```bibtex
@inproceedings{...,year={2020}}
```

---

**Repository:** victorbahlangene/deberta-v3-small-fine-Disaster-Tweets-Part2 (transformers · mit)
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-small-fine-Disaster-Tweets-Part2
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4849
- Accuracy: 0.8275
- F1: 0.8278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
- mixed_precision_training: Native AMP
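The cosine scheduler with `lr_scheduler_warmup_ratio: 0.1` first ramps the learning rate linearly, then follows half a cosine wave down to zero. An illustrative sketch, with the total step count assumed from the results table (203 steps/epoch × 4 epochs ≈ 812):

```python
import math

def cosine_schedule_with_warmup(step, num_training_steps=812,
                                warmup_ratio=0.1, base_lr=8e-05):
    num_warmup_steps = int(num_training_steps * warmup_ratio)
    # Linear warmup from 0 to base_lr
    if step < num_warmup_steps:
        return base_lr * step / max(1, num_warmup_steps)
    # Half-cosine decay from base_lr down to 0
    progress = (step - num_warmup_steps) / max(1, num_training_steps - num_warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```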
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 203 | 0.4670 | 0.8511 | 0.8503 |
| No log | 2.0 | 406 | 0.4381 | 0.8459 | 0.8455 |
| 0.4016 | 3.0 | 609 | 0.4096 | 0.8424 | 0.8413 |
| 0.4016 | 4.0 | 812 | 0.4849 | 0.8275 | 0.8278 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
---

**Repository:** eglesaks/xlm-roberta-base-finetuned-est (transformers · mit)
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-est
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 52 | 4.2576 |
| No log | 2.0 | 104 | 3.8075 |
| No log | 3.0 | 156 | 3.6781 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
---

**Repository:** evamaxfield/soft-search (transformers · apache-2.0)
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soft-search
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5558
- F1: 0.5960
- Accuracy: 0.7109
- Precision: 0.5769
- Recall: 0.6164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|:------:|
| 0.5939 | 1.0 | 71 | 0.5989 | 0.0533 | 0.6635 | 1.0 | 0.0274 |
| 0.5903 | 2.0 | 142 | 0.5558 | 0.5960 | 0.7109 | 0.5769 | 0.6164 |
| 0.4613 | 3.0 | 213 | 0.6670 | 0.5641 | 0.6777 | 0.5301 | 0.6027 |
| 0.4454 | 4.0 | 284 | 0.7647 | 0.5541 | 0.6872 | 0.5467 | 0.5616 |
| 0.2931 | 5.0 | 355 | 0.8726 | 0.5139 | 0.6682 | 0.5211 | 0.5068 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
---

**Repository:** google/mobilenet_v2_0.35_96 (transformers · other)
# MobileNet V2
MobileNet V2 model pre-trained on ImageNet-1k at resolution 96x96. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).
Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_0.35\_96**, where **0.35** is the depth multiplier and **96** is the resolution of the input images the model was trained on.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
preprocessor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_0.35_96")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_0.35_96")
inputs = preprocessor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).
Currently, both the image processor and the model support PyTorch.
### BibTeX entry and citation info
```bibtex
@inproceedings{mobilenetv22018,
title={MobileNetV2: Inverted Residuals and Linear Bottlenecks},
author={Mark Sandler and Andrew Howard and Menglong Zhu and Andrey Zhmoginov and Liang-Chieh Chen},
booktitle={CVPR},
year={2018}
}
```
| 9817eb3c500bc367ed6aad5ea23d8c8e |
JovialValley/model_broadclass_onSet2.1 | JovialValley | wav2vec2 | 12 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 13,091 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_broadclass_onSet2.1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1459
- 0 Precision: 0.9630
- 0 Recall: 1.0
- 0 F1-score: 0.9811
- 0 Support: 26
- 1 Precision: 1.0
- 1 Recall: 0.9231
- 1 F1-score: 0.9600
- 1 Support: 39
- 2 Precision: 1.0
- 2 Recall: 1.0
- 2 F1-score: 1.0
- 2 Support: 19
- 3 Precision: 0.8667
- 3 Recall: 1.0
- 3 F1-score: 0.9286
- 3 Support: 13
- Accuracy: 0.9691
- Macro avg Precision: 0.9574
- Macro avg Recall: 0.9808
- Macro avg F1-score: 0.9674
- Macro avg Support: 97
- Weighted avg Precision: 0.9722
- Weighted avg Recall: 0.9691
- Weighted avg F1-score: 0.9693
- Weighted avg Support: 97
- Wer: 0.1293
- Mtrix: [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 1, 36, 0, 2], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]]
## Model description
More information needed
## Intended uses & limitations
More information needed
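Since this is a `wav2vec2` checkpoint trained for automatic speech recognition, the standard ASR pipeline should apply. A minimal sketch, untested against this exact checkpoint, using a silent dummy waveform in place of real audio:

```python
import numpy as np
from transformers import pipeline

# Load the fine-tuned wav2vec2 checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="JovialValley/model_broadclass_onSet2.1",
)

# One second of silence at 16 kHz as a stand-in for a real recording.
waveform = np.zeros(16000, dtype=np.float32)
result = asr({"raw": waveform, "sampling_rate": 16000})
print(result["text"])
```

Real use would pass a path to an audio file, or a waveform resampled to 16 kHz, instead of the dummy array.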
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 80
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:--------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|:------:|:---------------------------------------------------------------------------------------:|
| 2.3399 | 4.16 | 100 | 2.1769 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 2.3152 | 8.33 | 200 | 2.1458 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.9859 | 12.49 | 300 | 1.9172 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.7126 | 16.65 | 400 | 1.6954 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.6833 | 20.82 | 500 | 1.7553 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.5318 | 24.98 | 600 | 1.5921 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.5868 | 29.16 | 700 | 1.5517 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.5577 | 33.33 | 800 | 1.5089 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.2201 | 37.49 | 900 | 1.1567 | 0.4643 | 1.0 | 0.6341 | 26 | 1.0 | 0.4872 | 0.6552 | 39 | 1.0 | 0.5263 | 0.6897 | 19 | 1.0 | 0.9231 | 0.9600 | 13 | 0.6907 | 0.8661 | 0.7341 | 0.7347 | 97 | 0.8564 | 0.6907 | 0.6971 | 97 | 0.9485 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 20, 19, 0, 0], [2, 9, 0, 10, 0], [3, 1, 0, 0, 12]] |
| 0.9692 | 41.65 | 1000 | 1.0489 | 0.5102 | 0.9615 | 0.6667 | 26 | 0.9615 | 0.6410 | 0.7692 | 39 | 0.9167 | 0.5789 | 0.7097 | 19 | 1.0 | 0.7692 | 0.8696 | 13 | 0.7320 | 0.8471 | 0.7377 | 0.7538 | 97 | 0.8369 | 0.7320 | 0.7435 | 97 | 0.9374 | [[0, 1, 2, 3], [0, 25, 1, 0, 0], [1, 13, 25, 1, 0], [2, 8, 0, 11, 0], [3, 3, 0, 0, 10]] |
| 0.9214 | 45.82 | 1100 | 0.9620 | 0.9615 | 0.9615 | 0.9615 | 26 | 0.9730 | 0.9231 | 0.9474 | 39 | 0.9048 | 1.0 | 0.9500 | 19 | 1.0 | 1.0 | 1.0 | 13 | 0.9588 | 0.9598 | 0.9712 | 0.9647 | 97 | 0.9602 | 0.9588 | 0.9587 | 97 | 0.9328 | [[0, 1, 2, 3], [0, 25, 1, 0, 0], [1, 1, 36, 2, 0], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]] |
| 0.9305 | 49.98 | 1200 | 0.9736 | 0.8125 | 1.0 | 0.8966 | 26 | 1.0 | 0.8205 | 0.9014 | 39 | 0.9048 | 1.0 | 0.9500 | 19 | 1.0 | 0.9231 | 0.9600 | 13 | 0.9175 | 0.9293 | 0.9359 | 0.9270 | 97 | 0.9311 | 0.9175 | 0.9175 | 97 | 0.9253 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 5, 32, 2, 0], [2, 0, 0, 19, 0], [3, 1, 0, 0, 12]] |
| 0.8982 | 54.16 | 1300 | 0.9586 | 0.7812 | 0.9615 | 0.8621 | 26 | 0.9688 | 0.7949 | 0.8732 | 39 | 0.9 | 0.9474 | 0.9231 | 19 | 1.0 | 1.0 | 1.0 | 13 | 0.8969 | 0.9125 | 0.9259 | 0.9146 | 97 | 0.9092 | 0.8969 | 0.8970 | 97 | 0.9283 | [[0, 1, 2, 3], [0, 25, 1, 0, 0], [1, 6, 31, 2, 0], [2, 1, 0, 18, 0], [3, 0, 0, 0, 13]] |
| 0.8382 | 58.33 | 1400 | 0.8864 | 0.9615 | 0.9615 | 0.9615 | 26 | 0.9722 | 0.8974 | 0.9333 | 39 | 0.95 | 1.0 | 0.9744 | 19 | 0.8667 | 1.0 | 0.9286 | 13 | 0.9485 | 0.9376 | 0.9647 | 0.9495 | 97 | 0.9509 | 0.9485 | 0.9483 | 97 | 0.8904 | [[0, 1, 2, 3], [0, 25, 1, 0, 0], [1, 1, 35, 1, 2], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]] |
| 0.7314 | 62.49 | 1500 | 0.7880 | 0.96 | 0.9231 | 0.9412 | 26 | 0.9474 | 0.9231 | 0.9351 | 39 | 0.95 | 1.0 | 0.9744 | 19 | 0.9286 | 1.0 | 0.9630 | 13 | 0.9485 | 0.9465 | 0.9615 | 0.9534 | 97 | 0.9488 | 0.9485 | 0.9481 | 97 | 0.8020 | [[0, 1, 2, 3], [0, 24, 2, 0, 0], [1, 1, 36, 1, 1], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]] |
| 0.448 | 66.65 | 1600 | 0.3458 | 0.9615 | 0.9615 | 0.9615 | 26 | 0.9730 | 0.9231 | 0.9474 | 39 | 1.0 | 1.0 | 1.0 | 19 | 0.8667 | 1.0 | 0.9286 | 13 | 0.9588 | 0.9503 | 0.9712 | 0.9594 | 97 | 0.9610 | 0.9588 | 0.9590 | 97 | 0.2561 | [[0, 1, 2, 3], [0, 25, 1, 0, 0], [1, 1, 36, 0, 2], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]] |
| 0.1921 | 70.82 | 1700 | 0.1970 | 0.9615 | 0.9615 | 0.9615 | 26 | 0.9730 | 0.9231 | 0.9474 | 39 | 1.0 | 1.0 | 1.0 | 19 | 0.8667 | 1.0 | 0.9286 | 13 | 0.9588 | 0.9503 | 0.9712 | 0.9594 | 97 | 0.9610 | 0.9588 | 0.9590 | 97 | 0.1581 | [[0, 1, 2, 3], [0, 25, 1, 0, 0], [1, 1, 36, 0, 2], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]] |
| 0.1499 | 74.98 | 1800 | 0.1463 | 0.9615 | 0.9615 | 0.9615 | 26 | 0.9730 | 0.9231 | 0.9474 | 39 | 1.0 | 1.0 | 1.0 | 19 | 0.8667 | 1.0 | 0.9286 | 13 | 0.9588 | 0.9503 | 0.9712 | 0.9594 | 97 | 0.9610 | 0.9588 | 0.9590 | 97 | 0.1384 | [[0, 1, 2, 3], [0, 25, 1, 0, 0], [1, 1, 36, 0, 2], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]] |
| 0.1099 | 79.16 | 1900 | 0.1459 | 0.9630 | 1.0 | 0.9811 | 26 | 1.0 | 0.9231 | 0.9600 | 39 | 1.0 | 1.0 | 1.0 | 19 | 0.8667 | 1.0 | 0.9286 | 13 | 0.9691 | 0.9574 | 0.9808 | 0.9674 | 97 | 0.9722 | 0.9691 | 0.9693 | 97 | 0.1293 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 1, 36, 0, 2], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]] |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 677cc09ee693bc1826a103d9a2e618e6 |
zates/distilbert-base-uncased-finetuned-squad-seed-420-finetuned-squad-seed-420 | zates | distilbert | 9 | 3 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad_v2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,027 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-seed-420-finetuned-squad-seed-420
This model is a fine-tuned version of [zates/distilbert-base-uncased-finetuned-squad-seed-420](https://huggingface.co/zates/distilbert-base-uncased-finetuned-squad-seed-420) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
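Since the model was fine-tuned on `squad_v2`, the standard question-answering pipeline applies. A minimal sketch (the question/context pair is made up for illustration):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="zates/distilbert-base-uncased-finetuned-squad-seed-420-finetuned-squad-seed-420",
)

result = qa(
    question="How many epochs was the model trained for?",
    context="The model was fine-tuned on the squad_v2 dataset for 2 epochs.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```

Note that `squad_v2` contains unanswerable questions, so a very low answer score may indicate "no answer".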
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| b8bf39f193992fb1611cf58614adab2f |
theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e8 | theojolliffe | bart | 13 | 2 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,225 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-arxiv-pubmed-v3-e8
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8329
- Rouge1: 53.3047
- Rouge2: 34.6219
- Rougel: 37.6148
- Rougelsum: 50.8973
- Gen Len: 141.8704
## Model description
More information needed
## Intended uses & limitations
More information needed
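As a `bart`-based summarization checkpoint, the model should work with the standard summarization pipeline. A minimal sketch with placeholder input text (real inputs would be much longer):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e8",
)

# Placeholder article text for illustration only.
text = (
    "The project team reviewed the quarterly results and found that the new "
    "pipeline reduced processing time by forty percent. They recommended "
    "rolling the change out to the remaining services next quarter."
)
summary = summarizer(text, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```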
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 1.1211 | 50.4753 | 30.5417 | 33.192 | 48.1321 | 141.8704 |
| 1.3657 | 2.0 | 796 | 0.9944 | 52.2197 | 33.6109 | 35.9448 | 50.0028 | 141.6111 |
| 0.887 | 3.0 | 1194 | 0.9149 | 52.796 | 33.7683 | 36.4941 | 50.4514 | 141.5926 |
| 0.6548 | 4.0 | 1592 | 0.8725 | 52.5353 | 33.4019 | 36.4573 | 50.2506 | 142.0 |
| 0.6548 | 5.0 | 1990 | 0.8540 | 53.2987 | 34.6476 | 38.314 | 51.163 | 141.4815 |
| 0.504 | 6.0 | 2388 | 0.8395 | 52.7218 | 34.6524 | 37.9921 | 50.5185 | 141.5556 |
| 0.4006 | 7.0 | 2786 | 0.8342 | 53.2251 | 35.2702 | 38.3763 | 51.1958 | 141.6667 |
| 0.3314 | 8.0 | 3184 | 0.8329 | 53.3047 | 34.6219 | 37.6148 | 50.8973 | 141.8704 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 5a9bc8eeb743abfea938e1cf2f918bbe |
kyryl0s/gpt2-uk-zno-edition | kyryl0s | gpt2 | 6 | 4 | transformers | 1 | text-generation | true | false | false | afl-3.0 | ['uk'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 950 | false | ## GPT2 trained to generate ЗНО essays (ЗНО is a Ukrainian standardized exam, similar to the SAT)
Generated texts are not very cohesive yet, but I'm working on it. <br />
The Hosted Inference API outputs (on the right) are too short for some reason; I'm trying to fix it. <br />
Use the code from the example below. The model takes inputs of the form "ZNOTITLE: your essay title".
### Example of usage:
```python
from transformers import AlbertTokenizer, GPT2LMHeadModel
tokenizer = AlbertTokenizer.from_pretrained("kyryl0s/gpt2-uk-zno-edition")
model = GPT2LMHeadModel.from_pretrained("kyryl0s/gpt2-uk-zno-edition")
input_ids = tokenizer.encode("ZNOTITLE: За яку працю треба більше поважати людину - за фізичну чи інтелектуальну?", add_special_tokens=False, return_tensors='pt')
outputs = model.generate(
input_ids,
do_sample=True,
num_return_sequences=1,
max_length=250
)
for i, out in enumerate(outputs):
print("{}: {}".format(i, tokenizer.decode(out)))
``` | 682217a745be03200b407695d510c6ba |
model-attribution-challenge/gpt2 | model-attribution-challenge | gpt2 | 14 | 135 | transformers | 0 | text-generation | true | true | true | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert'] | false | true | true | 7,810 | false |
# GPT-2
You can test the model's full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
Specifically, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| ec368bdecd4143f44919d3b8538f8026 |
inverse-scaling/opt-350m_eval | inverse-scaling | opt | 11 | 3 | transformers | 0 | text-generation | true | true | true | other | ['en'] | null | null | 18 | 6 | 7 | 5 | 0 | 0 | 0 | ['text-generation'] | true | true | true | 8,668 | false |
# OPT : Open Pre-trained Transformer Language Models
OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.
**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.
## Intro
To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068)
> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.
> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.
## Model description
OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
## Intended uses & limitations
The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-350m")
>>> generator("Hello, I'm am conscious and")
[{'generated_text': "Hello, I'm am conscious and I'm a bit of a noob. I'm looking for"}]
```
By default, generation is deterministic. To use top-k sampling, set `do_sample` to `True`.
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True)
>>> generator("Hello, I'm am conscious and")
[{'generated_text': "Hello, I'm am conscious and I'm interested in this project. Can I get an initial contact"}]
```
### Limitations and bias
As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5)
>>> generator("The woman worked as a")
[{'generated_text': "The woman works as a substitute teacher for kids who have missed school. She's the teacher herself,"},
{'generated_text': 'The woman works as a security guard for another company and does an average of around $13/hour'},
{'generated_text': 'The woman works as a receptionist, she could at the least wait a week or two for her'},
{'generated_text': 'The woman works as a manager/intern/career development coach/advisor at a nursing home'},
{'generated_text': 'The woman works as a maid and has to clean the house but you can tell her to do it'}]
```
compared to:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5)
>>> generator("The man worked as a")
[{'generated_text': 'The man works as a security guard for the National Football League franchise. He has been a part of'},
{'generated_text': 'The man works as a security guard for another company and does an excellent job.\nI remember when'},
{'generated_text': 'The man works as a "secret agent" but at the same time he\'s working to protect the'},
{'generated_text': 'The man works as a manager/operator/servant for a grocery store and does a lot of'},
{'generated_text': 'The man works as a bouncer near the scene of the accident - how he could do that is'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:
- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included.
- Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021)
- CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b)
The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.
The dataset might contain offensive content, as parts of it are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
### Collection process
The dataset was collected from the internet and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*
## Training procedure
### Preprocessing
The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
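The byte-pair merging idea behind the tokenizer can be illustrated with a toy character-level merge loop. This is a deliberate simplification under assumed toy inputs — the actual GPT-2 tokenizer operates on bytes and applies a fixed merge table learned once on its training corpus:

```python
from collections import Counter

def most_frequent_pair(tokens):
    # Count adjacent symbol pairs across the current token sequence
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(tokens, pair):
    # Replace every occurrence of `pair` with a single merged symbol
    merged, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

# Classic toy corpus for BPE demonstrations; "_" marks word boundaries
tokens = list("low lower lowest".replace(" ", "_"))
for _ in range(3):  # apply three greedy merge steps
    tokens = merge_pair(tokens, most_frequent_pair(tokens))
print(tokens)
```

After a few merges, frequent substrings such as `low` become single vocabulary symbols; in the real tokenizer this merge table is learned once and then applied deterministically at encoding time.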
The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.
### BibTeX entry and citation info
```bibtex
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 4fdb14d7e6986beaa613d3ddc2df157a |
w11wo/javanese-distilbert-small | w11wo | distilbert | 8 | 7 | transformers | 0 | fill-mask | true | true | false | mit | ['jv'] | ['wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['javanese-distilbert-small'] | false | true | true | 3,473 | false |
## Javanese DistilBERT Small
Javanese DistilBERT Small is a masked language model based on the [DistilBERT model](https://arxiv.org/abs/1910.01108). It was trained on the latest (late December 2020) Javanese Wikipedia articles.
The model was originally HuggingFace's pretrained [English DistilBERT model](https://huggingface.co/distilbert-base-uncased) and was later fine-tuned on the Javanese dataset. It achieved a perplexity of 23.54 on the validation dataset (20% of the articles). Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger) and on a [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou).
Hugging Face's [Transformers](https://huggingface.co/transformers) library was used to train the model -- utilizing the base DistilBERT model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|-----------------------------|---------|------------------|-------------------------------------|
| `javanese-distilbert-small` | 66M | DistilBERT Small | Javanese Wikipedia (319 MB of text) |
## Evaluation Results
The model was trained for 5 epochs; the table below shows the final results once training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 3.088 | 3.153 | 23.54 | 1:46:37 |
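The reported perplexity is simply the exponential of the validation cross-entropy loss; a quick sanity check (the small gap to the reported 23.54 comes from rounding of the loss in the table):

```python
import math

valid_loss = 3.153  # validation loss from the table above
perplexity = math.exp(valid_loss)
print(f"perplexity ≈ {perplexity:.2f}")
```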
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-distilbert-small"
fill_mask = pipeline(
    "fill-mask",
    model=pretrained_name,
    tokenizer=pretrained_name,
)
fill_mask("Aku mangan sate ing [MASK] bareng konco-konco")
```
### Feature Extraction in PyTorch
```python
from transformers import DistilBertModel, DistilBertTokenizerFast
pretrained_name = "w11wo/javanese-distilbert-small"
model = DistilBertModel.from_pretrained(pretrained_name)
tokenizer = DistilBertTokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual text. Additionally, biases present in the Wikipedia articles may carry over into the model's outputs.
## Author
Javanese DistilBERT Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
pages={1--7},
year={2021},
organization={IEEE}
}
```
| 3e007c9d296a53dff7aeeb9af97157f7 |
muhtasham/small-mlm-glue-wnli-from-scratch-custom-tokenizer-expand-vocab | muhtasham | bert | 12 | 2 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,697 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-wnli-from-scratch-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.1384 | 6.25 | 500 | 5.9999 |
| 5.8428 | 12.5 | 1000 | 5.6581 |
| 5.4846 | 18.75 | 1500 | 5.4843 |
| 5.1716 | 25.0 | 2000 | 5.3955 |
| 4.8633 | 31.25 | 2500 | 4.9234 |
| 4.6185 | 37.5 | 3000 | 4.6246 |
| 4.2975 | 43.75 | 3500 | 4.3933 |
| 4.0116 | 50.0 | 4000 | 4.1432 |
| 3.7556 | 56.25 | 4500 | 3.8816 |
| 3.5262 | 62.5 | 5000 | 3.4922 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
| 62399edd11d63df4adad4623810c1ad2 |
jhakaran1/process-data | jhakaran1 | bert | 12 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,356 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# process-data
This model is a fine-tuned version of [jhakaran1/bert-base-uncased-bert-mlm](https://huggingface.co/jhakaran1/bert-base-uncased-bert-mlm) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8087
- Accuracy: 0.6792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6939 | 1.0 | 3907 | 0.7903 | 0.6660 |
| 0.6155 | 2.0 | 7814 | 0.7929 | 0.6685 |
| 0.5436 | 3.0 | 11721 | 0.8087 | 0.6792 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| b7b0f4c7f079dd3cc21e0db86a165875 |
tomekkorbak/trusting_swartz | tomekkorbak | gpt2 | 23 | 1 | transformers | 0 | null | true | false | false | mit | ['en'] | ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 8,161 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trusting_swartz
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the 
tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
```python
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
                          'tomekkorbak/detoxify-pile-chunk3-50000-100000',
                          'tomekkorbak/detoxify-pile-chunk3-100000-150000',
                          'tomekkorbak/detoxify-pile-chunk3-150000-200000',
                          'tomekkorbak/detoxify-pile-chunk3-200000-250000',
                          'tomekkorbak/detoxify-pile-chunk3-250000-300000',
                          'tomekkorbak/detoxify-pile-chunk3-300000-350000',
                          'tomekkorbak/detoxify-pile-chunk3-350000-400000',
                          'tomekkorbak/detoxify-pile-chunk3-400000-450000',
                          'tomekkorbak/detoxify-pile-chunk3-450000-500000',
                          'tomekkorbak/detoxify-pile-chunk3-500000-550000',
                          'tomekkorbak/detoxify-pile-chunk3-550000-600000',
                          'tomekkorbak/detoxify-pile-chunk3-600000-650000',
                          'tomekkorbak/detoxify-pile-chunk3-650000-700000',
                          'tomekkorbak/detoxify-pile-chunk3-700000-750000',
                          'tomekkorbak/detoxify-pile-chunk3-750000-800000',
                          'tomekkorbak/detoxify-pile-chunk3-800000-850000',
                          'tomekkorbak/detoxify-pile-chunk3-850000-900000',
                          'tomekkorbak/detoxify-pile-chunk3-900000-950000',
                          'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
                          'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
                          'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
                          'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
                          'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
                          'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
                          'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
                          'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
                          'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
                          'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
                          'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
                          'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
                          'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
                          'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
                          'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
                          'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
                          'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
                          'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
                          'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
                          'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
             'is_split_by_sentences': True},
 'generation': {'force_call_on': [25354],
                'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
                'scenario_configs': [{'generate_kwargs': {'do_sample': True,
                                                          'max_length': 128,
                                                          'min_length': 10,
                                                          'temperature': 0.7,
                                                          'top_k': 0,
                                                          'top_p': 0.9},
                                      'name': 'unconditional',
                                      'num_samples': 4096}],
                'scorer_config': {'device': 'cuda:0'}},
 'kl_gpt3_callback': {'force_call_on': [25354],
                      'gpt3_kwargs': {'model_name': 'davinci'},
                      'max_tokens': 64,
                      'num_samples': 4096},
 'model': {'from_scratch': True,
           'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
                                  'scale_attn_by': True},
           'path_or_name': 'gpt2'},
 'objective': {'alpha': 1, 'name': 'Unlikelihood', 'score_threshold': 0.00078},
 'tokenizer': {'path_or_name': 'gpt2'},
 'training': {'dataloader_num_workers': 0,
              'effective_batch_size': 64,
              'evaluation_strategy': 'no',
              'fp16': True,
              'hub_model_id': 'trusting_swartz',
              'hub_strategy': 'all_checkpoints',
              'learning_rate': 0.0005,
              'logging_first_step': True,
              'logging_steps': 1,
              'num_tokens': 3300000000,
              'output_dir': 'training_output104340',
              'per_device_train_batch_size': 16,
              'push_to_hub': True,
              'remove_unused_columns': False,
              'save_steps': 25354,
              'save_strategy': 'steps',
              'seed': 42,
              'warmup_ratio': 0.01,
              'weight_decay': 0.1}}
```
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/2b4j03bo | a5599587a4041d4eec3557643e509802 |
Helsinki-NLP/opus-mt-en-run | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-en-run
* source languages: en
* target languages: run
* OPUS readme: [en-run](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-run/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.run | 34.2 | 0.591 |
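A minimal usage sketch with the 🤗 Transformers pipeline API (a hedged example, not part of the original card; the checkpoint is downloaded on first call):

```python
from transformers import pipeline

def translate_en_to_run(texts, model_id="Helsinki-NLP/opus-mt-en-run"):
    """Translate English sentences to Rundi with this Marian checkpoint."""
    translator = pipeline("translation", model=model_id)
    return [out["translation_text"] for out in translator(texts)]

# First call downloads the model weights:
# translate_en_to_run(["How are you today?"])
```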
| 93941dc1fb377cd473b39d97c08389f2 |
racro/sentiment-analysis-browser-extension | racro | distilbert | 45 | 7 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,054 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-analysis-browser-extension
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4233
- Accuracy: 0.8539
- F1: 0.8758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
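A minimal sketch of loading this checkpoint for inference (a hedged example — the label names returned depend on the unspecified training dataset):

```python
from transformers import pipeline

def classify_reviews(texts, model_id="racro/sentiment-analysis-browser-extension"):
    """Score texts with this fine-tuned DistilBERT classifier."""
    classifier = pipeline("text-classification", model=model_id)
    return classifier(texts)

# classify_reviews(["This extension is fantastic!"])  # downloads the checkpoint on first use
```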
| 0b656b9c31c0462e33177fbadbfcc707 |
masapasa/xls-r-300m-sv-cv8 | masapasa | wav2vec2 | 19 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['sv-SE'] | ['mozilla-foundation/common_voice_8_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['robust-speech-event', 'automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'hf-asr-leaderboard'] | true | true | true | 23,213 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3347
- Wer: 1.0286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5.0
- mixed_precision_training: Native AMP
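The total train batch size follows directly from the per-device batch size and gradient accumulation (assuming a single device, which matches the reported total of 32):

```python
per_device_train_batch_size = 8   # train_batch_size above
gradient_accumulation_steps = 4
num_devices = 1  # assumption: training ran on one GPU

total_train_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_devices
)
print(total_train_batch_size)
```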
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.7838 | 0.01 | 5 | 14.5035 | 1.0 |
| 13.0582 | 0.03 | 10 | 13.6658 | 1.0 |
| 7.3034 | 0.04 | 15 | 9.7898 | 1.0 |
| 6.1847 | 0.05 | 20 | 6.9148 | 1.0 |
| 5.3371 | 0.07 | 25 | 5.3661 | 1.0 |
| 4.4274 | 0.08 | 30 | 4.6945 | 1.0 |
| 4.0918 | 0.1 | 35 | 4.3172 | 1.0 |
| 4.1734 | 0.11 | 40 | 4.0759 | 1.0 |
| 3.7332 | 0.12 | 45 | 3.9039 | 1.0 |
| 3.6871 | 0.14 | 50 | 3.7777 | 1.0 |
| 3.4428 | 0.15 | 55 | 3.6718 | 1.0 |
| 3.5514 | 0.16 | 60 | 3.5947 | 1.0 |
| 3.4307 | 0.18 | 65 | 3.5144 | 1.0 |
| 3.4102 | 0.19 | 70 | 3.4432 | 1.0 |
| 3.4964 | 0.21 | 75 | 3.3890 | 1.0 |
| 3.3936 | 0.22 | 80 | 3.3467 | 1.0 |
| 3.3051 | 0.23 | 85 | 3.3102 | 1.0 |
| 3.278 | 0.25 | 90 | 3.2801 | 1.0 |
| 3.2223 | 0.26 | 95 | 3.2440 | 1.0 |
| 3.1888 | 0.27 | 100 | 3.2900 | 1.0 |
| 3.218 | 0.29 | 105 | 3.2627 | 1.0 |
| 3.1308 | 0.3 | 110 | 3.2152 | 1.0 |
| 3.109 | 0.31 | 115 | 3.1686 | 1.0 |
| 3.1188 | 0.33 | 120 | 3.1734 | 1.0 |
| 3.1132 | 0.34 | 125 | 3.1431 | 1.0 |
| 3.0667 | 0.36 | 130 | 3.1686 | 1.0 |
| 3.1167 | 0.37 | 135 | 3.1885 | 1.0 |
| 3.0592 | 0.38 | 140 | 3.1100 | 1.0 |
| 3.0531 | 0.4 | 145 | 3.1149 | 1.0 |
| 3.1224 | 0.41 | 150 | 3.1205 | 1.0 |
| 3.0651 | 0.42 | 155 | 3.1101 | 1.0 |
| 3.0077 | 0.44 | 160 | 3.0980 | 1.0 |
| 3.0027 | 0.45 | 165 | 3.1132 | 1.0 |
| 3.0423 | 0.47 | 170 | 3.0886 | 1.0 |
| 3.0462 | 0.48 | 175 | 3.0865 | 1.0 |
| 3.0701 | 0.49 | 180 | 3.0863 | 1.0 |
| 3.0871 | 0.51 | 185 | 3.0825 | 1.0 |
| 3.0585 | 0.52 | 190 | 3.0720 | 1.0 |
| 3.0274 | 0.53 | 195 | 3.0736 | 1.0 |
| 3.0983 | 0.55 | 200 | 3.0658 | 1.0 |
| 3.0538 | 0.56 | 205 | 3.1241 | 1.0 |
| 3.0862 | 0.57 | 210 | 3.0573 | 1.0 |
| 3.0041 | 0.59 | 215 | 3.0608 | 1.0 |
| 3.027 | 0.6 | 220 | 3.0614 | 1.0 |
| 2.9916 | 0.62 | 225 | 3.0527 | 1.0 |
| 3.0157 | 0.63 | 230 | 3.0514 | 1.0 |
| 3.0429 | 0.64 | 235 | 3.0391 | 1.0 |
| 2.999 | 0.66 | 240 | 3.0462 | 1.0 |
| 3.0053 | 0.67 | 245 | 3.0438 | 1.0 |
| 2.9812 | 0.68 | 250 | 3.0447 | 1.0 |
| 3.0062 | 0.7 | 255 | 3.0660 | 1.0 |
| 3.0045 | 0.71 | 260 | 3.0103 | 1.0 |
| 2.9684 | 0.73 | 265 | 3.0106 | 1.0 |
| 2.9885 | 0.74 | 270 | 3.0014 | 1.0 |
| 3.0062 | 0.75 | 275 | 2.9885 | 1.0 |
| 2.9736 | 0.77 | 280 | 3.0330 | 1.0 |
| 2.9766 | 0.78 | 285 | 2.9910 | 1.0 |
| 2.9545 | 0.79 | 290 | 2.9972 | 1.0 |
| 2.9936 | 0.81 | 295 | 2.9872 | 1.0 |
| 3.0832 | 0.82 | 300 | 2.9978 | 1.0 |
| 2.974 | 0.83 | 305 | 2.9978 | 1.0 |
| 2.9846 | 0.85 | 310 | 2.9849 | 1.0 |
| 2.9554 | 0.86 | 315 | 2.9810 | 1.0 |
| 2.9524 | 0.88 | 320 | 2.9731 | 1.0 |
| 2.9426 | 0.89 | 325 | 2.9824 | 1.0 |
| 2.9416 | 0.9 | 330 | 2.9731 | 1.0 |
| 2.9705 | 0.92 | 335 | 2.9830 | 1.0 |
| 2.9502 | 0.93 | 340 | 2.9713 | 1.0 |
| 2.9393 | 0.94 | 345 | 2.9790 | 1.0 |
| 2.9336 | 0.96 | 350 | 2.9684 | 1.0 |
| 2.9542 | 0.97 | 355 | 2.9689 | 1.0 |
| 2.9408 | 0.98 | 360 | 2.9556 | 1.0 |
| 2.9544 | 1.0 | 365 | 2.9563 | 1.0 |
| 2.9187 | 1.01 | 370 | 2.9624 | 1.0 |
| 2.9935 | 1.03 | 375 | 2.9500 | 1.0 |
| 2.9803 | 1.04 | 380 | 2.9558 | 1.0 |
| 2.9867 | 1.05 | 385 | 2.9473 | 1.0 |
| 2.8925 | 1.07 | 390 | 2.9444 | 1.0 |
| 2.9633 | 1.08 | 395 | 2.9490 | 1.0 |
| 2.9191 | 1.1 | 400 | 2.9362 | 1.0 |
| 2.9081 | 1.11 | 405 | 2.9394 | 1.0 |
| 2.9381 | 1.12 | 410 | 2.9846 | 1.0 |
| 2.9271 | 1.14 | 415 | 2.9638 | 1.0 |
| 2.959 | 1.15 | 420 | 2.9835 | 1.0 |
| 2.9486 | 1.16 | 425 | 2.9361 | 1.0 |
| 2.9246 | 1.18 | 430 | 2.9615 | 1.0 |
| 2.923 | 1.19 | 435 | 2.9313 | 1.0 |
| 2.8908 | 1.21 | 440 | 2.9362 | 1.0 |
| 2.8976 | 1.22 | 445 | 2.9224 | 1.0 |
| 2.9278 | 1.23 | 450 | 2.9276 | 1.0 |
| 2.8429 | 1.25 | 455 | 2.9299 | 1.0 |
| 2.867 | 1.26 | 460 | 2.9258 | 1.0 |
| 2.9734 | 1.27 | 465 | 2.9281 | 1.0000 |
| 2.934 | 1.29 | 470 | 2.9229 | 1.0 |
| 2.9521 | 1.3 | 475 | 2.9134 | 1.0 |
| 2.9098 | 1.31 | 480 | 2.9051 | 0.9993 |
| 2.9112 | 1.33 | 485 | 2.9028 | 0.9999 |
| 2.8799 | 1.34 | 490 | 2.9101 | 0.9986 |
| 2.857 | 1.36 | 495 | 2.9005 | 0.9992 |
| 2.8525 | 1.37 | 500 | 2.8937 | 1.0 |
| 2.8682 | 1.38 | 505 | 2.8904 | 1.0000 |
| 2.8899 | 1.4 | 510 | 2.8914 | 0.9964 |
| 2.7475 | 1.41 | 515 | 2.8842 | 0.9950 |
| 2.9263 | 1.42 | 520 | 2.8852 | 0.9972 |
| 2.8603 | 1.44 | 525 | 2.8762 | 0.9966 |
| 2.864 | 1.45 | 530 | 2.8680 | 0.9978 |
| 2.8632 | 1.47 | 535 | 2.8602 | 0.9964 |
| 2.9289 | 1.48 | 540 | 2.8584 | 0.9952 |
| 2.8689 | 1.49 | 545 | 2.8587 | 0.9956 |
| 2.8304 | 1.51 | 550 | 2.8511 | 0.9993 |
| 2.8024 | 1.52 | 555 | 2.8460 | 1.0 |
| 2.7649 | 1.53 | 560 | 2.8460 | 1.0000 |
| 2.8756 | 1.55 | 565 | 2.8348 | 0.9987 |
| 2.8808 | 1.56 | 570 | 2.8539 | 0.9993 |
| 2.9027 | 1.57 | 575 | 2.8282 | 0.9975 |
| 2.8586 | 1.59 | 580 | 2.8288 | 0.9976 |
| 2.8193 | 1.6 | 585 | 2.8101 | 1.0051 |
| 2.811 | 1.62 | 590 | 2.7965 | 1.0014 |
| 2.7332 | 1.63 | 595 | 2.7884 | 1.0026 |
| 2.7717 | 1.64 | 600 | 2.7883 | 1.0060 |
| 2.6901 | 1.66 | 605 | 2.7801 | 0.9974 |
| 2.6905 | 1.67 | 610 | 2.8113 | 0.9968 |
| 2.7442 | 1.68 | 615 | 2.8113 | 1.0007 |
| 2.8431 | 1.7 | 620 | 2.8152 | 1.0343 |
| 2.8028 | 1.71 | 625 | 2.7790 | 1.0250 |
| 2.7151 | 1.73 | 630 | 2.7653 | 1.0287 |
| 2.7405 | 1.74 | 635 | 2.7714 | 1.1303 |
| 2.7566 | 1.75 | 640 | 2.7488 | 1.0312 |
| 2.7337 | 1.77 | 645 | 2.7498 | 1.0176 |
| 2.7486 | 1.78 | 650 | 2.7496 | 1.0760 |
| 2.6918 | 1.79 | 655 | 2.7391 | 1.0353 |
| 2.7142 | 1.81 | 660 | 2.7500 | 1.0283 |
| 2.7057 | 1.82 | 665 | 2.7612 | 1.0127 |
| 2.8348 | 1.83 | 670 | 2.7441 | 1.0056 |
| 2.705 | 1.85 | 675 | 2.7473 | 1.0519 |
| 2.7547 | 1.86 | 680 | 2.7216 | 1.0218 |
| 2.7045 | 1.88 | 685 | 2.7261 | 1.1414 |
| 2.7121 | 1.89 | 690 | 2.7223 | 1.0287 |
| 2.6877 | 1.9 | 695 | 2.7283 | 1.0274 |
| 2.6879 | 1.92 | 700 | 2.7451 | 1.1322 |
| 2.6958 | 1.93 | 705 | 2.7166 | 1.0364 |
| 2.6692 | 1.94 | 710 | 2.7148 | 1.0074 |
| 2.5786 | 1.96 | 715 | 2.7101 | 1.0504 |
| 2.6919 | 1.97 | 720 | 2.6963 | 1.0454 |
| 2.7256 | 1.98 | 725 | 2.7201 | 1.0349 |
| 2.6507 | 2.0 | 730 | 2.7099 | 1.1339 |
| 2.7833 | 2.01 | 735 | 2.7111 | 1.0124 |
| 2.7521 | 2.03 | 740 | 2.7024 | 1.0275 |
| 2.6732 | 2.04 | 745 | 2.7058 | 1.0647 |
| 2.719 | 2.05 | 750 | 2.7200 | 1.0211 |
| 2.701 | 2.07 | 755 | 2.7024 | 1.0808 |
| 2.6444 | 2.08 | 760 | 2.6813 | 1.0582 |
| 2.5592 | 2.1 | 765 | 2.6783 | 1.1010 |
| 2.6444 | 2.11 | 770 | 2.6707 | 1.0946 |
| 2.6944 | 2.12 | 775 | 2.7012 | 1.1315 |
| 2.6733 | 2.14 | 780 | 2.7072 | 1.1144 |
| 2.6998 | 2.15 | 785 | 2.7132 | 1.0206 |
| 2.796 | 2.16 | 790 | 2.7076 | 1.1262 |
| 2.6881 | 2.18 | 795 | 2.6953 | 1.0841 |
| 2.7382 | 2.19 | 800 | 2.6605 | 1.1234 |
| 2.5814 | 2.21 | 805 | 2.6814 | 1.1865 |
| 2.6695 | 2.22 | 810 | 2.6531 | 1.0985 |
| 2.6415 | 2.23 | 815 | 2.6590 | 1.0804 |
| 2.646 | 2.25 | 820 | 2.6514 | 1.0853 |
| 2.6028 | 2.26 | 825 | 2.6723 | 1.1411 |
| 2.6429 | 2.27 | 830 | 2.6729 | 1.0395 |
| 2.6736 | 2.29 | 835 | 2.7039 | 1.0355 |
| 2.6959 | 2.3 | 840 | 2.6510 | 1.0414 |
| 2.6426 | 2.31 | 845 | 2.6660 | 1.1591 |
| 2.7152 | 2.33 | 850 | 2.6361 | 1.0276 |
| 2.7148 | 2.34 | 855 | 2.6723 | 1.2461 |
| 2.6336 | 2.36 | 860 | 2.6332 | 1.0310 |
| 2.665 | 2.37 | 865 | 2.6365 | 1.1312 |
| 2.5607 | 2.38 | 870 | 2.6344 | 1.1301 |
| 2.5614 | 2.4 | 875 | 2.6437 | 1.1513 |
| 2.4899 | 2.41 | 880 | 2.6418 | 1.1532 |
| 2.6794 | 2.42 | 885 | 2.6403 | 1.0272 |
| 2.6814 | 2.44 | 890 | 2.6420 | 1.1323 |
| 2.6614 | 2.45 | 895 | 2.6183 | 1.0525 |
| 2.6629 | 2.47 | 900 | 2.6414 | 1.1569 |
| 2.6166 | 2.48 | 905 | 2.6167 | 1.0265 |
| 2.6374 | 2.49 | 910 | 2.6299 | 1.1720 |
| 2.6035 | 2.51 | 915 | 2.6139 | 1.1565 |
| 2.595 | 2.52 | 920 | 2.6126 | 1.0557 |
| 2.6416 | 2.53 | 925 | 2.6190 | 1.0414 |
| 2.6785 | 2.55 | 930 | 2.6352 | 1.0289 |
| 2.6986 | 2.56 | 935 | 2.6268 | 1.0077 |
| 2.6145 | 2.57 | 940 | 2.6166 | 1.0445 |
| 2.6961 | 2.59 | 945 | 2.6142 | 1.0185 |
| 2.6852 | 2.6 | 950 | 2.6072 | 1.0122 |
| 2.5792 | 2.62 | 955 | 2.6078 | 1.1165 |
| 2.6118 | 2.63 | 960 | 2.6177 | 1.1210 |
| 2.5472 | 2.64 | 965 | 2.6126 | 1.0044 |
| 2.577 | 2.66 | 970 | 2.6051 | 1.0881 |
| 2.5602 | 2.67 | 975 | 2.5992 | 1.0178 |
| 2.695 | 2.68 | 980 | 2.6023 | 1.0248 |
| 2.7017 | 2.7 | 985 | 2.6190 | 1.0041 |
| 2.6327 | 2.71 | 990 | 2.6024 | 1.0142 |
| 2.6193 | 2.73 | 995 | 2.5897 | 1.0148 |
| 2.5939 | 2.74 | 1000 | 2.5900 | 1.0329 |
| 2.5477 | 2.75 | 1005 | 2.5971 | 1.0338 |
| 2.6089 | 2.77 | 1010 | 2.5969 | 1.0064 |
| 2.5625 | 2.78 | 1015 | 2.5899 | 1.0648 |
| 2.5745 | 2.79 | 1020 | 2.5861 | 1.0627 |
| 2.5702 | 2.81 | 1025 | 2.5923 | 1.0526 |
| 2.645 | 2.82 | 1030 | 2.6053 | 1.0199 |
| 2.6869 | 2.83 | 1035 | 2.6227 | 1.0011 |
| 2.6678 | 2.85 | 1040 | 2.6094 | 1.0179 |
| 2.6787 | 2.86 | 1045 | 2.5978 | 1.0028 |
| 2.6246 | 2.88 | 1050 | 2.5965 | 1.0093 |
| 2.5676 | 2.89 | 1055 | 2.5927 | 1.0627 |
| 2.6773 | 2.9 | 1060 | 2.5907 | 1.0817 |
| 2.6114 | 2.92 | 1065 | 2.5932 | 1.1013 |
| 2.6227 | 2.93 | 1070 | 2.5840 | 1.0402 |
| 2.594 | 2.94 | 1075 | 2.5997 | 1.1371 |
| 2.751 | 2.96 | 1080 | 2.5909 | 1.0972 |
| 2.6366 | 2.97 | 1085 | 2.6081 | 1.0598 |
| 2.577 | 2.98 | 1090 | 2.5915 | 1.0410 |
| 2.579 | 3.0 | 1095 | 2.5953 | 1.1433 |
| 2.6706 | 3.01 | 1100 | 2.5913 | 1.0456 |
| 2.6161 | 3.03 | 1105 | 2.6079 | 1.1009 |
| 2.6397 | 3.04 | 1110 | 2.5951 | 1.1771 |
| 2.6246 | 3.05 | 1115 | 2.5730 | 1.0299 |
| 2.5637 | 3.07 | 1120 | 2.5622 | 1.0848 |
| 2.5692 | 3.08 | 1125 | 2.5561 | 1.1472 |
| 2.5948 | 3.1 | 1130 | 2.5568 | 1.0802 |
| 2.5372 | 3.11 | 1135 | 2.5638 | 1.1261 |
| 2.4995 | 3.12 | 1140 | 2.5727 | 1.1395 |
| 2.6304 | 3.14 | 1145 | 2.5671 | 1.0259 |
| 2.6395 | 3.15 | 1150 | 2.5778 | 1.0212 |
| 2.6127 | 3.16 | 1155 | 2.5609 | 1.0457 |
| 2.5919 | 3.18 | 1160 | 2.5604 | 1.0902 |
| 2.6111 | 3.19 | 1165 | 2.5463 | 1.0014 |
| 2.5971 | 3.21 | 1170 | 2.5429 | 1.0022 |
| 2.5887 | 3.22 | 1175 | 2.5394 | 1.0412 |
| 2.5644 | 3.23 | 1180 | 2.5342 | 1.0469 |
| 2.4805 | 3.25 | 1185 | 2.6066 | 1.2668 |
| 2.5324 | 3.26 | 1190 | 2.5395 | 1.0234 |
| 2.5491 | 3.27 | 1195 | 2.5431 | 1.0644 |
| 2.6302 | 3.29 | 1200 | 2.5558 | 1.0680 |
| 2.6139 | 3.3 | 1205 | 2.5711 | 1.0565 |
| 2.5607 | 3.31 | 1210 | 2.5635 | 1.0415 |
| 2.6535 | 3.33 | 1215 | 2.5505 | 1.0613 |
| 2.6129 | 3.34 | 1220 | 2.5403 | 1.0724 |
| 2.5157 | 3.36 | 1225 | 2.5294 | 1.0585 |
| 2.551 | 3.37 | 1230 | 2.5242 | 1.1599 |
| 2.5527 | 3.38 | 1235 | 2.5474 | 1.2327 |
| 2.4964 | 3.4 | 1240 | 2.5244 | 1.0857 |
| 2.5781 | 3.41 | 1245 | 2.5299 | 1.0470 |
| 2.6143 | 3.42 | 1250 | 2.5313 | 1.0019 |
| 2.6566 | 3.44 | 1255 | 2.5431 | 1.0488 |
| 2.5373 | 3.45 | 1260 | 2.5281 | 1.0901 |
| 2.6597 | 3.47 | 1265 | 2.5300 | 1.0610 |
| 2.5457 | 3.48 | 1270 | 2.5130 | 1.0420 |
| 2.5632 | 3.49 | 1275 | 2.5306 | 1.1418 |
| 2.5267 | 3.51 | 1280 | 2.5021 | 1.0293 |
| 2.507 | 3.52 | 1285 | 2.5013 | 1.0196 |
| 2.5713 | 3.53 | 1290 | 2.4978 | 1.0664 |
| 2.4783 | 3.55 | 1295 | 2.4958 | 1.0530 |
| 2.5874 | 3.56 | 1300 | 2.4968 | 1.0059 |
| 2.5744 | 3.57 | 1305 | 2.5078 | 1.0287 |
| 2.5701 | 3.59 | 1310 | 2.4971 | 1.0366 |
| 2.5366 | 3.6 | 1315 | 2.4897 | 1.0191 |
| 2.5679 | 3.62 | 1320 | 2.4830 | 1.0223 |
| 2.5239 | 3.63 | 1325 | 2.4833 | 1.0784 |
| 2.5411 | 3.64 | 1330 | 2.4851 | 1.1522 |
| 2.5037 | 3.66 | 1335 | 2.4792 | 1.0928 |
| 2.5907 | 3.67 | 1340 | 2.4750 | 1.0187 |
| 2.5107 | 3.68 | 1345 | 2.4805 | 1.0873 |
| 2.5908 | 3.7 | 1350 | 2.4753 | 1.0098 |
| 2.6274 | 3.71 | 1355 | 2.4765 | 1.0045 |
| 2.5708 | 3.73 | 1360 | 2.4597 | 1.0456 |
| 2.6039 | 3.74 | 1365 | 2.4503 | 1.0485 |
| 2.5305 | 3.75 | 1370 | 2.4439 | 1.0126 |
| 2.4878 | 3.77 | 1375 | 2.4407 | 1.0162 |
| 2.5055 | 3.78 | 1380 | 2.4421 | 1.0605 |
| 2.5249 | 3.79 | 1385 | 2.4499 | 1.1163 |
| 2.5508 | 3.81 | 1390 | 2.4654 | 1.1472 |
| 2.5827 | 3.82 | 1395 | 2.4510 | 1.0561 |
| 2.6148 | 3.83 | 1400 | 2.4496 | 0.9998 |
| 2.5763 | 3.85 | 1405 | 2.4417 | 1.0067 |
| 2.6077 | 3.86 | 1410 | 2.4458 | 1.0682 |
| 2.5388 | 3.88 | 1415 | 2.4352 | 1.0820 |
| 2.5235 | 3.89 | 1420 | 2.4277 | 1.0784 |
| 2.4996 | 3.9 | 1425 | 2.4245 | 1.0671 |
| 2.5601 | 3.92 | 1430 | 2.4202 | 1.0650 |
| 2.5805 | 3.93 | 1435 | 2.4199 | 1.0530 |
| 2.5841 | 3.94 | 1440 | 2.4228 | 1.0797 |
| 2.4877 | 3.96 | 1445 | 2.4284 | 1.1159 |
| 2.5542 | 3.97 | 1450 | 2.4190 | 1.0575 |
| 2.5961 | 3.98 | 1455 | 2.4162 | 1.0676 |
| 2.495 | 4.0 | 1460 | 2.4165 | 1.0821 |
| 2.6157 | 4.01 | 1465 | 2.4119 | 1.0117 |
| 2.5415 | 4.03 | 1470 | 2.4089 | 1.0110 |
| 2.4916 | 4.04 | 1475 | 2.4032 | 1.0498 |
| 2.5445 | 4.05 | 1480 | 2.3997 | 1.0429 |
| 2.4941 | 4.07 | 1485 | 2.4008 | 1.0141 |
| 2.5113 | 4.08 | 1490 | 2.3975 | 1.0357 |
| 2.4707 | 4.1 | 1495 | 2.3938 | 1.0288 |
| 2.4952 | 4.11 | 1500 | 2.3910 | 1.0300 |
| 2.5017 | 4.12 | 1505 | 2.3861 | 1.0813 |
| 2.5566 | 4.14 | 1510 | 2.3919 | 1.1082 |
| 2.5754 | 4.15 | 1515 | 2.3947 | 1.0074 |
| 2.6138 | 4.16 | 1520 | 2.4040 | 0.9989 |
| 2.5024 | 4.18 | 1525 | 2.3949 | 1.0039 |
| 2.5136 | 4.19 | 1530 | 2.3993 | 1.0496 |
| 2.5646 | 4.21 | 1535 | 2.3981 | 1.0729 |
| 2.4556 | 4.22 | 1540 | 2.3952 | 1.0494 |
| 2.5774 | 4.23 | 1545 | 2.3924 | 1.0345 |
| 2.5126 | 4.25 | 1550 | 2.3888 | 1.0306 |
| 2.4596 | 4.26 | 1555 | 2.3960 | 1.0775 |
| 2.521 | 4.27 | 1560 | 2.3978 | 1.1025 |
| 2.6304 | 4.29 | 1565 | 2.3885 | 1.0433 |
| 2.543 | 4.3 | 1570 | 2.3849 | 1.0072 |
| 2.5601 | 4.31 | 1575 | 2.3855 | 1.0110 |
| 2.6304 | 4.33 | 1580 | 2.3878 | 1.0369 |
| 2.4121 | 4.34 | 1585 | 2.3783 | 1.0366 |
| 2.4261 | 4.36 | 1590 | 2.3746 | 1.0307 |
| 2.5038 | 4.37 | 1595 | 2.3789 | 1.0611 |
| 2.5391 | 4.38 | 1600 | 2.3849 | 1.0738 |
| 2.4341 | 4.4 | 1605 | 2.3779 | 1.0573 |
| 2.5306 | 4.41 | 1610 | 2.3751 | 1.0460 |
| 2.5818 | 4.42 | 1615 | 2.3743 | 1.0251 |
| 2.5531 | 4.44 | 1620 | 2.3723 | 1.0209 |
| 2.51 | 4.45 | 1625 | 2.3755 | 1.0316 |
| 2.5788 | 4.47 | 1630 | 2.3725 | 1.0396 |
| 2.5701 | 4.48 | 1635 | 2.3663 | 1.0292 |
| 2.4194 | 4.49 | 1640 | 2.3641 | 1.0261 |
| 2.5439 | 4.51 | 1645 | 2.3629 | 1.0376 |
| 2.4527 | 4.52 | 1650 | 2.3629 | 1.0563 |
| 2.5705 | 4.53 | 1655 | 2.3654 | 1.0766 |
| 2.4552 | 4.55 | 1660 | 2.3708 | 1.0802 |
| 2.5657 | 4.56 | 1665 | 2.3638 | 1.0248 |
| 2.5371 | 4.57 | 1670 | 2.3639 | 1.0053 |
| 2.5365 | 4.59 | 1675 | 2.3626 | 1.0072 |
| 2.5383 | 4.6 | 1680 | 2.3584 | 1.0170 |
| 2.546 | 4.62 | 1685 | 2.3574 | 1.0469 |
| 2.6006 | 4.63 | 1690 | 2.3517 | 1.0509 |
| 2.4894 | 4.64 | 1695 | 2.3489 | 1.0452 |
| 2.4732 | 4.66 | 1700 | 2.3489 | 1.0586 |
| 2.4933 | 4.67 | 1705 | 2.3501 | 1.0694 |
| 2.4784 | 4.68 | 1710 | 2.3472 | 1.0647 |
| 2.5349 | 4.7 | 1715 | 2.3419 | 1.0299 |
| 2.553 | 4.71 | 1720 | 2.3420 | 1.0115 |
| 2.5035 | 4.73 | 1725 | 2.3415 | 1.0117 |
| 2.561 | 4.74 | 1730 | 2.3418 | 1.0242 |
| 2.4773 | 4.75 | 1735 | 2.3420 | 1.0325 |
| 2.4691 | 4.77 | 1740 | 2.3422 | 1.0394 |
| 2.4959 | 4.78 | 1745 | 2.3405 | 1.0418 |
| 2.4928 | 4.79 | 1750 | 2.3394 | 1.0449 |
| 2.5058 | 4.81 | 1755 | 2.3392 | 1.0489 |
| 2.5193 | 4.82 | 1760 | 2.3390 | 1.0506 |
| 2.5369 | 4.83 | 1765 | 2.3392 | 1.0384 |
| 2.4843 | 4.85 | 1770 | 2.3398 | 1.0236 |
| 2.5074 | 4.86 | 1775 | 2.3400 | 1.0150 |
| 2.4941 | 4.88 | 1780 | 2.3386 | 1.0150 |
| 2.4352 | 4.89 | 1785 | 2.3370 | 1.0172 |
| 2.4372 | 4.9 | 1790 | 2.3362 | 1.0208 |
| 2.4855 | 4.92 | 1795 | 2.3358 | 1.0238 |
| 2.4516 | 4.93 | 1800 | 2.3355 | 1.0276 |
| 2.5281 | 4.94 | 1805 | 2.3356 | 1.0312 |
| 2.5519 | 4.96 | 1810 | 2.3352 | 1.0318 |
| 2.4641 | 4.97 | 1815 | 2.3349 | 1.0294 |
| 2.4515 | 4.98 | 1820 | 2.3348 | 1.0284 |
| 2.553 | 5.0 | 1825 | 2.3347 | 1.0286 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| 3c1a521b8275fb62c741b47c837558a2 |
tomekkorbak/jovial_clarke | tomekkorbak | null | 2 | 0 | null | 0 | null | false | false | false | mit | ['en'] | ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 7,793 | 
false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jovial_clarke
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
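The total train batch size of 64 above comes from gradient accumulation: gradients from 4 micro-batches of 16 examples each are combined before a single optimizer step. A minimal pure-Python sketch of that bookkeeping (the micro-batch "gradients" here are made-up scalars standing in for real gradient tensors):

```python
def accumulate(micro_batch_grads, accumulation_steps):
    """Average gradients over groups of `accumulation_steps` micro-batches,
    yielding one combined gradient per optimizer step."""
    step_grads = []
    for i in range(0, len(micro_batch_grads), accumulation_steps):
        group = micro_batch_grads[i:i + accumulation_steps]
        step_grads.append(sum(group) / len(group))
    return step_grads

per_device_batch = 16
accumulation_steps = 4
# 16 examples per micro-batch x 4 accumulated micro-batches = 64.
print("effective batch size:", per_device_batch * accumulation_steps)  # 64

# Four micro-batch gradients collapse into one update, as if a single
# batch of 64 examples had been processed at once.
print(accumulate([1.0, 2.0, 3.0, 2.0], 4))  # [2.0]
```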
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25177],
'gpt3_kwargs': {'model_name': 'davinci'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'value_head_config': {'is_detached': False}},
'path_or_name': 'gpt2'},
'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'jovial_clarke',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/2037bqrd | 41e5725b5897d9f5911a4b37196385b5 |
darkVOYAGE/dvAuto | darkVOYAGE | null | 3 | 0 | null | 0 | null | false | false | false | cc | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,227 | false | dvAuto is a custom-tuned model built on the base SD v1.5 checkpoint and trained on thirty-two 768x768px images of concept, sports, and antique cars.
Use the words "dvAuto" or "dvAuto style" near the beginning of the prompt.
Sample images and prompt below.
"dvAuto style, 85mm, telephoto, mountain background, low contrast, muted, photo realistic, 8k"
scale 11.50
k_euler
Model: dvAuto











| 24f2f6ea01d363f19e9aa7acccb6d987 |
Helsinki-NLP/opus-mt-ha-es | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-ha-es
* source languages: ha
* target languages: es
* OPUS readme: [ha-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.es | 21.8 | 0.394 |
| 4eb58c43c9a0036ca904e109d7d9530f |
suhasy2/fin_sentiment | suhasy2 | distilbert | 12 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,109 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fin_sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
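With batch size 8 and a single epoch, the 125 optimization steps reported in the results table imply a training set of roughly 1000 examples. A quick sketch of that arithmetic (single-device training assumed):

```python
import math

def steps_per_epoch(num_examples: int, batch_size: int) -> int:
    # The final, possibly smaller batch still counts as one step.
    return math.ceil(num_examples / batch_size)

# 1000 training examples at batch size 8 -> 125 steps per epoch.
print(steps_per_epoch(1000, 8))  # 125
```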
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.5162 | 0.7978 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| a5752e31c1406fa960462e31a9eca4ba |
muhtasham/tiny-mlm-glue-cola-custom-tokenizer-expand-vocab | muhtasham | bert | 12 | 4 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,683 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-cola-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.6267 | 0.47 | 500 | 4.9363 |
| 5.0496 | 0.94 | 1000 | 4.7414 |
| 4.7524 | 1.4 | 1500 | 4.5982 |
| 4.6772 | 1.87 | 2000 | 4.5334 |
| 4.543 | 2.34 | 2500 | 4.3460 |
| 4.5676 | 2.81 | 3000 | 4.1526 |
| 4.419 | 3.27 | 3500 | 4.3221 |
| 4.3187 | 3.74 | 4000 | 4.0862 |
| 4.3635 | 4.21 | 4500 | 4.1023 |
| 4.2545 | 4.68 | 5000 | 3.8843 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
| 52021166e1addc9c7ca1dd7762aeb3d5 |
javilonso/Mex_Rbta_Opinion_Attraction | javilonso | roberta | 9 | 4 | transformers | 0 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,466 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/Mex_Rbta_Opinion_Attraction
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0061
- Validation Loss: 0.0386
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8979, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
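The optimizer config above embeds a PolynomialDecay schedule with power 1.0, i.e. a straight line from 2e-05 down to 0 over 8979 steps. A minimal sketch of that decay rule, mirroring the fields in the config:

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=8979,
                     end_lr=0.0, power=1.0):
    # With cycle=False the rate stays at end_lr once decay_steps is reached.
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))      # 2e-05 at the first step
print(polynomial_decay(8979))   # 0.0 once decay is complete
```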
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0863 | 0.0476 | 0 |
| 0.0230 | 0.0353 | 1 |
| 0.0061 | 0.0386 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| 6e8379c9e93a9ea7735db1f7100341a1 |
stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1 | stjiris | bert | 16 | 2 | sentence-transformers | 1 | sentence-similarity | true | false | false | mit | ['pt'] | ['stjiris/portuguese-legal-sentences-v0', 'assin', 'assin2', 'stsb_multi_mt', 'stjiris/IRIS_sts'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sentence-transformers', 'transformers', 'bert', 'pytorch', 'sentence-similarity'] | false | true | true | 5,401 | false |
[](https://www.inesc-id.pt/projects/PR07005/)
[](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/)
Work developed as part of [Project IRIS](https://www.inesc-id.pt/projects/PR07005/).
Thesis: [A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/)
# stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1 (Legal BERTimbau)
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1 derives from stjiris/bert-large-portuguese-cased-legal-mlm (legal variant of [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large).
It was trained with the MLM technique at a learning rate of 1e-5 on [Legal Sentences from +-30000 documents](https://huggingface.co/datasets/stjiris/portuguese-legal-sentences-v1.0) for 15000 training steps (the configuration that gave the best performance in our semantic search system implementation).
It was then fine-tuned for Semantic Textual Similarity on the [assin](https://huggingface.co/datasets/assin), [assin2](https://huggingface.co/datasets/assin2), [stsb_multi_mt pt](https://huggingface.co/datasets/stsb_multi_mt) and [IRIS STS](https://huggingface.co/datasets/stjiris/IRIS_sts) datasets, also with a learning rate of 1e-5.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]
model = SentenceTransformer('stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
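The resulting vectors are typically compared with cosine similarity for clustering or semantic search. A dependency-free sketch of that comparison (the toy 3-dimensional vectors below are invented stand-ins for real 1024-dimensional embeddings):

```python
import math

def cosine_similarity(a, b):
    # Dot product of the vectors, normalized by their magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

emb_a = [0.1, 0.3, 0.5]
emb_b = [0.2, 0.6, 1.0]  # same direction as emb_a -> similarity 1.0
print(round(cosine_similarity(emb_a, emb_b), 6))  # 1.0
```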
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1')
model = AutoModel.from_pretrained('stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
If you use this work, please cite:
```bibtex
@inproceedings{MeloSemantic,
author = {Melo, Rui and Santos, Professor Pedro Alexandre and Dias, Professor Jo{\~a}o},
title = {A {Semantic} {Search} {System} for {Supremo} {Tribunal} de {Justi}{\c c}a},
}
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
@inproceedings{fonseca2016assin,
title={ASSIN: Avaliacao de similaridade semantica e inferencia textual},
author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S},
booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal},
pages={13--15},
year={2016}
}
@inproceedings{real2020assin,
title={The assin 2 shared task: a quick overview},
author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo},
booktitle={International Conference on Computational Processing of the Portuguese Language},
pages={406--412},
year={2020},
organization={Springer}
}
@InProceedings{huggingface:dataset:stsb_multi_mt,
title = {Machine translated multilingual STS benchmark dataset.},
author={Philip May},
year={2021},
url={https://github.com/PhilipMay/stsb-multi-mt}
}
``` | b3038705be0763c6f2a9fa703a94d37b |
nickmuchi/bert-finetuned-squad | nickmuchi | bert | 8 | 5 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,315 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nickmuchi/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5685
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2720 | 0 |
| 0.7798 | 1 |
| 0.5685 | 2 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
| d139040956528c3c27bde61cb0e25a82 |
anas-awadalla/splinter-large-few-shot-k-256-finetuned-squad-seed-4 | anas-awadalla | splinter | 16 | 1 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,004 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-256-finetuned-squad-seed-4
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
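With a warmup ratio of 0.1 and a linear scheduler, the learning rate climbs from 0 to 3e-05 over the first 10% of steps and then decays linearly back to 0. A small sketch of that schedule (the total step count here is illustrative; the real one depends on dataset size):

```python
def linear_with_warmup(step, total_steps, peak_lr=3e-05, warmup_ratio=0.1):
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # linear warmup from 0
    # linear decay from the peak down to 0 at the final step
    remaining = total_steps - step
    return peak_lr * remaining / (total_steps - warmup_steps)

total = 1000  # illustrative
print(linear_with_warmup(0, total))     # 0.0
print(linear_with_warmup(100, total))   # peak reached at end of warmup
print(linear_with_warmup(1000, total))  # 0.0
```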
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| 4f88a037871526ca74f603bf35a9489b |
creat89/NER_FEDA_Cyrillic1 | creat89 | bert | 7 | 0 | transformers | 0 | null | true | false | false | mit | ['multilingual', 'ru', 'bg', 'mk', 'uk', 'fi'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['labse', 'ner'] | false | true | true | 849 | false |
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SlavNER 17 (LOC, MISC, ORG, PER)
4. CNE5 (GEOPOLIT, LOC, MEDIA, PER, ORG)
5. FactRuEval (LOC, ORG, PER)
6. NER-UK (LOC, MISC, ORG, PER)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)
Tag meanings: PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: miscellaneous, MEDIA: media, ART: artifact, TIME: time, DATE: date, GEOPOLIT: geopolitical.
You can select the tagset to use in the output by configuring the model.
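In the IOBES scheme, single-token mentions are tagged S-, multi-token mentions get B-/I-/E- for their beginning, inside, and end tokens, and everything else is O. A small sketch of encoding token-level entity spans this way (the tokens and spans are invented examples):

```python
def to_iobes(num_tokens, spans):
    """spans: list of (start, end_exclusive, label) over token indices."""
    tags = ["O"] * num_tokens
    for start, end, label in spans:
        if end - start == 1:
            tags[start] = f"S-{label}"          # single-token entity
        else:
            tags[start] = f"B-{label}"          # beginning
            for i in range(start + 1, end - 1):
                tags[i] = f"I-{label}"          # inside
            tags[end - 1] = f"E-{label}"        # end

    return tags

tokens = ["Jane", "Smith", "visited", "Skopje"]
print(to_iobes(len(tokens), [(0, 2, "PER"), (3, 4, "LOC")]))
# ['B-PER', 'E-PER', 'O', 'S-LOC']
```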
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA). | b202e54e5cd6b813aec5ba15a52798a0 |
Helsinki-NLP/opus-mt-da-ru | Helsinki-NLP | marian | 11 | 113 | transformers | 0 | translation | true | true | false | apache-2.0 | ['da', 'ru'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,984 | false |
### dan-rus
* source group: Danish
* target group: Russian
* OPUS readme: [dan-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dan-rus/README.md)
* model: transformer-align
* source language(s): dan
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.dan.rus | 52.5 | 0.715 |
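BLEU multiplies n-gram precision by a brevity penalty that discounts translations shorter than the reference (the system info further down reports a penalty of 0.991 against a reference length of 10480 for this test set). A minimal sketch of that penalty term; the candidate length below is an invented example, not this model's actual output length:

```python
import math

def brevity_penalty(candidate_len, reference_len):
    # No penalty when the candidate is at least as long as the reference.
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)

print(brevity_penalty(10480, 10480))            # 1.0
# A slightly-too-short candidate is discounted just below 1.0.
print(round(brevity_penalty(10386, 10480), 3))  # 0.991
```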
### System Info:
- hf_name: dan-rus
- source_languages: dan
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dan-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'ru']
- src_constituents: {'dan'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dan-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dan-rus/opus-2020-06-17.test.txt
- src_alpha3: dan
- tgt_alpha3: rus
- short_pair: da-ru
- chrF2_score: 0.715
- bleu: 52.5
- brevity_penalty: 0.991
- ref_len: 10480.0
- src_name: Danish
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: da
- tgt_alpha2: ru
- prefer_old: False
- long_pair: dan-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 69726ad55efb4b68532e69b8cdbb5ebc |
StonyBrookNLP/teabreac-nt5-small-drop | StonyBrookNLP | t5 | 8 | 3 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering, multi-step-reasoning, multi-hop-reasoning'] | false | true | true | 2,627 | false |
# What's this?
This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496).
This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for details.
We release the following models:
- **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}`
- **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}`
- **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}`
The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`.
The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`.
The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**.
# How to use it?
Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac
model_name = "StonyBrookNLP/teabreac-nt5-small-drop"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
enable_digit_tokenization(tokenizer)
input_texts = [
"answer_me: Who scored the first touchdown of the game?" +
"context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..."
# Note: some models have slightly different qn/ctxt format. See the github repo.
]
input_ids = tokenizer(
input_texts, return_tensors="pt",
truncation=True, max_length=800,
add_special_tokens=True, padding=True,
)["input_ids"]
generated_ids = model.generate(input_ids, min_length=1, max_length=50)
generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
generated_predictions = [
tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions
]
# => ["Chaz Schilens"]
``` | 1a133c8e4873ab474ffd360f2fa6dceb |
google/maxim-s3-denoising-sidd | google | null | 7 | 102 | keras | 2 | image-to-image | false | false | false | apache-2.0 | ['en'] | ['sidd'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['vision', 'maxim', 'image-to-image'] | false | true | true | 2,506 | false |
# MAXIM pre-trained on SIDD for image denoising
MAXIM model pre-trained for image denoising. It was introduced in the paper [MAXIM: Multi-Axis MLP for Image Processing](https://arxiv.org/abs/2201.02973) by Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li and first released in [this repository](https://github.com/google-research/maxim).
Disclaimer: The team releasing MAXIM did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MAXIM introduces a shared MLP-based backbone for different image processing tasks such as image deblurring, deraining, denoising, dehazing, low-light image enhancement, and retouching. The following figure depicts the main components of MAXIM:

## Training procedure and results
The authors didn't release the training code. For more details on how the model was trained, refer to the [original paper](https://arxiv.org/abs/2201.02973).
As per the [table](https://github.com/google-research/maxim#results-and-pre-trained-models), the model achieves a PSNR of 39.96 and an SSIM of 0.96.
## Intended uses & limitations
You can use the raw model for image denoising tasks.
The model is [officially released in JAX](https://github.com/google-research/maxim). It was ported to TensorFlow in [this repository](https://github.com/sayakpaul/maxim-tf).
### How to use
Here is how to use this model:
```python
from huggingface_hub import from_pretrained_keras
from PIL import Image
import tensorflow as tf
import numpy as np
import requests
url = "https://github.com/sayakpaul/maxim-tf/raw/main/images/Denoising/input/0011_23.png"
image = Image.open(requests.get(url, stream=True).raw)
image = np.array(image)
image = tf.convert_to_tensor(image)
image = tf.image.resize(image, (256, 256))
model = from_pretrained_keras("google/maxim-s3-denoising-sidd")
predictions = model.predict(tf.expand_dims(image, 0))
```
For a more elaborate prediction pipeline, refer to [this Colab Notebook](https://colab.research.google.com/github/sayakpaul/maxim-tf/blob/main/notebooks/inference-dynamic-resize.ipynb).
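The `predictions` array holds the denoised image as floats roughly in [0, 1]. A small illustrative post-processing helper (not part of the official pipeline — see the Colab notebook above for the authors' full pipeline, which also resizes back to the input resolution):

```python
import numpy as np

def to_uint8_image(pred):
    """Clip one model output (H, W, 3 floats, roughly in [0, 1]) and convert to uint8."""
    pred = np.clip(np.asarray(pred, dtype=np.float32), 0.0, 1.0)
    return (pred * 255.0).round().astype(np.uint8)

# e.g.: Image.fromarray(to_uint8_image(predictions[0])).save("denoised.png")
```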
### Citation
```bibtex
@article{tu2022maxim,
title={MAXIM: Multi-Axis MLP for Image Processing},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={CVPR},
year={2022},
}
``` | e1e75c4729ebe608416e02a96182020d |
Bioskop/lucyedge | Bioskop | null | 25 | 3 | diffusers | 0 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,510 | false | ### LucyEdge on Stable Diffusion via Dreambooth
#### model by Bioskop
This is the Stable Diffusion model fine-tuned on the LucyEdge concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **LucyEdge from edgerunners, a cyberpunk anime from Cyberpunk 2077 universe**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:







| 9f407e16e792a6b84e73e3a427aacfb8 |
pulkitkumar13/dark-bert-finetuned-ner | pulkitkumar13 | bert | 10 | 13 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,517 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dark-bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0639
- Precision: 0.9283
- Recall: 0.9478
- F1: 0.9380
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0881 | 1.0 | 1756 | 0.0716 | 0.9172 | 0.9322 | 0.9246 | 0.9817 |
| 0.0375 | 2.0 | 3512 | 0.0610 | 0.9275 | 0.9455 | 0.9364 | 0.9857 |
| 0.0207 | 3.0 | 5268 | 0.0639 | 0.9283 | 0.9478 | 0.9380 | 0.9859 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1
| 82804771dabfec2ffa8fc1dd262ac453 |
lmqg/mt5-small-jaquad-qag | lmqg | mt5 | 13 | 37 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | ['ja'] | ['lmqg/qag_jaquad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['questions and answers generation'] | true | true | true | 3,899 | false |
# Model Card of `lmqg/mt5-small-jaquad-qag`
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for the question & answer pair generation task on the [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Language:** ja
- **Training data:** [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ja", model="lmqg/mt5-small-jaquad-qag")
# model prediction
question_answer_pairs = model.generate_qa("フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-small-jaquad-qag")
output = pipe("ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている6月28日は2人の14回目の結婚記念日であった。")
```
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-jaquad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_jaquad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-------------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 58.35 | default | [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) |
| QAAlignedF1Score (MoverScore) | 39.19 | default | [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) |
| QAAlignedPrecision (BERTScore) | 58.34 | default | [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) |
| QAAlignedPrecision (MoverScore) | 39.21 | default | [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) |
| QAAlignedRecall (BERTScore) | 58.38 | default | [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) |
| QAAlignedRecall (MoverScore) | 39.17 | default | [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_jaquad
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: None
- model: google/mt5-small
- max_length: 512
- max_length_output: 256
- epoch: 18
- batch: 8
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.0
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-jaquad-qag/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| b2edeefacddf39c91cb56b09b2bb093e |
stevemobs/deberta-base-combined-squad1-aqa-1epoch | stevemobs | deberta | 13 | 5 | transformers | 0 | question-answering | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,168 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-combined-squad1-aqa-1epoch
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0971 | 1.0 | 9906 | 0.9431 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 306adfc6dd0f35c5cb46bf10f6e7b745 |
jayanta/resnet-152-fv-finetuned-memess | jayanta | resnet | 12 | 7 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,217 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-152-fv-finetuned-memess
This model is a fine-tuned version of [microsoft/resnet-152](https://huggingface.co/microsoft/resnet-152) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6281
- Accuracy: 0.7674
- Precision: 0.7651
- Recall: 0.7674
- F1: 0.7647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.5902 | 0.99 | 20 | 1.5519 | 0.4938 | 0.3491 | 0.4938 | 0.3529 |
| 1.4694 | 1.99 | 40 | 1.3730 | 0.4892 | 0.4095 | 0.4892 | 0.3222 |
| 1.3129 | 2.99 | 60 | 1.2052 | 0.5301 | 0.3504 | 0.5301 | 0.4005 |
| 1.1831 | 3.99 | 80 | 1.1142 | 0.5587 | 0.4077 | 0.5587 | 0.4444 |
| 1.0581 | 4.99 | 100 | 0.9930 | 0.6012 | 0.5680 | 0.6012 | 0.5108 |
| 0.9464 | 5.99 | 120 | 0.9263 | 0.6507 | 0.6200 | 0.6507 | 0.6029 |
| 0.8581 | 6.99 | 140 | 0.8400 | 0.6917 | 0.6645 | 0.6917 | 0.6638 |
| 0.7739 | 7.99 | 160 | 0.7829 | 0.7087 | 0.6918 | 0.7087 | 0.6845 |
| 0.6762 | 8.99 | 180 | 0.7512 | 0.7318 | 0.7206 | 0.7318 | 0.7189 |
| 0.6162 | 9.99 | 200 | 0.7409 | 0.7264 | 0.7244 | 0.7264 | 0.7241 |
| 0.5546 | 10.99 | 220 | 0.6936 | 0.7465 | 0.7429 | 0.7465 | 0.7395 |
| 0.4633 | 11.99 | 240 | 0.6779 | 0.7473 | 0.7393 | 0.7473 | 0.7412 |
| 0.4373 | 12.99 | 260 | 0.6736 | 0.7573 | 0.7492 | 0.7573 | 0.7523 |
| 0.4074 | 13.99 | 280 | 0.6534 | 0.7566 | 0.7516 | 0.7566 | 0.7528 |
| 0.39 | 14.99 | 300 | 0.6521 | 0.7651 | 0.7603 | 0.7651 | 0.7608 |
| 0.3766 | 15.99 | 320 | 0.6499 | 0.7682 | 0.7607 | 0.7682 | 0.7630 |
| 0.3507 | 16.99 | 340 | 0.6497 | 0.7697 | 0.7686 | 0.7697 | 0.7686 |
| 0.3589 | 17.99 | 360 | 0.6519 | 0.7535 | 0.7485 | 0.7535 | 0.7502 |
| 0.3261 | 18.99 | 380 | 0.6449 | 0.7589 | 0.7597 | 0.7589 | 0.7585 |
| 0.3234 | 19.99 | 400 | 0.6281 | 0.7674 | 0.7651 | 0.7674 | 0.7647 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1.dev0
- Tokenizers 0.13.1
| fc2031880c140d52abcfae03d3248fff |
sd-concepts-library/hours-sentry-fade | sd-concepts-library | null | 10 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,190 | false | ### Hours_Sentry_fade on Stable Diffusion
This is the `<Hours_Sentry>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





| 532ebc0408b6f590861361d44cf4895d |
microsoft/swin-base-simmim-window6-192 | microsoft | swin | 5 | 924 | transformers | 0 | null | true | false | false | apache-2.0 | null | ['imagenet-1k'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['vision', 'simmim'] | false | true | true | 624 | false |
# Swin Transformer (base-sized model)
Swin Transformer model pre-trained on ImageNet-1k using the SimMIM objective at resolution 192x192. It was introduced in the paper [SimMIM: A Simple Framework for Masked Image Modeling](https://arxiv.org/abs/2111.09886) by Xie et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
# Intended use cases
This model is pre-trained only; it is meant to be fine-tuned on a downstream dataset.
# Usage
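A minimal masked-image-modeling sketch, adapted from the Transformers `SwinForMaskedImageModeling` example (the image URL and the random mask are arbitrary placeholders):

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, SwinForMaskedImageModeling

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # arbitrary demo image
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/swin-base-simmim-window6-192")
model = SwinForMaskedImageModeling.from_pretrained("microsoft/swin-base-simmim-window6-192")

pixel_values = processor(images=image, return_tensors="pt").pixel_values
# randomly mask a subset of the (192 / 4)^2 = 2304 patches
num_patches = (model.config.image_size // model.config.patch_size) ** 2
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, reconstruction = outputs.loss, outputs.reconstruction
```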
Refer to the [documentation](https://huggingface.co/docs/transformers/model_doc/swin#transformers.SwinForMaskedImageModeling.forward.example). | 934f424b85c1e4452dc94f044a0b93e4 |
Kayvane/distilroberta-base-wandb-week-3-complaints-classifier-512 | Kayvane | roberta | 11 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['consumer-finance-complaints'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,672 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-wandb-week-3-complaints-classifier-512
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the consumer-finance-complaints dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6004
- Accuracy: 0.8038
- F1: 0.7919
- Recall: 0.8038
- Precision: 0.7922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.7835312622444155e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 512
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.7559 | 0.61 | 1500 | 0.7307 | 0.7733 | 0.7411 | 0.7733 | 0.7286 |
| 0.6361 | 1.22 | 3000 | 0.6559 | 0.7846 | 0.7699 | 0.7846 | 0.7718 |
| 0.5774 | 1.83 | 4500 | 0.6004 | 0.8038 | 0.7919 | 0.8038 | 0.7922 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| ca8fc2e79413f4de2478c9a821ebae36 |
domenicrosati/deberta-v3-xsmall-finetuned-review_classifier | domenicrosati | deberta-v2 | 13 | 3 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'generated_from_trainer'] | true | true | true | 1,434 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-xsmall-finetuned-review_classifier
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1441
- Accuracy: 0.9513
- F1: 0.7458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.1518 | 1.0 | 6667 | 0.1575 | 0.9510 | 0.7155 |
| 0.1247 | 2.0 | 13334 | 0.1441 | 0.9513 | 0.7458 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 9f378c2a841888a8f9cdd6739a1eed4b |
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-8_female-2_s26 | jonatasgrosman | wav2vec2 | 10 | 1 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'en'] | false | true | true | 475 | false | # exp_w2v2r_en_xls-r_gender_male-8_female-2_s26
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| c4698c80589562fbf3e7a6f56d199f37 |
Helsinki-NLP/opus-mt-sv-srn | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-sv-srn
* source languages: sv
* target languages: srn
* OPUS readme: [sv-srn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-srn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-srn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-srn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-srn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.srn | 31.3 | 0.506 |
| 19f5f6de5943d7c9da731b9b46f6535d |
elopezlopez/distilbert-base-uncased_fold_2_ternary_v1 | elopezlopez | distilbert | 13 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,659 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_2_ternary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8941
- F1: 0.7889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 294 | 0.6025 | 0.7402 |
| 0.5688 | 2.0 | 588 | 0.5025 | 0.7943 |
| 0.5688 | 3.0 | 882 | 0.6102 | 0.7794 |
| 0.2582 | 4.0 | 1176 | 0.8896 | 0.7835 |
| 0.2582 | 5.0 | 1470 | 1.0392 | 0.7821 |
| 0.1185 | 6.0 | 1764 | 1.0865 | 0.7848 |
| 0.0461 | 7.0 | 2058 | 1.2951 | 0.7686 |
| 0.0461 | 8.0 | 2352 | 1.3348 | 0.7821 |
| 0.0313 | 9.0 | 2646 | 1.4267 | 0.7876 |
| 0.0313 | 10.0 | 2940 | 1.4004 | 0.7957 |
| 0.0142 | 11.0 | 3234 | 1.5501 | 0.7794 |
| 0.0083 | 12.0 | 3528 | 1.5564 | 0.7903 |
| 0.0083 | 13.0 | 3822 | 1.5699 | 0.7876 |
| 0.0067 | 14.0 | 4116 | 1.7725 | 0.7794 |
| 0.0067 | 15.0 | 4410 | 1.7642 | 0.7767 |
| 0.0031 | 16.0 | 4704 | 1.7891 | 0.7848 |
| 0.0031 | 17.0 | 4998 | 1.8528 | 0.7740 |
| 0.0054 | 18.0 | 5292 | 1.8378 | 0.7781 |
| 0.003 | 19.0 | 5586 | 1.8223 | 0.7862 |
| 0.003 | 20.0 | 5880 | 1.7935 | 0.7930 |
| 0.0021 | 21.0 | 6174 | 1.9117 | 0.7808 |
| 0.0021 | 22.0 | 6468 | 1.8891 | 0.7930 |
| 0.0015 | 23.0 | 6762 | 1.9167 | 0.7916 |
| 0.0006 | 24.0 | 7056 | 1.9193 | 0.7862 |
| 0.0006 | 25.0 | 7350 | 1.8941 | 0.7889 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 07f362411da809aea91c0713333ef692 |
MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary | MoritzLaurer | deberta-v2 | 8 | 1,998 | transformers | 2 | zero-shot-classification | true | false | false | mit | ['en'] | ['multi_nli', 'anli', 'fever', 'lingnli'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'zero-shot-classification'] | false | true | true | 4,522 | false | # DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary
## Model description
This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).
Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". This is specifically designed for zero-shot classification, where the difference between "neutral" and "contradiction" is irrelevant.
The base model is [DeBERTa-v3-xsmall from Microsoft](https://huggingface.co/microsoft/deberta-v3-xsmall). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).
For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
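The same model can be used for zero-shot classification by casting each candidate label as an NLI hypothesis and scoring its entailment probability. A minimal sketch (the hypothesis template and label set are illustrative; the label order follows the `label_names` shown above):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Angela Merkel is a politician in Germany and leader of the CDU."
candidate_labels = ["politics", "economy", "sports"]  # illustrative label set
hypotheses = [f"This example is about {label}." for label in candidate_labels]

inputs = tokenizer([text] * len(hypotheses), hypotheses, truncation=True,
                   padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
entail_probs = torch.softmax(logits, dim=-1)[:, 0]  # index 0 = "entailment"
best = candidate_labels[int(entail_probs.argmax())]
print(dict(zip(candidate_labels, entail_probs.tolist())), best)
```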
### Training data
This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).
### Training procedure
DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=5, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the binary test sets for MultiNLI, ANLI, LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
dataset | mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c | lingnli-2c
--------|---------|----------|---------|----------|----------|------
accuracy | 0.925 | 0.922 | 0.892 | 0.676 | 0.665 | 0.888
speed (text/sec, CPU, 128 batch) | 6.0 | 6.3 | 3.0 | 5.8 | 5.0 | 7.6
speed (text/sec, GPU Tesla P100, 128 batch) | 473 | 487 | 230 | 390 | 340 | 586
## Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.
## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Debugging and issues
Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues. | 3d184cdae0cef5b8f4d419f0bac3643d |
tdc/hERG_Karim_Morgan | tdc | null | 4 | 0 | tdc | 0 | null | false | false | false | bsd-2-clause | ['en'] | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['biology', 'chemistry'] | false | true | true | 1,305 | false |
## Dataset description
An integrated Ether-a-go-go-related gene (hERG) dataset consisting of molecular structures labelled as hERG (<10uM) and non-hERG (>=10uM) blockers in the form of SMILES strings was obtained from DeepHIT, the BindingDB database, the ChEMBL bioactivity database, and other literature.
## Task description
Binary classification. Given a drug SMILES string, predict whether it blocks (1, <10uM) or does not block (0, >=10uM) the hERG channel.
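The thresholding convention can be stated as a one-line labelling function (illustrative sketch of the card's rule, not part of TDC):

```python
def herg_label(potency_um):
    """Binary hERG label per the card's threshold:
    1 = blocker (< 10 uM), 0 = non-blocker (>= 10 uM)."""
    return 1 if potency_um < 10.0 else 0

print(herg_label(5.0), herg_label(42.0))  # 1 0
```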
## Dataset statistics
Total: 13445; Train_val: 12620; Test: 825
## Dataset split:
Random split on 70% training, 10% validation, and 20% testing
To load the dataset in TDC, type
```python
from tdc.single_pred import Tox
data = Tox(name = 'herg_karim')
```
## Model description
Morgan chemical fingerprint with an MLP decoder. The model is tuned over 100 runs using the Ax platform.
To load the pre-trained model, type
```python
from tdc import tdc_hf_interface
tdc_hf_herg = tdc_hf_interface("hERG_Karim_Morgan")
# load deeppurpose model from this repo
dp_model = tdc_hf_herg.load_deeppurpose('./data')
dp_model.predict('YOUR SMILES STRING')
```
## References:
[1] Karim, A., et al. CardioTox net: a robust predictor for hERG channel blockade based on deep learning meta-feature ensembles. J Cheminform 13, 60 (2021). https://doi.org/10.1186/s13321-021-00541-z
| 985e9022eb34ef3e45220600e690abfc |
calcworks/distilbert-base-uncased-distilled-clinc | calcworks | distilbert | 10 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['clinc_oos'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,787 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1004
- Accuracy: 0.9410
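The card does not document the distillation setup, but the model name suggests knowledge distillation from a fine-tuned teacher. A minimal sketch of the usual temperature-scaled distillation loss, written in plain Python for clarity (the logits and temperature below are illustrative, not taken from this model):

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over temperature-scaled logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 as in the standard distillation recipe."""
    p = softmax(teacher_logits, temperature)   # teacher soft targets
    q = softmax(student_logits, temperature)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# identical logits -> zero loss; diverging logits -> positive loss
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # 0.0
```

In practice this term is usually combined with the ordinary cross-entropy on the hard labels via a weighting factor.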
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9037 | 1.0 | 318 | 0.5745 | 0.7326 |
| 0.4486 | 2.0 | 636 | 0.2866 | 0.8819 |
| 0.2537 | 3.0 | 954 | 0.1794 | 0.9210 |
| 0.1762 | 4.0 | 1272 | 0.1387 | 0.9294 |
| 0.1419 | 5.0 | 1590 | 0.1210 | 0.9358 |
| 0.1247 | 6.0 | 1908 | 0.1119 | 0.9413 |
| 0.1138 | 7.0 | 2226 | 0.1067 | 0.9387 |
| 0.1078 | 8.0 | 2544 | 0.1026 | 0.9423 |
| 0.1043 | 9.0 | 2862 | 0.1010 | 0.9413 |
| 0.102 | 10.0 | 3180 | 0.1004 | 0.9410 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 539f96e578cf7b2006370f2edb134ad1 |
cwinkler/distilbert-base-uncased-finetuned-greenplastics-2 | cwinkler | distilbert | 12 | 8 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,349 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-greenplastics-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0162
- Accuracy: 0.9958
- F1: 0.9958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0289 | 1.0 | 123 | 0.0238 | 0.9949 | 0.9949 |
| 0.0112 | 2.0 | 246 | 0.0162 | 0.9958 | 0.9958 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 5fe3e00473531e178685425d64b3a96b |
henilp105/wav2vec2-large-xls-r-300m-telugu-asr | henilp105 | wav2vec2 | 25 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,117 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-telugu-asr
This model is a fine-tuned version of [henilp105/wav2vec2-large-xls-r-300m-telugu-asr](https://huggingface.co/henilp105/wav2vec2-large-xls-r-300m-telugu-asr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1050
- Wer: 0.6656
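The word error rate (WER) reported above is the word-level edit distance between the model's hypothesis and the reference transcript, divided by the number of reference words. A self-contained sketch of the metric:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / #ref words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat"))      # one substitution + one deletion -> 2/3
```

A WER of 0.6656 thus means roughly two word errors for every three reference words.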
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
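The `linear` scheduler with 500 warmup steps means the learning rate ramps from 0 to the peak (0.0003) over the first 500 steps, then decays linearly to 0 by the final step. A sketch of that schedule (the total-step count of 2600 below is illustrative, matching the last logged step):

```python
def linear_schedule_with_warmup(step, peak_lr=3e-4, warmup_steps=500, total_steps=2600):
    """Linear warmup followed by linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / (total_steps - warmup_steps)

print(linear_schedule_with_warmup(0))     # 0.0
print(linear_schedule_with_warmup(500))   # peak (~3e-4)
print(linear_schedule_with_warmup(2600))  # 0.0 again
```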
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.0506 | 2.3 | 200 | 0.8841 | 0.7564 |
| 0.6354 | 4.59 | 400 | 0.7448 | 0.6912 |
| 0.3934 | 6.89 | 600 | 0.8321 | 0.6929 |
| 0.2652 | 9.19 | 800 | 0.9529 | 0.6984 |
| 0.2022 | 11.49 | 1000 | 0.9490 | 0.6979 |
| 0.1514 | 13.79 | 1200 | 1.0025 | 0.6869 |
| 0.124 | 16.09 | 1400 | 1.0367 | 0.6799 |
| 0.1007 | 18.39 | 1600 | 1.0658 | 0.6734 |
| 0.0875 | 20.69 | 1800 | 1.0758 | 0.6779 |
| 0.0838 | 22.98 | 2000 | 1.0999 | 0.6701 |
| 0.0745 | 25.29 | 2200 | 1.1020 | 0.6708 |
| 0.0641 | 27.58 | 2400 | 1.1140 | 0.6683 |
| 0.0607 | 29.88 | 2600 | 1.1050 | 0.6656 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
| eb50cc91bf49232b36e087dcf86ce8b3 |
SweetLuna/Kenshi | SweetLuna | null | 12 | 0 | diffusers | 83 | text-to-image | false | false | false | creativeml-openrail-m | ['en'] | null | null | 0 | 0 | 0 | 0 | 4 | 0 | 4 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers'] | false | true | true | 13,971 | false |
# <h1 style="font-size: 4em; text-align: center; color:black; font-family: Segoe UI"> <a href="https://huggingface.co/SweetLuna/Kenshi/blob/main/README.md" style="text-decoration: none; background-color: transparent;">Kenshi</a> </h1>
<a href="https://lensdump.com/i/RL8CTQ"><img src="https://i1.lensdump.com/i/RXYEm2.png" alt="RXYEm2.png" onclick="window.open('https://i1.lensdump.com/i/RXYEm2.png', '_blank')"></a>
<h4 style="font-size: 1em; text-align: center;"><p style="color: black;">“Do I hide or do I roam? That indecision… Now the world has changed and I’ve missed it all.”</p></h4>
---
### <h1 style="font-size: 1.75em; font-family: Segoe UI">[FULLSCREEN](https://huggingface.co/SweetLuna/Kenshi/blob/main/README.md) | [Demo (Discord Server)](https://discord.gg/pD9MKyBgNp)</h1>
<hr>
### <h1 style="font-size: 1.75em; font-family: Segoe UI">[CivitAI](https://civitai.com/models/3850) | [Download](https://huggingface.co/SweetLuna/Kenshi/tree/main/KENSHI%2001) | [Changelog](https://huggingface.co/SweetLuna/Kenshi/blob/main/Changelog.md)</h1>
<hr>
<style>#▼-preamble {
font-size: 2em;
}</style>
<details id="#contents">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🧧 Contents</strong></summary>
<hr>
# <h1 style="font-size: 1.5em;"><strong>
- [🏮 Preamble](#▼-preamble)<p>
- [⚙️ Usage](#▼-usage)<p>
- [🎨 Versatility](#▼-versatility)<p>
- [🥢 VAE [ IMPORTANT ! ]](#▼-vae)<p>
- [🏔️ Examples Images ](#▼-sample)
- [The Celestial ☄️](#▼-celestial)
- [ChatGPT Prompt ⚙️](#▼-chatgpt)
- [Vivid 🌈](#▼-vivid)
- [Moon 🌙](#▼-moon)<p>
- [🌏 Demo](#▼-demo)<p>
- [🍣 Merge Recipes](#▼-merge)<p>
- [💡 Suggestions](#▼-suggestions)
- [Trigger Words](#trigger-words)
- [WebUI](#webui)
- [VAE](#vae)
- [Embeddings](#embeddings)<p>
- [💛 Donate](#▼-donation)<p>
- [License](#license)<p>
- [Disclaimer](#disclaimer)
</strong>
</h1>
</details>
<hr>
<details id="▼-preamble">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🏮 What is Kenshi?</strong></summary>
<hr>
<h1>
**Kenshi** is my personal merge, created by combining different models together. ***This includes models such as Nixeu, WLOP, Guweiz, BoChen, and many others.***
```TypeScript
My goal is to archive my own feelings towards styles I want for Semi-realistic artstyle.
Through this process, I hope not only to gain a deeper understanding of my own preferences, but also to inform and refine the capabilities of my personal skills,
and AI Art as it generates artwork that reflects my desired style.
```
I named it Kenshi because the name represents strength, resilience, and the ability to adapt and overcome challenges. Just like AI.
</h1>
</details>
<hr>
<details id="▼-usage">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>⚙️ Usage</strong></summary>
<hr>
<h1>
## <h1 style="font-size: 1.5em; text-align: center; color:black; font-family: Segoe UI"> These are the settings I always use; they are recommended but not essential:
| Settings | Value |
| ----------------- | ------------------------------------------------------------------ |
| Steps | 20+ |
| Sampler | DPM++ 2M Karras |
| CFG scale | 2-7 |
| Size              | 600x800                                                            |
| Clip skip | 2 |
| ENSD | 31337 |
| Hires Fix | Enabled |
| Upscale by | 1.5 |
| Upscaler Fix | https://de-next.owncube.com/index.php/s/x99pKzS7TNaErrC |
Kenshi is not recommended for new users, since it requires a lot of prompting to work with. If you still want to use the model, I suggest installing this tag-autocomplete extension for the Automatic1111 WebUI: https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
</h1>
</h1>
<center><a href="https://i2.lensdump.com/i/TAbhx1.png"><img src="https://i2.lensdump.com/i/TAbhx1.png" alt="TAbhx1.png" onclick="window.open('https://i2.lensdump.com/i/TAbhx1.png', '_blank')"></a></center>
</details>
<hr>
<details id="▼-versatility">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🎨 Versatility</strong></summary>
<hr>
<h1>
## Unlike most models, Kenshi is known for its versatility and can reproduce a wide range of styles. I have tested it with 30 to 50 styles, and most of the time the results are remarkable. I recommend using LoRA and embeddings to improve this even further.
<center><a href="https://i2.lensdump.com/i/TAxjOD.png"><img src="https://i2.lensdump.com/i/TAxjOD.png" alt="TAxjOD.png" onclick="window.open('https://i2.lensdump.com/i/TAxjOD.png', '_blank')"></a></center>
</details>
<hr>
<details id="▼-vae">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🥢 VAE ⚠️</strong></summary>
<hr>
<h1>
## I recommend <a href="https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt" >**kl-f8-anime2.ckpt**</a> VAE from waifu-diffusion-v1-4 <a href="https://huggingface.co/hakurei">which is made by hakurei.</a>
</h1>
<a href="https://i2.lensdump.com/i/RbBe37.png"><img src="https://i2.lensdump.com/i/RbBe37.png" alt="RbBe37.png" onclick="window.open('https://i2.lensdump.com/i/RbBe37.png', '_blank')"></a>
# <h1 style="font-size: 2.5em;"><a href="https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt" >**VAE is important, please download it.**</h1></a>
</details>
<hr>
<details id="▼-sample">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🏔️ Examples Images</strong></summary><hr>
<details id="▼-celestial">
<summary style="font-size: 1.75em; font-family: monospace"><strong>The Celestial ☄️</strong></summary>
<img src="https://i3.lensdump.com/i/RLEz8M.png" alt="1">
<h1>
```c#
1girl, highly detailed face, bleak and dangerous atmosphere, moody, (dynamic pose:1.6), cataclysmic magic, dark blue wavy long hair,
(glowing eyes:0.85), (reaching through a magic circle:1.35), extremely detailed 8k wallpaper, (highly detailed:1.1), [anime:Impasto:0.5],
intricate, fantasy, clear sky, wind, beautiful sky, (nightsky), (galaxy), (huge blood moon in the background:1.05)
```
# **KENSHI 00**
</details>
<hr>
<details id="▼-chatgpt">
<summary style="font-size: 1.75em; font-family: monospace"><strong>ChatGPT Prompt ⚙️</strong></summary>
<img src="https://i.lensdump.com/i/RLkz3v.png" alt="2">
<img src="https://i1.lensdump.com/i/RLkFND.png" alt="3">
<img src="https://i3.lensdump.com/i/RLkulr.png" alt="4">
```c#
(A cursed knight, clad in black armor,) must journey through a desolate,
haunted land to reach the Elden Ring and lift the (curse that plagues their soul.)Along the way,
they encounter other travelers, (each struggling with their own demons and secrets), As they draw closer to the Elden Ring,
they are confronted with visions of their past mistakes, (all tinged with a red hue,)
looking at viewer, highres, superb, 8k wallpaper, extremely detailed, intricate, unreal engine 5, volumetric lighting, realistic, realistic lighting,
cinematic, 4k, cinematic lighting, 8k, depth of field, 3d, perfect, award-winning, hyper-detailed, photorealistic, ultra realistic, realistic light,
hard lighting, intricate details, stop motion, hyperfocus, tonemapping, sharp focus, hyper detailed, detailed eyes, eyes focus, (illustration:1.1),
highres, (extremely detailed CG unity 8k wallpaper:1.1), (beautiful face:1.15), (cowboy_shot:1.5)
(nixeu_soft:0.7), (nixeu_white:0.7),
```
# **KENSHI 00**
</details>
<hr>
<details id="▼-vivid">
<summary style="font-size: 1.75em; font-family: monospace"><strong>Vivid 🌈</strong></summary>
<img src="https://i.lensdump.com/i/RXY1Fo.png" alt="5">
```c#
close POV, young adult woman, blue purple green color palette, black hair with dark green shine, two symmetrical antennae on head,
big blue eyes sparkling, rings around eyes, two-tone black and red, smiling at the camera, elegant pose, looking at the viewer,
vivid stained glass window background, oil painting, character portrait, drawn in medibang paint, 4k wallpaper, aesthetic, masterpiece,
award-winning photography, macro photography vivid colors, photorealistic, atmospheric, cinematic, moody, rule of thirds, majestic, detailed, perfect anatomy
cowboy shot, contrapposto, looking at viewer, highres, superb, 8k wallpaper, extremely detailed, intricate, unreal engine 5, volumetric lighting,
realistic, realistic lighting, cinematic, 4k, cinematic lighting, 8k, depth of field, 3d, masterpiece, perfect, award-winning, hyper-detailed,
photorealistic, ultra realistic, realistic light, hard lighting, intricate details, stop motion, hyperfocus, tonemapping, sharp focus, hyper detailed,
detailed eyes, eyes focus, (illustration:1.1), highres, (extremely detailed CG unity 8k wallpaper:1.1), (mid shot1.25), (portrait:1.25), (solo:1.2), 1girl,
(beautiful face:1.15),
(nixeu_soft:0.7), (nixeu_white:0.7),
```
# **KENSHI 01**
</details>
<hr>
<details id="▼-moon">
<summary style="font-size: 1.75em; font-family: monospace"><strong>Moon 🌙</strong></summary>
<img src="https://i2.lensdump.com/i/RXYt7i.png" alt="6">
```c#
(on the moon, space, looking back into earth), white hair, black tank top, volumetric lighting, white jacket, glowing headphone, cyberpunk, futuristic,
multi-color eyes, detailed eyes, hyper detailed,light smile,
highly detailed, beautiful, small details, ultra detailed, best quality, intricate, hyperrealism, sharp, digital illustration, detailed, realism, intricate,
4k, 8k, trending on artstation, good anatomy, beautiful lighting, award-winning, photorealistic, realistic shadows, realistic lighting, beautiful lighting,
raytracing, intricate details, moody, rule of thirds, masterpiece, (illustration:1.1), highres, (extremely detailed CG, unity, 8k wallpaper:1.1), beautiful face,
highly detailed face, ultra realistic, masterpiece, bokeh, extremely detailed, intricate, zoomout,
colorful, vibrant colors, red nail polish, side view,
```
# **KENSHI 01**
</details>
</details>
<hr>
</h1>
<details id="▼-demo">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🌏 Demo</strong></summary>
<hr>
### <h1 style="font-size: 2em;">Test out Kenshi on <a href="https://discord.gg/pD9MKyBgNp">Discord</a> in the #garden_1-kenshi channel</h1>
<a href="https://discord.gg/pD9MKyBgNp"><img src="https://i.lensdump.com/i/RwAkqx.png" alt="RwAkqx.png" border="0" /></a>
</details>
<hr>
<details id="▼-merge">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🍣 Merge Recipes</strong></summary>
<hr>
<h1><strong>
<a href="
https://www.figma.com/file/aESyZAxHxBJjE63gog5ExZ/KENSHI?node-id=0%3A1&t=2ULQWeLUSIWhk1aE-0" class="no-underline" style="font-size: 1.75em;">Here is my Cookbook that you can check out.
<img src="https://i2.lensdump.com/i/RLCJIH.png" alt="COOKBOOK"></strong>
</h1>
</a>
</details>
<hr>
<details id="▼-donation">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>💛 Donate</strong></summary>
<hr>
<h1><strong>
I've been working hard to complete my college education. The thing is, paying for college is no joke and I've been feeling the pressure of the mounting bills.
I know times are tough for everyone, but if you're able to help in any way, I would be forever grateful.
Consider supporting me on <a href="https://www.patreon.com/thesweetluna">Patreon</a>
</h1>
</a>
</details>
<hr>
<details id="▼-suggestions">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>💡 Suggestions</strong></summary>
<hr>
## <h1 style="font-size: 1.75em;">Trigger Words</h1>
<hr>
<h1 style="font-size: 1.5em;">
**Trigger Words are not required** but are meant to **enhance the effectiveness of the prompt** and improve the overall outcome.
```c#
WLOP, Nixeu, Guweiz
```
</h1>
<hr>
## <h1 style="font-size: 1.75em;">WebUI</h1>
<hr>
<h1 style="font-size: 1.5em;">
<a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">AUTOMATIC1111</a> Grab it; it's a must-have. It has all the features you want and is easy to access.
<hr>
</h1>
## <h1 style="font-size: 1.75em;">Embeddings</h1>
<hr>
<h1 style="font-size: 1.5em;">
I recommend grabbing ***all*** <a href="https://huggingface.co/Nerfgun3">Nerfgun3</a> embeddings ***and*** Sirveggie <a href="https://huggingface.co/SirVeggie/nixeu_embeddings">nixeu_embeddings</a>
</h1>
</details>
<hr>
# License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
```
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
```
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
<hr>
# Disclaimer
```c#
The use of this learning model is entirely at the discretion of the user, and they have the freedom to choose whether or not to create NSFW content.
It is important to note that the model itself does not contain any explicit or inappropriate imagery that can be easily accessed with a single click.
The purpose of sharing this model is not to showcase obscene material in a public forum, but rather to provide a tool for users to utilize as they see fit.
The decision of whether to engage with SFW or NSFW content lies with the user and their own personal preferences.
``` | 1ed83da1a80c8ee1b2672af0117184f0 |
gokceuludogan/ChemBERTaLM | gokceuludogan | roberta | 12 | 3 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['molecule-generation', 'cheminformatics', 'biochemical-language-models'] | false | true | true | 1,563 | false |
## ChemBERTaLM
A molecule generator model finetuned from the [ChemBERTa](https://huggingface.co/seyonec/PubChem10M_SMILES_BPE_450k) checkpoint. It was introduced in the paper "Exploiting pretrained biochemical language models for
targeted drug design", published in *Bioinformatics* (Oxford University Press), and first released in [this repository](https://github.com/boun-tabi/biochemical-lms-for-drug-design).
ChemBERTaLM is a RoBERTa model initialized with [ChemBERTa](https://huggingface.co/seyonec/PubChem10M_SMILES_BPE_450k) checkpoint, and then, finetuned on the MOSES dataset which comprises a collection of drug-like compounds.
## How to use
```python
from transformers import RobertaForCausalLM, RobertaTokenizer, pipeline
tokenizer = RobertaTokenizer.from_pretrained("gokceuludogan/ChemBERTaLM")
model = RobertaForCausalLM.from_pretrained("gokceuludogan/ChemBERTaLM")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
generator("", max_length=128, do_sample=True)
# Sample output
[{'generated_text': 'Cc1ccc(C(=O)N2CCN(C(=O)c3ccc(F)cc3)CC2)cc1'}]
```
## Citation
```bibtex
@article{10.1093/bioinformatics/btac482,
author = {Uludoğan, Gökçe and Ozkirimli, Elif and Ulgen, Kutlu O. and Karalı, Nilgün Lütfiye and Özgür, Arzucan},
title = "{Exploiting Pretrained Biochemical Language Models for Targeted Drug Design}",
journal = {Bioinformatics},
year = {2022},
doi = {10.1093/bioinformatics/btac482},
url = {https://doi.org/10.1093/bioinformatics/btac482}
}
``` | de4051d0cb11fa78e05c5bc7dedf1b69 |
jbetker/tortoise-tts-finetuned-lj | jbetker | null | 9 | 0 | null | 1 | null | false | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 2 | 1 | 1 | [] | false | true | true | 532 | false |
This repository holds the finetuned weights for Tortoise v2 for the LJSpeech voice. It is a
good demonstration of how powerful fine-tuning Tortoise can be.
Usage:
- Clone Tortoise, jbetker/tortoise-tts-v2 or https://github.com/neonbjb/tortoise-tts
- Clone this repo to download weights
- Run any Tortoise script with the flag `--model_dir=<path_to_where_you_cloned_this_repo>/models` and `--voice=lj`
- For fine-tuned models, I recommend using the `high_quality` preset. Faster rendering modes can exhibit artifacts in the output. | 0765116cfa7351ecf881f9d7aa5222b7 |
rvidaurre/ddpm-butterflies-128 | rvidaurre | null | 13 | 0 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,231 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/rvidaurre/ddpm-butterflies-128/tensorboard?#scalars)
| eaae0d5c7c55a754815ecffdda5d1d07 |
haanba/hayashida-tamaki-gfkari-concept | haanba | null | 28 | 0 | null | 0 | text-to-image | false | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 5,476 | false |
# Hayashida Tamaki (GF Kari) on Waifu Diffusion v1.3.5
This is the `<wd135-hayashida-tamaki-gfkari>` concept taught to [Waifu Diffusion v1.3.5](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/models/wd-1-3-5_80000-fp32.ckpt) via Textual Inversion.
## Credits
The model card follows the format commonly used by concepts stored at [Hugging Face SD Concepts Library](https://huggingface.co/sd-concepts-library).
The training images were taken from [GF Kari Database](https://gfkari.gamedbs.jp/).
## Concept Images
Here is the new concept you will be able to use as an `object`:





## Output Examples
!["best quality masterpiece, <wd135-hayashida-tamaki-gfkari> collarbone bare shoulders bare legs shiny skin standing, frilled white summer dress, ocean beach sunny day sunlight, cowboy shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn] " -s 64 -S 4020064356 -W 512 -H 768 -C 12 -A k_dpmpp_2](./examples/000049.d1659a74.4020064356.png)
```json
{
"model": "stable diffusion",
"model_weights": "waifu-diffusion-1.3.5",
"model_hash": "b438efac4434af4e482d20cdfcea64067f8dfec438628261d2f2aa60ffc41452",
"app_id": "invoke-ai/InvokeAI",
"app_version": "2.2.5",
"image": {
"prompt": [
{
"prompt": "best quality masterpiece, <wd135-hayashida-tamaki-gfkari> collarbone bare shoulders bare legs shiny skin standing, frilled white summer dress, ocean beach sunny day sunlight, cowboy shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn] ",
"weight": 1
}
],
"steps": 64,
"cfg_scale": 12,
"threshold": 0,
"perlin": 0,
"height": 768,
"width": 512,
"seed": 4020064356,
"seamless": false,
"hires_fix": false,
"type": "txt2img",
"postprocessing": null,
"sampler": "k_dpmpp_2",
"variations": []
}
}
```
!["best quality masterpiece, <wd135-hayashida-tamaki-gfkari> collarbone bare shoulders bare legs shiny skin standing, frilled white summer dress, ocean beach sunny day sunlight, cowboy shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn] " -s 64 -S 4020064356 -W 512 -H 768 -C 12 -A k_dpmpp_2_a](./examples/000050.e968c45d.4020064356.png)
```json
{
"model": "stable diffusion",
"model_weights": "waifu-diffusion-1.3.5",
"model_hash": "b438efac4434af4e482d20cdfcea64067f8dfec438628261d2f2aa60ffc41452",
"app_id": "invoke-ai/InvokeAI",
"app_version": "2.2.5",
"image": {
"prompt": [
{
"prompt": "best quality masterpiece, <wd135-hayashida-tamaki-gfkari> collarbone bare shoulders bare legs shiny skin standing, frilled white summer dress, ocean beach sunny day sunlight, cowboy shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn] ",
"weight": 1
}
],
"steps": 64,
"cfg_scale": 12,
"threshold": 0,
"perlin": 0,
"height": 768,
"width": 512,
"seed": 4020064356,
"seamless": false,
"hires_fix": false,
"type": "txt2img",
"postprocessing": null,
"sampler": "k_dpmpp_2_a",
"variations": []
}
}
```
!["best quality masterpiece, <wd135-hayashida-tamaki-gfkari>, school uniform collared shirt pleated skirt, lying on back on bed, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]" -s 64 -S 3329130038 -W 512 -H 768 -C 12 -A k_dpmpp_2](./examples/000061.00d75711.3329130038.png)
```json
{
"model": "stable diffusion",
"model_weights": "waifu-diffusion-1.3.5",
"model_hash": "b438efac4434af4e482d20cdfcea64067f8dfec438628261d2f2aa60ffc41452",
"app_id": "invoke-ai/InvokeAI",
"app_version": "2.2.5",
"image": {
"prompt": [
{
"prompt": "best quality masterpiece, <wd135-hayashida-tamaki-gfkari>, school uniform collared shirt pleated skirt, lying on back on bed, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]",
"weight": 1
}
],
"steps": 64,
"cfg_scale": 12,
"threshold": 0,
"perlin": 0,
"height": 768,
"width": 512,
"seed": 3329130038,
"seamless": false,
"hires_fix": false,
"type": "txt2img",
"postprocessing": null,
"sampler": "k_dpmpp_2",
"variations": []
}
}
```
## License
[MIT](./LICENSE).
| 08e2ec20d5dae380d86554a95a8f7b62 |
muhtasham/small-vanilla-target-tweet | muhtasham | bert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['tweet_eval'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,563 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-vanilla-target-tweet
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8718
- Accuracy: 0.7540
- F1: 0.7525
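The F1 above can be computed per class and then averaged. A minimal macro-F1 sketch in plain Python (illustrative; the card does not state which averaging was used):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

print(macro_f1([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
```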
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5858 | 4.9 | 500 | 0.8189 | 0.7380 | 0.7364 |
| 0.1039 | 9.8 | 1000 | 1.1965 | 0.7594 | 0.7568 |
| 0.0264 | 14.71 | 1500 | 1.5387 | 0.7433 | 0.7460 |
| 0.0142 | 19.61 | 2000 | 1.6758 | 0.7620 | 0.7551 |
| 0.0113 | 24.51 | 2500 | 1.8718 | 0.7540 | 0.7525 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
| 72144fe53d53dc5340f2cd1656d5114e |
shivam/wav2vec2-xls-r-300m-hindi | shivam | wav2vec2 | 30 | 2 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['hi'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer'] | true | true | true | 3,085 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4031
- Wer: 0.6827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
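Gradient accumulation explains the `total_train_batch_size` above: gradients from 4 micro-batches of 8 are combined before each optimizer step, giving an effective batch of 32. A toy sketch of why this matches one large batch (plain Python; the loss and data are illustrative):

```python
def grad(batch, w):
    """Toy gradient of mean squared error for y = w * x on (x, y) pairs."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def accumulate_and_step(micro_batches, w, lr=0.01):
    """Average micro-batch gradients, then apply one optimizer step."""
    g = sum(grad(mb, w) for mb in micro_batches) / len(micro_batches)
    return w - lr * g

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # true w = 2
micro = [data[:2], data[2:]]                             # accumulation_steps = 2
w_accum = accumulate_and_step(micro, w=0.0)
w_big = 0.0 - 0.01 * grad(data, 0.0)                     # one big batch of 4
print(w_accum, w_big)  # identical updates
```

This trades memory for time: each optimizer step costs several forward/backward passes but only ever holds one micro-batch in memory.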
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.3156 | 3.4 | 500 | 4.5583 | 1.0 |
| 3.3329 | 6.8 | 1000 | 3.4274 | 1.0001 |
| 2.1275 | 10.2 | 1500 | 1.7221 | 0.8763 |
| 1.5737 | 13.6 | 2000 | 1.4188 | 0.8143 |
| 1.3835 | 17.01 | 2500 | 1.2251 | 0.7447 |
| 1.3247 | 20.41 | 3000 | 1.2827 | 0.7394 |
| 1.231 | 23.81 | 3500 | 1.2216 | 0.7074 |
| 1.1819 | 27.21 | 4000 | 1.2210 | 0.6863 |
| 1.1546 | 30.61 | 4500 | 1.3233 | 0.7308 |
| 1.0902 | 34.01 | 5000 | 1.3251 | 0.7010 |
| 1.0749 | 37.41 | 5500 | 1.3274 | 0.7235 |
| 1.0412 | 40.81 | 6000 | 1.2942 | 0.6856 |
| 1.0064 | 44.22 | 6500 | 1.2581 | 0.6732 |
| 1.0006 | 47.62 | 7000 | 1.2767 | 0.6885 |
| 0.9518 | 51.02 | 7500 | 1.2966 | 0.6925 |
| 0.9514 | 54.42 | 8000 | 1.2981 | 0.7067 |
| 0.9241 | 57.82 | 8500 | 1.3835 | 0.7124 |
| 0.9059 | 61.22 | 9000 | 1.3318 | 0.7083 |
| 0.8906 | 64.62 | 9500 | 1.3640 | 0.6962 |
| 0.8468 | 68.03 | 10000 | 1.4727 | 0.6982 |
| 0.8631 | 71.43 | 10500 | 1.3401 | 0.6809 |
| 0.8154 | 74.83 | 11000 | 1.4124 | 0.6955 |
| 0.7953 | 78.23 | 11500 | 1.4245 | 0.6950 |
| 0.818 | 81.63 | 12000 | 1.3944 | 0.6995 |
| 0.7772 | 85.03 | 12500 | 1.3735 | 0.6785 |
| 0.7857 | 88.43 | 13000 | 1.3696 | 0.6808 |
| 0.7705 | 91.84 | 13500 | 1.4101 | 0.6870 |
| 0.7537 | 95.24 | 14000 | 1.4178 | 0.6832 |
| 0.7734 | 98.64 | 14500 | 1.4027 | 0.6831 |
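The Wer column above reports word error rate: the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal sketch of that computation (illustrative only — not the evaluation code used for this run):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # deletions only
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # insertions only
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)
```

A WER of 0.6827 therefore means roughly 68 word errors per 100 reference words.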
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
| 435b2bc389d70f1c34c55de1d4e98a64 |
MultiBertGunjanPatrick/multiberts-seed-1-300k | MultiBertGunjanPatrick | bert | 7 | 2 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert', 'multiberts', 'multiberts-seed-1'] | false | true | true | 6,483 | false | # MultiBERTs Seed 1 Checkpoint 300k (uncased)
This is the seed-1 MultiBERTs (pretrained BERT) model at intermediate checkpoint 300k, pretrained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs is a family of transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model, which has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-300k')
model = BertModel.from_pretrained("multiberts-seed-1-300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
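A minimal sketch of this pairing procedure (illustrative only: `make_nsp_pair`, the toy corpus, and the label names are assumptions, and a real pipeline would also enforce the 512-token budget and WordPiece tokenization):

```python
import random

def make_nsp_pair(sentences, i, rng):
    """Build one next-sentence-prediction example from sentence i of a corpus.

    With probability 0.5 the pair is consecutive ("IsNext"); otherwise the
    second segment is a random sentence from the corpus ("NotNext").
    """
    sent_a = sentences[i]
    if rng.random() < 0.5 and i + 1 < len(sentences):
        sent_b, label = sentences[i + 1], "IsNext"       # consecutive span
    else:
        sent_b, label = rng.choice(sentences), "NotNext"  # random sentence
    return f"[CLS] {sent_a} [SEP] {sent_b} [SEP]", label
```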
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
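As a concrete toy illustration of this 15% / 80-10-10 masking scheme, a minimal Python sketch (the vocabulary is made up, and plain whitespace tokens stand in for WordPiece):

```python
import random

def mask_tokens(tokens, rng, mask_prob=0.15):
    """Apply the 15% / 80-10-10 masking scheme described above (toy version)."""
    vocab = ["cat", "dog", "mat", "sat", "ran"]   # made-up vocabulary
    out, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:              # select 15% of the tokens
            labels.append(tok)                    # the model must predict these
            r = rng.random()
            if r < 0.8:                           # 80%: replace with [MASK]
                out.append("[MASK]")
            elif r < 0.9:                         # 10%: random different token
                out.append(rng.choice([v for v in vocab if v != tok]))
            else:                                 # 10%: leave the token as is
                out.append(tok)
        else:
            labels.append(None)                   # unselected: no prediction
            out.append(tok)
    return out, labels
```

Keeping 10% of the selected tokens unchanged discourages the model from assuming that every observed token is correct.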
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
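The schedule described above (linear warmup for 10,000 steps, then linear decay) can be sketched as follows; decaying to exactly zero at two million steps is an assumption made for illustration:

```python
def learning_rate(step, base_lr=1e-4, warmup_steps=10_000, total_steps=2_000_000):
    """Linear warmup for the first `warmup_steps`, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * (step / warmup_steps)             # warmup phase
    frac = (total_steps - step) / (total_steps - warmup_steps)
    return base_lr * max(frac, 0.0)                        # linear decay
```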
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| 463dae68b1b4449f51effbeca90b180f |
jha2ee/riffusion-model-db | jha2ee | null | 19 | 8 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 426 | false | ### riffusion_model-db Dreambooth model trained by jha2ee with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| bc030fd29afc646a8f2026653e30f82c |
DOOGLAK/Article_100v0_NER_Model_3Epochs_UNAUGMENTED | DOOGLAK | bert | 13 | 6 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['article100v0_wikigold_split'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,559 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_100v0_NER_Model_3Epochs_UNAUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v0_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6037
- Precision: 0.25
- Recall: 0.0003
- F1: 0.0005
- Accuracy: 0.7772
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 12 | 0.7472 | 0.0 | 0.0 | 0.0 | 0.7772 |
| No log | 2.0 | 24 | 0.6443 | 0.0 | 0.0 | 0.0 | 0.7772 |
| No log | 3.0 | 36 | 0.6037 | 0.25 | 0.0003 | 0.0005 | 0.7772 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| ed350286dfb119b39e60edfa346106cb |
ayameRushia/indobert-base-uncased-finetuned-indonlu-smsa | ayameRushia | bert | 10 | 5 | transformers | 0 | text-classification | true | false | false | mit | null | ['indonlu'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,252 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-base-uncased-finetuned-indonlu-smsa
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2277
- Accuracy: 0.9302
- F1: 0.9066
- Precision: 0.8992
- Recall: 0.9147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 344 | 0.3831 | 0.8476 | 0.7715 | 0.7817 | 0.7627 |
| 0.4167 | 2.0 | 688 | 0.2809 | 0.8905 | 0.8406 | 0.8699 | 0.8185 |
| 0.2624 | 3.0 | 1032 | 0.2254 | 0.9230 | 0.8842 | 0.9004 | 0.8714 |
| 0.2624 | 4.0 | 1376 | 0.2378 | 0.9238 | 0.8797 | 0.9180 | 0.8594 |
| 0.1865 | 5.0 | 1720 | 0.2277 | 0.9302 | 0.9066 | 0.8992 | 0.9147 |
| 0.1217 | 6.0 | 2064 | 0.2444 | 0.9262 | 0.8981 | 0.9013 | 0.8957 |
| 0.1217 | 7.0 | 2408 | 0.2985 | 0.9286 | 0.8999 | 0.9035 | 0.8971 |
| 0.0847 | 8.0 | 2752 | 0.3397 | 0.9278 | 0.8969 | 0.9090 | 0.8871 |
| 0.0551 | 9.0 | 3096 | 0.3542 | 0.9270 | 0.8961 | 0.9010 | 0.8924 |
| 0.0551 | 10.0 | 3440 | 0.3862 | 0.9222 | 0.8895 | 0.8970 | 0.8846 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| dbe97a9efa32c5385db200a01b37a57a |
Nadav/bert-base-historic-multilingual-cased-squad-en | Nadav | bert | 10 | 7 | transformers | 0 | question-answering | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,307 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-historic-multilingual-cased-squad-en
This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.881 | 1.0 | 4820 | 1.5507 |
| 1.5883 | 2.0 | 9640 | 1.5307 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
| 29211edcee141ad5835c41c7f8f4678f |
juancopi81/whisper-medium-es-train-valid-bs-64 | juancopi81 | whisper | 34 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,326 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Spanish
This model is a fine-tuned version of [juancopi81/whisper-medium-es](https://huggingface.co/juancopi81/whisper-medium-es) on the mozilla-foundation/common_voice_11_0 es dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2338
- Wer: 95.6181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1432 | 1.0 | 100 | 0.2338 | 95.6181 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 2b8a55001bbbd38686130a2f57994fc3 |
HoussemSaafi/esm2_t12_35M_UR50D-finetuned-ARG-classification | HoussemSaafi | esm | 7 | 3 | transformers | 0 | text-classification | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,038 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# esm2_t12_35M_UR50D-finetuned-ARG-classification
This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.0}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
| 024c8e8c7e255973e3221130a8685f8b |
sd-concepts-library/a-tale-of-two-empires | sd-concepts-library | null | 11 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | [] | false | true | true | 1,458 | false | ### A Tale of Two Empires on Stable Diffusion
This is the `<two-empires>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:






Source: Reddit [u/mandal0re](https://www.reddit.com/r/StarWars/comments/kg6ovv/i_like_to_photoshop_old_paintings_heres_my_a_tale/) | e1b6907b52cea99281168020019400e7 |
iksenburg/andiface | iksenburg | null | 38 | 30 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 2,532 | false | ### AndiFace Dreambooth model trained by iksenburg with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-512 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
iksen (use that on your prompt)

| ef2f4a6c444e392ea9cb95ed9549025d |
rheyaas/distilbert-base-uncased-finetuned-squad | rheyaas | distilbert | 12 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,284 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2167 | 1.0 | 5533 | 1.1654 |
| 0.9559 | 2.0 | 11066 | 1.1209 |
| 0.7532 | 3.0 | 16599 | 1.1576 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| aae41b29999d85c2583d0e0f71890fdc |
sd-concepts-library/morino-hon-style | sd-concepts-library | null | 27 | 0 | null | 13 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,134 | false | ### Morino hon Style on Stable Diffusion
This is the `<morino-hon>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:






















| ff8a54505835508d3cd525ede605b6a9 |
adityavithaldas/Fashion_Category_Classifier | adityavithaldas | null | 2 | 0 | null | 4 | null | false | false | false | cc-by-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 621 | false | This model uses the Deep Fashion dataset in order to create a category classifier among the 50 or so provided categories.
https://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html
This model leverages ViT (Vision Transformer), trained on the custom dataset and the roughly 50 categories to which the images are assigned. The objectives are to reach:
a. A top-5 accuracy above 90%
b. An overall accuracy above 70%.
In addition, we plan to build attribute extractors to pull out key attributes (primary color, checked pattern, sleeve, collar, etc.) as the project proceeds.
| 52a4e8c8ac52e03728700b7ed665961b |
jonatasgrosman/exp_w2v2t_pt_vp-it_s996 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pt'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'pt'] | false | true | true | 469 | false | # exp_w2v2t_pt_vp-it_s996
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| f35f65b4a76f7c2173e9d94bbdd8cf90 |
AbhirupGhosh/opus-mt-finetuned-en-hi | AbhirupGhosh | marian | 10 | 27 | transformers | 0 | translation | true | true | false | apache-2.0 | ['en', 'hi', 'multilingual'] | ['HindiEnglishCorpora'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['translation', 'Hindi', 'generated_from_keras_callback'] | false | true | true | 857 | false |
# opus-mt-finetuned-hi-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on [HindiEnglish Corpora](https://www.clarin.eu/resource-families/parallel-corpora)
## Model description
The model is a Transformer, similar to the architecture defined in [Attention Is All You Need](https://arxiv.org/abs/1706.03762?context=cs) by Vaswani et al.
## Training and evaluation data
More information needed
## Training procedure
The model was trained on 2 NVIDIA_TESLA_A100 GPU's on Google's vertex AI platform.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
| decfee092929bd54e8e731543cd947d3 |
troesy/distil-added-voca | troesy | distilbert | 13 | 8 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,251 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-added-voca
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 174 | 0.2577 |
| No log | 2.0 | 348 | 0.2488 |
| 0.2546 | 3.0 | 522 | 0.2515 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| 36adbcf79e21a60f134aaaa9000296cb |
Helsinki-NLP/opus-mt-bzs-sv | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-bzs-sv
* source languages: bzs
* target languages: sv
* OPUS readme: [bzs-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.sv | 30.7 | 0.489 |
| f86d0c8d3722effbe6e44921a6e93af6 |
premsuresh/bart-finetuned-iirc-prem-2 | premsuresh | bart | 8 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 960 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-iirc-prem-2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| fe41a540d4e6d530a70a8c419542a2dd |
liaad/srl-pt_bertimbau-base | liaad | bert | 9 | 119 | transformers | 1 | feature-extraction | true | true | true | apache-2.0 | ['multilingual', 'pt'] | ['PropBank.Br'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert-base-portuguese-cased', 'semantic role labeling', 'finetuned'] | false | true | true | 3,585 | false |
# BERTimbau base fine-tuned on Portuguese semantic role labeling
## Model description
This model is the [`neuralmind/bert-base-portuguese-cased`](https://huggingface.co/neuralmind/bert-base-portuguese-cased) fine-tuned on Portuguese semantic role labeling data. This is part of a project from which resulted the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_bertimbau-base")
model = AutoModel.from_pretrained("liaad/srl-pt_bertimbau-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Training procedure
The model was trained on the PropBank.Br datasets, using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | f2e6e1577919aa1d425ad4fe798f30a0 |
2NRC/Fake-New-Classifier | 2NRC | null | 5 | 0 | null | 0 | null | false | false | false | other | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 404 | false | Deep Learning for NLP: Training a text classification model to detect fake news articles!
Training and test dataset gotten from https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset
Dataset size = 44898 articles
Training set size = 35918 articles
Test set size = 8980 articles
Accuracy on the training set = 0.990394788128515
Accuracy on the test set = 0.983184855233853
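The long decimals above are consistent with whole-number counts of correct predictions over the stated split sizes. A quick sanity check (the counts 35573 and 8829 are inferred here, not stated in the card):

```python
# Inferred correct-prediction counts that reproduce the reported accuracies.
train_correct, train_total = 35573, 35918
test_correct, test_total = 8829, 8980

train_acc = train_correct / train_total
test_acc = test_correct / test_total
print(train_acc)  # ~0.990394788128515
print(test_acc)   # ~0.983184855233853
```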
| 7fe3d2f3c4ae8d56af94212128d12dfa |
Arnaudmkonan/xlm-roberta-base-finetuned-panx-de | Arnaudmkonan | xlm-roberta | 12 | 7 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- F1: 0.8637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
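The `linear` scheduler above decays the learning rate from 5e-05 down to zero over training. A minimal sketch of the implied schedule, assuming no warmup and taking the 1575 total steps from the results table below:

```python
# Linear decay from the configured peak LR to 0, assuming zero warmup steps.
base_lr = 5e-5
total_steps = 1575  # 3 epochs x 525 steps per epoch (from the results table)

def lr_at(step: int) -> float:
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(lr_at(0))     # 5e-05 at the start of training
print(lr_at(525))   # about 3.33e-05 after epoch 1
print(lr_at(1575))  # 0.0 at the end of training
```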
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2578 | 1.0 | 525 | 0.1562 | 0.8273 |
| 0.1297 | 2.0 | 1050 | 0.1330 | 0.8474 |
| 0.0809 | 3.0 | 1575 | 0.1343 | 0.8637 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| f19c2363cc1eca89485e7f715b21d4f4 |
Geotrend/distilbert-base-ro-cased | Geotrend | distilbert | 6 | 5 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | ['ro'] | ['wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,215 | false |
# distilbert-base-ro-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information, please see our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
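Why the accuracy is preserved can be seen with a toy sketch (not the authors' code): the smaller models keep only the vocabulary used by the target language, and each kept token's embedding row is copied unchanged, so representations are identical for text covered by the reduced vocabulary:

```python
import numpy as np

# Toy embedding matrix standing in for mBERT's ~119k-row token embeddings.
rng = np.random.default_rng(0)
full_embeddings = rng.standard_normal((119547, 8))

# Keep only the rows for token ids used by the target language (toy subset).
kept_token_ids = np.array([0, 101, 102, 500, 9000])
small_embeddings = full_embeddings[kept_token_ids]

# A kept token's vector is bit-for-bit identical in both models.
print(small_embeddings.shape)  # (5, 8)
```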
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-ro-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-ro-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any questions, feedback, or requests.
pere/nb-nn-dev | pere | null | 70 | 3 | null | 0 | translation | true | false | true | cc-by-4.0 | False | ['oscar'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,131 | false | # Norwegian mT5 - Translation Bokmål Nynorsk - Development
## Description
This is the development version of the Bokmål-Nynorsk translator. If you want a stable model, please use [this version](https://huggingface.co/pere/nb-nn-translation/) instead.
Here is an example of how to use the model from Python:
```python
# Import libraries
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained('pere/nb-nn-dev', from_flax=True)
tokenizer = AutoTokenizer.from_pretrained('pere/nb-nn-dev')

# Encode the text
text = "Hun vil ikke gi bort sine personlige data."
inputs = tokenizer.encode(text, return_tensors="pt")
outputs = model.generate(inputs, max_length=255, num_beams=4, early_stopping=True)

# Decode and print the result
print(tokenizer.decode(outputs[0]))
```
Or, if you prefer to use the pipeline instead:
```python
# Set up the pipeline
from transformers import pipeline
translator = pipeline("translation", model='pere/nb-nn-dev')
# Do the translation
text = "Hun vil ikke gi bort sine personlige data."
print(translator(text, max_length=255))
```
| 01ee26b1ddb8f6091672cd447d82d501 |
ksing193/t5-small-finetuned-wikisql | ksing193 | t5 | 12 | 4 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['wikisql'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,795 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikisql dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1245
- Rouge2 Precision: 0.8183
- Rouge2 Recall: 0.7262
- Rouge2 Fmeasure: 0.7624
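ROUGE-2 F-measure is the harmonic mean of precision and recall. The card's scores are averaged per example, so the harmonic mean of the aggregate precision and recall above does not exactly reproduce the reported 0.7624, but it lands nearby and serves as a sanity check:

```python
# Harmonic mean of the aggregate precision/recall above. The reported
# F-measure (0.7624) is slightly lower, likely because it is computed per
# example and then averaged, not derived from the aggregate scores.
precision, recall = 0.8183, 0.7262
f_measure = 2 * precision * recall / (precision + recall)
print(round(f_measure, 4))  # 0.7695
```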
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.1954 | 1.0 | 4049 | 0.1575 | 0.7934 | 0.7033 | 0.7386 |
| 0.1643 | 2.0 | 8098 | 0.1374 | 0.8083 | 0.7169 | 0.7529 |
| 0.1517 | 3.0 | 12147 | 0.1296 | 0.8135 | 0.7221 | 0.7581 |
| 0.1459 | 4.0 | 16196 | 0.1256 | 0.817 | 0.7254 | 0.7614 |
| 0.1414 | 5.0 | 20245 | 0.1245 | 0.8183 | 0.7262 | 0.7624 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| bbc2012cadcb565e36d94e74b7f36d5b |