modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
rasvob/distilbert-base-uncased-finetuned-cola | 2023-05-02T13:18:48.000Z | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | rasvob | null | null | rasvob/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-24T06:09:57 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: rasvob/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# rasvob/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1885
- Validation Loss: 0.5311
- Train Matthews Correlation: 0.5550
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
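The optimizer dict above encodes a `PolynomialDecay` learning-rate schedule with `power=1.0`, i.e. a linear decay from 2e-05 to 0.0 over 1,602 steps. A minimal sketch of that schedule, reconstructed from the config values (illustrative; not the Keras implementation itself):

```python
def polynomial_decay_lr(step, initial_lr=2e-05, end_lr=0.0, decay_steps=1602, power=1.0):
    """Learning rate at a given step under polynomial decay (linear when power=1.0)."""
    step = min(step, decay_steps)  # holds end_lr after decay_steps, since cycle=False
    remaining = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * remaining ** power + end_lr

print(polynomial_decay_lr(0))     # 2e-05 at the first step
print(polynomial_decay_lr(1602))  # 0.0 at the end of training
```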
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5163 | 0.4623 | 0.5139 | 0 |
| 0.3225 | 0.4522 | 0.5358 | 1 |
| 0.1885 | 0.5311 | 0.5550 | 2 |
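CoLA is conventionally scored with the Matthews correlation coefficient reported above. As a reminder of what that metric measures, a small sketch of MCC over binary confusion counts (illustrative only; not the evaluation code used for this card):

```python
from math import sqrt

def matthews_corrcoef(tp, tn, fp, fn):
    """MCC from binary confusion counts: +1 is perfect, 0 is chance, -1 is inverse."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

print(matthews_corrcoef(tp=5, tn=5, fp=0, fn=0))  # 1.0: perfect predictions
print(matthews_corrcoef(tp=1, tn=1, fp=1, fn=1))  # 0.0: no better than chance
```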
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,941 | [
[
-0.040557861328125,
-0.04345703125,
0.018096923828125,
0.00943756103515625,
-0.02740478515625,
-0.0027828216552734375,
-0.0085296630859375,
-0.006534576416015625,
0.0201873779296875,
0.002716064453125,
-0.044219970703125,
-0.04241943359375,
-0.0679931640625,
... |
StivenLancheros/bert-base-arabert-BioNER-EN-AR | 2023-04-24T08:03:01.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | StivenLancheros | null | null | StivenLancheros/bert-base-arabert-BioNER-EN-AR | 0 | 2 | transformers | 2023-04-24T07:16:49 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-arabert-BioNER-EN-AR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-arabert-BioNER-EN-AR
This model is a fine-tuned version of [StivenLancheros/bert-base-arabert-BioNER-EN](https://huggingface.co/StivenLancheros/bert-base-arabert-BioNER-EN) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4250
- Precision: 0.7143
- Recall: 0.8209
- F1: 0.7639
- Accuracy: 0.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
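"Adam with betas=(0.9,0.999) and epsilon=1e-08" refers to the standard Adam optimizer. A minimal sketch of one bias-corrected parameter update under those settings (illustrative; not the Trainer's implementation, which additionally applies the linear LR schedule):

```python
def adam_step(param, grad, m, v, t, lr=3e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update: moving averages of the gradient and its square, bias-corrected."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)  # bias correction for the first moment
    v_hat = v / (1 - beta2 ** t)  # bias correction for the second moment
    param -= lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# At step t=1 the bias-corrected step size is approximately lr, regardless of gradient scale
param, m, v = adam_step(param=0.0, grad=1.0, m=0.0, v=0.0, t=1)
```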
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6376 | 1.0 | 680 | 0.7457 | 0.4379 | 0.6384 | 0.5195 | 0.8242 |
| 0.4549 | 2.0 | 1360 | 0.7120 | 0.4878 | 0.7113 | 0.5787 | 0.8346 |
| 0.3214 | 3.0 | 2040 | 0.5576 | 0.5676 | 0.7529 | 0.6473 | 0.8749 |
| 0.2883 | 4.0 | 2720 | 0.5304 | 0.5916 | 0.7745 | 0.6708 | 0.8808 |
| 0.2596 | 5.0 | 3400 | 0.4942 | 0.6117 | 0.7884 | 0.6889 | 0.8906 |
| 0.2168 | 6.0 | 4080 | 0.5229 | 0.6204 | 0.7977 | 0.6979 | 0.8898 |
| 0.2105 | 7.0 | 4760 | 0.4630 | 0.6501 | 0.7935 | 0.7147 | 0.8999 |
| 0.1889 | 8.0 | 5440 | 0.5048 | 0.6407 | 0.8066 | 0.7141 | 0.8958 |
| 0.1714 | 9.0 | 6120 | 0.4538 | 0.6909 | 0.7986 | 0.7409 | 0.9105 |
| 0.1626 | 10.0 | 6800 | 0.4433 | 0.6912 | 0.8070 | 0.7446 | 0.9130 |
| 0.1559 | 11.0 | 7480 | 0.4282 | 0.7006 | 0.8054 | 0.7493 | 0.9144 |
| 0.1451 | 12.0 | 8160 | 0.4475 | 0.6978 | 0.8150 | 0.7519 | 0.9135 |
| 0.1384 | 13.0 | 8840 | 0.4535 | 0.6928 | 0.8215 | 0.7517 | 0.9145 |
| 0.1331 | 14.0 | 9520 | 0.4250 | 0.7143 | 0.8209 | 0.7639 | 0.9197 |
| 0.1282 | 15.0 | 10200 | 0.4350 | 0.7108 | 0.8237 | 0.7631 | 0.9200 |
| 0.1216 | 16.0 | 10880 | 0.4385 | 0.7096 | 0.8231 | 0.7621 | 0.9188 |
| 0.1195 | 17.0 | 11560 | 0.4376 | 0.7134 | 0.8275 | 0.7662 | 0.9204 |
| 0.1187 | 18.0 | 12240 | 0.4461 | 0.7092 | 0.8297 | 0.7647 | 0.9183 |
| 0.1159 | 19.0 | 12920 | 0.4359 | 0.7215 | 0.8264 | 0.7704 | 0.9219 |
| 0.1121 | 20.0 | 13600 | 0.4358 | 0.7198 | 0.8264 | 0.7694 | 0.9217 |
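The precision, recall, and F1 above are entity-level metrics for token classification (typically computed with a library such as seqeval). A minimal sketch of the entity-level calculation over span sets (illustrative; not the card's evaluation code, and the entity labels are hypothetical):

```python
def entity_scores(gold, pred):
    """Micro precision/recall/F1 over sets of (label, start, end) entity spans."""
    tp = len(gold & pred)  # a span counts only if label and boundaries all match
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {("Protein", 0, 2), ("Chemical", 5, 7)}
pred = {("Protein", 0, 2), ("Chemical", 5, 8)}  # one boundary error
print(entity_scores(gold, pred))  # (0.5, 0.5, 0.5)
```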
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
| 3,321 | [
[
-0.049285888671875,
-0.038177490234375,
0.0163726806640625,
0.001171112060546875,
-0.0089874267578125,
-0.0115509033203125,
-0.0016345977783203125,
-0.013458251953125,
0.042144775390625,
0.0267333984375,
-0.04595947265625,
-0.05340576171875,
-0.04693603515625,
... |
chinmayapani/t5-small-finetuned-multi-news-summerize | 2023-04-24T08:10:21.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | chinmayapani | null | null | chinmayapani/t5-small-finetuned-multi-news-summerize | 0 | 2 | transformers | 2023-04-24T08:00:48 | This model is a PyTorch model obtained by converting the checkpoint found in the official t5-small release.
It is the smallest pre-trained T5 variant and can be used for multiple tasks. This model is trained on multi-news data for text summarization. | 275 | [

[
-0.033599853515625,
-0.0193634033203125,
0.037506103515625,
-0.0153350830078125,
-0.0307464599609375,
-0.007534027099609375,
0.01032257080078125,
-0.01837158203125,
0.01146697998046875,
0.029205322265625,
-0.06329345703125,
-0.035125732421875,
-0.0423583984375,
... |
Bersk/twhin-bert-base-finetuned-twhin-epoch | 2023-04-24T09:30:02.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Bersk | null | null | Bersk/twhin-bert-base-finetuned-twhin-epoch | 0 | 2 | transformers | 2023-04-24T08:58:08 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: twhin-bert-base-finetuned-twhin-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twhin-bert-base-finetuned-twhin-epoch
This model is a fine-tuned version of [Twitter/twhin-bert-base](https://huggingface.co/Twitter/twhin-bert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8164
- Precision: 0.8381
- Recall: 0.8347
- F1: 0.8360
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 36
- eval_batch_size: 36
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 52 | 0.5851 | 0.7441 | 0.7796 | 0.7613 |
| No log | 2.0 | 104 | 0.5324 | 0.7630 | 0.7861 | 0.7700 |
| No log | 3.0 | 156 | 0.4893 | 0.7774 | 0.8104 | 0.7923 |
| No log | 4.0 | 208 | 0.5204 | 0.7862 | 0.8104 | 0.7936 |
| No log | 5.0 | 260 | 0.5753 | 0.7728 | 0.8120 | 0.7907 |
| No log | 6.0 | 312 | 0.5552 | 0.7729 | 0.8071 | 0.7889 |
| No log | 7.0 | 364 | 0.5975 | 0.7768 | 0.8136 | 0.7946 |
| No log | 8.0 | 416 | 0.6527 | 0.8015 | 0.8055 | 0.7915 |
| No log | 9.0 | 468 | 0.6521 | 0.8285 | 0.8233 | 0.8252 |
| 0.3755 | 10.0 | 520 | 0.6629 | 0.8315 | 0.8104 | 0.8175 |
| 0.3755 | 11.0 | 572 | 0.7238 | 0.8260 | 0.8266 | 0.8263 |
| 0.3755 | 12.0 | 624 | 0.7782 | 0.8318 | 0.8201 | 0.8239 |
| 0.3755 | 13.0 | 676 | 0.7788 | 0.8263 | 0.8266 | 0.8260 |
| 0.3755 | 14.0 | 728 | 0.8164 | 0.8381 | 0.8347 | 0.8360 |
| 0.3755 | 15.0 | 780 | 0.8701 | 0.8238 | 0.8201 | 0.8212 |
| 0.3755 | 16.0 | 832 | 0.8774 | 0.8295 | 0.8282 | 0.8288 |
| 0.3755 | 17.0 | 884 | 0.9193 | 0.8311 | 0.8233 | 0.8259 |
| 0.3755 | 18.0 | 936 | 0.9321 | 0.8339 | 0.8282 | 0.8299 |
| 0.3755 | 19.0 | 988 | 0.9350 | 0.8307 | 0.8233 | 0.8261 |
| 0.0554 | 20.0 | 1040 | 0.9344 | 0.8256 | 0.8185 | 0.8213 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
| 3,022 | [
[
-0.0411376953125,
-0.04669189453125,
0.008209228515625,
0.0007038116455078125,
-0.007808685302734375,
-0.0102386474609375,
-0.003459930419921875,
-0.0098419189453125,
0.037841796875,
0.0229339599609375,
-0.0523681640625,
-0.0546875,
-0.046173095703125,
-0.02... |
vocabtrimmer/xlm-roberta-base-trimmed-es-15000-xnli-es | 2023-04-24T09:09:42.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-trimmed-es-15000-xnli-es | 0 | 2 | transformers | 2023-04-24T09:08:41 | # `vocabtrimmer/xlm-roberta-base-trimmed-es-15000-xnli-es`
This model is a fine-tuned version of [vocabtrimmer/xlm-roberta-base-trimmed-es-15000](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-es-15000) on the
[xnli](https://huggingface.co/datasets/xnli) dataset (es).
The following metrics are computed on the `test` split of
[xnli](https://huggingface.co/datasets/xnli) (es).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 79.24 | 79.24 | 79.24 | 79.25 | 79.24 | 80.05 | 79.24 |
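In single-label classification every prediction error is simultaneously a false positive for one class and a false negative for another, so micro-averaged precision, recall, and F1 all equal accuracy — which is why the micro columns above are identical. A small illustration of the micro/macro distinction on toy labels (not the card's evaluation code):

```python
def accuracy_and_macro_f1(y_true, y_pred, labels):
    """Accuracy (== micro F1 in single-label tasks) and macro F1 over class labels."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return accuracy, sum(f1s) / len(f1s)

# entailment=0, neutral=1, contradiction=2 (hypothetical toy predictions)
acc, macro = accuracy_and_macro_f1([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0], labels=[0, 1, 2])
```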
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-es-15000-xnli-es/raw/main/eval.json). | 977 | [
[
-0.0396728515625,
-0.03265380859375,
0.020965576171875,
-0.0044403076171875,
-0.026031494140625,
0.01267242431640625,
-0.015228271484375,
-0.0212860107421875,
0.03973388671875,
0.044921875,
-0.057464599609375,
-0.060150146484375,
-0.041717529296875,
0.000870... |
AlaaArboun/distilbert-base-uncased-finetuned-emotion | 2023-04-24T10:32:39.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | AlaaArboun | null | null | AlaaArboun/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-24T10:14:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9259175826084659
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2129
- Accuracy: 0.926
- F1: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7946 | 1.0 | 250 | 0.3013 | 0.905 | 0.9019 |
| 0.2432 | 2.0 | 500 | 0.2129 | 0.926 | 0.9259 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.03765869140625,
-0.04150390625,
0.01416015625,
0.0216217041015625,
-0.0265045166015625,
-0.0186767578125,
-0.013031005859375,
-0.00870513916015625,
0.01096343994140625,
0.00860595703125,
-0.056671142578125,
-0.052154541015625,
-0.0595703125,
-0.0078735351... |
vocabtrimmer/xlm-roberta-base-trimmed-en-15000-xnli-en | 2023-04-24T10:46:50.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-trimmed-en-15000-xnli-en | 0 | 2 | transformers | 2023-04-24T10:45:35 | # `vocabtrimmer/xlm-roberta-base-trimmed-en-15000-xnli-en`
This model is a fine-tuned version of [vocabtrimmer/xlm-roberta-base-trimmed-en-15000](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-15000) on the
[xnli](https://huggingface.co/datasets/xnli) dataset (en).
The following metrics are computed on the `test` split of
[xnli](https://huggingface.co/datasets/xnli) (en).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 84.35 | 84.35 | 84.35 | 84.38 | 84.35 | 84.53 | 84.35 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-15000-xnli-en/raw/main/eval.json). | 977 | [
[
-0.0380859375,
-0.032012939453125,
0.0213165283203125,
-0.00266265869140625,
-0.0252838134765625,
0.0083770751953125,
-0.0175628662109375,
-0.0234375,
0.039306640625,
0.043212890625,
-0.054229736328125,
-0.061279296875,
-0.040771484375,
0.0003571510314941406... |
hardy500/distilbert-base-uncased-finetuned-emotion | 2023-04-24T11:37:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | hardy500 | null | null | hardy500/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-24T11:07:07 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9345
- name: F1
type: f1
value: 0.9346825135706527
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1528
- Accuracy: 0.9345
- F1: 0.9347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1782 | 1.0 | 250 | 0.1814 | 0.9335 | 0.9330 |
| 0.1111 | 2.0 | 500 | 0.1528 | 0.9345 | 0.9347 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.10.3
| 1,798 | [
[
-0.03814697265625,
-0.041961669921875,
0.01416778564453125,
0.023284912109375,
-0.0269927978515625,
-0.0196075439453125,
-0.0127716064453125,
-0.00787353515625,
0.01142120361328125,
0.0087738037109375,
-0.05609130859375,
-0.050994873046875,
-0.059967041015625,
... |
fredymad/roberta_estricto_2e-5_16_2 | 2023-05-29T23:01:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | fredymad | null | null | fredymad/roberta_estricto_2e-5_16_2 | 0 | 2 | transformers | 2023-04-24T11:20:43 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta_estricto_2e-5_16_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_estricto_2e-5_16_2
This model is a fine-tuned version of [fredymad/bert_laxo_2e-5_16_2](https://huggingface.co/fredymad/bert_laxo_2e-5_16_2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5477
- Accuracy: 0.8730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 400 | 0.4830 | 0.8718 |
| 0.1939 | 2.0 | 800 | 0.5477 | 0.8730 |
### Framework versions
- Transformers 4.29.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,429 | [
[
-0.030609130859375,
-0.045196533203125,
0.01186370849609375,
0.017120361328125,
-0.0264892578125,
-0.04193115234375,
-0.0145111083984375,
-0.033233642578125,
0.00783538818359375,
0.0231475830078125,
-0.05279541015625,
-0.0408935546875,
-0.0430908203125,
-0.0... |
MariaPerezCatalinas/clasificador-tweet-sentiment | 2023-04-24T11:42:02.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"dataset:tweet_sentiment_multilingual",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | MariaPerezCatalinas | null | null | MariaPerezCatalinas/clasificador-tweet-sentiment | 0 | 2 | transformers | 2023-04-24T11:41:20 | ---
license: apache-2.0
tags:
- classification
- generated_from_trainer
datasets:
- tweet_sentiment_multilingual
metrics:
- accuracy
model-index:
- name: clasificador-tweet-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_sentiment_multilingual
type: tweet_sentiment_multilingual
config: english
split: test
args: english
metrics:
- name: Accuracy
type: accuracy
value: 0.6632183908045977
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-tweet-sentiment
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the tweet_sentiment_multilingual dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2664
- Accuracy: 0.6632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 230 | 0.7369 | 0.6713 |
| No log | 2.0 | 460 | 0.9109 | 0.6690 |
| 0.6916 | 3.0 | 690 | 1.2664 | 0.6632 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,867 | [
[
-0.029052734375,
-0.043487548828125,
0.01064300537109375,
0.03131103515625,
-0.03826904296875,
-0.01123046875,
-0.0269927978515625,
-0.01465606689453125,
0.01520538330078125,
0.01296234130859375,
-0.05889892578125,
-0.06646728515625,
-0.0543212890625,
-0.024... |
tanishabhagwanani/distilbert-base-uncased-finetuned-emotion | 2023-04-25T10:55:07.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | tanishabhagwanani | null | null | tanishabhagwanani/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-24T11:48:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0748
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.1253 | 1.0 | 21 | 1.8182 | 0.9391 | 0.9260 |
| 1.5009 | 2.0 | 42 | 1.0205 | 0.9652 | 0.9501 |
| 0.9143 | 3.0 | 63 | 0.5262 | 0.9957 | 0.9956 |
| 0.5215 | 4.0 | 84 | 0.2827 | 1.0 | 1.0 |
| 0.3069 | 5.0 | 105 | 0.1716 | 1.0 | 1.0 |
| 0.199 | 6.0 | 126 | 0.1194 | 1.0 | 1.0 |
| 0.147 | 7.0 | 147 | 0.0955 | 1.0 | 1.0 |
| 0.1229 | 8.0 | 168 | 0.0830 | 1.0 | 1.0 |
| 0.1076 | 9.0 | 189 | 0.0768 | 1.0 | 1.0 |
| 0.1002 | 10.0 | 210 | 0.0748 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,067 | [
[
-0.040435791015625,
-0.042205810546875,
0.01523590087890625,
0.01535797119140625,
-0.0206451416015625,
-0.01361083984375,
-0.0059967041015625,
-0.007049560546875,
0.018310546875,
0.011444091796875,
-0.056854248046875,
-0.051513671875,
-0.060546875,
-0.008728... |
fredymad/roberta_laxo_2e-5_16_2 | 2023-05-29T23:30:59.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | fredymad | null | null | fredymad/roberta_laxo_2e-5_16_2 | 0 | 2 | transformers | 2023-04-24T11:54:43 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta_laxo_2e-5_16_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_laxo_2e-5_16_2
This model is a fine-tuned version of [fredymad/roberta_estricto_2e-5_16_2](https://huggingface.co/fredymad/roberta_estricto_2e-5_16_2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4967
- Accuracy: 0.9106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 400 | 0.5240 | 0.9112 |
| 0.061 | 2.0 | 800 | 0.4967 | 0.9106 |
### Framework versions
- Transformers 4.29.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,435 | [
[
-0.028472900390625,
-0.047576904296875,
0.011962890625,
0.0169677734375,
-0.02850341796875,
-0.039337158203125,
-0.0167236328125,
-0.03314208984375,
0.007640838623046875,
0.0241851806640625,
-0.05108642578125,
-0.043975830078125,
-0.048828125,
0.002035140991... |
thomasavare/distilbert-ft-test2 | 2023-04-24T13:59:55.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | thomasavare | null | null | thomasavare/distilbert-ft-test2 | 0 | 2 | transformers | 2023-04-24T11:58:31 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-ft-test2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-ft-test2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,282 | [
[
-0.03839111328125,
-0.062255859375,
0.0221710205078125,
0.00954437255859375,
-0.0426025390625,
-0.0154571533203125,
-0.0072784423828125,
-0.0147705078125,
0.003650665283203125,
0.0029315948486328125,
-0.04766845703125,
-0.039520263671875,
-0.06707763671875,
... |
fredymad/siebert_estricto_2e-5_16_2 | 2023-05-30T05:32:07.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | fredymad | null | null | fredymad/siebert_estricto_2e-5_16_2 | 0 | 2 | transformers | 2023-04-24T12:24:09 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: siebert_estricto_2e-5_16_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# siebert_estricto_2e-5_16_2
This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3052
- Accuracy: 0.8868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 400 | 0.3598 | 0.8493 |
| 0.363 | 2.0 | 800 | 0.3052 | 0.8868 |
### Framework versions
- Transformers 4.29.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,431 | [
[
-0.0283203125,
-0.044342041015625,
0.016510009765625,
0.029571533203125,
-0.03033447265625,
-0.0360107421875,
-0.0234375,
-0.01543426513671875,
0.01513671875,
0.0240631103515625,
-0.05511474609375,
-0.053436279296875,
-0.056182861328125,
-0.00807952880859375... |
pigeon-phobia/bertweet-base_finetuned_olid_a | 2023-04-24T12:45:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | pigeon-phobia | null | null | pigeon-phobia/bertweet-base_finetuned_olid_a | 0 | 2 | transformers | 2023-04-24T12:35:49 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bertweet-base_finetuned_olid_a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base_finetuned_olid_a
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the OLID dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3375
- Accuracy: 0.8535
- F1-macro: 0.8151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.4961 | 1.0 | 207 | 0.3515 | 0.85 | 0.8094 |
| 0.3932 | 2.0 | 414 | 0.3375 | 0.8535 | 0.8151 |
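The Accuracy and F1-macro columns above can be reproduced from labels and predictions; F1-macro averages per-class F1 scores without class weighting, which is why it can sit below accuracy on imbalanced data. A minimal pure-Python sketch (the small label/prediction lists are made-up examples):

```python
def macro_f1(labels, preds):
    """Unweighted mean of per-class F1 scores (the 'F1-macro' column above)."""
    classes = sorted(set(labels))
    scores = []
    for c in classes:
        tp = sum(1 for y, p in zip(labels, preds) if y == c and p == c)
        fp = sum(1 for y, p in zip(labels, preds) if y != c and p == c)
        fn = sum(1 for y, p in zip(labels, preds) if y == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
    return sum(scores) / len(scores)

labels = [0, 0, 1, 1]
preds = [0, 1, 1, 1]
print(macro_f1(labels, preds))  # mean of F1(class 0)=2/3 and F1(class 1)=4/5
```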
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,461 | [-0.032196044921875, -0.042236328125, 0.00800323486328125, ... (embedding truncated)] |
evasque1/roberta-base-bne-finetuned-amazon_reviews_multi | 2023-04-27T14:15:32.000Z | ["transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | text-classification | evasque1 | null | null | evasque1/roberta-base-bne-finetuned-amazon_reviews_multi | 0 | 2 | transformers | 2023-04-24T12:56:20 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.9315
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2325
- Accuracy: 0.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1932 | 1.0 | 1250 | 0.1695 | 0.937 |
| 0.0983 | 2.0 | 2500 | 0.2325 | 0.9315 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,794 | [-0.0394287109375, -0.048858642578125, 0.00954437255859375, ... (embedding truncated)] |
pigeon-phobia/bertweet-base_finetuned_olid_b | 2023-04-24T14:23:35.000Z | ["transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us"] | text-classification | pigeon-phobia | null | null | pigeon-phobia/bertweet-base_finetuned_olid_b | 0 | 2 | transformers | 2023-04-24T14:15:09 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bertweet-base_finetuned_olid_b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base_finetuned_olid_b
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4449
- Accuracy: 0.8333
- F1-macro: 0.7114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.6556 | 1.0 | 69 | 0.5271 | 0.7542 | 0.6452 |
| 0.5838 | 2.0 | 138 | 0.4723 | 0.7292 | 0.6284 |
| 0.5031 | 3.0 | 207 | 0.4223 | 0.8417 | 0.7258 |
| 0.454 | 4.0 | 276 | 0.4391 | 0.8083 | 0.6855 |
| 0.3966 | 5.0 | 345 | 0.4449 | 0.8333 | 0.7114 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,680 | [-0.0330810546875, -0.041748046875, 0.005634307861328125, ... (embedding truncated)] |
pigeon-phobia/bertweet-base_finetuned_olid_c | 2023-04-24T14:32:43.000Z | ["transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us"] | text-classification | pigeon-phobia | null | null | pigeon-phobia/bertweet-base_finetuned_olid_c | 0 | 2 | transformers | 2023-04-24T14:29:29 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bertweet-base_finetuned_olid_c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base_finetuned_olid_c
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7880
- Accuracy: 0.7324
- F1-macro: 0.6299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.9373 | 1.0 | 61 | 0.8441 | 0.6948 | 0.5042 |
| 0.7817 | 2.0 | 122 | 0.8038 | 0.7230 | 0.5247 |
| 0.7258 | 3.0 | 183 | 0.7837 | 0.7324 | 0.5772 |
| 0.6596 | 4.0 | 244 | 0.7812 | 0.7371 | 0.6255 |
| 0.6247 | 5.0 | 305 | 0.7880 | 0.7324 | 0.6299 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,680 | [-0.032379150390625, -0.041107177734375, 0.003276824951171875, ... (embedding truncated)] |
husseinMoh/bart-base-finetuned-text-simplification | 2023-04-24T20:45:56.000Z | ["transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "dataset:wiki_auto", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | husseinMoh | null | null | husseinMoh/bart-base-finetuned-text-simplification | 0 | 2 | transformers | 2023-04-24T14:41:03 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wiki_auto
model-index:
- name: bart-base-finetuned-text-simplification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-text-simplification
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the wiki_auto dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4564
- Sari: 58.8687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sari |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1758 | 1.0 | 23363 | 6.7617 | 58.9526 |
| 0.1474 | 2.0 | 46726 | 7.1742 | 58.8800 |
| 0.1349 | 3.0 | 70089 | 7.4564 | 58.8687 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,535 | [-0.03802490234375, -0.055023193359375, 0.011322021484375, ... (embedding truncated)] |
sam34738/mBERT_hasoc | 2023-04-24T15:15:00.000Z | ["transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | text-classification | sam34738 | null | null | sam34738/mBERT_hasoc | 0 | 2 | transformers | 2023-04-24T14:58:21 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: mBERT_hasoc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_hasoc
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9812
- Accuracy: 0.6583
- F1: 0.6948
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-05
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.749 | 1.0 | 2100 | 0.7068 | 0.4994 | 0.0131 |
| 0.7707 | 2.0 | 4200 | 0.9812 | 0.6583 | 0.6948 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,485 | [-0.036041259765625, -0.043853759765625, 0.019989013671875, ... (embedding truncated)] |
mojemai/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-25T09:08:48.000Z | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | mojemai | null | null | mojemai/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-24T15:09:26 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 642.50 +/- 254.36
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mojemai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mojemai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mojemai
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
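In the hyperparameters above, `exploration_fraction` and `exploration_final_eps` define the ε-greedy schedule: ε is annealed linearly over the first 10% of the 10M timesteps and then held at 0.01. A sketch of that schedule, assuming SB3's default initial ε of 1.0 (the function name is illustrative):

```python
def exploration_eps(step, n_timesteps=10_000_000, fraction=0.1, final_eps=0.01, initial_eps=1.0):
    """Linearly anneal epsilon from initial_eps to final_eps over the first
    `fraction` of training, then hold it constant."""
    progress = min(1.0, step / (fraction * n_timesteps))
    return initial_eps + progress * (final_eps - initial_eps)

print(exploration_eps(0))          # 1.0: fully random actions at the start
print(exploration_eps(500_000))    # ~0.505: halfway through the annealing window
print(exploration_eps(2_000_000))  # ~0.01: held at final_eps after the window
```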
| 2,689 | [-0.04168701171875, -0.03680419921875, 0.0213470458984375, ... (embedding truncated)] |
fredymad/Financial_estricto_2e-5_16_2 | 2023-05-30T06:23:42.000Z | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us"] | text-classification | fredymad | null | null | fredymad/Financial_estricto_2e-5_16_2 | 0 | 2 | transformers | 2023-04-24T15:13:55 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Financial_estricto_2e-5_16_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Financial_estricto_2e-5_16_2
This model is a fine-tuned version of [ahmedrachid/FinancialBERT-Sentiment-Analysis](https://huggingface.co/ahmedrachid/FinancialBERT-Sentiment-Analysis) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3930
- Accuracy: 0.8355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 400 | 0.4219 | 0.8143 |
| 0.4884 | 2.0 | 800 | 0.3930 | 0.8355 |
### Framework versions
- Transformers 4.29.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,445 | [-0.02716064453125, -0.0426025390625, 0.00571441650390625, ... (embedding truncated)] |
dyosh/distilbert-base-uncased-finetuned-emotion | 2023-04-24T17:22:21.000Z | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | text-classification | dyosh | null | null | dyosh/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-24T15:30:05 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9271664736493986
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. The model is trained in Chapter 2: Text Classification in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/02_classification.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.2192
- Accuracy: 0.927
- F1: 0.9272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8569 | 1.0 | 250 | 0.3386 | 0.894 | 0.8888 |
| 0.2639 | 2.0 | 500 | 0.2192 | 0.927 | 0.9272 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
| 2,137 | [-0.036041259765625, -0.040679931640625, 0.0121917724609375, ... (embedding truncated)] |
pabagcha/finetuning-sentiment-model-3 | 2023-04-24T16:57:49.000Z | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us"] | text-classification | pabagcha | null | null | pabagcha/finetuning-sentiment-model-3 | 0 | 2 | transformers | 2023-04-24T16:49:18 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1781
- Accuracy: 0.6225
- F1: 0.5292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 16 | 1.1234 | 0.5474 | 0.3031 |
| No log | 2.0 | 32 | 0.9975 | 0.6008 | 0.3433 |
| No log | 3.0 | 48 | 0.9438 | 0.6383 | 0.4604 |
| No log | 4.0 | 64 | 0.9385 | 0.6462 | 0.4692 |
| No log | 5.0 | 80 | 0.9864 | 0.6364 | 0.5066 |
| No log | 6.0 | 96 | 1.0309 | 0.6146 | 0.4968 |
| No log | 7.0 | 112 | 1.0853 | 0.6186 | 0.5246 |
| No log | 8.0 | 128 | 1.1456 | 0.6166 | 0.5208 |
| No log | 9.0 | 144 | 1.1860 | 0.6087 | 0.5206 |
| No log | 10.0 | 160 | 1.1781 | 0.6225 | 0.5292 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,053 | [-0.045928955078125, -0.046356201171875, 0.007183074951171875, ... (embedding truncated)] |
maxmustermannde/distilbert-base-uncased-finetuned-emotion | 2023-04-29T06:44:12.000Z | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | text-classification | maxmustermannde | null | null | maxmustermannde/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-24T17:37:52 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2155
- Accuracy: 0.9215
- F1: 0.9213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.829 | 1.0 | 250 | 0.3135 | 0.9085 | 0.9068 |
| 0.2431 | 2.0 | 500 | 0.2155 | 0.9215 | 0.9213 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,498 | [-0.038299560546875, -0.042999267578125, 0.0180816650390625, ... (embedding truncated)] |
navidmadani/mpnet-twitter-freq100 | 2023-04-24T17:48:30.000Z | ["sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us"] | sentence-similarity | navidmadani | null | null | navidmadani/mpnet-twitter-freq100 | 0 | 2 | sentence-transformers | 2023-04-24T17:42:06 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# mpnet-twitter-freq100
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('navidmadani/mpnet-twitter-freq100')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 24855 with parameters:
```
{'batch_size': 256, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
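`CosineSimilarityLoss` regresses the cosine similarity of two sentence embeddings toward a gold similarity score. A minimal pure-Python sketch of the objective for a single pair (the real implementation averages a squared-error loss over a batch of embedding pairs):

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def cosine_similarity_loss(u, v, gold_score):
    """Squared error between the embeddings' cosine similarity and a gold score."""
    return (cos_sim(u, v) - gold_score) ** 2

print(cosine_similarity_loss([1.0, 0.0], [1.0, 0.0], 1.0))  # 0.0: identical vectors, gold 1.0
print(cosine_similarity_loss([1.0, 0.0], [0.0, 1.0], 0.0))  # 0.0: orthogonal vectors, gold 0.0
```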
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 6000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 2,384 | [
[
-0.0236663818359375,
-0.044830322265625,
0.027008056640625,
0.0268707275390625,
-0.014129638671875,
-0.02532958984375,
-0.01337432861328125,
0.0203094482421875,
0.016357421875,
0.0290374755859375,
-0.05572509765625,
-0.038360595703125,
-0.05389404296875,
-0.... |
9wimu9/retriever-model | 2023-04-24T17:55:11.000Z | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 9wimu9 | null | null | 9wimu9/retriever-model | 0 | 2 | sentence-transformers | 2023-04-24T17:53:43 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 1481 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 148,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 3,724 | [
[
-0.020294189453125,
-0.061859130859375,
0.0229339599609375,
0.025909423828125,
-0.021026611328125,
-0.031768798828125,
-0.015960693359375,
0.0029582977294921875,
0.0179595947265625,
0.02899169921875,
-0.05096435546875,
-0.046905517578125,
-0.051788330078125,
... |
minimax123/albert-base-v2-finetuned-tweets | 2023-04-24T20:55:46.000Z | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | minimax123 | null | null | minimax123/albert-base-v2-finetuned-tweets | 0 | 2 | transformers | 2023-04-24T18:06:06 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
model-index:
- name: albert-base-v2-finetuned-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-tweets
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5737
- Precision: 0.9295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 0.0437 | 1.0 | 140 | 0.5309 | 0.9313 |
| 0.0087 | 2.0 | 280 | 0.5737 | 0.9295 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
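The card reports precision as its evaluation metric. As a reminder of what that number measures, here is a minimal sketch (for illustration only; this is not the evaluation code the Trainer actually used):

```python
def precision(y_true, y_pred, positive=1):
    # Precision = true positives / (true positives + false positives).
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    return tp / (tp + fp) if (tp + fp) else 0.0

print(precision([1, 0, 1, 1], [1, 0, 1, 0]))  # 1.0 (two true positives, no false positives)
print(precision([0, 0], [1, 1]))              # 0.0 (every positive prediction is wrong)
```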
| 1,416 | [
[
-0.03436279296875,
-0.0390625,
0.009857177734375,
0.015960693359375,
-0.02142333984375,
-0.033538818359375,
-0.00830078125,
-0.0145416259765625,
0.006595611572265625,
0.036224365234375,
-0.04974365234375,
-0.046600341796875,
-0.048431396484375,
-0.0159149169... |
vocabtrimmer/xlm-roberta-base-xnli-fr-trimmed-fr-15000 | 2023-04-24T18:17:45.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-xnli-fr-trimmed-fr-15000 | 0 | 2 | transformers | 2023-04-24T18:14:33 | # Vocabulary Trimmed [vocabtrimmer/xlm-roberta-base-xnli-fr](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-fr): `vocabtrimmer/xlm-roberta-base-xnli-fr-trimmed-fr-15000`
This model is a trimmed version of [vocabtrimmer/xlm-roberta-base-xnli-fr](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-fr) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | vocabtrimmer/xlm-roberta-base-xnli-fr | vocabtrimmer/xlm-roberta-base-xnli-fr-trimmed-fr-15000 |
|:---------------------------|:----------------------------------------|:---------------------------------------------------------|
| parameter_size_full | 278,045,955 | 97,565,955 |
| parameter_size_embedding | 192,001,536 | 11,521,536 |
| vocab_size | 250,002 | 15,002 |
| compression_rate_full | 100.0 | 35.09 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 15000 | 2 | | 1,927 | [
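The compression figures in the summary table follow directly from the vocabulary sizes, since trimming only replaces the embedding matrix. A quick arithmetic check (the hidden size of 768 is inferred from the parameter counts, an assumption consistent with `xlm-roberta-base`):

```python
dim = 768                                  # inferred: 192,001,536 / 250,002
full_vocab, kept_vocab = 250_002, 15_002   # vocab_size row of the table
full_total = 278_045_955                   # parameter_size_full before trimming

full_emb = full_vocab * dim
trimmed_emb = kept_vocab * dim
trimmed_total = full_total - full_emb + trimmed_emb

print(full_emb)                                    # 192001536
print(trimmed_emb)                                 # 11521536
print(trimmed_total)                               # 97565955
print(round(100 * trimmed_emb / full_emb, 1))      # 6.0  (embedding compression %)
print(round(100 * trimmed_total / full_total, 2))  # 35.09 (full-model compression %)
```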
[
-0.061187744140625,
-0.045318603515625,
-0.003662109375,
0.00943756103515625,
-0.0296630859375,
-0.011962890625,
-0.0191650390625,
-0.00936126708984375,
0.036712646484375,
0.042205810546875,
-0.0609130859375,
-0.047821044921875,
-0.03399658203125,
0.00068044... |
rebeccayhu/dreambooth_riffusion_model_afrotechno_v1 | 2023-04-24T21:24:24.000Z | [
"keras",
"region:us"
] | null | rebeccayhu | null | null | rebeccayhu/dreambooth_riffusion_model_afrotechno_v1 | 0 | 2 | keras | 2023-04-24T21:21:00 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 292 | [
[
-0.0220184326171875,
-0.01904296875,
0.0302734375,
0.02191162109375,
-0.04791259765625,
-0.0211181640625,
0.03839111328125,
-0.0124664306640625,
0.01788330078125,
0.0751953125,
-0.051971435546875,
-0.03863525390625,
-0.049041748046875,
-0.033355712890625,
... |
NourEldin-Osama/bart-base-finetuned-text-simplification | 2023-04-25T04:16:07.000Z | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:wiki_auto",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | text2text-generation | NourEldin-Osama | null | null | NourEldin-Osama/bart-base-finetuned-text-simplification | 0 | 2 | transformers | 2023-04-24T22:28:52 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wiki_auto
model-index:
- name: bart-base-finetuned-text-simplification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-text-simplification
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the wiki_auto dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4564
- Sari: 58.8687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sari |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1758 | 1.0 | 23363 | 6.7617 | 58.9526 |
| 0.1474 | 2.0 | 46726 | 7.1742 | 58.8800 |
| 0.1349 | 3.0 | 70089 | 7.4564 | 58.8687 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
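A usage sketch in the style of the pipeline examples found elsewhere on the Hub (the checkpoint is downloaded on the first call; the input sentence is just an illustration):

```python
from transformers import pipeline

# Text simplification is a text2text-generation task for this BART checkpoint.
simplifier = pipeline(
    "text2text-generation",
    model="NourEldin-Osama/bart-base-finetuned-text-simplification",
)
out = simplifier(
    "The committee reached a unanimous verdict after protracted deliberations.",
    max_length=64,
)
print(out[0]["generated_text"])
```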
| 1,535 | [
[
-0.03802490234375,
-0.055023193359375,
0.011322021484375,
0.01276397705078125,
-0.02838134765625,
-0.018524169921875,
-0.01444244384765625,
-0.0164031982421875,
0.01983642578125,
0.0304718017578125,
-0.057281494140625,
-0.04620361328125,
-0.047607421875,
-0.... |
angelajyeung/results | 2023-04-24T22:48:15.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | angelajyeung | null | null | angelajyeung/results | 0 | 2 | transformers | 2023-04-24T22:45:12 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2404
- Accuracy: 0.87
- F1: 0.0
- Precision: 0.0
- Recall: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6792 | 1.0 | 50 | 0.6665 | 0.0 | 0.1593 | 0.1117 | 0.4717 |
| 0.4419 | 2.0 | 100 | 0.4092 | 0.87 | 0.0 | 0.0 | 0.0 |
| 0.2437 | 3.0 | 150 | 0.2404 | 0.87 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 1,662 | [
[
-0.038177490234375,
-0.039947509765625,
0.0146636962890625,
0.0145416259765625,
-0.0269775390625,
-0.0296783447265625,
-0.01812744140625,
-0.0200958251953125,
0.01250457763671875,
0.0228271484375,
-0.05487060546875,
-0.04864501953125,
-0.049896240234375,
-0.... |
rebolforces/distilbert-base-uncased-finetuned-emotion | 2023-04-25T01:49:40.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | rebolforces | null | null | rebolforces/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-25T01:30:28 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9265372899076229
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2217
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.845 | 1.0 | 250 | 0.3300 | 0.9065 | 0.9036 |
| 0.2565 | 2.0 | 500 | 0.2217 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.13.0
- Pytorch 2.0.0+cu118
- Datasets 2.8.0
- Tokenizers 0.10.3
| 1,803 | [
[
-0.037811279296875,
-0.0418701171875,
0.01502227783203125,
0.0213623046875,
-0.025665283203125,
-0.0190277099609375,
-0.01318359375,
-0.00830841064453125,
0.0108642578125,
0.0082855224609375,
-0.056060791015625,
-0.0518798828125,
-0.05938720703125,
-0.008666... |
MITCriticalData/Sentinel-2_Resnet50V2_Autoencoder_RGB | 2023-04-25T21:07:56.000Z | [
"keras",
"region:us"
] | null | MITCriticalData | null | null | MITCriticalData/Sentinel-2_Resnet50V2_Autoencoder_RGB | 0 | 2 | keras | 2023-04-25T02:20:00 | ---
library_name: keras
---
## Model description
An autoencoder trained to compress information from Sentinel-2 satellite images, using ResNet50 V2 as the encoder backbone to extract features.
The latent space has 1024 neurons, which can be used to generate embeddings from Sentinel-2 satellite images.
The model was trained on the RGB bands (2, 3 and 4: Red, Green and Blue) of Sentinel-2, over the 10 municipalities of Colombia with the most dengue cases.
The input shape is (224, 224, 3). To extract features, remove the last layer.
## Intended uses & limitations
The model was trained with images of the 10 cities in Colombia with the most dengue cases; it may therefore require fine-tuning or retraining to generalize to other contexts, such as other countries or continents.
## Training and evaluation data
The model was trained with satellite images of 10 different cities in Colombia, extracted from Sentinel-2 using the RGB bands, with an asymmetric autoencoder. Images likely to introduce noise, such as all-black images, were filtered out before training.
The dataset was split into 80% train and 20% test.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 0.0010000000474974513 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
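The card notes that the last layer should be removed to expose the 1024-dim latent space for feature extraction. A toy, self-contained sketch of that pattern with a stand-in `Sequential` model (not the actual ResNet50V2 autoencoder, whose weights must be downloaded from this repository):

```python
import tensorflow as tf

# Stand-in for the autoencoder: an encoder ending in a 1024-unit latent
# layer, followed by a decoder head (a single Dense here, for brevity).
autoencoder = tf.keras.Sequential([
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1024, name="latent"),
    tf.keras.layers.Dense(8, name="decoder_head"),
])
x = tf.zeros((1, 224, 224, 3))   # the model's input shape, batch of 1
_ = autoencoder(x)               # build the layers

# Drop the decoder head so the 1024-dim latent vector becomes the output.
backbone = tf.keras.Sequential(autoencoder.layers[:-1])
features = backbone(x)
print(features.shape)  # (1, 1024)
```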
| 1,606 | [
[
-0.0428466796875,
-0.0173492431640625,
0.0062408447265625,
-0.0038623809814453125,
-0.023590087890625,
-0.00786590576171875,
0.00982666015625,
-0.029083251953125,
0.0216827392578125,
0.01690673828125,
-0.036041259765625,
-0.035888671875,
-0.0748291015625,
-0... |
mHossain/bangla-para-v3 | 2023-04-25T05:29:22.000Z | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | mHossain | null | null | mHossain/bangla-para-v3 | 0 | 2 | transformers | 2023-04-25T02:44:16 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bangla-para-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla-para-v3
This model is a fine-tuned version of [mHossain/mt5-base-bangla-para-v1-bangla-para-v2](https://huggingface.co/mHossain/mt5-base-bangla-para-v1-bangla-para-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1002
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 18.32
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.472 | 1.0 | 11250 | 1.1002 | 0.0 | 0.0 | 0.0 | 0.0 | 18.32 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
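The `linear` scheduler with 5,000 warmup steps ramps the learning rate from 0 to 2e-05 and then decays it linearly to 0 over the remaining steps of the single 11,250-step epoch. A minimal sketch of that schedule (mirroring what `transformers.get_linear_schedule_with_warmup` computes; shown here for illustration only):

```python
def linear_warmup_then_decay(step, total_steps=11_250, warmup_steps=5_000, base_lr=2e-05):
    # Ramp up linearly during warmup, then decay linearly to zero.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_then_decay(2_500))   # 1e-05 (halfway through warmup)
print(linear_warmup_then_decay(5_000))   # 2e-05 (peak learning rate)
print(linear_warmup_then_decay(11_250))  # 0.0   (end of training)
```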
| 1,585 | [
[
-0.034149169921875,
-0.039794921875,
0.0029811859130859375,
0.0273284912109375,
-0.03778076171875,
-0.0231781005859375,
-0.00467681884765625,
-0.020111083984375,
0.01446533203125,
0.033172607421875,
-0.047698974609375,
-0.040008544921875,
-0.05072021484375,
... |
MITCriticalData/Sentinel-2_ViT_Autoencoder_12Bands | 2023-05-01T20:54:29.000Z | [
"keras",
"region:us"
] | null | MITCriticalData | null | null | MITCriticalData/Sentinel-2_ViT_Autoencoder_12Bands | 0 | 2 | keras | 2023-04-25T03:54:47 | ---
library_name: keras
---
## Model description
An autoencoder trained to compress information from Sentinel-2 satellite images, using a Vision Transformer (ViT) as the encoder backbone to extract features.
The latent space has 1024 neurons, which can be used to generate embeddings from Sentinel-2 satellite images.
The model was trained on bands 1-12 of Sentinel-2, over the top 10 municipalities of Colombia with the most dengue cases.
The input shape is (224, 224, 12). To extract features, remove the last layer.
The model can be loaded as follows (example in a Jupyter notebook):
```
!git lfs install
!git clone https://huggingface.co/MITCriticalData/Sentinel-2_ViT_Autoencoder_12Bands
import tensorflow as tf
from transformers import TFViTModel
model = tf.keras.models.load_model('Sentinel-2_ViT_Autoencoder_12Bands', custom_objects={"TFViTModel": TFViTModel})
```
You can extract the embeddings by removing the last layer:
```
import tensorflow as tf
backbone = tf.keras.Sequential()
for layer in model.layers[:-1]: # just exclude last layer from copying
backbone.add(layer)
```
## Intended uses & limitations
The model was trained with images of 10 different cities in Colombia; it may therefore require fine-tuning or retraining to generalize to other contexts, such as other countries or continents.
## Training and evaluation data
The model was trained with satellite images of 10 different cities in Colombia, extracted from Sentinel-2 using 12 bands, with an asymmetric autoencoder. Images likely to introduce noise, such as all-black images, were filtered out before training.
The dataset was split into 80% train and 20% test.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 0.0010000000474974513 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
| 2,151 | [
[
-0.04443359375,
-0.0266571044921875,
0.0098876953125,
0.0018634796142578125,
-0.0243988037109375,
-0.0033588409423828125,
0.0033512115478515625,
-0.01910400390625,
0.00792694091796875,
0.018280029296875,
-0.0330810546875,
-0.03936767578125,
-0.08551025390625,
... |
MITCriticalData/Sentinel-2_ViT_Autoencoder_RGB | 2023-05-01T20:48:36.000Z | [
"keras",
"region:us"
] | null | MITCriticalData | null | null | MITCriticalData/Sentinel-2_ViT_Autoencoder_RGB | 0 | 2 | keras | 2023-04-25T04:53:11 | ---
library_name: keras
---
## Model description
An autoencoder trained to compress information from Sentinel-2 satellite images, using a Vision Transformer (ViT) as the encoder backbone to extract features.
The latent space has 1024 neurons, which can be used to generate embeddings from Sentinel-2 satellite images.
The model was trained on the RGB bands (2, 3 and 4: Red, Green and Blue) of Sentinel-2, over the 10 municipalities of Colombia with the most dengue cases.
The input shape is (224, 224, 3). To extract features, remove the last layer.
The model can be loaded as follows (example in a Jupyter notebook):
```
!git lfs install
!git clone https://huggingface.co/MITCriticalData/Sentinel-2_ViT_Autoencoder_RGB
import tensorflow as tf
from transformers import TFViTModel
model = tf.keras.models.load_model('Sentinel-2_ViT_Autoencoder_RGB', custom_objects={"TFViTModel": TFViTModel})
```
You can extract the embeddings by removing the last layer:
```
import tensorflow as tf
backbone = tf.keras.Sequential()
for layer in model.layers[:-1]:  # exclude the decoder's final layer from copying
    backbone.add(layer)
```
## Intended uses & limitations
The model was trained with images of 10 different cities in Colombia; it may therefore require fine-tuning or retraining to generalize to other contexts, such as other countries or continents.
## Training and evaluation data
The model was trained with satellite images of the 10 cities in Colombia with the most dengue cases, extracted from Sentinel-2 using the RGB bands, with an asymmetric autoencoder. Images likely to introduce noise, such as all-black images, were filtered out before training.
The dataset was split into 80% train and 20% test.
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 0.0010000000474974513 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
| 2,167 | [
[
-0.039703369140625,
-0.031768798828125,
0.0106964111328125,
0.0026073455810546875,
-0.0283050537109375,
0.0005288124084472656,
0.00722503662109375,
-0.0241241455078125,
0.00978851318359375,
0.0114288330078125,
-0.0301055908203125,
-0.042449951171875,
-0.08392333... |
StevenLimcorn/bert-large-uncased-semeval2016-restaurants | 2023-04-25T05:17:40.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:Yaxin/SemEval2016Task5Raw",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | StevenLimcorn | null | null | StevenLimcorn/bert-large-uncased-semeval2016-restaurants | 0 | 2 | transformers | 2023-04-25T05:05:53 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- Yaxin/SemEval2016Task5Raw
metrics:
- accuracy
model-index:
- name: bert-large-uncased-semeval2016-restaurants
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: Yaxin/SemEval2016Task5Raw restaurants_english
type: Yaxin/SemEval2016Task5Raw
config: restaurants_english
split: validation
args: restaurants_english
metrics:
- name: Accuracy
type: accuracy
value: 0.7796610169491526
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-semeval2016-restaurants
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the Yaxin/SemEval2016Task5Raw restaurants_english dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0702
- Accuracy: 0.7797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.0
- Datasets 2.11.0
- Tokenizers 0.13.2
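A usage sketch in the style of the pipeline examples found elsewhere on the Hub (the checkpoint is downloaded on the first call; `[MASK]` is the mask token for uncased BERT, and the restaurant-domain sentence is just an illustration):

```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="StevenLimcorn/bert-large-uncased-semeval2016-restaurants",
)
preds = fill("The food was absolutely [MASK].")
for p in preds:
    print(p["token_str"], round(p["score"], 3))
```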
| 1,624 | [
[
-0.0250091552734375,
-0.042266845703125,
0.0217742919921875,
0.00830078125,
-0.027435302734375,
-0.045928955078125,
-0.0247955322265625,
-0.0243988037109375,
0.0217437744140625,
0.0340576171875,
-0.040771484375,
-0.037750244140625,
-0.0391845703125,
-0.00369... |
huggingtweets/adrianachechik | 2023-04-25T07:05:23.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | huggingtweets | null | null | huggingtweets/adrianachechik | 0 | 2 | transformers | 2023-04-25T07:05:14 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1346502546668486658/S73iVQ5l_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">adriana chechik</div>
<div style="text-align: center; font-size: 14px;">@adrianachechik</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from adriana chechik.
| Data | adriana chechik |
| --- | --- |
| Tweets downloaded | 2287 |
| Retweets | 269 |
| Short tweets | 242 |
| Tweets kept | 1776 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ulcd0aj0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @adrianachechik's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/x3983z3x) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/x3983z3x/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/adrianachechik')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| 3,521 | [
[
-0.02423095703125,
-0.0626220703125,
0.025970458984375,
0.0178985595703125,
-0.0200347900390625,
0.00960540771484375,
-0.005260467529296875,
-0.0379638671875,
0.027130126953125,
0.006069183349609375,
-0.07562255859375,
-0.03472900390625,
-0.048858642578125,
... |
JWP/distilbert-base-uncased-finetuned-emotion | 2023-09-01T20:21:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | JWP | null | null | JWP/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-25T07:38:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.8505
- name: F1
type: f1
value: 0.8373332943610814
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5014
- Accuracy: 0.8505
- F1: 0.8373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8776 | 1.0 | 250 | 0.5014 | 0.8505 | 0.8373 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,777 | [
[
-0.037872314453125,
-0.04248046875,
0.01544952392578125,
0.0225372314453125,
-0.0288238525390625,
-0.0206756591796875,
-0.01422119140625,
-0.00836944580078125,
0.009796142578125,
0.008148193359375,
-0.055450439453125,
-0.051239013671875,
-0.05975341796875,
-... |
m8than/bert-base-multilingual-cased-finetuned-emotion | 2023-04-25T09:56:36.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | m8than | null | null | m8than/bert-base-multilingual-cased-finetuned-emotion | 1 | 2 | transformers | 2023-04-25T09:41:24 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: bert-base-multilingual-cased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9195
- name: F1
type: f1
value: 0.9204823251325381
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-emotion
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2369
- Accuracy: 0.9195
- F1: 0.9205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9212 | 1.0 | 250 | 0.3466 | 0.8965 | 0.8966 |
| 0.2893 | 2.0 | 500 | 0.2369 | 0.9195 | 0.9205 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,827 | [
[
-0.04083251953125,
-0.04180908203125,
0.0102386474609375,
0.0266265869140625,
-0.0258636474609375,
-0.0257720947265625,
-0.0287933349609375,
-0.0164794921875,
0.016326904296875,
0.01177978515625,
-0.058013916015625,
-0.05560302734375,
-0.04937744140625,
-0.0... |
tanishabhagwanani/distilbert-base-uncased-finetuned-FYP | 2023-04-30T04:16:00.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | tanishabhagwanani | null | null | tanishabhagwanani/distilbert-base-uncased-finetuned-FYP | 0 | 2 | transformers | 2023-04-25T11:27:44 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-FYP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-FYP
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0921
- Accuracy: 0.9957
- F1: 0.9957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.1435 | 1.0 | 20 | 1.7903 | 0.7696 | 0.7462 |
| 1.5449 | 2.0 | 40 | 1.0549 | 0.9565 | 0.9603 |
| 1.0008 | 3.0 | 60 | 0.5800 | 0.9913 | 0.9912 |
| 0.6252 | 4.0 | 80 | 0.3311 | 0.9957 | 0.9957 |
| 0.3833 | 5.0 | 100 | 0.2076 | 0.9957 | 0.9957 |
| 0.2496 | 6.0 | 120 | 0.1470 | 0.9957 | 0.9957 |
| 0.182 | 7.0 | 140 | 0.1173 | 0.9957 | 0.9957 |
| 0.1475 | 8.0 | 160 | 0.1017 | 0.9957 | 0.9957 |
| 0.1279 | 9.0 | 180 | 0.0944 | 0.9957 | 0.9957 |
| 0.1197 | 10.0 | 200 | 0.0921 | 0.9957 | 0.9957 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,065 | [ … embedding vector truncated … ] |
dexion/distilbert-base-uncased-finetuned-emotions-7th | 2023-04-25T12:34:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | dexion | null | null | dexion/distilbert-base-uncased-finetuned-emotions-7th | 0 | 2 | transformers | 2023-04-25T11:49:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotions-7th
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9252933643733475
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotions-7th
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2243
- Accuracy: 0.9255
- F1: 0.9253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8779 | 1.0 | 250 | 0.3294 | 0.9055 | 0.9028 |
| 0.263 | 2.0 | 500 | 0.2243 | 0.9255 | 0.9253 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
| 1,939 | [ … embedding vector truncated … ] |
abradolf/autotrain-text_c-52381123464 | 2023-04-25T12:14:32.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:abradolf/autotrain-data-text_c",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | abradolf | null | null | abradolf/autotrain-text_c-52381123464 | 0 | 2 | transformers | 2023-04-25T12:04:23 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- abradolf/autotrain-data-text_c
co2_eq_emissions:
emissions: 0.0198314797068548
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 52381123464
- CO2 Emissions (in grams): 0.0198
## Validation Metrics
- Loss: 0.634
- Accuracy: 0.840
- Macro F1: 0.836
- Micro F1: 0.840
- Weighted F1: 0.838
- Macro Precision: 0.838
- Micro Precision: 0.840
- Weighted Precision: 0.839
- Macro Recall: 0.838
- Micro Recall: 0.840
- Weighted Recall: 0.840
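For readers comparing the averages above: macro metrics average per-class scores equally, micro metrics pool all decisions (for single-label multi-class, micro-F1 equals accuracy), and weighted metrics weight per-class scores by class support. A dependency-free sketch with made-up labels, not the model's actual predictions:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Compute macro, micro, and weighted F1 from two label lists."""
    labels = sorted(set(y_true) | set(y_pred))
    support = Counter(y_true)
    per_class = {}
    tp_all = fp_all = fn_all = 0
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        per_class[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
    macro = sum(per_class.values()) / len(labels)
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all) if tp_all else 0.0
    weighted = sum(per_class[c] * support[c] for c in labels) / len(y_true)
    return macro, micro, weighted

# Hypothetical 3-class example
y_true = ["a", "a", "a", "b", "b", "c"]
y_pred = ["a", "a", "b", "b", "c", "c"]
macro, micro, weighted = f1_scores(y_true, y_pred)
```

Note how micro-F1 here equals plain accuracy (4 of 6 correct), which is why the Micro F1 and Accuracy rows above match.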
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/abradolf/autotrain-text_c-52381123464
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abradolf/autotrain-text_c-52381123464", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abradolf/autotrain-text_c-52381123464", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| 1,272 | [ … embedding vector truncated … ] |
conorjudge/distilbert-base-uncased-finetuned-sprint-meds | 2023-07-12T00:11:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | conorjudge | null | null | conorjudge/distilbert-base-uncased-finetuned-sprint-meds | 0 | 2 | transformers | 2023-04-25T13:08:32 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sprint-meds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sprint-meds
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8427
- Accuracy: 0.8790
- F1: 0.8630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
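With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate decays from `2e-05` to zero over the total number of optimizer steps (21 per epoch × 50 epochs = 1050, per the training-results table). A minimal sketch of that schedule, mirroring (but not copied from) `get_linear_schedule_with_warmup`:

```python
def linear_lr(step, total_steps, initial_lr=2e-5, warmup_steps=0):
    """Linear LR schedule: ramp up during warmup, then decay to zero."""
    if step < warmup_steps:
        return initial_lr * step / max(1, warmup_steps)
    remaining = (total_steps - step) / max(1, total_steps - warmup_steps)
    return initial_lr * max(0.0, remaining)

total = 21 * 50  # 1050 optimizer steps across the run
lr_at_start = linear_lr(0, total)      # 2e-5
lr_halfway = linear_lr(total // 2, total)
```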
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.8256 | 1.0 | 21 | 1.9309 | 0.6868 | 0.5992 |
| 1.7067 | 2.0 | 42 | 1.8220 | 0.6993 | 0.6190 |
| 1.5327 | 3.0 | 63 | 1.7250 | 0.7189 | 0.6489 |
| 1.4475 | 4.0 | 84 | 1.6374 | 0.7509 | 0.6903 |
| 1.3108 | 5.0 | 105 | 1.5627 | 0.7438 | 0.6843 |
| 1.1881 | 6.0 | 126 | 1.4905 | 0.7669 | 0.7135 |
| 1.1726 | 7.0 | 147 | 1.4287 | 0.7847 | 0.7379 |
| 1.0681 | 8.0 | 168 | 1.3705 | 0.7829 | 0.7368 |
| 0.9392 | 9.0 | 189 | 1.3214 | 0.7954 | 0.7513 |
| 0.9603 | 10.0 | 210 | 1.2741 | 0.8043 | 0.7613 |
| 0.8349 | 11.0 | 231 | 1.2415 | 0.8185 | 0.7793 |
| 0.8094 | 12.0 | 252 | 1.2028 | 0.8256 | 0.7883 |
| 0.787 | 13.0 | 273 | 1.1673 | 0.8310 | 0.7951 |
| 0.7128 | 14.0 | 294 | 1.1412 | 0.8381 | 0.8056 |
| 0.6821 | 15.0 | 315 | 1.1091 | 0.8399 | 0.8074 |
| 0.6177 | 16.0 | 336 | 1.0906 | 0.8399 | 0.8098 |
| 0.633 | 17.0 | 357 | 1.0645 | 0.8434 | 0.8170 |
| 0.5734 | 18.0 | 378 | 1.0415 | 0.8470 | 0.8199 |
| 0.5181 | 19.0 | 399 | 1.0233 | 0.8416 | 0.8153 |
| 0.4926 | 20.0 | 420 | 1.0076 | 0.8470 | 0.8209 |
| 0.4773 | 21.0 | 441 | 0.9896 | 0.8434 | 0.8184 |
| 0.4361 | 22.0 | 462 | 0.9768 | 0.8470 | 0.8216 |
| 0.4385 | 23.0 | 483 | 0.9624 | 0.8505 | 0.8261 |
| 0.3962 | 24.0 | 504 | 0.9520 | 0.8559 | 0.8309 |
| 0.392 | 25.0 | 525 | 0.9392 | 0.8577 | 0.8339 |
| 0.4095 | 26.0 | 546 | 0.9331 | 0.8577 | 0.8359 |
| 0.3389 | 27.0 | 567 | 0.9242 | 0.8577 | 0.8348 |
| 0.3296 | 28.0 | 588 | 0.9117 | 0.8577 | 0.8344 |
| 0.3527 | 29.0 | 609 | 0.9026 | 0.8665 | 0.8465 |
| 0.315 | 30.0 | 630 | 0.9008 | 0.8648 | 0.8431 |
| 0.2891 | 31.0 | 651 | 0.8923 | 0.8648 | 0.8433 |
| 0.3283 | 32.0 | 672 | 0.8818 | 0.8701 | 0.8507 |
| 0.2967 | 33.0 | 693 | 0.8799 | 0.8683 | 0.8479 |
| 0.2657 | 34.0 | 714 | 0.8750 | 0.8683 | 0.8479 |
| 0.3015 | 35.0 | 735 | 0.8727 | 0.8719 | 0.8526 |
| 0.2847 | 36.0 | 756 | 0.8656 | 0.8754 | 0.8575 |
| 0.2614 | 37.0 | 777 | 0.8630 | 0.8772 | 0.8589 |
| 0.26 | 38.0 | 798 | 0.8604 | 0.8754 | 0.8598 |
| 0.2557 | 39.0 | 819 | 0.8588 | 0.8772 | 0.8612 |
| 0.2389 | 40.0 | 840 | 0.8562 | 0.8790 | 0.8619 |
| 0.2464 | 41.0 | 861 | 0.8529 | 0.8790 | 0.8615 |
| 0.2304 | 42.0 | 882 | 0.8529 | 0.8772 | 0.8613 |
| 0.2356 | 43.0 | 903 | 0.8514 | 0.8790 | 0.8636 |
| 0.2291 | 44.0 | 924 | 0.8479 | 0.8790 | 0.8631 |
| 0.2323 | 45.0 | 945 | 0.8457 | 0.8790 | 0.8631 |
| 0.2281 | 46.0 | 966 | 0.8454 | 0.8790 | 0.8638 |
| 0.2163 | 47.0 | 987 | 0.8432 | 0.8790 | 0.8633 |
| 0.226 | 48.0 | 1008 | 0.8433 | 0.8790 | 0.8631 |
| 0.229 | 49.0 | 1029 | 0.8431 | 0.8790 | 0.8631 |
| 0.2388 | 50.0 | 1050 | 0.8427 | 0.8790 | 0.8630 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 4,921 | [ … embedding vector truncated … ] |
ardaaras99/distilbert-base-uncased-finetuned-cola | 2023-04-27T15:35:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | ardaaras99 | null | null | ardaaras99/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-25T13:19:21 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.4267925131950283
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5047
- Matthews Correlation: 0.4268
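Matthews correlation coefficient (MCC), the standard metric for CoLA, uses all four confusion-matrix cells and ranges from -1 to +1, with 0 meaning no better than chance. A minimal implementation with illustrative counts (not this model's actual confusion matrix):

```python
import math

def matthews_corrcoef(tp, fp, fn, tn):
    """MCC from binary confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical counts for illustration
mcc = matthews_corrcoef(tp=300, fp=80, fn=60, tn=95)
```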
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5241 | 1.0 | 535 | 0.5047 | 0.4268 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,740 | [ … embedding vector truncated … ] |
MITCriticalData/Sentinel-2_Resnet50V2_Autoencoder_RGB_full_Colombia_Dataset | 2023-04-25T21:01:08.000Z | [
"keras",
"region:us"
] | null | MITCriticalData | null | null | MITCriticalData/Sentinel-2_Resnet50V2_Autoencoder_RGB_full_Colombia_Dataset | 0 | 2 | keras | 2023-04-25T13:48:43 | ---
library_name: keras
---
## Model description
An autoencoder model trained to compress information from Sentinel-2 satellite images, using ResNet50 V2 as the encoder backbone to extract features.
The model's latent space consists of 1024 neurons, which can be used to generate embeddings from Sentinel-2 satellite images.
The model was trained on Sentinel-2 bands 2, 3, and 4 (Blue, Green, and Red) over the 81 municipalities of Colombia with the most dengue cases.
The input shape of the model is (224, 224, 3). To extract features, remove the last layer.
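Once the last layer is removed, the 1024-dimensional latent vectors can be compared directly, for example with cosine similarity, to find municipalities with visually similar imagery. A dependency-free sketch (the short vectors below are placeholders, not real model outputs):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (e.g. 1024-d latents)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

emb_city_a = [0.2, -0.1, 0.7, 0.4]    # placeholder latent vectors
emb_city_b = [0.25, -0.05, 0.6, 0.5]
sim = cosine_similarity(emb_city_a, emb_city_b)
```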
## Intended uses & limitations
The model was trained on images of 81 different cities in Colombia; it may therefore require fine-tuning or retraining to generalize to other contexts, such as other countries or continents.
## Training and evaluation data
The model was trained with an asymmetric autoencoder on Sentinel-2 satellite images (RGB bands) of 81 different cities in Colombia. Images that could introduce noise, such as all-black images, were filtered out prior to training.
The dataset was split 80/20 into training and test sets.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 0.0010000000474974513 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
| 1,583 | [ … embedding vector truncated … ] |
MITCriticalData/Sentinel-2_Resnet50V2_VariationalAutoencoder_RGB | 2023-04-25T21:11:04.000Z | [
"keras",
"region:us"
] | null | MITCriticalData | null | null | MITCriticalData/Sentinel-2_Resnet50V2_VariationalAutoencoder_RGB | 0 | 2 | keras | 2023-04-25T14:03:08 | ---
library_name: keras
---
## Model description
A variational autoencoder model trained to compress information from Sentinel-2 satellite images, using ResNet50 V2 as the encoder backbone to extract features.
The model's latent space consists of 1024 neurons, which can be used to generate embeddings from Sentinel-2 satellite images.
The model was trained on Sentinel-2 bands 2, 3, and 4 (Blue, Green, and Red) over the 10 municipalities of Colombia with the most dengue cases.
The input shape of the model is (224, 224, 3). To extract features, remove the last layer.
## Intended uses & limitations
The model was trained on images of the 10 Colombian cities with the most dengue cases; it may therefore require fine-tuning or retraining to generalize to other contexts, such as other countries or continents.
## Training and evaluation data
The model was trained with an asymmetric variational autoencoder on Sentinel-2 satellite images (RGB bands) of 10 different cities in Colombia. Images that could introduce noise, such as all-black images, were filtered out prior to training.
The dataset was split 80/20 into training and test sets.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 9.999999747378752e-05 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
| 1,630 | [ … embedding vector truncated … ] |
StevenLimcorn/bert-large-uncased-facebook-election-ads | 2023-04-26T14:59:11.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | StevenLimcorn | null | null | StevenLimcorn/bert-large-uncased-facebook-election-ads | 0 | 2 | transformers | 2023-04-25T14:21:03 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-large-uncased-facebook-election-ads
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-facebook-election-ads
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5924
- Accuracy: 0.6776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
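The gradient-accumulation settings above let a per-device batch of 4 emulate an effective batch of 16: gradients from 4 micro-batches are accumulated before each optimizer step. A framework-free sketch of the bookkeeping, with a made-up dataset size (the card does not state the actual one):

```python
def train_steps(num_examples, per_device_batch, accum_steps):
    """Return (micro_batches, optimizer_steps) for one epoch."""
    micro = -(-num_examples // per_device_batch)   # ceil division
    steps = -(-micro // accum_steps)               # one step per accum window
    return micro, steps

# Hypothetical dataset of 1600 examples
micro, steps = train_steps(num_examples=1600, per_device_batch=4, accum_steps=4)
effective_batch = 4 * 4  # matches total_train_batch_size: 16
```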
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,258 | [ … embedding vector truncated … ] |
Anwaarma/PROJECT_SPAM | 2023-04-25T14:50:17.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:Anwaarma/autotrain-data-sms_arr",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Anwaarma | null | null | Anwaarma/PROJECT_SPAM | 0 | 2 | transformers | 2023-04-25T14:48:19 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Anwaarma/autotrain-data-sms_arr
co2_eq_emissions:
emissions: 0.8734880096848107
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 52431123662
- CO2 Emissions (in grams): 0.8735
## Validation Metrics
- Loss: 0.031
- Accuracy: 0.994
- Precision: 1.000
- Recall: 0.953
- AUC: 0.998
- F1: 0.976
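The reported F1 is the harmonic mean of precision and recall, and the numbers above are internally consistent: 2·(1.000·0.953)/(1.000+0.953) ≈ 0.976. A one-line check:

```python
def f1_from_pr(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

f1 = f1_from_pr(1.000, 0.953)  # rounds to 0.976, matching the card's reported F1
```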
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Anwaarma/autotrain-sms_arr-52431123662
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Anwaarma/autotrain-sms_arr-52431123662", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anwaarma/autotrain-sms_arr-52431123662", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| 1,127 | [ … embedding vector truncated … ] |
vvsotnikov/stablelm-7b-sft-v7-epoch-3-8bit | 2023-04-25T15:18:43.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"sft",
"en",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | vvsotnikov | null | null | vvsotnikov/stablelm-7b-sft-v7-epoch-3-8bit | 0 | 2 | transformers | 2023-04-25T15:00:45 | ---
license: apache-2.0
language:
- en
tags:
- sft
pipeline_tag: text-generation
widget:
- text: <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
- text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- text: <|prompter|>Write a story about future of AI development<|endoftext|><|assistant|>
---
# Open-Assistant StableLM-7B SFT-7 Model 8-bit
Quantized version of https://huggingface.co/OpenAssistant/stablelm-7b-sft-v7-epoch-3 | 510 | [ … embedding vector truncated … ] |
sarahflan/distilbert-base-uncased-finetuned-sprint-meds | 2023-04-27T09:40:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | sarahflan | null | null | sarahflan/distilbert-base-uncased-finetuned-sprint-meds | 0 | 2 | transformers | 2023-04-25T16:05:49 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sprint-meds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sprint-meds
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8121
- Accuracy: 0.8843
- F1: 0.8655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4894 | 1.0 | 21 | 0.9107 | 0.8612 | 0.8354 |
| 0.4471 | 2.0 | 42 | 0.8964 | 0.8630 | 0.8363 |
| 0.4086 | 3.0 | 63 | 0.8796 | 0.8612 | 0.8348 |
| 0.3651 | 4.0 | 84 | 0.8581 | 0.8665 | 0.8415 |
| 0.3365 | 5.0 | 105 | 0.8546 | 0.8683 | 0.8429 |
| 0.3241 | 6.0 | 126 | 0.8448 | 0.8701 | 0.8467 |
| 0.299 | 7.0 | 147 | 0.8372 | 0.8683 | 0.8461 |
| 0.2498 | 8.0 | 168 | 0.8340 | 0.8737 | 0.8500 |
| 0.2579 | 9.0 | 189 | 0.8199 | 0.8737 | 0.8498 |
| 0.2526 | 10.0 | 210 | 0.8191 | 0.8772 | 0.8549 |
| 0.2243 | 11.0 | 231 | 0.8227 | 0.8719 | 0.8476 |
| 0.1888 | 12.0 | 252 | 0.8254 | 0.8719 | 0.8489 |
| 0.2159 | 13.0 | 273 | 0.8163 | 0.8772 | 0.8541 |
| 0.1845 | 14.0 | 294 | 0.8117 | 0.8754 | 0.8533 |
| 0.1774 | 15.0 | 315 | 0.8107 | 0.8772 | 0.8529 |
| 0.1503 | 16.0 | 336 | 0.8109 | 0.8790 | 0.8589 |
| 0.1565 | 17.0 | 357 | 0.8141 | 0.8772 | 0.8533 |
| 0.1539 | 18.0 | 378 | 0.8174 | 0.8772 | 0.8556 |
| 0.1393 | 19.0 | 399 | 0.8132 | 0.8790 | 0.8587 |
| 0.1279 | 20.0 | 420 | 0.8171 | 0.8826 | 0.8602 |
| 0.1231 | 21.0 | 441 | 0.8134 | 0.8808 | 0.8603 |
| 0.119 | 22.0 | 462 | 0.8132 | 0.8843 | 0.8628 |
| 0.1058 | 23.0 | 483 | 0.8043 | 0.8826 | 0.8631 |
| 0.1106 | 24.0 | 504 | 0.8159 | 0.8808 | 0.8596 |
| 0.1036 | 25.0 | 525 | 0.8090 | 0.8826 | 0.8612 |
| 0.0895 | 26.0 | 546 | 0.8093 | 0.8879 | 0.8666 |
| 0.1001 | 27.0 | 567 | 0.8121 | 0.8843 | 0.8636 |
| 0.0956 | 28.0 | 588 | 0.8113 | 0.8808 | 0.8609 |
| 0.0954 | 29.0 | 609 | 0.8099 | 0.8790 | 0.8581 |
| 0.0856 | 30.0 | 630 | 0.8169 | 0.8826 | 0.8616 |
| 0.0819 | 31.0 | 651 | 0.8204 | 0.8790 | 0.8590 |
| 0.0888 | 32.0 | 672 | 0.8125 | 0.8826 | 0.8644 |
| 0.0806 | 33.0 | 693 | 0.8144 | 0.8826 | 0.8628 |
| 0.0836 | 34.0 | 714 | 0.8153 | 0.8790 | 0.8583 |
| 0.0832 | 35.0 | 735 | 0.8139 | 0.8843 | 0.8644 |
| 0.0719 | 36.0 | 756 | 0.8134 | 0.8826 | 0.8623 |
| 0.0843 | 37.0 | 777 | 0.8141 | 0.8826 | 0.8637 |
| 0.0768 | 38.0 | 798 | 0.8157 | 0.8826 | 0.8616 |
| 0.0765 | 39.0 | 819 | 0.8183 | 0.8808 | 0.8621 |
| 0.0685 | 40.0 | 840 | 0.8139 | 0.8808 | 0.8628 |
| 0.0696 | 41.0 | 861 | 0.8149 | 0.8808 | 0.8631 |
| 0.0747 | 42.0 | 882 | 0.8144 | 0.8843 | 0.8655 |
| 0.0709 | 43.0 | 903 | 0.8136 | 0.8843 | 0.8655 |
| 0.0666 | 44.0 | 924 | 0.8140 | 0.8843 | 0.8661 |
| 0.071 | 45.0 | 945 | 0.8123 | 0.8808 | 0.8634 |
| 0.0682 | 46.0 | 966 | 0.8137 | 0.8843 | 0.8661 |
| 0.0743 | 47.0 | 987 | 0.8119 | 0.8843 | 0.8661 |
| 0.069 | 48.0 | 1008 | 0.8113 | 0.8843 | 0.8661 |
| 0.0624 | 49.0 | 1029 | 0.8119 | 0.8843 | 0.8655 |
| 0.0713 | 50.0 | 1050 | 0.8121 | 0.8843 | 0.8655 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 4,921 | [ … embedding vector truncated … ] |
thomasavare/distilroberta-ft-test1 | 2023-04-25T16:28:57.000Z | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | thomasavare | null | null | thomasavare/distilroberta-ft-test1 | 0 | 2 | transformers | 2023-04-25T16:28:46 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilroberta-ft-test1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilroberta-ft-test1
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,278 | [ … embedding vector truncated … ] |
NicholasSynovic/AutoTrain-LUC-COMP429-VEAA-Classification | 2023-07-28T14:54:54.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:NicholasSynovic/autotrain-data-luc-comp429-victorian-authorship-classification",
"license:agpl-3.0",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | NicholasSynovic | null | null | NicholasSynovic/AutoTrain-LUC-COMP429-VEAA-Classification | 0 | 2 | transformers | 2023-04-25T17:37:57 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: I love AutoTrain
datasets:
- NicholasSynovic/autotrain-data-luc-comp429-victorian-authorship-classification
co2_eq_emissions:
emissions: 4.1359796275464005
license: agpl-3.0
metrics:
- accuracy
- f1
- recall
- bertscore
pipeline_tag: text-classification
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 52472123757
- CO2 Emissions (in grams): 4.1360
This model reuses and extends a BERT model trained on [NicholasSynovic/Free-AutoTrain-VEAA](https://huggingface.co/datasets/NicholasSynovic/Free-AutoTrain-VEAA).
## Validation Metrics
- Loss: 1.425
- Accuracy: 0.636
- Macro F1: 0.504
- Micro F1: 0.636
- Weighted F1: 0.624
- Macro Precision: 0.523
- Micro Precision: 0.636
- Weighted Precision: 0.630
- Macro Recall: 0.508
- Micro Recall: 0.636
- Weighted Recall: 0.636
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/NicholasSynovic/autotrain-luc-comp429-victorian-authorship-classification-52472123757
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("NicholasSynovic/AutoTrain-LUC-COMP429-VEAA-Classification", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("NicholasSynovic/autotrain-luc-comp429-victorian-authorship-classification-52472123757", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| 1,692 | [ … embedding vector truncated … ] |
andyP/sf-it-xxl-submission_20230425_175048 | 2023-04-25T17:52:09.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | andyP | null | null | andyP/sf-it-xxl-submission_20230425_175048 | 0 | 2 | sentence-transformers | 2023-04-25T17:51:27 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# andyP/sf-it-xxl-submission_20230425_175048
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
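The contrastive step in (1) works by sampling sentence pairs from the few labeled examples: pairs sharing a label become positives, pairs with different labels become negatives. A rough, dependency-free sketch of that pair generation (the actual SetFit implementation differs in sampling details):

```python
from itertools import combinations

def contrastive_pairs(examples):
    """examples: list of (text, label) tuples.
    Returns (text_a, text_b, target) triples where target 1.0 marks a
    same-label (positive) pair and 0.0 a different-label (negative) pair."""
    pairs = []
    for (text_a, label_a), (text_b, label_b) in combinations(examples, 2):
        pairs.append((text_a, text_b, 1.0 if label_a == label_b else 0.0))
    return pairs

few_shot = [("great movie", "pos"), ("loved it", "pos"), ("terrible", "neg")]
pairs = contrastive_pairs(few_shot)
```

These triples are what the Sentence Transformer is fine-tuned on before the classification head in (2) is trained on its pooled embeddings.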
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("andyP/sf-it-xxl-submission_20230425_175048")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,573 | [
[
-0.00787353515625,
-0.05877685546875,
0.027740478515625,
-0.01325225830078125,
-0.008636474609375,
-0.019561767578125,
-0.0184326171875,
-0.0159454345703125,
-0.0008592605590820312,
0.035003662109375,
-0.041748046875,
-0.0202789306640625,
-0.039154052734375,
... |
pascalhuerten/t5-small-finetuned-esco-summarisation | 2023-04-26T22:15:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | pascalhuerten | null | null | pascalhuerten/t5-small-finetuned-esco-summarisation | 0 | 2 | transformers | 2023-04-25T18:49:04 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-esco-summarisation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-esco-summarisation
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- epoch: 2.0
- eval_accuracy: 0.0694
- eval_loss: 1.8363
- eval_runtime: 209.841
- eval_samples_per_second: 10.436
- eval_steps_per_second: 2.612
- step: 7614
## Model description
More information needed
## Intended uses & limitations
More information needed
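A hypothetical usage sketch for this checkpoint (the `summarization` pipeline task, the generation lengths, and the word-level truncation limit are assumptions, not documented by this card):

```python
def truncate_words(text: str, limit: int = 400) -> str:
    """Naive whitespace truncation so long ESCO-style descriptions fit
    t5-small's 512-token context window (a word count is only a rough
    proxy for the tokenizer's token count)."""
    return " ".join(text.split()[:limit])

def summarise(text: str) -> str:
    """Summarise a skill description with the fine-tuned checkpoint.
    Requires network access and `transformers`; defined here, not executed."""
    from transformers import pipeline

    summarizer = pipeline(
        "summarization",
        model="pascalhuerten/t5-small-finetuned-esco-summarisation",
    )
    return summarizer(truncate_words(text), max_length=64, min_length=8)[0]["summary_text"]
```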
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,336 | [
[
-0.034515380859375,
-0.038299560546875,
0.0216064453125,
0.0034198760986328125,
-0.034515380859375,
-0.03607177734375,
-0.0158538818359375,
-0.0275726318359375,
0.01397705078125,
0.02386474609375,
-0.0477294921875,
-0.049530029296875,
-0.043548583984375,
0.0... |
dgalik/finetuning-distilbert-hate-speech-score-model-all-samples-250423 | 2023-04-25T20:06:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | dgalik | null | null | dgalik/finetuning-distilbert-hate-speech-score-model-all-samples-250423 | 0 | 2 | transformers | 2023-04-25T19:10:43 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuning-distilbert-hate-speech-score-model-all-samples-250423
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-hate-speech-score-model-all-samples-250423
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2979
- Mse: 0.2979
- Rmse: 0.5458
- Mae: 0.2755
- R2: 0.9475
## Model description
More information needed
## Intended uses & limitations
More information needed
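A hypothetical inference sketch (the single-logit regression head is an assumption inferred from the MSE/RMSE metrics above, and `rmse` below only illustrates the headline metric):

```python
def rmse(preds, targets):
    """Root-mean-squared error, the headline regression metric reported above."""
    assert preds and len(preds) == len(targets)
    return (sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)) ** 0.5

def score_comment(text: str) -> float:
    """Predict a continuous hate-speech score for one comment.
    Requires network access and `transformers`; defined here, not executed."""
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    repo = "dgalik/finetuning-distilbert-hate-speech-score-model-all-samples-250423"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForSequenceClassification.from_pretrained(repo)
    with torch.no_grad():
        logits = model(**tokenizer(text, return_tensors="pt")).logits
    return logits.squeeze().item()
```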
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,265 | [
[
-0.04217529296875,
-0.06268310546875,
0.0132904052734375,
0.01297760009765625,
-0.02606201171875,
-0.0156402587890625,
-0.0141754150390625,
-0.01104736328125,
0.0022411346435546875,
0.0103302001953125,
-0.04132080078125,
-0.049530029296875,
-0.0771484375,
-0... |
andyP/sf-it-submission_20230425_191818 | 2023-04-25T19:23:28.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | andyP | null | null | andyP/sf-it-submission_20230425_191818 | 0 | 2 | sentence-transformers | 2023-04-25T19:22:48 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# andyP/sf-it-submission_20230425_191818
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("andyP/sf-it-submission_20230425_191818")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,565 | [
[
-0.0074615478515625,
-0.058837890625,
0.028167724609375,
-0.0144805908203125,
-0.00896453857421875,
-0.021240234375,
-0.0190582275390625,
-0.014801025390625,
-0.0007171630859375,
0.035003662109375,
-0.04193115234375,
-0.0200347900390625,
-0.039093017578125,
... |
mrfakename/tweetgpt-15k-v1 | 2023-11-02T23:38:33.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"dataset:tweet_eval",
"dataset:other",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | mrfakename | null | null | mrfakename/tweetgpt-15k-v1 | 0 | 2 | transformers | 2023-04-25T19:47:37 | ---
license: other
license_name: omlv1
license_link: https://github.com/fakerybakery/OpenModelLicense
datasets:
- tweet_eval
- other
---
[tweetgpt-5k-v1](https://huggingface.co/mrfakename/tweetgpt-5k-v1) - **tweetgpt-15k-v1**
# tweetgpt-15k-v1
## usage
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="mrfakename/tweetgpt-15k-v1")
res = pipe("", max_length=140, num_return_sequences=5)
for r in res:
print(r['generated_text'])
```
## training data
tweetgpt-15k-v1 was trained on 15k tweets (that's why it's called tweetgpt-**15k**-v1). a [5k version is available](https://huggingface.co/mrfakename/tweetgpt-5k-v1).
## disclaimer
this model may output offensive content. offensive content is not endorsed nor condoned by the creator of this model. use at your own risk!
## license
share your model under the permissive open model license, a new approach to ai model licensing. stop trying to fit software licenses to your ai model. does "source code" apply to ai models? don't worry about that - just use the open model license!
Open Model License 1.0 (OMLv1)
https://github.com/fakerybakery/OpenModelLicense
Copyright 2023 mrfakename
Permission is hereby granted, free of charge, to any person obtaining a copy of this model and associated documentation files (the "Model"), to deal in the Model without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Model, and to permit persons to whom the Model is furnished to do so, subject to the following conditions:
* The above copyright notice and this permission notice shall be included in all copies of the model in all formats, including quantized models and models in different formats from the original.
THE MODEL IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE MODEL OR THE USE OR OTHER DEALINGS IN THE MODEL. | 2,255 | [
[
-0.001705169677734375,
-0.038055419921875,
0.021209716796875,
0.032989501953125,
-0.032135009765625,
-0.02191162109375,
-0.00609588623046875,
-0.0268096923828125,
0.0018548965454101562,
0.037933349609375,
-0.053741455078125,
-0.04241943359375,
-0.051544189453125... |
emmaenglish/finetuned_distilbert | 2023-04-25T23:22:00.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | emmaenglish | null | null | emmaenglish/finetuned_distilbert | 0 | 2 | transformers | 2023-04-25T19:57:44 | Model trained on data from the Kaggle Toxic Comment Classification Challenge.
Training was done on a GPU in Google Colab. | 119 | [
[
-0.00862884521484375,
-0.05645751953125,
0.020416259765625,
-0.006267547607421875,
0.000042319297790527344,
-0.0018749237060546875,
0.00955963134765625,
-0.0240325927734375,
-0.0196533203125,
0.038848876953125,
-0.04827880859375,
-0.01922607421875,
-0.0329284667... |
dgalik/finetuning-distilbert-hate-speech-score-model-all-samples-dropout005-250423 | 2023-04-25T21:47:41.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | dgalik | null | null | dgalik/finetuning-distilbert-hate-speech-score-model-all-samples-dropout005-250423 | 0 | 2 | transformers | 2023-04-25T20:39:30 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuning-distilbert-hate-speech-score-model-all-samples-dropout005-250423
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-hate-speech-score-model-all-samples-dropout005-250423
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2752
- Mse: 0.2752
- Rmse: 0.5246
- Mae: 0.2421
- R2: 0.9515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,287 | [
[
-0.0428466796875,
-0.06109619140625,
0.014373779296875,
0.01213836669921875,
-0.0250396728515625,
-0.016845703125,
-0.0142669677734375,
-0.0092620849609375,
0.0015363693237304688,
0.010009765625,
-0.04437255859375,
-0.050079345703125,
-0.07586669921875,
-0.0... |
MITCriticalData/Sentinel-2_Resnet50V2_VariationalAutoencoder_RGB_full_Colombia_Dataset | 2023-04-26T13:50:20.000Z | [
"keras",
"region:us"
] | null | MITCriticalData | null | null | MITCriticalData/Sentinel-2_Resnet50V2_VariationalAutoencoder_RGB_full_Colombia_Dataset | 0 | 2 | keras | 2023-04-25T21:14:25 | ---
library_name: keras
---
## Model description
Variational autoencoder trained to compress information from Sentinel-2 satellite images, using ResNet50 V2 as the encoder backbone to extract features.
The latent space of the model has 1024 neurons, which can be used to generate embeddings from the Sentinel-2 satellite images.
The model was trained on Sentinel-2 bands 2, 3 and 4 (Blue, Green and Red) over the 81 municipalities of Colombia with the most dengue cases.
The input shape of the model is (224, 224, 3). To extract features, remove the last layer.
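A hypothetical feature-extraction sketch following the description above (loading via `huggingface_hub.from_pretrained_keras` and the 0–255 → 0–1 input scaling are assumptions about this checkpoint, not documented facts):

```python
import numpy as np

def preprocess(tile: np.ndarray) -> np.ndarray:
    """Scale an RGB Sentinel-2 tile into the model's (1, 224, 224, 3) float
    input. The 0-255 -> 0-1 scaling is an assumed preprocessing step."""
    return (tile.astype("float32") / 255.0).reshape(1, 224, 224, 3)

def build_feature_extractor():
    """Load the VAE from the Hub and expose the 1024-d latent by dropping the
    final layer, as described above. Requires network access, TensorFlow and
    `huggingface_hub`; defined here, not executed."""
    import tensorflow as tf
    from huggingface_hub import from_pretrained_keras

    vae = from_pretrained_keras(
        "MITCriticalData/Sentinel-2_Resnet50V2_VariationalAutoencoder_RGB_full_Colombia_Dataset"
    )
    # Re-wire the network to stop at the penultimate layer (the latent space).
    return tf.keras.Model(inputs=vae.input, outputs=vae.layers[-2].output)
```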
## Intended uses & limitations
The model was trained with images of the 81 cities in Colombia with the most dengue cases; it may require fine-tuning or retraining to transfer to other contexts, such as other countries or continents.
## Training and evaluation data
The model was trained on Sentinel-2 RGB satellite images of 81 different cities in Colombia, using an asymmetric variational autoencoder. Images likely to introduce noise, such as all-black tiles, were filtered out prior to training.
The dataset was split into train and test using 80% for train and 20% to test.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 9.999999747378752e-05 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
| 1,630 | [
[
-0.04217529296875,
-0.0197601318359375,
0.004058837890625,
-0.00202178955078125,
-0.0258331298828125,
-0.001537322998046875,
0.01050567626953125,
-0.0235748291015625,
0.0198211669921875,
0.018310546875,
-0.04669189453125,
-0.036956787109375,
-0.064453125,
-0... |
khadija267/distilbert-base-uncased-finetuned-clinc | 2023-04-26T20:02:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | khadija267 | null | null | khadija267/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-04-25T23:46:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7754
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
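A hypothetical inference sketch (the `text-classification` pipeline usage and the `top_k` helper are assumptions; the card itself documents no usage):

```python
def top_k(label_scores: dict, k: int = 3):
    """Return the k highest-scoring intents, handy for inspecting near-misses
    among the 150 CLINC intents plus the out-of-scope class."""
    return sorted(label_scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

def detect_intent(utterance: str):
    """Classify one utterance with the fine-tuned checkpoint.
    Requires network access and `transformers`; defined here, not executed."""
    from transformers import pipeline

    clf = pipeline(
        "text-classification",
        model="khadija267/distilbert-base-uncased-finetuned-clinc",
    )
    return clf(utterance)[0]
```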
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2893 | 1.0 | 318 | 3.2831 | 0.7397 |
| 2.6289 | 2.0 | 636 | 1.8731 | 0.8345 |
| 1.5481 | 3.0 | 954 | 1.1580 | 0.89 |
| 1.0137 | 4.0 | 1272 | 0.8584 | 0.9077 |
| 0.7969 | 5.0 | 1590 | 0.7754 | 0.9161 |
### Framework versions
- Transformers 4.11.3
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,889 | [
[
-0.03472900390625,
-0.040069580078125,
0.011688232421875,
0.00557708740234375,
-0.028411865234375,
-0.0254364013671875,
-0.01345062255859375,
-0.00772857666015625,
0.0023212432861328125,
0.021484375,
-0.047149658203125,
-0.0489501953125,
-0.05755615234375,
-... |
khadija267/distilbert-base-uncased-distilled-clinc | 2023-04-26T20:25:28.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | khadija267 | null | null | khadija267/distilbert-base-uncased-distilled-clinc | 0 | 2 | transformers | 2023-04-26T00:36:05 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.947741935483871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2830
- Accuracy: 0.9477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.8723 | 1.0 | 318 | 2.8941 | 0.7461 |
| 2.2155 | 2.0 | 636 | 1.4516 | 0.8613 |
| 1.0985 | 3.0 | 954 | 0.7466 | 0.9152 |
| 0.5635 | 4.0 | 1272 | 0.4707 | 0.9358 |
| 0.3294 | 5.0 | 1590 | 0.3628 | 0.9429 |
| 0.221 | 6.0 | 1908 | 0.3173 | 0.9439 |
| 0.1671 | 7.0 | 2226 | 0.2968 | 0.9477 |
| 0.14 | 8.0 | 2544 | 0.2876 | 0.9484 |
| 0.1263 | 9.0 | 2862 | 0.2838 | 0.9471 |
| 0.1189 | 10.0 | 3180 | 0.2830 | 0.9477 |
### Framework versions
- Transformers 4.11.3
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.10.3
| 2,199 | [
[
-0.035675048828125,
-0.038604736328125,
0.0161895751953125,
0.005157470703125,
-0.021728515625,
-0.0163726806640625,
-0.0082244873046875,
-0.0031681060791015625,
0.0109100341796875,
0.02197265625,
-0.042999267578125,
-0.049102783203125,
-0.06085205078125,
-0... |
catalpa/codecapybara-4bit-128g-gptq | 2023-04-26T07:52:28.000Z | [
"transformers",
"llama",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | catalpa | null | null | catalpa/codecapybara-4bit-128g-gptq | 4 | 2 | transformers | 2023-04-26T01:07:23 | Based on https://huggingface.co/Fsoft-AIC/CodeCapybara
Using https://github.com/qwopqwop200/GPTQ-for-LLaMa triton branch
```sh
python llama.py CodeCapybara/ c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors codecapybara-4bit-128g-gptq.safetensors
```
[
-0.01442718505859375,
-0.0245819091796875,
0.0182647705078125,
0.043701171875,
-0.0149993896484375,
0.038177490234375,
0.0234375,
-0.026611328125,
0.026641845703125,
0.022796630859375,
-0.009063720703125,
-0.038238525390625,
-0.029510498046875,
0.01400756835... |
AlekseyKorshuk/chatml-test | 2023-04-26T02:46:27.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | AlekseyKorshuk | null | null | AlekseyKorshuk/chatml-test | 0 | 2 | transformers | 2023-04-26T01:09:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: chatml-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chatml-test
This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5098
- Accuracy: 0.7709
- Entropy: 0.4833
- Samples: 715
- Perplexity: 1.6649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 99
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Entropy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|
| 0.4749 | 1.0 | 1730 | 0.5098 | 0.7709 | 0.4833 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0-rc1
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,583 | [
[
-0.0274200439453125,
-0.05767822265625,
0.0015287399291992188,
0.0074615478515625,
-0.0279083251953125,
-0.0335693359375,
-0.0194854736328125,
-0.021728515625,
0.0145416259765625,
0.00853729248046875,
-0.047210693359375,
-0.036956787109375,
-0.043792724609375,
... |
jap2/bert-base-sst-2 | 2023-04-26T11:29:33.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jap2 | null | null | jap2/bert-base-sst-2 | 0 | 2 | transformers | 2023-04-26T01:26:53 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert-base-sst-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.930045871559633
- name: F1
type: f1
value: 0.9299971705127952
- name: Precision
type: precision
value: 0.9302394783826914
- name: Recall
type: recall
value: 0.9298749684263703
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-sst-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4216
- Accuracy: 0.9300
- F1: 0.9300
- Precision: 0.9302
- Recall: 0.9299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 160
- eval_batch_size: 160
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2366 | 1.0 | 105 | 0.2193 | 0.9117 | 0.9115 | 0.9139 | 0.9111 |
| 0.1104 | 2.0 | 210 | 0.2174 | 0.9243 | 0.9243 | 0.9243 | 0.9243 |
| 0.0685 | 2.99 | 315 | 0.2441 | 0.9186 | 0.9185 | 0.9186 | 0.9185 |
| 0.0476 | 4.0 | 421 | 0.2524 | 0.9232 | 0.9232 | 0.9233 | 0.9234 |
| 0.0319 | 5.0 | 526 | 0.2832 | 0.9220 | 0.9219 | 0.9226 | 0.9217 |
| 0.0227 | 6.0 | 631 | 0.3093 | 0.9289 | 0.9289 | 0.9289 | 0.9289 |
| 0.0169 | 6.99 | 736 | 0.3755 | 0.9209 | 0.9209 | 0.9208 | 0.9210 |
| 0.0112 | 8.0 | 842 | 0.3793 | 0.9220 | 0.9219 | 0.9234 | 0.9215 |
| 0.0079 | 9.0 | 947 | 0.3980 | 0.9255 | 0.9254 | 0.9255 | 0.9254 |
| 0.007 | 9.98 | 1050 | 0.4216 | 0.9300 | 0.9300 | 0.9302 | 0.9299 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,911 | [
[
-0.033660888671875,
-0.04705810546875,
0.012603759765625,
0.00762176513671875,
-0.01776123046875,
-0.00971221923828125,
-0.005886077880859375,
-0.01070404052734375,
0.033660888671875,
0.0186767578125,
-0.05279541015625,
-0.044097900390625,
-0.053741455078125,
... |
Sleoruiz/roberta-base-fine-tuned-text-classification-pesos-fixed | 2023-04-26T04:26:11.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Sleoruiz | null | null | Sleoruiz/roberta-base-fine-tuned-text-classification-pesos-fixed | 0 | 2 | transformers | 2023-04-26T01:44:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-fine-tuned-text-classification-pesos-fixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-fine-tuned-text-classification-pesos-fixed
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.28.1
- Pytorch 1.12.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,110 | [
[
-0.02056884765625,
-0.0615234375,
0.0242156982421875,
0.01239776611328125,
-0.031280517578125,
-0.0188446044921875,
-0.0242919921875,
-0.02099609375,
0.00777435302734375,
0.03961181640625,
-0.038665771484375,
-0.044677734375,
-0.058074951171875,
-0.000504493... |
andyqin18/test-finetuned | 2023-04-28T02:52:53.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | andyqin18 | null | null | andyqin18/test-finetuned | 0 | 2 | transformers | 2023-04-26T03:21:24 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-finetuned
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0881 | 1.0 | 500 | 0.0550 |
| 0.0452 | 2.0 | 1000 | 0.0503 |
| 0.0313 | 3.0 | 1500 | 0.0527 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 1,332 | [
[
-0.042449951171875,
-0.05511474609375,
0.01183319091796875,
0.01287841796875,
-0.030731201171875,
-0.03839111328125,
-0.01800537109375,
-0.0083465576171875,
0.00316619873046875,
0.0278167724609375,
-0.06365966796875,
-0.041595458984375,
-0.0423583984375,
-0.... |
andyqin18/finetuned-bert-uncased | 2023-04-30T05:34:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | andyqin18 | null | null | andyqin18/finetuned-bert-uncased | 0 | 2 | transformers | 2023-04-26T04:06:07 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuned-bert-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model description
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on this [Kaggle dataset](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge).
It achieves the following results on the evaluation set:
- Loss: 0.0507
## Intended uses
The model is intended to detect six types of toxicity.
It takes a comment as a string and predicts the probability of each of the six toxicity labels (as a float between 0 and 1).
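A minimal inference sketch (the label names and order follow the Jigsaw dataset's columns, and the sigmoid post-processing is the standard multi-label setup; both are assumptions about this checkpoint):

```python
import math

# The six labels from the Jigsaw Toxic Comment challenge, in the dataset's column order.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def logits_to_probs(logits):
    """Map raw multi-label logits to independent per-label probabilities (sigmoid)."""
    return [1.0 / (1.0 + math.exp(-x)) for x in logits]

def classify(text: str, repo: str = "andyqin18/finetuned-bert-uncased"):
    """Score one comment with the fine-tuned model.
    Requires network access and `transformers`; defined here, not executed."""
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForSequenceClassification.from_pretrained(repo)
    with torch.no_grad():
        logits = model(**tokenizer(text, return_tensors="pt")).logits[0].tolist()
    return dict(zip(LABELS, logits_to_probs(logits)))
```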
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0525 | 1.0 | 1250 | 0.0482 |
| 0.037 | 2.0 | 2500 | 0.0445 |
| 0.0275 | 3.0 | 3750 | 0.0489 |
| 0.0188 | 4.0 | 5000 | 0.0491 |
| 0.0146 | 5.0 | 6250 | 0.0507 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 1,573 | [
[
-0.0138702392578125,
-0.02838134765625,
0.028594970703125,
0.00493621826171875,
-0.0179443359375,
-0.025390625,
-0.002655029296875,
-0.02764892578125,
0.0087432861328125,
0.0330810546875,
-0.049346923828125,
-0.053314208984375,
-0.04833984375,
-0.00574493408... |
Sleoruiz/roberta-base-fine-tuned-text-classification-pesos-fixed-2 | 2023-04-26T12:00:10.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Sleoruiz | null | null | Sleoruiz/roberta-base-fine-tuned-text-classification-pesos-fixed-2 | 0 | 2 | transformers | 2023-04-26T05:08:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: roberta-base-fine-tuned-text-classification-pesos-fixed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-fine-tuned-text-classification-pesos-fixed-2
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0640
- F1: 0.5201
- Accuracy: 0.3302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:|
| 0.0626 | 1.0 | 6527 | 0.0628 | 0.3484 | 0.1556 |
| 0.0522 | 2.0 | 13054 | 0.0568 | 0.4758 | 0.2903 |
| 0.0389 | 3.0 | 19581 | 0.0581 | 0.5229 | 0.3294 |
| 0.0264 | 4.0 | 26108 | 0.0640 | 0.5201 | 0.3302 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.12.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,699 | [
[
-0.024627685546875,
-0.046417236328125,
0.0172119140625,
0.00815582275390625,
-0.0240631103515625,
-0.0218505859375,
-0.022003173828125,
-0.017578125,
0.00406646728515625,
0.03045654296875,
-0.04345703125,
-0.0496826171875,
-0.059661865234375,
-0.00992584228... |
quickman/mt5-base-finetuned-novel-chinese-to-spanish-v1 | 2023-04-26T07:44:22.000Z | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | quickman | null | null | quickman/mt5-base-finetuned-novel-chinese-to-spanish-v1 | 0 | 2 | transformers | 2023-04-26T05:39:46 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-base-finetuned-novel-chinese-to-spanish-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-novel-chinese-to-spanish-v1
This model is a fine-tuned version of [quickman/mt5-base-finetuned-chinese-to-spanish](https://huggingface.co/quickman/mt5-base-finetuned-chinese-to-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2288
- Score: 0.0063
- Counts: [609, 331, 205, 120]
- Totals: [838, 774, 710, 646]
- Precisions: [72.67303102625299, 42.76485788113695, 28.87323943661972, 18.575851393188856]
- Bp: 0.0002
- Sys Len: 838
- Ref Len: 8089
- Bleu: 0.0063
- Gen Len: 19.0
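The Bp, Precisions, and Bleu fields above are related by the standard BLEU formula: the geometric mean of the four n-gram precisions, scaled by the brevity penalty exp(1 − ref_len/sys_len) when the system output is shorter than the reference. The reported values can be reproduced from Counts, Totals, and the length statistics:

```python
import math

# Values taken directly from the evaluation results above.
counts = [609, 331, 205, 120]
totals = [838, 774, 710, 646]
sys_len, ref_len = 838, 8089

precisions = [c / t for c, t in zip(counts, totals)]            # n-gram precisions
geo_mean = math.exp(sum(math.log(p) for p in precisions) / 4)   # geometric mean
bp = 1.0 if sys_len > ref_len else math.exp(1 - ref_len / sys_len)  # brevity penalty
bleu = 100 * bp * geo_mean                                      # reported on a 0-100 scale
```

The tiny brevity penalty (sys_len 838 vs. ref_len 8089) is what drives the score down to 0.0063 despite reasonable n-gram precisions.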
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 40
- training_steps: 20000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Score | Counts | Totals | Precisions | Bp | Sys Len | Ref Len | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------------------:|:--------------------:|:-------------------------------------------------------------------------------:|:------:|:-------:|:-------:|:------:|:-------:|
| 2.7093 | 0.28 | 500 | 1.9080 | 0.0035 | [510, 185, 91, 37] | [848, 784, 720, 656] | [60.14150943396226, 23.596938775510203, 12.63888888888889, 5.640243902439025] | 0.0002 | 848 | 8089 | 0.0035 | 19.0 |
| 2.4994 | 0.55 | 1000 | 1.7520 | 0.0036 | [524, 199, 100, 46] | [842, 778, 714, 650] | [62.23277909738717, 25.57840616966581, 14.005602240896359, 7.076923076923077] | 0.0002 | 842 | 8089 | 0.0036 | 19.0 |
| 2.3427 | 0.83 | 1500 | 1.6632 | 0.0040 | [530, 212, 109, 53] | [844, 780, 716, 652] | [62.796208530805686, 27.17948717948718, 15.223463687150838, 8.128834355828221] | 0.0002 | 844 | 8089 | 0.0040 | 19.0 |
| 2.211 | 1.1 | 2000 | 1.5980 | 0.0050 | [548, 230, 123, 66] | [855, 791, 727, 663] | [64.09356725146199, 29.077117572692792, 16.91884456671252, 9.95475113122172] | 0.0002 | 855 | 8089 | 0.0050 | 19.0 |
| 2.1536 | 1.38 | 2500 | 1.5442 | 0.0053 | [552, 239, 137, 77] | [852, 788, 724, 660] | [64.78873239436619, 30.32994923857868, 18.92265193370166, 11.666666666666666] | 0.0002 | 852 | 8089 | 0.0053 | 19.0 |
| 2.079 | 1.66 | 3000 | 1.5088 | 0.0055 | [551, 244, 142, 84] | [854, 790, 726, 662] | [64.51990632318501, 30.88607594936709, 19.55922865013774, 12.688821752265861] | 0.0002 | 854 | 8089 | 0.0055 | 19.0 |
| 2.0374 | 1.93 | 3500 | 1.4768 | 0.0054 | [557, 259, 149, 83] | [849, 785, 721, 657] | [65.60659599528857, 32.99363057324841, 20.665742024965326, 12.633181126331811] | 0.0002 | 849 | 8089 | 0.0054 | 19.0 |
| 2.0064 | 2.21 | 4000 | 1.4418 | 0.0054 | [559, 266, 157, 91] | [844, 780, 716, 652] | [66.23222748815166, 34.1025641025641, 21.92737430167598, 13.957055214723926] | 0.0002 | 844 | 8089 | 0.0054 | 19.0 |
| 1.9536 | 2.48 | 4500 | 1.4194 | 0.0056 | [557, 260, 157, 87] | [849, 785, 721, 657] | [65.60659599528857, 33.12101910828026, 21.7753120665742, 13.242009132420092] | 0.0002 | 849 | 8089 | 0.0056 | 19.0 |
| 1.9436 | 2.76 | 5000 | 1.4030 | 0.0051 | [561, 262, 151, 85] | [841, 777, 713, 649] | [66.70630202140309, 33.71943371943372, 21.1781206171108, 13.097072419106317] | 0.0002 | 841 | 8089 | 0.0051 | 19.0 |
| 1.8939 | 3.04 | 5500 | 1.3826 | 0.0059 | [568, 277, 169, 99] | [848, 784, 720, 656] | [66.98113207547169, 35.33163265306123, 23.47222222222222, 15.091463414634147] | 0.0002 | 848 | 8089 | 0.0059 | 19.0 |
| 1.8497 | 3.31 | 6000 | 1.3649 | 0.0059 | [576, 288, 180, 107] | [843, 779, 715, 651] | [68.32740213523131, 36.97047496790757, 25.174825174825173, 16.43625192012289] | 0.0002 | 843 | 8089 | 0.0059 | 19.0 |
| 1.8177 | 3.59 | 6500 | 1.3575 | 0.0060 | [585, 285, 173, 98] | [847, 783, 719, 655] | [69.06729634002362, 36.39846743295019, 24.061196105702365, 14.961832061068701] | 0.0002 | 847 | 8089 | 0.0060 | 19.0 |
| 1.8368 | 3.86 | 7000 | 1.3428 | 0.0061 | [583, 285, 171, 95] | [851, 787, 723, 659] | [68.50763807285547, 36.213468869123254, 23.651452282157678, 14.41578148710167] | 0.0002 | 851 | 8089 | 0.0061 | 19.0 |
| 1.7906 | 4.14 | 7500 | 1.3295 | 0.0059 | [581, 284, 167, 88] | [850, 786, 722, 658] | [68.3529411764706, 36.1323155216285, 23.130193905817176, 13.373860182370821] | 0.0002 | 850 | 8089 | 0.0059 | 19.0 |
| 1.766 | 4.42 | 8000 | 1.3204 | 0.0057 | [575, 279, 161, 89] | [848, 784, 720, 656] | [67.80660377358491, 35.58673469387755, 22.36111111111111, 13.567073170731707] | 0.0002 | 848 | 8089 | 0.0057 | 19.0 |
| 1.7615 | 4.69 | 8500 | 1.3124 | 0.0061 | [590, 293, 176, 100] | [848, 784, 720, 656] | [69.5754716981132, 37.37244897959184, 24.444444444444443, 15.24390243902439] | 0.0002 | 848 | 8089 | 0.0061 | 19.0 |
| 1.7741 | 4.97 | 9000 | 1.3057 | 0.0062 | [590, 298, 180, 105] | [846, 782, 718, 654] | [69.73995271867612, 38.107416879795394, 25.069637883008358, 16.05504587155963] | 0.0002 | 846 | 8089 | 0.0062 | 19.0 |
| 1.7266 | 5.24 | 9500 | 1.2969 | 0.0062 | [592, 304, 182, 104] | [846, 782, 718, 654] | [69.97635933806147, 38.87468030690537, 25.348189415041784, 15.902140672782874] | 0.0002 | 846 | 8089 | 0.0062 | 19.0 |
| 1.7309 | 5.52 | 10000 | 1.2904 | 0.0054 | [580, 287, 166, 88] | [840, 776, 712, 648] | [69.04761904761905, 36.98453608247423, 23.314606741573034, 13.580246913580247] | 0.0002 | 840 | 8089 | 0.0054 | 19.0 |
| 1.6973 | 5.79 | 10500 | 1.2818 | 0.0059 | [591, 302, 179, 100] | [842, 778, 714, 650] | [70.19002375296913, 38.81748071979435, 25.07002801120448, 15.384615384615385] | 0.0002 | 842 | 8089 | 0.0059 | 19.0 |
| 1.6613 | 6.07 | 11000 | 1.2757 | 0.0058 | [596, 302, 185, 102] | [840, 776, 712, 648] | [70.95238095238095, 38.91752577319588, 25.98314606741573, 15.74074074074074] | 0.0002 | 840 | 8089 | 0.0058 | 19.0 |
| 1.6699 | 6.35 | 11500 | 1.2689 | 0.0063 | [600, 316, 197, 113] | [842, 778, 714, 650] | [71.25890736342043, 40.616966580976865, 27.591036414565828, 17.384615384615383] | 0.0002 | 842 | 8089 | 0.0063 | 19.0 |
| 1.6566 | 6.62 | 12000 | 1.2630 | 0.0064 | [610, 320, 194, 109] | [844, 780, 716, 652] | [72.27488151658768, 41.02564102564103, 27.094972067039105, 16.717791411042946] | 0.0002 | 844 | 8089 | 0.0064 | 19.0 |
| 1.6417 | 6.9 | 12500 | 1.2592 | 0.0065 | [606, 325, 201, 116] | [843, 779, 715, 651] | [71.88612099644128, 41.7201540436457, 28.111888111888113, 17.81874039938556] | 0.0002 | 843 | 8089 | 0.0065 | 19.0 |
| 1.6703 | 7.17 | 13000 | 1.2531 | 0.0072 | [616, 325, 198, 113] | [855, 791, 727, 663] | [72.046783625731, 41.08723135271808, 27.235213204951858, 17.043740573152338] | 0.0002 | 855 | 8089 | 0.0072 | 19.0 |
| 1.6283 | 7.45 | 13500 | 1.2508 | 0.0069 | [614, 334, 209, 122] | [846, 782, 718, 654] | [72.57683215130024, 42.710997442455245, 29.108635097493035, 18.654434250764528] | 0.0002 | 846 | 8089 | 0.0069 | 19.0 |
| 1.6139 | 7.73 | 14000 | 1.2485 | 0.0056 | [595, 315, 192, 111] | [833, 769, 705, 641] | [71.42857142857143, 40.96228868660598, 27.23404255319149, 17.316692667706707] | 0.0002 | 833 | 8089 | 0.0056 | 19.0 |
| 1.6203 | 8.0 | 14500 | 1.2425 | 0.0067 | [613, 329, 203, 119] | [845, 781, 717, 653] | [72.54437869822485, 42.12548015364917, 28.312412831241282, 18.223583460949463] | 0.0002 | 845 | 8089 | 0.0067 | 19.0 |
| 1.6289 | 8.28 | 15000 | 1.2414 | 0.0061 | [603, 322, 200, 119] | [837, 773, 709, 645] | [72.04301075268818, 41.65588615782665, 28.208744710860366, 18.449612403100776] | 0.0002 | 837 | 8089 | 0.0061 | 19.0 |
| 1.6301 | 8.55 | 15500 | 1.2386 | 0.0063 | [610, 328, 205, 123] | [838, 774, 710, 646] | [72.79236276849642, 42.377260981912144, 28.87323943661972, 19.040247678018577] | 0.0002 | 838 | 8089 | 0.0063 | 19.0 |
| 1.5992 | 8.83 | 16000 | 1.2379 | 0.0061 | [603, 323, 200, 119] | [837, 773, 709, 645] | [72.04301075268818, 41.785252263906855, 28.208744710860366, 18.449612403100776] | 0.0002 | 837 | 8089 | 0.0061 | 19.0 |
| 1.5984 | 9.11 | 16500 | 1.2367 | 0.0060 | [597, 317, 195, 116] | [837, 773, 709, 645] | [71.32616487455198, 41.00905562742562, 27.50352609308886, 17.984496124031008] | 0.0002 | 837 | 8089 | 0.0060 | 19.0 |
| 1.6026 | 9.38 | 17000 | 1.2336 | 0.0063 | [606, 326, 204, 124] | [838, 774, 710, 646] | [72.31503579952268, 42.11886304909561, 28.732394366197184, 19.195046439628484] | 0.0002 | 838 | 8089 | 0.0063 | 19.0 |
| 1.6059 | 9.66 | 17500 | 1.2319 | 0.0061 | [606, 330, 206, 123] | [835, 771, 707, 643] | [72.57485029940119, 42.80155642023346, 29.13719943422914, 19.12908242612753] | 0.0002 | 835 | 8089 | 0.0061 | 19.0 |
| 1.6227 | 9.93 | 18000 | 1.2294 | 0.0063 | [609, 334, 209, 122] | [837, 773, 709, 645] | [72.75985663082437, 43.20827943078913, 29.478138222849083, 18.914728682170544] | 0.0002 | 837 | 8089 | 0.0063 | 19.0 |
| 1.6031 | 10.21 | 18500 | 1.2300 | 0.0060 | [605, 328, 203, 120] | [835, 771, 707, 643] | [72.45508982035928, 42.54215304798962, 28.712871287128714, 18.662519440124417] | 0.0002 | 835 | 8089 | 0.0060 | 19.0 |
| 1.5746 | 10.49 | 19000 | 1.2301 | 0.0064 | [612, 335, 209, 123] | [838, 774, 710, 646] | [73.0310262529833, 43.281653746770026, 29.43661971830986, 19.040247678018577] | 0.0002 | 838 | 8089 | 0.0064 | 19.0 |
| 1.5689 | 10.76 | 19500 | 1.2288 | 0.0063 | [609, 331, 205, 120] | [838, 774, 710, 646] | [72.67303102625299, 42.76485788113695, 28.87323943661972, 18.575851393188856] | 0.0002 | 838 | 8089 | 0.0063 | 19.0 |
| 1.5928 | 11.04 | 20000 | 1.2288 | 0.0063 | [609, 331, 205, 120] | [838, 774, 710, 646] | [72.67303102625299, 42.76485788113695, 28.87323943661972, 18.575851393188856] | 0.0002 | 838 | 8089 | 0.0063 | 19.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 11,470 | [
[
-0.038177490234375,
-0.0230865478515625,
0.03466796875,
0.009033203125,
-0.0032215118408203125,
-0.01303863525390625,
0.00337982177734375,
-0.019439697265625,
0.061859130859375,
0.022735595703125,
-0.01971435546875,
-0.051116943359375,
-0.030609130859375,
0.... |
minoosh/AST2-finetuned-on-shEMO | 2023-04-26T11:23:19.000Z | [
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | audio-classification | minoosh | null | null | minoosh/AST2-finetuned-on-shEMO | 0 | 2 | transformers | 2023-04-26T06:01:50 | ---
license: bsd-3-clause
tags:
- generated_from_trainer
model-index:
- name: AST2-finetuned-on-shEMO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AST2-finetuned-on-shEMO
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.6144
- eval_accuracy: 0.7933
- eval_runtime: 36.3896
- eval_samples_per_second: 8.244
- eval_steps_per_second: 2.061
- epoch: 18.13
- step: 2719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,369 | [
[
-0.0338134765625,
-0.0391845703125,
0.0006308555603027344,
0.012664794921875,
-0.0312347412109375,
-0.031219482421875,
-0.02801513671875,
-0.016082763671875,
-0.011962890625,
0.0229644775390625,
-0.053375244140625,
-0.032867431640625,
-0.05035400390625,
-0.0... |
tihimsm/distilbert-base-uncased-finetuned-emotion | 2023-04-26T07:24:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | tihimsm | null | null | tihimsm/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-26T07:14:04 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9275012469136824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.9275
- F1: 0.9275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8326 | 1.0 | 250 | 0.3185 | 0.902 | 0.8983 |
| 0.2499 | 2.0 | 500 | 0.2201 | 0.9275 | 0.9275 |
### Framework versions
- Transformers 4.13.0
- Pytorch 2.0.0+cu118
- Datasets 2.8.0
- Tokenizers 0.10.3
| 1,803 | [
[
-0.037994384765625,
-0.04217529296875,
0.0159912109375,
0.021270751953125,
-0.025726318359375,
-0.020233154296875,
-0.01287841796875,
-0.00893402099609375,
0.01064300537109375,
0.009307861328125,
-0.0570068359375,
-0.052001953125,
-0.059295654296875,
-0.0087... |
Adoley/covid-tweets-sentiment-analysis | 2023-05-11T18:30:13.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Adoley | null | null | Adoley/covid-tweets-sentiment-analysis | 0 | 2 | transformers | 2023-04-26T08:09:45 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: covid-tweets-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-tweets-sentiment-analysis
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6091
- Rmse: 0.6632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7648 | 2.0 | 500 | 0.6091 | 0.6632 |
| 0.4033 | 4.0 | 1000 | 0.7708 | 0.6632 |
| 0.1444 | 6.0 | 1500 | 1.0443 | 0.6563 |
| 0.0625 | 8.0 | 2000 | 1.3089 | 0.6628 |
| 0.0324 | 10.0 | 2500 | 1.3869 | 0.6673 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,657 | [
[
-0.034332275390625,
-0.046661376953125,
-0.004638671875,
0.013885498046875,
-0.0251007080078125,
-0.0139312744140625,
-0.01110076904296875,
-0.00667572021484375,
0.0117950439453125,
0.00727081298828125,
-0.0643310546875,
-0.055267333984375,
-0.04779052734375,
... |
dgalik/finetuning-distilbert-hate-speech-score-model-all-samples-dropout005-epochs-10-260423 | 2023-04-26T10:07:00.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | dgalik | null | null | dgalik/finetuning-distilbert-hate-speech-score-model-all-samples-dropout005-epochs-10-260423 | 0 | 2 | transformers | 2023-04-26T08:51:59 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuning-distilbert-hate-speech-score-model-all-samples-dropout005-epochs-10-260423
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-hate-speech-score-model-all-samples-dropout005-epochs-10-260423
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2453
- Mse: 0.2453
- Rmse: 0.4953
- Mae: 0.2019
- R2: 0.9568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,308 | [
[
-0.04241943359375,
-0.06256103515625,
0.0138702392578125,
0.01355743408203125,
-0.0264892578125,
-0.0176849365234375,
-0.01485443115234375,
-0.01186370849609375,
0.0011091232299804688,
0.01084136962890625,
-0.04547119140625,
-0.0494384765625,
-0.07476806640625,
... |
nickovchinnikov/distilbert-base-uncased-finetuned-emotion | 2023-08-04T13:46:47.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | nickovchinnikov | null | null | nickovchinnikov/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-26T09:00:32 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9211896734909573
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2208
- Accuracy: 0.921
- F1: 0.9212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8355 | 1.0 | 250 | 0.3187 | 0.902 | 0.8991 |
| 0.2544 | 2.0 | 500 | 0.2208 | 0.921 | 0.9212 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1
- Datasets 2.11.0
- Tokenizers 0.11.0
| 1,841 | [
[
-0.038726806640625,
-0.041259765625,
0.0158843994140625,
0.0216064453125,
-0.02630615234375,
-0.0208740234375,
-0.0127105712890625,
-0.00836181640625,
0.010284423828125,
0.0089111328125,
-0.057159423828125,
-0.052337646484375,
-0.058929443359375,
-0.00820922... |
jorgefedzhedz/distilbert-base-uncased-finetuned-cola | 2023-04-26T12:33:20.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jorgefedzhedz | null | null | jorgefedzhedz/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-26T12:09:33 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.541934635424655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8224
- Matthews Correlation: 0.5419
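Matthews correlation, the metric reported above, summarizes a binary confusion matrix in a single value in [−1, 1], where 1 is perfect prediction and 0 is chance-level. A minimal reference implementation:

```python
import math

def matthews_corr(tp, tn, fp, fn):
    """Matthews correlation coefficient from a binary confusion matrix.

    Returns 0.0 when any marginal is empty (the conventional fallback
    for the otherwise-undefined 0/0 case).
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```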
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5231 | 1.0 | 535 | 0.5305 | 0.4003 |
| 0.348 | 2.0 | 1070 | 0.5013 | 0.4885 |
| 0.2353 | 3.0 | 1605 | 0.5578 | 0.5299 |
| 0.1846 | 4.0 | 2140 | 0.7711 | 0.5176 |
| 0.1363 | 5.0 | 2675 | 0.8224 | 0.5419 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,041 | [
[
-0.02325439453125,
-0.050018310546875,
0.01184844970703125,
0.0191497802734375,
-0.0216064453125,
-0.00848388671875,
-0.00522613525390625,
-0.0032215118408203125,
0.022552490234375,
0.01091766357421875,
-0.04559326171875,
-0.03558349609375,
-0.062225341796875,
... |
Dewa/dqn-SpaceInvadersNoFrameskip-v4-version-6 | 2023-04-26T13:39:48.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Dewa | null | null | Dewa/dqn-SpaceInvadersNoFrameskip-v4-version-6 | 0 | 2 | stable-baselines3 | 2023-04-26T12:50:28 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 274.50 +/- 31.50
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Dewa -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Dewa -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Dewa
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
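The `exploration_fraction` and `exploration_final_eps` entries above define DQN's epsilon-greedy schedule: epsilon anneals linearly over the first 10% of the 10M timesteps, then stays at 0.01. A sketch, assuming SB3's default initial epsilon of 1.0 (not listed in the hyperparameters):

```python
def epsilon(step, total_steps=10_000_000, fraction=0.1, final_eps=0.01, initial_eps=1.0):
    """Linear epsilon-greedy schedule matching the hyperparameters above.

    initial_eps=1.0 is Stable-Baselines3's default and is assumed here,
    since the printed hyperparameters do not include it.
    """
    progress = min(step / (fraction * total_steps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)
```

So exploration is fully annealed by step 1,000,000 and the agent acts greedily 99% of the time for the remaining 9M steps.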
| 2,679 | [
[
-0.041168212890625,
-0.037017822265625,
0.021820068359375,
0.0244903564453125,
-0.01024627685546875,
-0.01849365234375,
0.01297760009765625,
-0.01369476318359375,
0.01342010498046875,
0.0249176025390625,
-0.0701904296875,
-0.035675048828125,
-0.0269622802734375,... |
cafbr/distilbert-base-uncased-finetuned-emotion | 2023-05-06T23:59:37.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | cafbr | null | null | cafbr/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-26T13:19:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.939
- name: F1
type: f1
value: 0.9389480299119135
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1742
- Accuracy: 0.939
- F1: 0.9389
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4255 | 1.0 | 2000 | 0.2257 | 0.9245 | 0.9240 |
| 0.1494 | 2.0 | 4000 | 0.1742 | 0.939 | 0.9389 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.11.0+cu113
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,845 | [
[
-0.038055419921875,
-0.04248046875,
0.01389312744140625,
0.0224456787109375,
-0.026092529296875,
-0.0189056396484375,
-0.01312255859375,
-0.0086669921875,
0.0112152099609375,
0.00847625732421875,
-0.056854248046875,
-0.051239013671875,
-0.06005859375,
-0.007... |
Jamesonn/DialoGPT-small-jumin | 2023-04-26T14:58:43.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"Deep Story",
"Jumin",
"Dating Sim",
"conversational",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | conversational | Jamesonn | null | null | Jamesonn/DialoGPT-small-jumin | 0 | 2 | transformers | 2023-04-26T14:15:55 | ---
language:
- en
tags:
- Deep Story
- Jumin
- Dating Sim
- conversational
---
# JuminBot DialoGPT Model | 105 | [

[
-0.01065826416015625,
-0.054443359375,
0.03863525390625,
-0.0222015380859375,
-0.032470703125,
0.0132293701171875,
0.028411865234375,
0.0038394927978515625,
0.018463134765625,
0.0882568359375,
-0.0174102783203125,
-0.007076263427734375,
-0.0260772705078125,
... |
cornut/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-26T14:18:44.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | cornut | null | null | cornut/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-26T14:18:00 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 759.00 +/- 293.16
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga numcat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga numcat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga numcat
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
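The `exploration_fraction` / `exploration_final_eps` pair above implies a linear epsilon-greedy schedule. A small sketch of the convention SB3's DQN uses (assumed here: decay from 1.0 over the first 10% of the 10M timesteps, then hold at the final value):

```python
def epsilon(step, total_timesteps=10_000_000,
            exploration_fraction=0.1, final_eps=0.01, initial_eps=1.0):
    """Linear epsilon-greedy schedule: decay over the first
    `exploration_fraction` of training, then hold at `final_eps`."""
    progress = min(step / (exploration_fraction * total_timesteps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(epsilon(0))          # 1.0 at the start
print(epsilon(500_000))    # halfway through the decay window
print(epsilon(2_000_000))  # past the decay window: held at 0.01
```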
| 2,686 | [
[
-0.042205810546875,
-0.037200927734375,
0.0216827392578125,
0.02392578125,
-0.01081085205078125,
-0.0165252685546875,
0.01261138916015625,
-0.0126495361328125,
0.01323699951171875,
0.0249481201171875,
-0.070556640625,
-0.035491943359375,
-0.0268707275390625,
... |
ardaaras99/bert-base-uncased-finetuned-cola | 2023-05-01T16:35:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | ardaaras99 | null | null | ardaaras99/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-26T14:37:45 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5163776290121631
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4644
- Matthews Correlation: 0.5164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4915 | 1.0 | 535 | 0.4644 | 0.5164 |
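CoLA is evaluated with the Matthews correlation coefficient, which stays informative under the dataset's class imbalance. A sketch of the binary-case formula from confusion-matrix counts (the counts below are hypothetical, for illustration only):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Binary Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts:
print(matthews_corrcoef(tp=90, tn=40, fp=20, fn=10))  # ≈ 0.592
```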
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,722 | [
[
-0.0255126953125,
-0.0528564453125,
0.01136016845703125,
0.021514892578125,
-0.0277252197265625,
-0.0220184326171875,
-0.0194549560546875,
-0.01560211181640625,
0.0254974365234375,
0.016357421875,
-0.0498046875,
-0.031005859375,
-0.05084228515625,
-0.0209503... |
TheLastProgrammerStanding/distilbert-base-uncased-finetuned-clinc | 2023-04-27T15:32:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | TheLastProgrammerStanding | null | null | TheLastProgrammerStanding/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-04-26T15:10:43 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9180645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
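The linear `lr_scheduler_type` above decays the learning rate to zero over the 5 × 318 = 1590 training steps. A sketch of that behavior, assuming Hugging Face's linear schedule with zero warmup steps:

```python
def linear_lr(step, total_steps=5 * 318, base_lr=2e-5):
    """Linear learning-rate decay to zero over `total_steps` (no warmup)."""
    remaining = max(total_steps - step, 0)
    return base_lr * remaining / total_steps

print(linear_lr(0))     # 2e-05 at the start
print(linear_lr(795))   # half the base lr at the midpoint
print(linear_lr(1590))  # 0.0 at the end
```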
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,932 | [
[
-0.034423828125,
-0.04156494140625,
0.012451171875,
0.00667572021484375,
-0.027557373046875,
-0.025421142578125,
-0.01268768310546875,
-0.00937652587890625,
0.0024127960205078125,
0.02215576171875,
-0.04656982421875,
-0.047943115234375,
-0.0582275390625,
-0.... |
bright1/fine-tuned-distilbert-base-uncased | 2023-04-29T13:07:05.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | bright1 | null | null | bright1/fine-tuned-distilbert-base-uncased | 0 | 2 | transformers | 2023-04-26T16:12:58 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: fine-tuned-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5839
- eval_accuracy: 0.7735
- eval_f1score: 0.7659648935757575
- eval_runtime: 36.2627
- eval_samples_per_second: 55.153
- eval_steps_per_second: 6.894
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 399
- num_epochs: 2
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,341 | [
[
-0.0330810546875,
-0.0546875,
0.01442718505859375,
0.016265869140625,
-0.03350830078125,
-0.01007080078125,
-0.016845703125,
-0.009490966796875,
0.0007262229919433594,
0.0230255126953125,
-0.046112060546875,
-0.041748046875,
-0.056732177734375,
-0.0056571960... |
shahukareem/coral-classification | 2023-04-26T17:24:29.000Z | [
"transformers",
"pytorch",
"beit",
"image-classification",
"autotrain",
"vision",
"dataset:shahukareem/autotrain-data-coral-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | shahukareem | null | null | shahukareem/coral-classification | 0 | 2 | transformers | 2023-04-26T17:23:23 | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- shahukareem/autotrain-data-coral-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.589456647079595
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 52977124783
- CO2 Emissions (in grams): 0.5895
## Validation Metrics
- Loss: 0.175
- Accuracy: 0.949
- Macro F1: 0.950
- Micro F1: 0.949
- Weighted F1: 0.950
- Macro Precision: 0.957
- Micro Precision: 0.949
- Weighted Precision: 0.956
- Macro Recall: 0.948
- Micro Recall: 0.949
- Weighted Recall: 0.949 | 892 | [
[
-0.0213775634765625,
-0.00775909423828125,
0.013916015625,
0.0009450912475585938,
0.0037021636962890625,
0.01296234130859375,
0.00867462158203125,
-0.01605224609375,
-0.0175628662109375,
-0.0032196044921875,
-0.0330810546875,
-0.04364013671875,
-0.04556274414062... |
rafsankabir/Finetuned_NLI_TeacherModel | 2023-04-27T01:17:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:xnli_bn",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | rafsankabir | null | null | rafsankabir/Finetuned_NLI_TeacherModel | 0 | 2 | transformers | 2023-04-26T17:25:55 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xnli_bn
metrics:
- accuracy
model-index:
- name: rafsankabir/Finetuned_NLI_TeacherModel
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: xnli_bn
type: xnli_bn
config: xnli_bn
split: validation
args: xnli_bn
metrics:
- name: Accuracy
type: accuracy
value: 0.6878875568416701
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rafsankabir/Finetuned_NLI_TeacherModel
This model is a fine-tuned version of [sagorsarker/bangla-bert-base](https://huggingface.co/sagorsarker/bangla-bert-base) on the xnli_bn dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5831
- Accuracy: 0.6879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.8381 | 1.0 | 11921 | 0.7554 | 0.6701 |
| 0.7221 | 2.0 | 23842 | 0.7214 | 0.6833 |
| 0.643 | 3.0 | 35763 | 0.7164 | 0.6920 |
| 0.5614 | 4.0 | 47684 | 0.7536 | 0.6862 |
| 0.4761 | 5.0 | 59605 | 0.8104 | 0.6875 |
| 0.395 | 6.0 | 71526 | 0.9219 | 0.6891 |
| 0.3239 | 7.0 | 83447 | 1.0047 | 0.6833 |
| 0.2627 | 8.0 | 95368 | 1.0624 | 0.6900 |
| 0.2138 | 9.0 | 107289 | 1.2522 | 0.6714 |
| 0.1768 | 10.0 | 119210 | 1.2947 | 0.6763 |
| 0.1455 | 11.0 | 131131 | 1.4790 | 0.6838 |
| 0.1246 | 12.0 | 143052 | 1.5446 | 0.6813 |
| 0.1073 | 13.0 | 154973 | 1.7562 | 0.6742 |
| 0.094 | 14.0 | 166894 | 1.8442 | 0.6891 |
| 0.0822 | 15.0 | 178815 | 1.9902 | 0.6842 |
| 0.0707 | 16.0 | 190736 | 2.2021 | 0.6825 |
| 0.0589 | 17.0 | 202657 | 2.2803 | 0.6854 |
| 0.0497 | 18.0 | 214578 | 2.4199 | 0.6804 |
| 0.0407 | 19.0 | 226499 | 2.5014 | 0.6850 |
| 0.0337 | 20.0 | 238420 | 2.5831 | 0.6879 |
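In the table above the validation loss bottoms out at epoch 3 (0.7164) and climbs steadily afterwards while training loss keeps falling — a classic overfitting pattern. A small sketch of the early-stopping rule that would have caught this (the Trainer's own `EarlyStoppingCallback` works along similar lines):

```python
def best_epoch(val_losses, patience=3):
    """Return (epoch, loss) where training would stop under a simple
    early-stopping rule: halt after `patience` epochs without improvement.
    Epochs are 1-indexed to match the table."""
    best_i, best = 0, val_losses[0]
    for i, loss in enumerate(val_losses):
        if loss < best:
            best_i, best = i, loss
        elif i - best_i >= patience:
            break
    return best_i + 1, best

losses = [0.7554, 0.7214, 0.7164, 0.7536, 0.8104, 0.9219]
print(best_epoch(losses))  # (3, 0.7164)
```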
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.0+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
| 2,944 | [
[
-0.047088623046875,
-0.032745361328125,
0.00014150142669677734,
0.0031528472900390625,
-0.01214599609375,
-0.0137481689453125,
-0.00543975830078125,
-0.0105438232421875,
0.0304107666015625,
0.0258636474609375,
-0.052398681640625,
-0.042388916015625,
-0.044677734... |
jackoyoungblood/distilbert-base-uncased-finetuned-emotion | 2023-04-27T16:35:00.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jackoyoungblood | null | null | jackoyoungblood/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-26T17:57:29 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9235420558977202
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.9235
- F1: 0.9235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8044 | 1.0 | 250 | 0.3091 | 0.9025 | 0.8995 |
| 0.2429 | 2.0 | 500 | 0.2161 | 0.9235 | 0.9235 |
### Framework versions
- Transformers 4.13.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.10.3
| 1,804 | [
[
-0.0380859375,
-0.041168212890625,
0.01482391357421875,
0.0217437744140625,
-0.0257415771484375,
-0.019500732421875,
-0.01245880126953125,
-0.00801849365234375,
0.01043701171875,
0.007965087890625,
-0.05615234375,
-0.05126953125,
-0.060089111328125,
-0.00868... |
LecJackS/distilbert-base-uncased-finetuned-emotion | 2023-04-26T18:36:36.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | LecJackS | null | null | LecJackS/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-26T18:23:29 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9244610483889744
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2193
- Accuracy: 0.9245
- F1: 0.9245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8598 | 1.0 | 250 | 0.3274 | 0.9005 | 0.8966 |
| 0.2584 | 2.0 | 500 | 0.2193 | 0.9245 | 0.9245 |
### Framework versions
- Transformers 4.13.0
- Pytorch 2.0.0+cu118
- Datasets 2.8.0
- Tokenizers 0.10.3
| 1,803 | [
[
-0.0379638671875,
-0.041656494140625,
0.01508331298828125,
0.021484375,
-0.0259857177734375,
-0.0194244384765625,
-0.0130462646484375,
-0.00899505615234375,
0.01030731201171875,
0.00830841064453125,
-0.056182861328125,
-0.05133056640625,
-0.06048583984375,
-... |
htriedman/wiki-sparql-models | 2023-05-01T15:41:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | htriedman | null | null | htriedman/wiki-sparql-models | 1 | 2 | transformers | 2023-04-26T19:27:09 | ---
tags:
- generated_from_trainer
model-index:
- name: wiki-sparql-models
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wiki-sparql-models
This model is a fine-tuned version of [htriedman/wiki-sparql-models](https://huggingface.co/htriedman/wiki-sparql-models) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0189
- Rouge2 Precision: 0.8846
- Rouge2 Recall: 0.1611
- Rouge2 Fmeasure: 0.2648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.0303 | 1.0 | 55180 | 0.0258 | 0.8688 | 0.1586 | 0.2605 |
| 0.0231 | 2.0 | 110360 | 0.0218 | 0.8776 | 0.1597 | 0.2625 |
| 0.02 | 3.0 | 165540 | 0.0201 | 0.8821 | 0.1607 | 0.2641 |
| 0.0164 | 4.0 | 220720 | 0.0192 | 0.8842 | 0.1611 | 0.2646 |
| 0.0175 | 5.0 | 275900 | 0.0189 | 0.8846 | 0.1611 | 0.2648 |
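The Rouge2 precision/recall/F-measure columns above score clipped bigram overlap between generated and reference sequences. A bare-bones sketch of the standard definition (real implementations add tokenization, stemming, etc.; the SPARQL-ish token strings below are hypothetical):

```python
from collections import Counter

def bigrams(tokens):
    return Counter(zip(tokens, tokens[1:]))

def rouge2(candidate, reference):
    """ROUGE-2 precision, recall, and F1 from clipped bigram overlap."""
    cand, ref = bigrams(candidate), bigrams(reference)
    overlap = sum((cand & ref).values())  # Counter & Counter clips to min counts
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

cand = "SELECT ?x WHERE { ?x wdt:P31 wd:Q5 }".split()
ref = "SELECT ?x WHERE { ?x wdt:P31 wd:Q146 }".split()
p, r, f = rouge2(cand, ref)
print(round(p, 3), round(r, 3), round(f, 3))  # 5 of 7 bigrams overlap
```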
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,922 | [
[
-0.0379638671875,
-0.04541015625,
0.005809783935546875,
0.00804901123046875,
-0.0145263671875,
-0.02972412109375,
0.00829315185546875,
-0.0095062255859375,
0.0191497802734375,
0.05340576171875,
-0.056884765625,
-0.048248291015625,
-0.044464111328125,
-0.0022... |
amitrajitbh1/distilroberta-base-finetuned-teen-2 | 2023-04-26T20:49:20.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | amitrajitbh1 | null | null | amitrajitbh1/distilroberta-base-finetuned-teen-2 | 0 | 2 | transformers | 2023-04-26T20:14:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-teen-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-teen-2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5736 | 1.0 | 157 | 3.3554 |
| 3.1559 | 2.0 | 314 | 3.1532 |
| 3.0252 | 3.0 | 471 | 3.0850 |
| 2.858 | 4.0 | 628 | 2.9401 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,487 | [
[
-0.03411865234375,
-0.051544189453125,
0.0096435546875,
0.017486572265625,
-0.0269775390625,
-0.01922607421875,
-0.00868988037109375,
-0.0011234283447265625,
0.0014629364013671875,
0.019622802734375,
-0.058624267578125,
-0.040313720703125,
-0.0579833984375,
... |
bostiadm/distilbert-base-uncased-finetuned-clinc | 2023-04-26T22:53:41.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | bostiadm | null | null | bostiadm/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-04-26T20:18:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 1,614 | [
[
-0.03521728515625,
-0.0452880859375,
0.0148468017578125,
0.0126800537109375,
-0.0274200439453125,
-0.0212249755859375,
-0.00965118408203125,
-0.006221771240234375,
0.003421783447265625,
0.02032470703125,
-0.0504150390625,
-0.046539306640625,
-0.05889892578125,
... |
Abubakari/finetuned-Sentiment-classfication-BERT-model | 2023-04-26T23:13:14.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Abubakari | null | null | Abubakari/finetuned-Sentiment-classfication-BERT-model | 0 | 2 | transformers | 2023-04-26T20:44:31 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuned-Sentiment-classfication-BERT-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-Sentiment-classfication-BERT-model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6033
- Rmse: 0.6751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7547 | 2.0 | 500 | 0.6033 | 0.6751 |
| 0.3852 | 4.0 | 1000 | 0.7173 | 0.6777 |
| 0.1411 | 6.0 | 1500 | 1.0985 | 0.6977 |
| 0.0677 | 8.0 | 2000 | 1.2270 | 0.6552 |
| 0.0323 | 10.0 | 2500 | 1.3478 | 0.6567 |
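The Rmse column above treats the sentiment labels as numeric scores. A tiny sketch of the metric (the example scores are hypothetical, not drawn from the model's evaluation set):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-squared error between reference and predicted scores."""
    se = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    return math.sqrt(se / len(y_true))

# Hypothetical sentiment scores in {-1, 0, 1}:
print(rmse([1, 0, -1, 1], [1, 1, -1, 0]))  # sqrt(0.5) ≈ 0.707
```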
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,722 | [
[
-0.053131103515625,
-0.05224609375,
0.00782012939453125,
0.0143585205078125,
-0.0289306640625,
-0.0265045166015625,
-0.025238037109375,
-0.00252532958984375,
0.01480865478515625,
0.02294921875,
-0.06573486328125,
-0.04827880859375,
-0.051239013671875,
-0.024... |
cartesinus/iva_mt_wslot-m2m100_418M-en-es-massive_unfiltered | 2023-04-27T01:25:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"dataset:iva_mt_wslot-exp",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | cartesinus | null | null | cartesinus/iva_mt_wslot-m2m100_418M-en-es-massive_unfiltered | 0 | 2 | transformers | 2023-04-26T21:59:49 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- iva_mt_wslot-exp
metrics:
- bleu
model-index:
- name: iva_mt_wslot-m2m100_418M-en-es-massive_unfiltered
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: iva_mt_wslot-exp
type: iva_mt_wslot-exp
config: en-es
split: validation
args: en-es
metrics:
- name: Bleu
type: bleu
value: 67.6426
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iva_mt_wslot-m2m100_418M-en-es-massive_unfiltered
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the iva_mt_wslot-exp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0114
- Bleu: 67.6426
- Gen Len: 18.9134
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0129 | 1.0 | 2879 | 0.0118 | 65.4383 | 18.8697 |
| 0.009 | 2.0 | 5758 | 0.0109 | 66.6878 | 18.9331 |
| 0.0066 | 3.0 | 8637 | 0.0107 | 66.6143 | 18.8687 |
| 0.0049 | 4.0 | 11516 | 0.0108 | 66.9832 | 18.8067 |
| 0.0037 | 5.0 | 14395 | 0.0109 | 67.452 | 18.8598 |
| 0.0028 | 6.0 | 17274 | 0.0112 | 67.4281 | 18.9213 |
| 0.0023 | 7.0 | 20153 | 0.0114 | 67.6426 | 18.9134 |
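The Bleu column above combines clipped n-gram precisions with a brevity penalty. A bare-bones sentence-level sketch of that definition (sacreBLEU, the usual reference implementation, adds smoothing and tokenization; the Spanish example tokens are hypothetical):

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped 1..max_n-gram
    precisions, scaled by the brevity penalty, on a 0-100 scale."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum((cand & ref).values())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:  # no smoothing in this sketch
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return 100 * bp * geo_mean

cand = "enciende la luz de la cocina".split()
print(bleu(cand, cand))  # identical sentences score 100.0
```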
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,233 | [
[
-0.03948974609375,
-0.046356201171875,
0.01132965087890625,
0.007305145263671875,
-0.0238189697265625,
-0.0194244384765625,
-0.00316619873046875,
-0.0182342529296875,
0.0291748046875,
0.0262603759765625,
-0.06549072265625,
-0.045318603515625,
-0.048675537109375,... |
fieldms/distilbert-base-uncased-finetuned-clinc | 2023-04-27T04:35:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | fieldms | null | null | fieldms/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-04-27T00:09:03 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
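With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate presumably decays linearly from 2e-05 to 0 over the 1590 total optimization steps (5 epochs × 318 steps). A small sketch of that schedule shape (assuming the semantics of Hugging Face's linear scheduler):

```python
def linear_lr(step, total_steps, base_lr=2e-05, warmup_steps=0):
    """Linear warmup (if any) followed by linear decay to zero,
    mirroring the shape of the 'linear' scheduler."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))

# For this run: 5 epochs * 318 steps/epoch = 1590 total steps,
# so the rate is 2e-05 at step 0, 1e-05 at step 795, and 0 at step 1590.
```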
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 1,614 | [
[
-0.03521728515625,
-0.0452880859375,
0.0148468017578125,
0.0126800537109375,
-0.0274200439453125,
-0.0212249755859375,
-0.00965118408203125,
-0.006221771240234375,
0.003421783447265625,
0.02032470703125,
-0.0504150390625,
-0.046539306640625,
-0.05889892578125,
... |
fieldms/distilbert-base-uncased-distilled-clinc | 2023-04-27T04:47:35.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | fieldms | null | null | fieldms/distilbert-base-uncased-distilled-clinc | 0 | 2 | transformers | 2023-04-27T01:07:08 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2215
- Accuracy: 0.9461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
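The "distilled" in the model name indicates knowledge distillation from a teacher model, though the card does not name the teacher or the distillation settings used. As an illustration of the usual objective only, a per-example loss blending hard-label cross-entropy with a temperature-scaled KL term against the teacher might look like this; `alpha` and `T` are illustrative values, not taken from this run:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, alpha=0.5, T=2.0):
    """Blend of hard-label cross-entropy and KL divergence from the
    teacher's softened distribution. The T*T factor keeps gradient
    magnitudes comparable across temperatures."""
    p_student = softmax(student_logits)
    ce = -math.log(p_student[label])
    q_teacher = softmax(teacher_logits, T)
    q_student = softmax(student_logits, T)
    kl = sum(qt * math.log(qt / qs) for qt, qs in zip(q_teacher, q_student))
    return alpha * ce + (1 - alpha) * (T * T) * kl
```

When the student already matches the teacher, the KL term vanishes and only the weighted cross-entropy remains; any disagreement with the teacher adds to the loss.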
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.2602 | 0.7477 |
| 1.5392 | 2.0 | 636 | 0.6650 | 0.8719 |
| 1.5392 | 3.0 | 954 | 0.3990 | 0.9174 |
| 0.6086 | 4.0 | 1272 | 0.2905 | 0.9342 |
| 0.3055 | 5.0 | 1590 | 0.2497 | 0.9416 |
| 0.3055 | 6.0 | 1908 | 0.2313 | 0.9461 |
| 0.2219 | 7.0 | 2226 | 0.2233 | 0.9468 |
| 0.1962 | 8.0 | 2544 | 0.2215 | 0.9461 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 1,800 | [
[
-0.03399658203125,
-0.0421142578125,
0.01800537109375,
0.01107025146484375,
-0.0241241455078125,
-0.01385498046875,
-0.006328582763671875,
-0.0026187896728515625,
0.0084075927734375,
0.02020263671875,
-0.046295166015625,
-0.046844482421875,
-0.0621337890625,
... |
carolinetfls/plant-seedlings-model-mit | 2023-04-27T05:33:07.000Z | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | carolinetfls | null | null | carolinetfls/plant-seedlings-model-mit | 0 | 2 | transformers | 2023-04-27T01:57:11 | ---
license: other
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: plant-seedlings-model-mit
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9400785854616895
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plant-seedlings-model-mit
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2052
- Accuracy: 0.9401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
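The step and epoch columns in the training results also let one estimate the training-set size: the final logged row (step 10100 at epoch 19.84) implies about 509 optimizer steps per epoch, which at batch size 16 suggests roughly 8,100 training images. This is a back-of-envelope estimate only; the logged epoch values are rounded, and dataloader behavior such as dropping the last partial batch makes it approximate:

```python
def estimate_train_size(last_step, last_epoch, batch_size):
    """Back-of-envelope: recover an approximate dataset size from a
    Trainer log. last_epoch is rounded in the log, so treat the
    result as approximate."""
    steps_per_epoch = round(last_step / last_epoch)
    return steps_per_epoch * batch_size

# Final logged row of this run: step 10100 at epoch 19.84, batch size 16.
```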
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.459 | 0.2 | 100 | 2.4084 | 0.1424 |
| 1.7264 | 0.39 | 200 | 1.5604 | 0.4430 |
| 1.427 | 0.59 | 300 | 1.2719 | 0.5447 |
| 1.1796 | 0.79 | 400 | 0.9608 | 0.6469 |
| 0.6449 | 0.98 | 500 | 0.9086 | 0.6783 |
| 0.819 | 1.18 | 600 | 0.8235 | 0.7230 |
| 0.711 | 1.38 | 700 | 0.8286 | 0.7161 |
| 0.6829 | 1.57 | 800 | 0.6853 | 0.7829 |
| 0.7093 | 1.77 | 900 | 0.8823 | 0.7112 |
| 0.6265 | 1.96 | 1000 | 0.5434 | 0.8129 |
| 0.6062 | 2.16 | 1100 | 0.4865 | 0.8301 |
| 0.6318 | 2.36 | 1200 | 0.5239 | 0.8256 |
| 0.5195 | 2.55 | 1300 | 0.5997 | 0.7809 |
| 0.5847 | 2.75 | 1400 | 0.5282 | 0.8099 |
| 0.4684 | 2.95 | 1500 | 0.4301 | 0.8502 |
| 0.7026 | 3.14 | 1600 | 0.4628 | 0.8522 |
| 0.443 | 3.34 | 1700 | 0.4201 | 0.8492 |
| 0.6532 | 3.54 | 1800 | 0.4979 | 0.8330 |
| 0.5021 | 3.73 | 1900 | 0.5098 | 0.8202 |
| 0.4203 | 3.93 | 2000 | 0.4277 | 0.8512 |
| 0.4201 | 4.13 | 2100 | 0.4046 | 0.8649 |
| 0.397 | 4.32 | 2200 | 0.5747 | 0.8158 |
| 0.472 | 4.52 | 2300 | 0.5175 | 0.8237 |
| 0.5614 | 4.72 | 2400 | 0.4351 | 0.8443 |
| 0.3184 | 4.91 | 2500 | 0.3635 | 0.8787 |
| 0.3409 | 5.11 | 2600 | 0.4374 | 0.8571 |
| 0.3132 | 5.3 | 2700 | 0.3622 | 0.8767 |
| 0.3928 | 5.5 | 2800 | 0.3522 | 0.8797 |
| 0.4538 | 5.7 | 2900 | 0.3652 | 0.8718 |
| 0.5516 | 5.89 | 3000 | 0.4128 | 0.8689 |
| 0.4113 | 6.09 | 3100 | 0.3973 | 0.8649 |
| 0.3365 | 6.29 | 3200 | 0.4116 | 0.8635 |
| 0.4611 | 6.48 | 3300 | 0.3312 | 0.8846 |
| 0.312 | 6.68 | 3400 | 0.3888 | 0.8679 |
| 0.3811 | 6.88 | 3500 | 0.3388 | 0.8841 |
| 0.3711 | 7.07 | 3600 | 0.3300 | 0.8954 |
| 0.4593 | 7.27 | 3700 | 0.3491 | 0.8831 |
| 0.5211 | 7.47 | 3800 | 0.3682 | 0.8895 |
| 0.2319 | 7.66 | 3900 | 0.3326 | 0.8861 |
| 0.3811 | 7.86 | 4000 | 0.3407 | 0.8910 |
| 0.4044 | 8.06 | 4100 | 0.3076 | 0.9028 |
| 0.367 | 8.25 | 4200 | 0.3126 | 0.9023 |
| 0.3862 | 8.45 | 4300 | 0.3281 | 0.8954 |
| 0.2489 | 8.64 | 4400 | 0.3166 | 0.8929 |
| 0.3197 | 8.84 | 4500 | 0.3564 | 0.8802 |
| 0.3114 | 9.04 | 4600 | 0.2978 | 0.8969 |
| 0.3589 | 9.23 | 4700 | 0.3438 | 0.8895 |
| 0.3075 | 9.43 | 4800 | 0.2894 | 0.9082 |
| 0.3862 | 9.63 | 4900 | 0.2880 | 0.9047 |
| 0.3319 | 9.82 | 5000 | 0.3628 | 0.8915 |
| 0.3022 | 10.02 | 5100 | 0.2624 | 0.9145 |
| 0.2697 | 10.22 | 5200 | 0.3866 | 0.8851 |
| 0.218 | 10.41 | 5300 | 0.2632 | 0.9101 |
| 0.3331 | 10.61 | 5400 | 0.3117 | 0.9023 |
| 0.3043 | 10.81 | 5500 | 0.3604 | 0.8900 |
| 0.3105 | 11.0 | 5600 | 0.2847 | 0.9111 |
| 0.1758 | 11.2 | 5700 | 0.3144 | 0.9082 |
| 0.2081 | 11.39 | 5800 | 0.2898 | 0.9101 |
| 0.4005 | 11.59 | 5900 | 0.3138 | 0.8998 |
| 0.264 | 11.79 | 6000 | 0.2792 | 0.9136 |
| 0.2765 | 11.98 | 6100 | 0.3021 | 0.9003 |
| 0.2595 | 12.18 | 6200 | 0.2625 | 0.9091 |
| 0.2745 | 12.38 | 6300 | 0.3078 | 0.9057 |
| 0.2437 | 12.57 | 6400 | 0.2533 | 0.9194 |
| 0.3765 | 12.77 | 6500 | 0.2972 | 0.9008 |
| 0.2911 | 12.97 | 6600 | 0.2909 | 0.9096 |
| 0.2335 | 13.16 | 6700 | 0.2684 | 0.9136 |
| 0.3099 | 13.36 | 6800 | 0.3057 | 0.9086 |
| 0.2377 | 13.56 | 6900 | 0.2862 | 0.9140 |
| 0.3159 | 13.75 | 7000 | 0.2271 | 0.9273 |
| 0.1893 | 13.95 | 7100 | 0.2519 | 0.9244 |
| 0.1703 | 14.15 | 7200 | 0.2616 | 0.9209 |
| 0.2527 | 14.34 | 7300 | 0.2393 | 0.9293 |
| 0.3772 | 14.54 | 7400 | 0.2662 | 0.9160 |
| 0.2574 | 14.73 | 7500 | 0.2724 | 0.9155 |
| 0.1803 | 14.93 | 7600 | 0.2549 | 0.9199 |
| 0.2935 | 15.13 | 7700 | 0.2561 | 0.9185 |
| 0.2105 | 15.32 | 7800 | 0.2202 | 0.9244 |
| 0.2877 | 15.52 | 7900 | 0.2428 | 0.9234 |
| 0.2467 | 15.72 | 8000 | 0.2531 | 0.9229 |
| 0.2955 | 15.91 | 8100 | 0.3258 | 0.9194 |
| 0.3136 | 16.11 | 8200 | 0.2430 | 0.9263 |
| 0.2543 | 16.31 | 8300 | 0.2502 | 0.9204 |
| 0.161 | 16.5 | 8400 | 0.2241 | 0.9352 |
| 0.194 | 16.7 | 8500 | 0.2313 | 0.9298 |
| 0.1951 | 16.9 | 8600 | 0.2446 | 0.9219 |
| 0.2515 | 17.09 | 8700 | 0.2476 | 0.9224 |
| 0.1274 | 17.29 | 8800 | 0.2445 | 0.9273 |
| 0.3035 | 17.49 | 8900 | 0.2704 | 0.9239 |
| 0.2253 | 17.68 | 9000 | 0.2436 | 0.9332 |
| 0.0982 | 17.88 | 9100 | 0.2523 | 0.9327 |
| 0.1778 | 18.07 | 9200 | 0.2425 | 0.9322 |
| 0.1362 | 18.27 | 9300 | 0.2653 | 0.9219 |
| 0.2342 | 18.47 | 9400 | 0.2076 | 0.9401 |
| 0.2231 | 18.66 | 9500 | 0.2238 | 0.9361 |
| 0.2159 | 18.86 | 9600 | 0.2115 | 0.9357 |
| 0.1826 | 19.06 | 9700 | 0.2079 | 0.9332 |
| 0.2221 | 19.25 | 9800 | 0.2003 | 0.9366 |
| 0.136 | 19.45 | 9900 | 0.2170 | 0.9401 |
| 0.0959 | 19.65 | 10000 | 0.1891 | 0.9440 |
| 0.1525 | 19.84 | 10100 | 0.2052 | 0.9401 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 7,985 | [
[
-0.04376220703125,
-0.0374755859375,
0.0178070068359375,
0.01099395751953125,
-0.0010690689086914062,
0.0038242340087890625,
0.00894927978515625,
-0.0017728805541992188,
0.050201416015625,
0.024505615234375,
-0.0416259765625,
-0.041595458984375,
-0.0438232421875... |
clayygodd/distilbert-base-uncased-finetuned-clinc | 2023-04-27T05:54:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | clayygodd | null | null | clayygodd/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-04-27T03:32:14 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9180645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,932 | [
[
-0.034393310546875,
-0.04156494140625,
0.012420654296875,
0.00667572021484375,
-0.027557373046875,
-0.025390625,
-0.01267242431640625,
-0.00936126708984375,
0.0024089813232421875,
0.02215576171875,
-0.046539306640625,
-0.0479736328125,
-0.0582275390625,
-0.0... |
riho1710/distilbert-base-uncased-finetuned-emotion | 2023-05-06T14:54:42.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | riho1710 | null | null | riho1710/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-27T03:36:22 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240047123379981
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2239
- Accuracy: 0.924
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8403 | 1.0 | 250 | 0.3219 | 0.9085 | 0.9059 |
| 0.2549 | 2.0 | 500 | 0.2239 | 0.924 | 0.9240 |
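The F1 above is almost identical to the accuracy, which is what a support-weighted average of per-class F1 scores tends to produce on a reasonably balanced test set. The card does not state the averaging mode, so weighted averaging is an assumption in this minimal sketch:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted average of per-class F1 scores
    (one assumed averaging mode; 'macro' or 'micro' are alternatives)."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in sorted(set(y_true)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (support[c] / total) * f1
    return score
```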
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1
- Datasets 2.11.0
- Tokenizers 0.13.0.dev0
| 1,846 | [
[
-0.0382080078125,
-0.040679931640625,
0.0128326416015625,
0.022552490234375,
-0.02618408203125,
-0.0198974609375,
-0.012420654296875,
-0.00855255126953125,
0.01033782958984375,
0.00925445556640625,
-0.056793212890625,
-0.052703857421875,
-0.0595703125,
-0.00... |
pamelapaolacb/roberta-base-bne-jou-amazon_reviews_multi | 2023-04-27T03:54:14.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | pamelapaolacb | null | null | pamelapaolacb/roberta-base-bne-jou-amazon_reviews_multi | 0 | 2 | transformers | 2023-04-27T03:39:06 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-jou-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.93275
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-jou-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2195
- Accuracy: 0.9327
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1981 | 1.0 | 1250 | 0.1763 | 0.9325 |
| 0.106 | 2.0 | 2500 | 0.2195 | 0.9327 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,783 | [
[
-0.036651611328125,
-0.04730224609375,
0.01113128662109375,
0.01450347900390625,
-0.0300140380859375,
-0.029876708984375,
-0.01520538330078125,
-0.01806640625,
0.010650634765625,
0.0284423828125,
-0.049346923828125,
-0.045013427734375,
-0.053985595703125,
-0... |