modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
intanm/mlm_v1_20230327_fin_sa_90 | 2023-03-27T05:58:15.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | intanm | null | null | intanm/mlm_v1_20230327_fin_sa_90 | 0 | 2 | transformers | 2023-03-27T05:53:14 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mlm_v1_20230327_fin_sa_90
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlm_v1_20230327_fin_sa_90
This model is a fine-tuned version of [intanm/mlm-v1-fin-lm-20230327-001](https://huggingface.co/intanm/mlm-v1-fin-lm-20230327-001) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1439
- Accuracy: 0.9560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 92 | 0.1879 | 0.9396 |
| No log | 2.0 | 184 | 0.1439 | 0.9560 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,430 | [
[
-0.029815673828125,
-0.041259765625,
0.00843048095703125,
0.01953125,
-0.0216522216796875,
-0.033843994140625,
-0.006504058837890625,
-0.0224151611328125,
0.00909423828125,
0.041961669921875,
-0.059356689453125,
-0.04693603515625,
-0.047576904296875,
-0.0074... |
intanm/mlm_v1_20230327_fin_sa_80 | 2023-03-27T06:10:15.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | intanm | null | null | intanm/mlm_v1_20230327_fin_sa_80 | 0 | 2 | transformers | 2023-03-27T06:04:34 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mlm_v1_20230327_fin_sa_80
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlm_v1_20230327_fin_sa_80
This model is a fine-tuned version of [intanm/mlm-v1-fin-lm-20230327-001](https://huggingface.co/intanm/mlm-v1-fin-lm-20230327-001) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1673
- Accuracy: 0.9341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 82 | 0.1843 | 0.9451 |
| No log | 2.0 | 164 | 0.1673 | 0.9341 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,430 | [
[
-0.0301055908203125,
-0.041259765625,
0.0079803466796875,
0.0200347900390625,
-0.02276611328125,
-0.033782958984375,
-0.0066375732421875,
-0.0223541259765625,
0.0086822509765625,
0.042022705078125,
-0.058135986328125,
-0.046844482421875,
-0.0487060546875,
-0... |
JiaqiLee/robust-bert-jigsaw | 2023-03-28T08:06:28.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:jigsaw_toxicity_pred",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"region:us"
] | text-classification | JiaqiLee | null | null | JiaqiLee/robust-bert-jigsaw | 1 | 2 | transformers | 2023-03-27T06:07:02 | ---
license: bigscience-bloom-rail-1.0
datasets:
- jigsaw_toxicity_pred
language:
- en
metrics:
- accuracy
- f1
library_name: transformers
pipeline_tag: text-classification
---
## Model description
This model is a fine-tuned version of the [bert-base-uncased](https://huggingface.co/transformers/model_doc/bert.html) model for classifying toxic comments. \
The BERT model is fine-tuned using adversarial training to improve robustness against textual adversarial attacks.
## How to use
You can use the model with the following code.
```python
from transformers import BertForSequenceClassification, BertTokenizer, TextClassificationPipeline
model_path = "JiaqiLee/robust-bert-jigsaw"
tokenizer = BertTokenizer.from_pretrained(model_path)
model = BertForSequenceClassification.from_pretrained(model_path, num_labels=2)
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline("You're a fucking nerd."))
```
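The pipeline returns a list of dictionaries with `label` and `score` fields. As a small illustrative sketch (the label names `LABEL_0`/`LABEL_1` and the mapping of `LABEL_1` to "toxic" are assumptions, not confirmed by this card — check the model's `id2label` configuration), a helper that turns the raw output into a boolean decision:

```python
# Sketch: convert raw TextClassificationPipeline output into a toxic/not-toxic
# decision. The assumption that "LABEL_1" means toxic is hypothetical — verify
# against the model's id2label mapping before relying on it.
def is_toxic(pipeline_output, toxic_label="LABEL_1", threshold=0.5):
    """pipeline_output: list like [{"label": "LABEL_1", "score": 0.98}]."""
    top = pipeline_output[0]
    return top["label"] == toxic_label and top["score"] >= threshold

# Example with a mocked pipeline result:
print(is_toxic([{"label": "LABEL_1", "score": 0.98}]))  # True
print(is_toxic([{"label": "LABEL_0", "score": 0.91}]))  # False
```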
## Training data
The training data comes from this [Kaggle competition](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data). We use 90% of the `train.csv` data to train the model. \
We augment the original training data with adversarial examples generated by PWWS, TextBugger and TextFooler.
## Evaluation results
The model achieves 0.95 AUC on a held-out test set of 1,500 rows. | 1,337 | [
[
-0.018402099609375,
-0.062744140625,
0.00864410400390625,
-0.005268096923828125,
-0.022216796875,
-0.017181396484375,
-0.01448822021484375,
-0.0201873779296875,
0.00669097900390625,
0.044830322265625,
-0.041351318359375,
-0.0305938720703125,
-0.059783935546875,
... |
intanm/mlm_v1_20230327_fin_sa_70 | 2023-03-27T06:22:29.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | intanm | null | null | intanm/mlm_v1_20230327_fin_sa_70 | 0 | 2 | transformers | 2023-03-27T06:14:56 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mlm_v1_20230327_fin_sa_70
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlm_v1_20230327_fin_sa_70
This model is a fine-tuned version of [intanm/mlm-v1-fin-lm-20230327-001](https://huggingface.co/intanm/mlm-v1-fin-lm-20230327-001) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1737
- Accuracy: 0.9451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 72 | 0.2082 | 0.9286 |
| No log | 2.0 | 144 | 0.1737 | 0.9451 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,430 | [
[
-0.02996826171875,
-0.0411376953125,
0.007843017578125,
0.02032470703125,
-0.022918701171875,
-0.033843994140625,
-0.006534576416015625,
-0.0228271484375,
0.00925445556640625,
0.041534423828125,
-0.05853271484375,
-0.0472412109375,
-0.0477294921875,
-0.00702... |
junsor/whisper-small-aishell | 2023-03-27T10:16:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:aishell",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | junsor | null | null | junsor/whisper-small-aishell | 0 | 2 | transformers | 2023-03-27T06:19:15 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- aishell
metrics:
- wer
model-index:
- name: whisper-small-aishell
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: aishell
type: aishell
config: zh-cn
split: test
args: zh-cn
metrics:
- name: Wer
type: wer
value: 0.4067725752508361
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-aishell
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the aishell zh-cn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1770
- Wer: 0.4068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
- train data: aishell train split
- test data: aishell test split
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Wer    | Cer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 0.042         | 4.26  | 1000  | 0.1227          | 0.3990 |        |
| 0.0134        | 8.52  | 2000  | 0.1312          | 0.4004 |        |
| 0.0042        | 12.78 | 3000  | 0.1402          | 0.4027 | 0.051  |
| 0.0022        | 17.04 | 4000  | 0.1479          | 0.4045 |        |
| 0.001         | 21.3  | 5000  | 0.1568          | 0.4069 |        |
| 0.0007        | 25.56 | 6000  | 0.1568          | 0.3990 |        |
| 0.0004        | 29.82 | 7000  | 0.1644          | 0.4037 |        |
| 0.0003        | 34.08 | 8000  | 0.1697          | 0.4045 |        |
| 0.0002        | 38.34 | 9000  | 0.1751          | 0.4072 |        |
| 0.0002        | 42.6  | 10000 | 0.1770          | 0.4068 |        |
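The Wer column above is the word error rate. As a rough illustration only (real evaluations use libraries such as `jiwer` or `evaluate`, and Chinese ASR usually also reports character error rate), WER is the word-level edit distance divided by the number of reference words:

```python
# Sketch: word error rate as Levenshtein distance over words, divided by the
# reference length. Illustrative only — not the exact implementation used
# to produce the numbers in the table above.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the dog sat"))  # 0.333...
```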
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,450 | [
[
-0.034027099609375,
-0.036407470703125,
0.0018606185913085938,
-0.00212860107421875,
-0.0191802978515625,
-0.028564453125,
-0.0074310302734375,
-0.02081298828125,
0.016357421875,
0.0220947265625,
-0.05096435546875,
-0.048797607421875,
-0.046142578125,
-0.015... |
intanm/mlm_v1_20230327_fin_sa_60 | 2023-03-27T06:33:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | intanm | null | null | intanm/mlm_v1_20230327_fin_sa_60 | 0 | 2 | transformers | 2023-03-27T06:29:55 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mlm_v1_20230327_fin_sa_60
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlm_v1_20230327_fin_sa_60
This model is a fine-tuned version of [intanm/mlm-v1-fin-lm-20230327-001](https://huggingface.co/intanm/mlm-v1-fin-lm-20230327-001) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1673
- Accuracy: 0.9505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 62 | 0.1830 | 0.9451 |
| No log | 2.0 | 124 | 0.1673 | 0.9505 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,430 | [
[
-0.030029296875,
-0.040679931640625,
0.00675201416015625,
0.0190887451171875,
-0.022613525390625,
-0.033172607421875,
-0.006343841552734375,
-0.0222930908203125,
0.009674072265625,
0.041046142578125,
-0.05859375,
-0.047149658203125,
-0.048583984375,
-0.00704... |
intanm/mlm_v1_20230327_fin_sa_50 | 2023-03-27T06:45:24.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | intanm | null | null | intanm/mlm_v1_20230327_fin_sa_50 | 0 | 2 | transformers | 2023-03-27T06:39:09 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mlm_v1_20230327_fin_sa_50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlm_v1_20230327_fin_sa_50
This model is a fine-tuned version of [intanm/mlm-v1-fin-lm-20230327-001](https://huggingface.co/intanm/mlm-v1-fin-lm-20230327-001) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.9396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 51 | 0.2675 | 0.9121 |
| No log | 2.0 | 102 | 0.2202 | 0.9396 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,430 | [
[
-0.0310821533203125,
-0.0400390625,
0.006580352783203125,
0.0202484130859375,
-0.0224609375,
-0.03277587890625,
-0.006549835205078125,
-0.021942138671875,
0.00873565673828125,
0.040435791015625,
-0.058380126953125,
-0.04766845703125,
-0.048248291015625,
-0.0... |
intanm/mlm_v1_20230327_fin_sa_30 | 2023-03-27T07:11:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | intanm | null | null | intanm/mlm_v1_20230327_fin_sa_30 | 0 | 2 | transformers | 2023-03-27T07:06:15 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mlm_v1_20230327_fin_sa_30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlm_v1_20230327_fin_sa_30
This model is a fine-tuned version of [intanm/mlm-v1-fin-lm-20230327-001](https://huggingface.co/intanm/mlm-v1-fin-lm-20230327-001) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2220
- Accuracy: 0.9396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 31 | 0.3200 | 0.8956 |
| No log | 2.0 | 62 | 0.2220 | 0.9396 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,430 | [
[
-0.030487060546875,
-0.040496826171875,
0.006866455078125,
0.020751953125,
-0.022186279296875,
-0.03302001953125,
-0.0065765380859375,
-0.022003173828125,
0.0083465576171875,
0.04052734375,
-0.0582275390625,
-0.047332763671875,
-0.048736572265625,
-0.0065879... |
YSKartal/scibert_scivocab_uncased-finetuned-2-ref_disam | 2023-04-03T22:25:13.000Z | [
"transformers",
"tf",
"tensorboard",
"bert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | YSKartal | null | null | YSKartal/scibert_scivocab_uncased-finetuned-2-ref_disam | 0 | 2 | transformers | 2023-03-27T07:17:52 | ---
tags:
- generated_from_keras_callback
model-index:
- name: YSKartal/scibert_scivocab_uncased-finetuned-2-ref_disam
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# YSKartal/scibert_scivocab_uncased-finetuned-2-ref_disam
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.3345
- Validation Loss: 5.5243
- Train Accuracy: 0.1562
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16308, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 6.8503 | 6.9170 | 0.0323 | 0 |
| 5.7494 | 6.3086 | 0.0738 | 1 |
| 4.9365 | 5.8427 | 0.1206 | 2 |
| 4.3345 | 5.5243 | 0.1562 | 3 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,946 | [
[
-0.037567138671875,
-0.032318115234375,
0.0188751220703125,
0.0093536376953125,
-0.026947021484375,
-0.0172882080078125,
-0.00804901123046875,
-0.019744873046875,
0.0168304443359375,
0.00286102294921875,
-0.047882080078125,
-0.048675537109375,
-0.054473876953125... |
xinyixiuxiu/albert-xxlarge-v2-SST2-finetuned-epoch-2 | 2023-03-27T07:53:52.000Z | [
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | xinyixiuxiu | null | null | xinyixiuxiu/albert-xxlarge-v2-SST2-finetuned-epoch-2 | 0 | 2 | transformers | 2023-03-27T07:20:57 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-xxlarge-v2-SST2-finetuned-epoch-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-xxlarge-v2-SST2-finetuned-epoch-2
This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2721
- Train Accuracy: 0.8858
- Validation Loss: 0.1265
- Validation Accuracy: 0.9564
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2721 | 0.8858 | 0.1265 | 0.9564 | 0 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
| 1,493 | [
[
-0.04180908203125,
-0.033233642578125,
0.0238494873046875,
0.00910186767578125,
-0.031982421875,
-0.0384521484375,
-0.0145721435546875,
-0.0301513671875,
0.00986480712890625,
0.0199432373046875,
-0.04779052734375,
-0.038970947265625,
-0.055206298828125,
-0.0... |
zhezhang92/finetuning-sentiment-model-3000-samples | 2023-03-27T14:17:18.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | zhezhang92 | null | null | zhezhang92/finetuning-sentiment-model-3000-samples | 0 | 2 | transformers | 2023-03-27T08:40:55 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8566666666666667
- name: F1
type: f1
value: 0.8571428571428571
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7463
- Accuracy: 0.8567
- F1: 0.8571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.27.2
- Pytorch 1.12.1
- Datasets 2.10.1
- Tokenizers 0.11.0
| 1,558 | [
[
-0.05169677734375,
-0.044036865234375,
0.0130767822265625,
0.0178985595703125,
-0.04022216796875,
-0.01476287841796875,
-0.016693115234375,
0.006679534912109375,
0.0176544189453125,
0.020355224609375,
-0.06121826171875,
-0.04949951171875,
-0.0631103515625,
-... |
harshitaskh/test_trainer | 2023-03-27T11:02:41.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | harshitaskh | null | null | harshitaskh/test_trainer | 0 | 2 | transformers | 2023-03-27T10:05:30 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the Fakenews dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.0084 | 1.0 | 625 | 0.0007 | 1.0 | 1.0 |
| 0.0036 | 2.0 | 1250 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,413 | [
[
-0.034271240234375,
-0.047271728515625,
0.011444091796875,
0.020111083984375,
-0.0269622802734375,
-0.03118896484375,
-0.0149688720703125,
-0.0150604248046875,
0.01560211181640625,
0.017364501953125,
-0.056610107421875,
-0.03900146484375,
-0.04693603515625,
... |
bitextor/bicleaner-ai-full-en-fi | 2023-03-27T10:42:59.000Z | [
"transformers",
"tf",
"xlm-roberta",
"bicleaner-ai",
"en",
"fi",
"multilingual",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | bitextor | null | null | bitextor/bicleaner-ai-full-en-fi | 0 | 2 | transformers | 2023-03-27T10:37:43 | ---
language:
- en
- fi
- multilingual
license: cc-by-sa-4.0
tags:
- bicleaner-ai
tasks:
- text-classification
---
# Bicleaner AI full model for en-fi
Bicleaner AI is a tool that aims at detecting noisy sentence pairs in a parallel corpus. It
indicates the likelihood of a pair of sentences being mutual translations (with a value near 1) or not (with a value near 0).
Sentence pairs considered very noisy are scored with 0.
See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
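Downstream, these scores are typically used to filter a parallel corpus with a threshold. A minimal sketch of that filtering step (the `(source, target, score)` tuple format and the 0.5 cutoff are assumptions for illustration; the actual classifier writes tab-separated text with the score in the last column — see the repository):

```python
# Sketch: keep only sentence pairs whose bicleaner-ai score passes a cutoff.
# The tuple format and default threshold here are illustrative assumptions.
def filter_corpus(scored_pairs, threshold=0.5):
    return [(src, tgt) for src, tgt, score in scored_pairs if score >= threshold]

pairs = [
    ("Hello world", "Hei maailma", 0.93),  # likely mutual translations
    ("Hello world", "Osta nyt!", 0.02),    # noisy pair, scored near 0
]
print(filter_corpus(pairs))  # [('Hello world', 'Hei maailma')]
```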
| 554 | [
[
-0.03436279296875,
-0.0777587890625,
0.0269622802734375,
0.0185546875,
-0.0255126953125,
0.0120849609375,
-0.01465606689453125,
-0.04449462890625,
0.0240936279296875,
0.0264892578125,
-0.0267486572265625,
-0.0230560302734375,
-0.053070068359375,
0.0221557617... |
anqiyy/BERT-SA | 2023-04-05T08:38:15.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | anqiyy | null | null | anqiyy/BERT-SA | 0 | 2 | transformers | 2023-03-27T11:31:30 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BERT-SA
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT-SA
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.10.0
- Tokenizers 0.11.0
| 916 | [
[
-0.040435791015625,
-0.04559326171875,
0.017913818359375,
0.00351715087890625,
-0.052581787109375,
-0.0252532958984375,
-0.01335906982421875,
-0.02911376953125,
0.01413726806640625,
0.031829833984375,
-0.0526123046875,
-0.03924560546875,
-0.056671142578125,
... |
bitextor/bicleaner-ai-full-en-pl | 2023-03-27T11:55:00.000Z | [
"transformers",
"tf",
"xlm-roberta",
"bicleaner-ai",
"en",
"pl",
"multilingual",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | bitextor | null | null | bitextor/bicleaner-ai-full-en-pl | 0 | 2 | transformers | 2023-03-27T11:54:37 | ---
language:
- en
- pl
- multilingual
license: cc-by-sa-4.0
tags:
- bicleaner-ai
tasks:
- text-classification
---
# Bicleaner AI full model for en-pl
Bicleaner AI is a tool that aims at detecting noisy sentence pairs in a parallel corpus. It
indicates the likelihood of a pair of sentences being mutual translations (with a value near 1) or not (with a value near 0).
Sentence pairs considered very noisy are scored with 0.
See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
| 554 | [
[
-0.031707763671875,
-0.07379150390625,
0.028717041015625,
0.0220489501953125,
-0.029052734375,
0.01103973388671875,
-0.0157470703125,
-0.04266357421875,
0.02008056640625,
0.0277099609375,
-0.0246429443359375,
-0.0255279541015625,
-0.052337646484375,
0.022735... |
bitextor/bicleaner-ai-full-en-pt | 2023-03-27T11:56:37.000Z | [
"transformers",
"tf",
"xlm-roberta",
"bicleaner-ai",
"en",
"pt",
"multilingual",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | bitextor | null | null | bitextor/bicleaner-ai-full-en-pt | 0 | 2 | transformers | 2023-03-27T11:56:14 | ---
language:
- en
- pt
- multilingual
license: cc-by-sa-4.0
tags:
- bicleaner-ai
tasks:
- text-classification
---
# Bicleaner AI full model for en-pt
Bicleaner AI is a tool that aims at detecting noisy sentence pairs in a parallel corpus. It
indicates the likelihood of a pair of sentences being mutual translations (with a value near 1) or not (with a value near 0).
Sentence pairs considered very noisy are scored with 0.
See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
| 554 | [
[
-0.03045654296875,
-0.07464599609375,
0.0289764404296875,
0.0219573974609375,
-0.0301361083984375,
0.01084136962890625,
-0.01503753662109375,
-0.041839599609375,
0.0205078125,
0.025360107421875,
-0.0233001708984375,
-0.0267791748046875,
-0.056793212890625,
0... |
bitextor/bicleaner-ai-full-en-ro | 2023-03-27T11:58:10.000Z | [
"transformers",
"tf",
"xlm-roberta",
"bicleaner-ai",
"en",
"ro",
"multilingual",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | bitextor | null | null | bitextor/bicleaner-ai-full-en-ro | 0 | 2 | transformers | 2023-03-27T11:57:48 | ---
language:
- en
- ro
- multilingual
license: cc-by-sa-4.0
tags:
- bicleaner-ai
tasks:
- text-classification
---
# Bicleaner AI full model for en-ro
Bicleaner AI is a tool that aims at detecting noisy sentence pairs in a parallel corpus. It
indicates the likelihood of a pair of sentences being mutual translations (with a value near 1) or not (with a value near 0).
Sentence pairs considered very noisy are scored with 0.
See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
| 554 | [
[
-0.0279998779296875,
-0.07513427734375,
0.02685546875,
0.0203857421875,
-0.024993896484375,
0.01195526123046875,
-0.01629638671875,
-0.043975830078125,
0.021453857421875,
0.02734375,
-0.024200439453125,
-0.026641845703125,
-0.05303955078125,
0.02110290527343... |
bertin-project/bertin-alpaca-lora-7b | 2023-09-19T11:32:13.000Z | [
"peft",
"text-generation",
"es",
"dataset:bertin-project/alpaca-spanish",
"license:openrail",
"region:us"
] | text-generation | bertin-project | null | null | bertin-project/bertin-alpaca-lora-7b | 4 | 2 | peft | 2023-03-27T13:58:50 | ---
language:
- es
license: openrail
library_name: peft
datasets:
- bertin-project/alpaca-spanish
pipeline_tag: text-generation
base_model: decapoda-research/llama-7b-hf
---
# BERTIN-Alpaca-LoRA 7B
This is a Spanish adapter generated by fine-tuning LLaMA-7B on a [Spanish Alpaca](https://huggingface.co/datasets/bertin-project/alpaca-spanish) dataset.
## Usage
```python
from peft import PeftModel
from transformers import LLaMATokenizer, LLaMAForCausalLM, GenerationConfig
base_model = "decapoda-research/llama-7b-hf"
tokenizer = LLaMATokenizer.from_pretrained(base_model)
model = LLaMAForCausalLM.from_pretrained(
base_model,
load_in_8bit=True,
device_map="auto",
)
model = PeftModel.from_pretrained(model, "bertin-project/bertin-alpaca-lora-7b")
```
Until `PEFT` is fully supported in Hugging Face's pipelines, for generation we can either consolidate the LoRA weights into the LLaMA model weights, or use the adapter's `generate()` method. Remember that the prompt still needs the English template:
```python
# Generate responses
def generate(instruction, input=None):
if input:
prompt = f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. # noqa: E501
### Instruction:
{instruction}
### Input:
{input}
### Response:
"""
else:
prompt = f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. # noqa: E501
### Instruction:
{instruction}
### Response:
"""
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].cuda()
generation_output = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(temperature=0.2, top_p=0.75, num_beams=4),
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=256
)
for seq in generation_output.sequences:
output = tokenizer.decode(seq)
print(output.split("### Response:")[1].strip())
generate("Escribe un correo electrónico dando la bienvenida a un nuevo empleado llamado Manolo.")
# Estimado Manolo,
#
# ¡Bienvenido a nuestro equipo! Estamos muy contentos de que hayas decidido unirse a nosotros y estamos ansiosos por comenzar a trabajar juntos.
#
# Nos gustaría darte las gracias por tu interés en nuestro equipo y esperamos que tengas un gran tiempo aquí.
#
# Si tienes alguna pregunta o duda, no dudes en contactarnos.
#
# Atentamente,
# Equipo de [Nombre del Departamento]
```
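The English template used above can be factored into a small helper (a sketch for clarity; the function name is ours, not part of the released code):

```python
def build_prompt(instruction, input=None):
    """Build the English Alpaca prompt expected by the adapter."""
    if input:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(build_prompt("Escribe un haiku.").splitlines()[0])
```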
## Data
The dataset is a translation to Spanish of [alpaca_data_cleaned.json](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json) (a clean version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca)) using OpenAI's `gpt-3.5-turbo` model. We translated using a full-sample prompt instead of translating string by string, which resulted in more coherent `(instruction, input, output)` tuples and cost around $60.
**This dataset cannot be used to create models that compete in any way with OpenAI.**
## Finetuning
To fine-tune the LLaMA model we used the code available in [Alpaca Lora](https://github.com/tloen/alpaca-lora), which provides code to finetune the LLaMA model using PEFT from Hugging Face. We ran finetuning for 3 epochs with a sequence length of 512 and no gradient accumulation, on a single A6000 with 48GB VRAM, for 12 hours on top of LLaMA-7B.
## Example outputs
Stanford Alpaca:
```
Instruction: Inventa una excusa creativa para decir que no tengo que ir a la fiesta.
Response: I'm sorry, I can't make it to the party. I have to stay home and take care of my sick pet.
```
BERTIN Alpaca LoRA:
```
Instruction: Inventa una excusa creativa para decir que no tengo que ir a la fiesta.
Response: No puedo ir a la fiesta porque tengo que terminar un proyecto urgente.
```
Stanford Alpaca:
```
Instruction: ¿Cómo puedo comenzar una carrera como científico de datos? Responde con una lista de pasos a seguir.
Response: To begin a career as a data scientist, there are a few steps to take. First, you should have a bachelor's degree in a STEM-related field, such as computer science, mathematics, statistics, or engineering. You should also have experience with programming languages such as Python, R, or Java. Additionally, it is helpful to have knowledge of machine learning, data visualization, and data analytics. Finally, it is important to stay up to date with the latest trends and technologies in the field.
```
BERTIN Alpaca LoRA:
```
Instruction: ¿Cómo puedo comenzar una carrera como científico de datos? Responde con una lista de pasos a seguir.
Response: Para comenzar una carrera como científico de datos, hay varios pasos a seguir. Primero, obtener una licenciatura en matemáticas, informática o ciencias de la computación. Luego, obtener una maestría o doctorado en ciencias de la computación, informática o alguna otra área relacionada. Finalmente, obtener experiencia en el campo trabajando en proyectos de investigación o desarrollando aplicaciones.
```
You can test it using the eval notebook [here](https://colab.research.google.com/github/22-hours/cabrita/blob/main/notebooks/cabrita-lora.ipynb).
## References
- [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [BERTIN Alpaca](https://huggingface.co/datasets/bertin-project/alpaca-spanish)
- [Alpaca LoRA](https://github.com/tloen/alpaca-lora)
- [ChatGPT](https://openai.com/blog/chatgpt)
- [Hugging Face](https://huggingface.co/)
## Hardware Requirements
For training we used an Nvidia A6000 GPU with 48GB of VRAM. For evaluation, a T4 is sufficient. | 5,744 | [
[
-0.043304443359375,
-0.07672119140625,
0.0206298828125,
0.03338623046875,
-0.0194091796875,
-0.00418853759765625,
-0.005405426025390625,
-0.03973388671875,
0.04425048828125,
0.0229339599609375,
-0.042755126953125,
-0.04486083984375,
-0.04241943359375,
0.0226... |
evegarcianz/eega-embedding | 2023-03-27T14:33:41.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"dataset:embedding-data/sentence-compression",
"endpoints_compatible",
"region:us"
] | sentence-similarity | evegarcianz | null | null | evegarcianz/eega-embedding | 0 | 2 | sentence-transformers | 2023-03-27T14:33:34 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
datasets:
- embedding-data/sentence-compression
---
# evegarcianz/eega-embedding
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('evegarcianz/eega-embedding')
embeddings = model.encode(sentences)
print(embeddings)
```
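Since the model's architecture ends in a `Normalize()` module (our reading of the configuration shown under "Full Model Architecture" below), semantic similarity between two encoded sentences reduces to a dot product. A minimal NumPy sketch with stand-in vectors:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity; equals the plain dot product when inputs are unit-length."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for model.encode(...) outputs (real vectors are 384-dimensional).
emb_query = np.array([0.6, 0.8])
emb_doc = np.array([0.8, 0.6])
print(round(cosine_similarity(emb_query, emb_doc), 2))  # 0.96
```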
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=evegarcianz/eega-embedding)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
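The `Pooling` (mean over tokens) and `Normalize` stages above can be sketched in plain NumPy — an illustration of the configured behavior, not the library's internals:

```python
import numpy as np

def mean_pool_and_normalize(token_embeddings, attention_mask):
    """Mean-pool token embeddings over non-padding positions, then L2-normalize."""
    mask = np.asarray(attention_mask, float)[:, None]  # (seq_len, 1)
    summed = (np.asarray(token_embeddings, float) * mask).sum(axis=0)
    pooled = summed / mask.sum()
    return pooled / np.linalg.norm(pooled)

# Toy 2-dimensional token embeddings (the real model uses 384 dimensions).
tokens = np.array([[1.0, 0.0], [0.0, 1.0], [9.0, 9.0]])  # last row is padding
sentence_vec = mean_pool_and_normalize(tokens, [1, 1, 0])
print(np.round(sentence_vec, 3))  # [0.707 0.707]
```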
## Citing & Authors
<!--- Describe where people can find more information --> | 2,432 | [
[
-0.026275634765625,
-0.068115234375,
0.0325927734375,
0.0179290771484375,
-0.00670623779296875,
-0.0338134765625,
-0.0213775634765625,
-0.0014438629150390625,
0.0190582275390625,
0.024017333984375,
-0.054351806640625,
-0.061004638671875,
-0.03741455078125,
-... |
serkanBurakOrs/dqn-SpaceInvadersNoFrameskip-v4 | 2023-03-27T14:50:51.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | serkanBurakOrs | null | null | serkanBurakOrs/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-03-27T14:50:07 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 668.00 +/- 156.08
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga serkanBurakOrs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga serkanBurakOrs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga serkanBurakOrs
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 700000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,708 | [
[
-0.04107666015625,
-0.037322998046875,
0.02099609375,
0.0257415771484375,
-0.0112457275390625,
-0.0175018310546875,
0.01255035400390625,
-0.01361846923828125,
0.01265716552734375,
0.025238037109375,
-0.068359375,
-0.035888671875,
-0.0274200439453125,
-0.0021... |
shawmoon/EkattorBert-multilingual-finetuned-squad_v2 | 2023-03-28T10:31:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"bn",
"en",
"dataset:squad_v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | shawmoon | null | null | shawmoon/EkattorBert-multilingual-finetuned-squad_v2 | 3 | 2 | transformers | 2023-03-27T15:53:03 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: EkattorBert-multilingual-finetuned-squad_v2
results: []
language:
- bn
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EkattorBert-multilingual-finetuned-squad_v2
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0202 | 1.0 | 8258 | 0.9630 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2 | 1,377 | [
[
-0.0290374755859375,
-0.04046630859375,
0.00742340087890625,
0.0293731689453125,
-0.0271759033203125,
-0.01198577880859375,
-0.033050537109375,
-0.030914306640625,
0.002826690673828125,
0.0210418701171875,
-0.06329345703125,
-0.042999267578125,
-0.04058837890625... |
RyanGoslenko/FinBERT-Twitter-BTC | 2023-04-02T16:38:27.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | RyanGoslenko | null | null | RyanGoslenko/FinBERT-Twitter-BTC | 0 | 2 | transformers | 2023-03-27T16:02:45 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: FinBERT-Twitter-BTC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FinBERT-Twitter-BTC
This model is a fine-tuned version of [yiyanghkust/finbert-pretrain](https://huggingface.co/yiyanghkust/finbert-pretrain) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1871
- F1: 0.9589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2051 | 1.0 | 3556 | 0.1965 | 0.9378 |
| 0.1475 | 2.0 | 7112 | 0.1586 | 0.9527 |
| 0.1004 | 3.0 | 10668 | 0.1674 | 0.9572 |
| 0.0612 | 4.0 | 14224 | 0.1871 | 0.9589 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,494 | [
[
-0.039398193359375,
-0.048583984375,
0.01444244384765625,
-0.002201080322265625,
-0.035308837890625,
-0.0184478759765625,
-0.0101470947265625,
-0.0142974853515625,
0.01134490966796875,
0.034942626953125,
-0.05511474609375,
-0.05572509765625,
-0.04718017578125,
... |
ViditRaj/XLM_Roberta_Hindi_Ads_Classifier | 2023-03-27T17:22:05.000Z | [
"transformers",
"tf",
"xlm-roberta",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | ViditRaj | null | null | ViditRaj/XLM_Roberta_Hindi_Ads_Classifier | 0 | 2 | transformers | 2023-03-27T17:08:00 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: ViditRaj/XLM_Roberta_Hindi_Ads_Classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ViditRaj/XLM_Roberta_Hindi_Ads_Classifier
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3258
- Validation Loss: 0.2867
- Train Accuracy: 0.9149
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.3738 | 0.2117 | 0.9301 | 0 |
| 0.2323 | 0.1927 | 0.9347 | 1 |
| 0.2013 | 0.1739 | 0.9377 | 2 |
| 0.4551 | 0.5800 | 0.7219 | 3 |
| 0.3258 | 0.2867 | 0.9149 | 4 |
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,793 | [
[
-0.035614013671875,
-0.04547119140625,
0.016632080078125,
-0.0140533447265625,
-0.03021240234375,
-0.0220947265625,
-0.01378631591796875,
-0.019378662109375,
0.00461578369140625,
0.01434326171875,
-0.041900634765625,
-0.050567626953125,
-0.06707763671875,
-0... |
Cleighton071/autotrain-detection-for-product-location-44269111681 | 2023-03-27T17:50:11.000Z | [
"transformers",
"pytorch",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:Cleighton071/autotrain-data-detection-for-product-location",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Cleighton071 | null | null | Cleighton071/autotrain-detection-for-product-location-44269111681 | 0 | 2 | transformers | 2023-03-27T17:44:20 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Cleighton071/autotrain-data-detection-for-product-location
co2_eq_emissions:
emissions: 2.30199726014708
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 44269111681
- CO2 Emissions (in grams): 2.3020
## Validation Metrics
- Loss: 0.005
- Accuracy: 0.999
- Macro F1: 0.999
- Micro F1: 0.999
- Weighted F1: 0.999
- Macro Precision: 0.999
- Micro Precision: 0.999
- Weighted Precision: 0.999
- Macro Recall: 0.999
- Micro Recall: 0.999
- Weighted Recall: 0.999
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Cleighton071/autotrain-detection-for-product-location-44269111681
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Cleighton071/autotrain-detection-for-product-location-44269111681", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Cleighton071/autotrain-detection-for-product-location-44269111681", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,382 | [
[
-0.032684326171875,
-0.0225067138671875,
0.012451171875,
0.0081024169921875,
-0.00037670135498046875,
-0.002685546875,
-0.0018892288208007812,
-0.0189971923828125,
-0.006439208984375,
0.00534820556640625,
-0.051055908203125,
-0.037689208984375,
-0.05401611328125... |
ruanchaves/bert-base-portuguese-cased-assin-entailment | 2023-03-29T18:05:31.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"pt",
"dataset:assin",
"has_space",
"region:us"
] | text-classification | ruanchaves | null | null | ruanchaves/bert-base-portuguese-cased-assin-entailment | 0 | 2 | transformers | 2023-03-27T18:09:12 | ---
inference: false
language: pt
datasets:
- assin
---
# BERTimbau base for Recognizing Textual Entailment
This is the [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) model finetuned for
Recognizing Textual Entailment with the [ASSIN](https://huggingface.co/datasets/assin) dataset.
This model is suitable for Portuguese.
- Git Repo: [Evaluation of Portuguese Language Models](https://github.com/ruanchaves/eplm).
- Demo: [Portuguese Textual Entailment](https://ruanchaves-portuguese-textual-entailment.hf.space)
### **Labels**:
* 0 : There is no entailment between premise and hypothesis.
* 1 : There is entailment between premise and hypothesis.
* 2 : The premise is a paraphrase of the hypothesis.
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import numpy as np
import torch
from scipy.special import softmax
model_name = "ruanchaves/bert-base-portuguese-cased-assin-entailment"
s1 = "Os homens estão cuidadosamente colocando as malas no porta-malas de um carro."
s2 = "Os homens estão colocando bagagens dentro do porta-malas de um carro."
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model_input = tokenizer(*([s1], [s2]), padding=True, return_tensors="pt")
with torch.no_grad():
output = model(**model_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) Label: {l} Score: {np.round(float(s), 4)}")
```
## Citation
Our research is ongoing, and we are currently working on describing our experiments in a paper, which will be published soon.
In the meantime, if you would like to cite our work or models before the publication of the paper, please cite our [GitHub repository](https://github.com/ruanchaves/eplm):
```
@software{Chaves_Rodrigues_eplm_2023,
author = {Chaves Rodrigues, Ruan and Tanti, Marc and Agerri, Rodrigo},
doi = {10.5281/zenodo.7781848},
month = {3},
title = {{Evaluation of Portuguese Language Models}},
url = {https://github.com/ruanchaves/eplm},
version = {1.0.0},
year = {2023}
}
``` | 2,417 | [
[
-0.008056640625,
-0.061279296875,
0.034515380859375,
0.038421630859375,
-0.0230560302734375,
-0.0250091552734375,
-0.021881103515625,
-0.0183258056640625,
0.021484375,
0.0362548828125,
-0.0085906982421875,
-0.055755615234375,
-0.04217529296875,
0.00992584228... |
ruanchaves/bert-large-portuguese-cased-assin-entailment | 2023-03-29T18:05:44.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"pt",
"dataset:assin",
"has_space",
"region:us"
] | text-classification | ruanchaves | null | null | ruanchaves/bert-large-portuguese-cased-assin-entailment | 0 | 2 | transformers | 2023-03-27T18:09:30 | ---
inference: false
language: pt
datasets:
- assin
---
# BERTimbau large for Recognizing Textual Entailment
This is the [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) model finetuned for
Recognizing Textual Entailment with the [ASSIN](https://huggingface.co/datasets/assin) dataset.
This model is suitable for Portuguese.
- Git Repo: [Evaluation of Portuguese Language Models](https://github.com/ruanchaves/eplm).
- Demo: [Portuguese Textual Entailment](https://ruanchaves-portuguese-textual-entailment.hf.space)
### **Labels**:
* 0 : There is no entailment between premise and hypothesis.
* 1 : There is entailment between premise and hypothesis.
* 2 : The premise is a paraphrase of the hypothesis.
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import numpy as np
import torch
from scipy.special import softmax
model_name = "ruanchaves/bert-large-portuguese-cased-assin-entailment"
s1 = "Os homens estão cuidadosamente colocando as malas no porta-malas de um carro."
s2 = "Os homens estão colocando bagagens dentro do porta-malas de um carro."
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model_input = tokenizer(*([s1], [s2]), padding=True, return_tensors="pt")
with torch.no_grad():
output = model(**model_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) Label: {l} Score: {np.round(float(s), 4)}")
```
## Citation
Our research is ongoing, and we are currently working on describing our experiments in a paper, which will be published soon.
In the meantime, if you would like to cite our work or models before the publication of the paper, please cite our [GitHub repository](https://github.com/ruanchaves/eplm):
```
@software{Chaves_Rodrigues_eplm_2023,
author = {Chaves Rodrigues, Ruan and Tanti, Marc and Agerri, Rodrigo},
doi = {10.5281/zenodo.7781848},
month = {3},
title = {{Evaluation of Portuguese Language Models}},
url = {https://github.com/ruanchaves/eplm},
version = {1.0.0},
year = {2023}
}
``` | 2,421 | [
[
-0.0092620849609375,
-0.062469482421875,
0.03790283203125,
0.0390625,
-0.021453857421875,
-0.0273284912109375,
-0.0265655517578125,
-0.02276611328125,
0.0221099853515625,
0.035430908203125,
-0.006908416748046875,
-0.054840087890625,
-0.043914794921875,
0.013... |
ruanchaves/bert-large-portuguese-cased-assin2-entailment | 2023-03-29T18:05:48.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"pt",
"dataset:assin2",
"has_space",
"region:us"
] | text-classification | ruanchaves | null | null | ruanchaves/bert-large-portuguese-cased-assin2-entailment | 0 | 2 | transformers | 2023-03-27T18:09:33 | ---
inference: false
language: pt
datasets:
- assin2
---
# BERTimbau large for Recognizing Textual Entailment
This is the [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) model finetuned for
Recognizing Textual Entailment with the [ASSIN 2](https://huggingface.co/datasets/assin2) dataset.
This model is suitable for Portuguese.
- Git Repo: [Evaluation of Portuguese Language Models](https://github.com/ruanchaves/eplm).
- Demo: [Portuguese Textual Entailment](https://ruanchaves-portuguese-textual-entailment.hf.space)
### **Labels**:
* 0 : There is no entailment between premise and hypothesis.
* 1 : There is entailment between premise and hypothesis.
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import numpy as np
import torch
from scipy.special import softmax
model_name = "ruanchaves/bert-large-portuguese-cased-assin2-entailment"
s1 = "Os homens estão cuidadosamente colocando as malas no porta-malas de um carro."
s2 = "Os homens estão colocando bagagens dentro do porta-malas de um carro."
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model_input = tokenizer(*([s1], [s2]), padding=True, return_tensors="pt")
with torch.no_grad():
output = model(**model_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) Label: {l} Score: {np.round(float(s), 4)}")
```
## Citation
Our research is ongoing, and we are currently working on describing our experiments in a paper, which will be published soon.
In the meantime, if you would like to cite our work or models before the publication of the paper, please cite our [GitHub repository](https://github.com/ruanchaves/eplm):
```
@software{Chaves_Rodrigues_eplm_2023,
author = {Chaves Rodrigues, Ruan and Tanti, Marc and Agerri, Rodrigo},
doi = {10.5281/zenodo.7781848},
month = {3},
title = {{Evaluation of Portuguese Language Models}},
url = {https://github.com/ruanchaves/eplm},
version = {1.0.0},
year = {2023}
}
``` | 2,376 | [
[
-0.00795745849609375,
-0.060150146484375,
0.0361328125,
0.041656494140625,
-0.0194244384765625,
-0.0285797119140625,
-0.0267333984375,
-0.027252197265625,
0.019500732421875,
0.0322265625,
-0.00838470458984375,
-0.051910400390625,
-0.0455322265625,
0.01013946... |
ruanchaves/mdeberta-v3-base-assin-entailment | 2023-03-29T18:06:02.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"pt",
"dataset:assin",
"has_space",
"region:us"
] | text-classification | ruanchaves | null | null | ruanchaves/mdeberta-v3-base-assin-entailment | 0 | 2 | transformers | 2023-03-27T18:09:43 | ---
inference: false
language: pt
datasets:
- assin
---
# mDeBERTa v3 base for Recognizing Textual Entailment
This is the [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) model finetuned for
Recognizing Textual Entailment with the [ASSIN](https://huggingface.co/datasets/assin) dataset.
This model is suitable for Portuguese.
- Git Repo: [Evaluation of Portuguese Language Models](https://github.com/ruanchaves/eplm).
- Demo: [Portuguese Textual Entailment](https://ruanchaves-portuguese-textual-entailment.hf.space)
### **Labels**:
* 0 : There is no entailment between premise and hypothesis.
* 1 : There is entailment between premise and hypothesis.
* 2 : The premise is a paraphrase of the hypothesis.
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import numpy as np
import torch
from scipy.special import softmax
model_name = "ruanchaves/mdeberta-v3-base-assin-entailment"
s1 = "Os homens estão cuidadosamente colocando as malas no porta-malas de um carro."
s2 = "Os homens estão colocando bagagens dentro do porta-malas de um carro."
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model_input = tokenizer(*([s1], [s2]), padding=True, return_tensors="pt")
with torch.no_grad():
output = model(**model_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) Label: {l} Score: {np.round(float(s), 4)}")
```
## Citation
Our research is ongoing, and we are currently working on describing our experiments in a paper, which will be published soon.
In the meantime, if you would like to cite our work or models before the publication of the paper, please cite our [GitHub repository](https://github.com/ruanchaves/eplm):
```
@software{Chaves_Rodrigues_eplm_2023,
author = {Chaves Rodrigues, Ruan and Tanti, Marc and Agerri, Rodrigo},
doi = {10.5281/zenodo.7781848},
month = {3},
title = {{Evaluation of Portuguese Language Models}},
url = {https://github.com/ruanchaves/eplm},
version = {1.0.0},
year = {2023}
}
```
kasseev/dqn-SpaceInvadersNoFrameskip-v4 | 2023-03-27T19:38:01.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | kasseev | null | null | kasseev/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-03-27T19:37:25 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 374.00 +/- 214.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kasseev -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kasseev -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kasseev
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
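The exploration settings above imply a linear epsilon-greedy schedule: epsilon decays from 1.0 to `exploration_final_eps` over the first `exploration_fraction` of the `n_timesteps` budget (here, the first 10,000 of 100,000 steps). A minimal sketch, assuming it mirrors SB3's linear schedule rather than reproducing SB3's actual code:

```python
def epsilon(step, n_timesteps=100_000, fraction=0.1, final_eps=0.01, initial_eps=1.0):
    """Linear epsilon-greedy schedule implied by the hyperparameters above."""
    progress = min(1.0, step / (fraction * n_timesteps))
    return initial_eps + progress * (final_eps - initial_eps)
```

After step 10,000 the schedule stays flat at 0.01.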
aegrif/CIS6930_DAAGR_Classification | 2023-03-27T21:30:47.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | aegrif | null | null | aegrif/CIS6930_DAAGR_Classification | 0 | 2 | transformers | 2023-03-27T21:26:17 | ---
tags:
- generated_from_keras_callback
model-index:
- name: CIS6930_DAAGR_Classification
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# CIS6930_DAAGR_Classification
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
u23429/headline-predictor | 2023-03-27T22:05:18.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:u23429/autotrain-data-stock-distil",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | u23429 | null | null | u23429/headline-predictor | 0 | 2 | transformers | 2023-03-27T21:58:02 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- u23429/autotrain-data-stock-distil
co2_eq_emissions:
emissions: 2.960971697133151
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 44339111846
- CO2 Emissions (in grams): 2.9610
## Validation Metrics
- Loss: 1.634
- Accuracy: 0.940
- Macro F1: 0.882
- Micro F1: 0.940
- Weighted F1: 0.924
- Macro Precision: 0.876
- Micro Precision: 0.940
- Weighted Precision: 0.914
- Macro Recall: 0.900
- Micro Recall: 0.940
- Weighted Recall: 0.940
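As a side note on the metrics above: for single-label multi-class problems, micro F1 equals plain accuracy (which is why Micro F1 and Accuracy agree at 0.940), while macro F1 is the unweighted mean of per-class F1. A small self-contained illustration on toy labels (not this model's predictions):

```python
def f1_for_class(y_true, y_pred, cls):
    """F1 for one class, computed from its true/false positives and false negatives."""
    tp = sum(t == p == cls for t, p in zip(y_true, y_pred))
    fp = sum(p == cls != t for t, p in zip(y_true, y_pred))
    fn = sum(t == cls != p for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 0, 2, 2]
macro_f1 = sum(f1_for_class(y_true, y_pred, c) for c in (0, 1, 2)) / 3
micro_f1 = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)  # == accuracy
```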
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/u23429/autotrain-stock-distil-44339111846
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("u23429/autotrain-stock-distil-44339111846", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("u23429/autotrain-stock-distil-44339111846", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
yemoncad/distilbert-base-uncased-finetuned-clinc | 2023-03-27T22:34:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | yemoncad | null | null | yemoncad/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-03-27T22:28:29 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
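With the linear scheduler above (and assuming the Trainer's default of zero warmup steps), the learning rate decays from 2e-05 to 0 over the 1590 total optimization steps shown in the results table. A sketch of that schedule:

```python
def linear_lr(step, total_steps=1590, base_lr=2e-5):
    """Linearly decay base_lr to 0 over total_steps (zero-warmup assumption)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```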
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2890 | 0.7432 |
| 2.6284 | 2.0 | 636 | 1.8756 | 0.8377 |
| 1.5483 | 3.0 | 954 | 1.1572 | 0.8961 |
| 1.015 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7953 | 5.0 | 1590 | 0.7721 | 0.9184 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu116
- Datasets 1.16.1
- Tokenizers 0.10.3
satyrical/dqnSpaceInvaders | 2023-03-27T23:17:10.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | satyrical | null | null | satyrical/dqnSpaceInvaders | 0 | 2 | stable-baselines3 | 2023-03-27T23:16:30 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 310.50 +/- 122.83
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga satyrical -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga satyrical -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga satyrical
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
jakub014/ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-effectiveness-dagstuhl | 2023-03-27T23:50:10.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | jakub014 | null | null | jakub014/ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-effectiveness-dagstuhl | 0 | 2 | transformers | 2023-03-27T23:48:35 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-effectiveness-dagstuhl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-effectiveness-dagstuhl
This model is a fine-tuned version of [ibm/ColD-Fusion-bert-base-uncased-itr23-seed0](https://huggingface.co/ibm/ColD-Fusion-bert-base-uncased-itr23-seed0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6548
- Accuracy: 0.6508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 16 | 0.6548 | 0.6508 |
| No log | 2.0 | 32 | 0.6502 | 0.6190 |
| No log | 3.0 | 48 | 0.6451 | 0.6190 |
| No log | 4.0 | 64 | 0.6436 | 0.6349 |
| No log | 5.0 | 80 | 0.6482 | 0.6190 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
jakub014/ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-sufficiency-dagstuhl | 2023-03-28T00:00:33.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | jakub014 | null | null | jakub014/ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-sufficiency-dagstuhl | 0 | 2 | transformers | 2023-03-27T23:56:45 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-sufficiency-dagstuhl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-sufficiency-dagstuhl
This model is a fine-tuned version of [ibm/ColD-Fusion-bert-base-uncased-itr23-seed0](https://huggingface.co/ibm/ColD-Fusion-bert-base-uncased-itr23-seed0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6936
- Accuracy: 0.6349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 16 | 0.6675 | 0.5873 |
| No log | 2.0 | 32 | 0.6701 | 0.5873 |
| No log | 3.0 | 48 | 0.7022 | 0.6032 |
| No log | 4.0 | 64 | 0.6838 | 0.6190 |
| No log | 5.0 | 80 | 0.6936 | 0.6349 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
jakub014/ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-sufficiency-ukp | 2023-03-28T00:10:22.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | jakub014 | null | null | jakub014/ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-sufficiency-ukp | 0 | 2 | transformers | 2023-03-28T00:04:33 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-sufficiency-ukp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-sufficiency-ukp
This model is a fine-tuned version of [ibm/ColD-Fusion-bert-base-uncased-itr23-seed0](https://huggingface.co/ibm/ColD-Fusion-bert-base-uncased-itr23-seed0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5288
- Accuracy: 0.8786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 52 | 0.3410 | 0.8544 |
| No log | 2.0 | 104 | 0.4002 | 0.8689 |
| No log | 3.0 | 156 | 0.5108 | 0.8544 |
| No log | 4.0 | 208 | 0.5288 | 0.8786 |
| No log | 5.0 | 260 | 0.5707 | 0.8738 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
lilouuch/ppo_LunarLander-v4 | 2023-03-28T03:11:27.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | lilouuch | null | null | lilouuch/ppo_LunarLander-v4 | 0 | 2 | stable-baselines3 | 2023-03-28T03:11:00 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 290.40 +/- 17.17
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the checkpoint (the filename below is an assumption — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; adjust to the actual file in this repo.
checkpoint = load_from_hub("lilouuch/ppo_LunarLander-v4", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
etagaca/verifai-detector-roberta | 2023-03-28T04:02:35.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"chatgpt",
"en",
"dataset:Hello-SimpleAI/HC3",
"arxiv:2301.07597",
"endpoints_compatible",
"region:us"
] | text-classification | etagaca | null | null | etagaca/verifai-detector-roberta | 0 | 2 | transformers | 2023-03-28T03:32:21 | ---
datasets:
- Hello-SimpleAI/HC3
language:
- en
pipeline_tag: text-classification
tags:
- chatgpt
---
# Model Card for `Hello-SimpleAI/chatgpt-detector-roberta`
This model is trained on **the mix of full-text and splitted sentences** of `answer`s from [Hello-SimpleAI/HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3).
For more details, refer to [arXiv: 2301.07597](https://arxiv.org/abs/2301.07597) and the GitHub project [Hello-SimpleAI/chatgpt-comparison-detection](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection).
The base checkpoint is [roberta-base](https://huggingface.co/roberta-base).
We train it with all [Hello-SimpleAI/HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) data (without held-out) for 1 epoch.
(1-epoch is consistent with the experiments in [our paper](https://arxiv.org/abs/2301.07597).)
## Citation
Check out this paper: [arXiv: 2301.07597](https://arxiv.org/abs/2301.07597)
```
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
  journal = {arXiv preprint arXiv:2301.07597},
  year = "2023",
}
```
evegarcianz/eega-embedding_fttest | 2023-03-28T08:13:57.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"dataset:embedding-data/sentence-compression",
"endpoints_compatible",
"region:us"
] | sentence-similarity | evegarcianz | null | null | evegarcianz/eega-embedding_fttest | 0 | 2 | sentence-transformers | 2023-03-28T08:13:50 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
datasets:
- embedding-data/sentence-compression
---
# evegarcianz/eega-embedding_fttest
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('evegarcianz/eega-embedding_fttest')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=evegarcianz/eega-embedding_fttest)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
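A minimal numpy sketch of what this loss computes with the parameters above (in-batch negatives: each anchor's positive sits at the same index). This is an illustration, not sentence-transformers' actual implementation:

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """Cross-entropy over scaled cosine similarities; anchors[i] should match positives[i]."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    b = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ b.T)  # 'similarity_fct': cos_sim, 'scale': 20.0
    shifted = scores - scores.max(axis=1, keepdims=True)            # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))                      # labels are the diagonal
```

The loss is near zero when each anchor is closest to its own positive, and large when the pairing is scrambled.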
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
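Because of the `Normalize()` module above, the encoded 384-dimensional vectors are unit-length, so a plain dot product already gives cosine similarity. Toy vectors stand in for real `model.encode(...)` output here:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(2, 384))                         # stand-ins for encoded sentences
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # what Normalize() guarantees
cosine = float(emb[0] @ emb[1])                         # dot product == cosine similarity here
```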
## Citing & Authors
<!--- Describe where people can find more information -->
Neha988/finetuning-movie-roberta | 2023-03-28T11:45:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Neha988 | null | null | Neha988/finetuning-movie-roberta | 0 | 2 | transformers | 2023-03-28T11:03:22 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-movie-roberta
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8955555555555555
- name: F1
type: f1
value: 0.8939051918735892
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-movie-roberta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6327
- Accuracy: 0.8956
- F1: 0.8939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
jakub014/bert-base-uncased-IBM-argQ-30k | 2023-03-28T13:06:31.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jakub014 | null | null | jakub014/bert-base-uncased-IBM-argQ-30k | 0 | 2 | transformers | 2023-03-28T12:31:02 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-IBM-argQ-30k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-IBM-argQ-30k
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5905
- Accuracy: 0.7344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5553 | 1.0 | 1525 | 0.5541 | 0.7249 |
| 0.4613 | 2.0 | 3050 | 0.5905 | 0.7344 |
| 0.325 | 3.0 | 4575 | 0.7144 | 0.7209 |
| 0.218 | 4.0 | 6100 | 0.9566 | 0.7178 |
| 0.1563 | 5.0 | 7625 | 1.2740 | 0.7224 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
platzi/platzi-distilroberta-base-mrpc-glue-andres-galvis | 2023-03-28T14:12:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-andres-galvis | 0 | 2 | transformers | 2023-03-28T13:12:05 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
"Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
"With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-andres-galvis
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8357843137254902
- name: F1
type: f1
value: 0.8788426763110307
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-andres-galvis
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.5883
- Accuracy: 0.8358
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5219 | 1.09 | 500 | 0.7457 | 0.8235 | 0.8746 |
| 0.3715 | 2.18 | 1000 | 0.5883 | 0.8358 | 0.8788 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cpu
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,426 | [
[
-0.03094482421875,
-0.042449951171875,
0.00884246826171875,
0.02032470703125,
-0.0306854248046875,
-0.024932861328125,
-0.01053619384765625,
-0.0040130615234375,
0.006801605224609375,
0.00991058349609375,
-0.049163818359375,
-0.0419921875,
-0.056854248046875,
... |
jakub014/bert-base-uncased-IBM-argQ-30k-finetuned-effectiveness-redditCMV | 2023-03-28T14:34:14.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jakub014 | null | null | jakub014/bert-base-uncased-IBM-argQ-30k-finetuned-effectiveness-redditCMV | 0 | 2 | transformers | 2023-03-28T13:18:24 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-IBM-argQ-30k-finetuned-effectiveness-redditCMV
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-IBM-argQ-30k-finetuned-effectiveness-redditCMV
This model is a fine-tuned version of [jakub014/bert-base-uncased-IBM-argQ-30k](https://huggingface.co/jakub014/bert-base-uncased-IBM-argQ-30k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6691
- Accuracy: 0.6531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6595 | 1.0 | 516 | 0.6330 | 0.6477 |
| 0.5482 | 2.0 | 1032 | 0.6691 | 0.6531 |
| 0.3632 | 3.0 | 1548 | 0.9239 | 0.6414 |
| 0.2158 | 4.0 | 2064 | 1.3534 | 0.6332 |
| 0.1328 | 5.0 | 2580 | 1.7181 | 0.6283 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,715 | [
[
-0.041412353515625,
-0.04693603515625,
0.01085662841796875,
0.0052642822265625,
-0.033477783203125,
-0.0275421142578125,
-0.01328277587890625,
-0.016357421875,
0.006656646728515625,
0.0263214111328125,
-0.051361083984375,
-0.045379638671875,
-0.049774169921875,
... |
diegoref/testtest | 2023-03-28T14:19:01.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | diegoref | null | null | diegoref/testtest | 0 | 2 | transformers | 2023-03-28T14:02:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: testtest
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8700980392156863
- name: F1
type: f1
value: 0.9090909090909091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testtest
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6050
- Accuracy: 0.8701
- F1: 0.9091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.3529 | 0.8627 | 0.9007 |
| 0.4988 | 2.0 | 918 | 0.4728 | 0.8652 | 0.9079 |
| 0.2792 | 3.0 | 1377 | 0.6050 | 0.8701 | 0.9091 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,840 | [
[
-0.03369140625,
-0.0594482421875,
0.0085296630859375,
0.0206146240234375,
-0.0228271484375,
-0.033233642578125,
-0.015106201171875,
-0.0171051025390625,
0.0133819580078125,
0.0115814208984375,
-0.0526123046875,
-0.03857421875,
-0.0458984375,
-0.0183258056640... |
cardiffnlp/xlm-roberta-base-tweet-sentiment-en | 2023-03-28T15:02:26.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | cardiffnlp | null | null | cardiffnlp/xlm-roberta-base-tweet-sentiment-en | 0 | 2 | transformers | 2023-03-28T14:55:09 | # `cardiffnlp/xlm-roberta-base-tweet-sentiment-en`
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (English).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (English).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 68.85 | 68.85 | 68.85 | 68.4 | 68.85 | 68.85 | 68.85 |
Check the result file [here](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en/raw/main/eval.json). | 1,051 | [
[
-0.0281219482421875,
-0.0303497314453125,
0.0228118896484375,
0.036163330078125,
-0.035736083984375,
0.0168304443359375,
-0.02996826171875,
-0.018524169921875,
0.0382080078125,
0.0258331298828125,
-0.050689697265625,
-0.08380126953125,
-0.059783935546875,
0.... |
bazudde/potato_model | 2023-03-28T15:42:41.000Z | [
"transformers",
"pytorch",
"beit",
"image-classification",
"autotrain",
"vision",
"dataset:bazudde/autotrain-data-sweet-potato-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | bazudde | null | null | bazudde/potato_model | 0 | 2 | transformers | 2023-03-28T15:42:05 | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- bazudde/autotrain-data-sweet-potato-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.2585547491917275
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 44552112263
- CO2 Emissions (in grams): 0.2586
## Validation Metrics
- Loss: 0.098
- Accuracy: 0.923
- Macro F1: 0.911
- Micro F1: 0.923
- Weighted F1: 0.918
- Macro Precision: 0.958
- Micro Precision: 0.923
- Weighted Precision: 0.933
- Macro Recall: 0.889
- Micro Recall: 0.923
- Weighted Recall: 0.923 | 896 | [
[
-0.02337646484375,
-0.01016998291015625,
0.0169219970703125,
-0.0002524852752685547,
0.006191253662109375,
0.01303863525390625,
0.00775146484375,
-0.0167999267578125,
-0.01910400390625,
-0.00461578369140625,
-0.032958984375,
-0.044647216796875,
-0.04537963867187... |
cardiffnlp/xlm-v-base-tweet-sentiment-fr | 2023-03-28T15:59:08.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | cardiffnlp | null | null | cardiffnlp/xlm-v-base-tweet-sentiment-fr | 0 | 2 | transformers | 2023-03-28T15:51:16 | # `cardiffnlp/xlm-v-base-tweet-sentiment-fr`
This model is a fine-tuned version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (French).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (French).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 69.31 | 69.31 | 69.31 | 68.84 | 69.31 | 69.87 | 69.31 |
Check the result file [here](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-fr/raw/main/eval.json). | 1,043 | [
[
-0.0299835205078125,
-0.0299530029296875,
0.0239105224609375,
0.042510986328125,
-0.0306549072265625,
0.0163726806640625,
-0.018707275390625,
-0.0148773193359375,
0.04107666015625,
0.0296630859375,
-0.05291748046875,
-0.08343505859375,
-0.0513916015625,
0.00... |
MihaiIonascu/fine_tuned_bert_dreadit | 2023-03-28T18:41:07.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | MihaiIonascu | null | null | MihaiIonascu/fine_tuned_bert_dreadit | 0 | 2 | transformers | 2023-03-28T16:14:24 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine_tuned_bert_dreadit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_bert_dreadit
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6081
- Accuracy: 0.7528
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0037 | 1.0 | 178 | 1.8515 | 0.7163 |
| 0.0017 | 2.0 | 356 | 1.7404 | 0.7163 |
| 0.001 | 3.0 | 534 | 1.2895 | 0.7921 |
| 0.0012 | 4.0 | 712 | 1.3320 | 0.7669 |
| 0.0005 | 5.0 | 890 | 1.3646 | 0.7949 |
| 0.0002 | 6.0 | 1068 | 1.5997 | 0.7809 |
| 0.0001 | 7.0 | 1246 | 1.5772 | 0.7753 |
| 0.0003 | 8.0 | 1424 | 1.7599 | 0.7556 |
| 0.0001 | 9.0 | 1602 | 1.7494 | 0.7640 |
| 0.0001 | 10.0 | 1780 | 1.9942 | 0.7556 |
| 0.0001 | 11.0 | 1958 | 1.9370 | 0.75 |
| 0.0 | 12.0 | 2136 | 1.9671 | 0.7781 |
| 0.0001 | 13.0 | 2314 | 2.1223 | 0.7640 |
| 0.0 | 14.0 | 2492 | 2.1653 | 0.7472 |
| 0.0001 | 15.0 | 2670 | 1.9924 | 0.75 |
| 0.0 | 16.0 | 2848 | 2.1778 | 0.7528 |
| 0.0 | 17.0 | 3026 | 2.3010 | 0.7612 |
| 0.0 | 18.0 | 3204 | 2.2210 | 0.7669 |
| 0.0 | 19.0 | 3382 | 2.3333 | 0.7556 |
| 0.0 | 20.0 | 3560 | 1.8684 | 0.7697 |
| 0.0976 | 21.0 | 3738 | 1.9417 | 0.7584 |
| 0.0 | 22.0 | 3916 | 2.1385 | 0.7472 |
| 0.0 | 23.0 | 4094 | 1.9774 | 0.7669 |
| 0.0 | 24.0 | 4272 | 2.0778 | 0.75 |
| 0.0001 | 25.0 | 4450 | 2.4343 | 0.7331 |
| 0.0 | 26.0 | 4628 | 2.1331 | 0.7528 |
| 0.0 | 27.0 | 4806 | 2.2511 | 0.7640 |
| 0.0 | 28.0 | 4984 | 2.2422 | 0.7584 |
| 0.0 | 29.0 | 5162 | 2.1228 | 0.7669 |
| 0.0006 | 30.0 | 5340 | 2.0973 | 0.7725 |
| 0.0 | 31.0 | 5518 | 1.9392 | 0.7809 |
| 0.0 | 32.0 | 5696 | 2.2996 | 0.7107 |
| 0.4186 | 33.0 | 5874 | 2.2191 | 0.7584 |
| 0.0 | 34.0 | 6052 | 2.2233 | 0.75 |
| 0.0 | 35.0 | 6230 | 2.2263 | 0.7584 |
| 0.0 | 36.0 | 6408 | 2.2205 | 0.7584 |
| 0.0 | 37.0 | 6586 | 2.4488 | 0.7444 |
| 0.0 | 38.0 | 6764 | 2.5616 | 0.7360 |
| 0.0 | 39.0 | 6942 | 2.5941 | 0.7416 |
| 0.0 | 40.0 | 7120 | 2.5129 | 0.7528 |
| 0.0 | 41.0 | 7298 | 2.4978 | 0.7360 |
| 0.0 | 42.0 | 7476 | 2.3089 | 0.7528 |
| 0.0 | 43.0 | 7654 | 2.5056 | 0.7472 |
| 0.0 | 44.0 | 7832 | 2.5786 | 0.7416 |
| 0.0 | 45.0 | 8010 | 2.2956 | 0.7640 |
| 0.0 | 46.0 | 8188 | 2.5265 | 0.7472 |
| 0.0 | 47.0 | 8366 | 2.4396 | 0.7584 |
| 0.0 | 48.0 | 8544 | 2.5547 | 0.7472 |
| 0.0 | 49.0 | 8722 | 2.5556 | 0.7528 |
| 0.0 | 50.0 | 8900 | 2.5732 | 0.7528 |
| 0.0 | 51.0 | 9078 | 2.5062 | 0.7556 |
| 0.0 | 52.0 | 9256 | 2.5504 | 0.7528 |
| 0.0 | 53.0 | 9434 | 2.5602 | 0.7528 |
| 0.0 | 54.0 | 9612 | 2.5627 | 0.7472 |
| 0.0 | 55.0 | 9790 | 2.6575 | 0.75 |
| 0.0 | 56.0 | 9968 | 2.6239 | 0.7528 |
| 0.0 | 57.0 | 10146 | 2.4757 | 0.7697 |
| 0.0 | 58.0 | 10324 | 2.4862 | 0.7612 |
| 0.0 | 59.0 | 10502 | 3.2968 | 0.6938 |
| 0.0 | 60.0 | 10680 | 2.5265 | 0.7472 |
| 0.0 | 61.0 | 10858 | 2.1426 | 0.7978 |
| 0.0 | 62.0 | 11036 | 2.4674 | 0.7640 |
| 0.0 | 63.0 | 11214 | 2.3496 | 0.7640 |
| 0.0 | 64.0 | 11392 | 2.4010 | 0.7556 |
| 0.0 | 65.0 | 11570 | 2.4081 | 0.7725 |
| 0.0 | 66.0 | 11748 | 2.4022 | 0.7753 |
| 0.0 | 67.0 | 11926 | 2.2982 | 0.7753 |
| 0.0 | 68.0 | 12104 | 2.4628 | 0.7612 |
| 0.0 | 69.0 | 12282 | 2.5764 | 0.7640 |
| 0.0 | 70.0 | 12460 | 2.4056 | 0.7781 |
| 0.0 | 71.0 | 12638 | 2.3265 | 0.7865 |
| 0.0 | 72.0 | 12816 | 2.5182 | 0.7640 |
| 0.0 | 73.0 | 12994 | 2.3872 | 0.7556 |
| 0.0 | 74.0 | 13172 | 2.7281 | 0.7388 |
| 0.0 | 75.0 | 13350 | 2.4907 | 0.7612 |
| 0.0 | 76.0 | 13528 | 2.5323 | 0.7584 |
| 0.0 | 77.0 | 13706 | 2.2055 | 0.7837 |
| 0.0 | 78.0 | 13884 | 2.2227 | 0.7865 |
| 0.0 | 79.0 | 14062 | 2.2794 | 0.7753 |
| 0.0 | 80.0 | 14240 | 2.2886 | 0.7753 |
| 0.0 | 81.0 | 14418 | 2.8320 | 0.7444 |
| 0.0 | 82.0 | 14596 | 2.8252 | 0.7472 |
| 0.0 | 83.0 | 14774 | 2.2986 | 0.7837 |
| 0.0 | 84.0 | 14952 | 2.7879 | 0.7416 |
| 0.0 | 85.0 | 15130 | 2.7926 | 0.7416 |
| 0.0 | 86.0 | 15308 | 2.7656 | 0.7472 |
| 0.0 | 87.0 | 15486 | 2.7336 | 0.7444 |
| 0.0 | 88.0 | 15664 | 2.7320 | 0.7444 |
| 0.0 | 89.0 | 15842 | 2.7402 | 0.7444 |
| 0.0 | 90.0 | 16020 | 2.7415 | 0.7444 |
| 0.0 | 91.0 | 16198 | 2.7406 | 0.7444 |
| 0.0 | 92.0 | 16376 | 2.7327 | 0.7444 |
| 0.0 | 93.0 | 16554 | 2.4082 | 0.7781 |
| 0.0 | 94.0 | 16732 | 2.4077 | 0.7753 |
| 0.0 | 95.0 | 16910 | 2.4185 | 0.7781 |
| 0.0 | 96.0 | 17088 | 2.6096 | 0.7528 |
| 0.0 | 97.0 | 17266 | 2.5907 | 0.7669 |
| 0.0 | 98.0 | 17444 | 2.6030 | 0.7556 |
| 0.0 | 99.0 | 17622 | 2.6081 | 0.7528 |
| 0.0 | 100.0 | 17800 | 2.6081 | 0.7528 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 7,595 | [
[
-0.04656982421875,
-0.0384521484375,
0.0236663818359375,
0.006561279296875,
0.004444122314453125,
0.008941650390625,
0.00405120849609375,
0.00582122802734375,
0.05352783203125,
0.024505615234375,
-0.042510986328125,
-0.0435791015625,
-0.044891357421875,
-0.0... |
jakub014/bert-base-uncased-IBM-argQ-30k-finetuned-convincingness-acl2016 | 2023-03-28T17:22:01.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jakub014 | null | null | jakub014/bert-base-uncased-IBM-argQ-30k-finetuned-convincingness-acl2016 | 0 | 2 | transformers | 2023-03-28T16:19:27 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-IBM-argQ-30k-finetuned-convincingness-acl2016
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-IBM-argQ-30k-finetuned-convincingness-acl2016
This model is a fine-tuned version of [jakub014/bert-base-uncased-IBM-argQ-30k](https://huggingface.co/jakub014/bert-base-uncased-IBM-argQ-30k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4143
- Accuracy: 0.9266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3496 | 1.0 | 583 | 0.2207 | 0.9133 |
| 0.1779 | 2.0 | 1166 | 0.2128 | 0.9159 |
| 0.1439 | 3.0 | 1749 | 0.3202 | 0.9262 |
| 0.0903 | 4.0 | 2332 | 0.4013 | 0.9258 |
| 0.051 | 5.0 | 2915 | 0.4143 | 0.9266 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,713 | [
[
-0.041595458984375,
-0.046142578125,
0.01367950439453125,
0.00511932373046875,
-0.034698486328125,
-0.027740478515625,
-0.010528564453125,
-0.0192718505859375,
0.004055023193359375,
0.0284271240234375,
-0.050689697265625,
-0.043060302734375,
-0.047515869140625,
... |
vocabtrimmer/xlm-roberta-base-trimmed-en-tweet-sentiment-en | 2023-03-28T16:40:30.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-trimmed-en-tweet-sentiment-en | 0 | 2 | transformers | 2023-03-28T16:35:00 | # `vocabtrimmer/xlm-roberta-base-trimmed-en-tweet-sentiment-en`
This model is a fine-tuned version of [/home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-roberta-base-trimmed-en](https://huggingface.co//home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-roberta-base-trimmed-en) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (English).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (English).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 68.28 | 68.28 | 68.28 | 67.86 | 68.28 | 68.19 | 68.28 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-tweet-sentiment-en/raw/main/eval.json). | 1,197 | [
[
-0.035888671875,
-0.03961181640625,
0.01390838623046875,
0.0234832763671875,
-0.041961669921875,
0.019378662109375,
-0.0291595458984375,
-0.0176849365234375,
0.039215087890625,
0.0328369140625,
-0.055145263671875,
-0.07965087890625,
-0.051605224609375,
0.005... |
vocabtrimmer/xlm-roberta-base-trimmed-en-5000-tweet-sentiment-en | 2023-03-28T17:06:16.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-trimmed-en-5000-tweet-sentiment-en | 0 | 2 | transformers | 2023-03-28T17:04:10 | # `vocabtrimmer/xlm-roberta-base-trimmed-en-5000-tweet-sentiment-en`
This model is a fine-tuned version of [/home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-roberta-base-trimmed-en-5000](https://huggingface.co//home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-roberta-base-trimmed-en-5000) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (English).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (English).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 64.83 | 64.83 | 64.83 | 64.56 | 64.83 | 65.35 | 64.83 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-5000-tweet-sentiment-en/raw/main/eval.json). | 1,217 | [
[
-0.03692626953125,
-0.0380859375,
0.01323699951171875,
0.024871826171875,
-0.0413818359375,
0.0197601318359375,
-0.027862548828125,
-0.018829345703125,
0.03656005859375,
0.0325927734375,
-0.053863525390625,
-0.07958984375,
-0.052520751953125,
0.0108413696289... |
vocabtrimmer/xlm-roberta-base-trimmed-en-10000-tweet-sentiment-en | 2023-03-28T17:23:54.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-trimmed-en-10000-tweet-sentiment-en | 0 | 2 | transformers | 2023-03-28T17:22:01 | # `vocabtrimmer/xlm-roberta-base-trimmed-en-10000-tweet-sentiment-en`
This model is a fine-tuned version of [/home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-roberta-base-trimmed-en-10000](https://huggingface.co//home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-roberta-base-trimmed-en-10000) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (English).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (English).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 66.9 | 66.9 | 66.9 | 66.64 | 66.9 | 66.71 | 66.9 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-10000-tweet-sentiment-en/raw/main/eval.json). | 1,221 | [
[
-0.036956787109375,
-0.040008544921875,
0.0135498046875,
0.02471923828125,
-0.040283203125,
0.01898193359375,
-0.02777099609375,
-0.0178375244140625,
0.038543701171875,
0.033966064453125,
-0.0531005859375,
-0.078857421875,
-0.0528564453125,
0.007442474365234... |
vocabtrimmer/xlm-roberta-base-trimmed-en-15000-tweet-sentiment-en | 2023-03-28T17:41:31.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-trimmed-en-15000-tweet-sentiment-en | 0 | 2 | transformers | 2023-03-28T17:39:36 | # `vocabtrimmer/xlm-roberta-base-trimmed-en-15000-tweet-sentiment-en`
This model is a fine-tuned version of [/home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-roberta-base-trimmed-en-15000](https://huggingface.co//home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-roberta-base-trimmed-en-15000) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (English).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (English).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 67.59 | 67.59 | 67.59 | 67.69 | 67.59 | 68.04 | 67.59 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-15000-tweet-sentiment-en/raw/main/eval.json). | 1,221 | [
[
-0.036285400390625,
-0.03961181640625,
0.0136566162109375,
0.025604248046875,
-0.042266845703125,
0.0200347900390625,
-0.0283355712890625,
-0.019134521484375,
0.038787841796875,
0.032501220703125,
-0.053863525390625,
-0.07843017578125,
-0.052703857421875,
0.... |
cardiffnlp/xlm-v-base-tweet-sentiment-pt | 2023-03-28T17:52:15.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | cardiffnlp | null | null | cardiffnlp/xlm-v-base-tweet-sentiment-pt | 0 | 2 | transformers | 2023-03-28T17:44:07 | # `cardiffnlp/xlm-v-base-tweet-sentiment-pt`
This model is a fine-tuned version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (Portuguese).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (Portuguese).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 67.01 | 67.01 | 67.01 | 66.6 | 67.01 | 67.49 | 67.01 |
Check the result file [here](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-pt/raw/main/eval.json). | 1,051 | [
[
-0.0275421142578125,
-0.031341552734375,
0.0214385986328125,
0.043701171875,
-0.03985595703125,
0.0159454345703125,
-0.0181427001953125,
-0.0185699462890625,
0.04443359375,
0.029266357421875,
-0.054962158203125,
-0.086181640625,
-0.053131103515625,
0.0045890... |
jakub014/bert-base-uncased-IBM-argQ-30k-finetuned-convincingness-IBM | 2023-03-28T18:11:19.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jakub014 | null | null | jakub014/bert-base-uncased-IBM-argQ-30k-finetuned-convincingness-IBM | 0 | 2 | transformers | 2023-03-28T17:51:47 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-IBM-argQ-30k-finetuned-convincingness-IBM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-IBM-argQ-30k-finetuned-convincingness-IBM
This model is a fine-tuned version of [jakub014/bert-base-uncased-IBM-argQ-30k](https://huggingface.co/jakub014/bert-base-uncased-IBM-argQ-30k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9264
- Accuracy: 0.7598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 270 | 0.5303 | 0.7533 |
| 0.397 | 2.0 | 540 | 0.5559 | 0.7533 |
| 0.397 | 3.0 | 810 | 0.7691 | 0.7533 |
| 0.1903 | 4.0 | 1080 | 0.9264 | 0.7598 |
| 0.1903 | 5.0 | 1350 | 1.0564 | 0.7576 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,705 | [
[
-0.0401611328125,
-0.047027587890625,
0.01457977294921875,
0.006244659423828125,
-0.03375244140625,
-0.0290985107421875,
-0.01012420654296875,
-0.020843505859375,
0.0027256011962890625,
0.02825927734375,
-0.049774169921875,
-0.042205810546875,
-0.04833984375,
... |
vocabtrimmer/xlm-roberta-base-trimmed-en-30000-tweet-sentiment-en | 2023-03-28T18:00:28.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-trimmed-en-30000-tweet-sentiment-en | 0 | 2 | transformers | 2023-03-28T17:58:20 | # `vocabtrimmer/xlm-roberta-base-trimmed-en-30000-tweet-sentiment-en`
This model is a fine-tuned version of [/home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-roberta-base-trimmed-en-30000](https://huggingface.co//home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-roberta-base-trimmed-en-30000) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (English).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (English).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 66.55 | 66.55 | 66.55 | 66.02 | 66.55 | 66.71 | 66.55 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-30000-tweet-sentiment-en/raw/main/eval.json). | 1,221 | [
[
-0.03662109375,
-0.038787841796875,
0.014312744140625,
0.025604248046875,
-0.04119873046875,
0.0184173583984375,
-0.0278472900390625,
-0.0182647705078125,
0.03887939453125,
0.0308837890625,
-0.05328369140625,
-0.0799560546875,
-0.05230712890625,
0.0060424804... |
Svetlana0303/Regression_bert_7 | 2023-03-28T18:15:19.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Svetlana0303 | null | null | Svetlana0303/Regression_bert_7 | 0 | 2 | transformers | 2023-03-28T18:14:51 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Regression_bert_7
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Regression_bert_7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1702
- Train Mae: 0.2696
- Train Mse: 0.1221
- Train R2-score: 0.7766
- Validation Loss: 0.3290
- Validation Mae: 0.2756
- Validation Mse: 0.1076
- Validation R2-score: 0.8214
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Mae | Train Mse | Train R2-score | Validation Loss | Validation Mae | Validation Mse | Validation R2-score | Epoch |
|:----------:|:---------:|:---------:|:--------------:|:---------------:|:--------------:|:--------------:|:-------------------:|:-----:|
| 0.5303 | 0.3176 | 0.1540 | 0.7493 | 0.6752 | 0.3537 | 0.1857 | 0.6758 | 0 |
| 0.2316 | 0.2775 | 0.1261 | 0.7746 | 0.2451 | 0.3060 | 0.1466 | 0.7473 | 1 |
| 0.2780 | 0.2930 | 0.1373 | 0.8061 | 0.1807 | 0.2593 | 0.1127 | 0.8102 | 2 |
| 0.1776 | 0.2673 | 0.1177 | 0.6536 | 0.1407 | 0.2617 | 0.1181 | 0.7975 | 3 |
| 0.2248 | 0.2906 | 0.1349 | 0.7639 | 0.1896 | 0.2915 | 0.1364 | 0.7665 | 4 |
| 0.2295 | 0.2718 | 0.1196 | 0.7991 | 0.2038 | 0.2757 | 0.1248 | 0.7882 | 5 |
| 0.2443 | 0.2460 | 0.0975 | 0.7298 | 0.1509 | 0.2779 | 0.1301 | 0.7783 | 6 |
| 0.2538 | 0.2907 | 0.1343 | 0.7783 | 0.1930 | 0.2984 | 0.1426 | 0.7559 | 7 |
| 0.2067 | 0.2777 | 0.1281 | 0.7605 | 0.1537 | 0.2809 | 0.1318 | 0.7756 | 8 |
| 0.1702 | 0.2696 | 0.1221 | 0.7766 | 0.3290 | 0.2756 | 0.1076 | 0.8214 | 9 |
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
vocabtrimmer/xlm-roberta-base-trimmed-en-60000-tweet-sentiment-en | 2023-03-28T18:21:33.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-trimmed-en-60000-tweet-sentiment-en | 0 | 2 | transformers | 2023-03-28T18:18:58 | # `vocabtrimmer/xlm-roberta-base-trimmed-en-60000-tweet-sentiment-en`
This model is a fine-tuned version of [/home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-roberta-base-trimmed-en-60000](https://huggingface.co//home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-roberta-base-trimmed-en-60000) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (english).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (english).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 69.31 | 69.31 | 69.31 | 68.42 | 69.31 | 68.83 | 69.31 |
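All the micro-averaged columns above are identical to the accuracy, which is expected: in single-label classification, micro-averaged precision, recall, and F1 all collapse to plain accuracy. A quick pure-Python illustration (not the evaluation script itself):

```python
# In single-label classification every wrong prediction is simultaneously
# a false positive (for the predicted class) and a false negative (for the
# gold class), so micro precision == micro recall == micro F1 == accuracy.

def micro_f1(y_true, y_pred):
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = len(y_pred) - tp  # every miss is a FP for some class...
    fn = len(y_true) - tp  # ...and a FN for the gold class
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```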
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-60000-tweet-sentiment-en/raw/main/eval.json).
jakub014/bert-base-uncased-IBM-argQ-30k-finetuned-effectiveness-dagstuhl | 2023-03-28T18:25:11.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jakub014 | null | null | jakub014/bert-base-uncased-IBM-argQ-30k-finetuned-effectiveness-dagstuhl | 0 | 2 | transformers | 2023-03-28T18:23:36 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-IBM-argQ-30k-finetuned-effectiveness-dagstuhl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-IBM-argQ-30k-finetuned-effectiveness-dagstuhl
This model is a fine-tuned version of [jakub014/bert-base-uncased-IBM-argQ-30k](https://huggingface.co/jakub014/bert-base-uncased-IBM-argQ-30k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5516
- Accuracy: 0.7302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
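These runs use `lr_scheduler_type: linear`, i.e. the learning rate decays linearly from its initial value to zero over training. A minimal sketch of that schedule (assuming zero warmup steps, since none are listed):

```python
# Linear decay with no warmup: lr falls from lr0 at step 0 to 0 at the
# final step, mirroring the "linear" scheduler with num_warmup_steps=0.

def linear_lr(step, total_steps, lr0=2e-5):
    return lr0 * max(0.0, 1.0 - step / total_steps)

# This run: 16 steps per epoch * 5 epochs = 80 total optimizer steps.
total = 16 * 5
```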
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 16 | 0.5516 | 0.7302 |
| No log | 2.0 | 32 | 0.5431 | 0.6825 |
| No log | 3.0 | 48 | 0.5942 | 0.6349 |
| No log | 4.0 | 64 | 0.6533 | 0.6349 |
| No log | 5.0 | 80 | 0.6509 | 0.6667 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en | 2023-03-28T18:48:25.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en | 0 | 2 | transformers | 2023-03-28T18:39:53 | # Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-en | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 219,090,435 |
| parameter_size_embedding | 192,001,536 | 133,046,016 |
| vocab_size | 250,002 | 173,237 |
| compression_rate_full | 100.0 | 78.8 |
| compression_rate_embedding | 100.0 | 69.29 |
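The embedding figures above follow directly from `vocab_size × hidden_size` (768 for XLM-RoBERTa base; the hidden size is assumed from the base architecture, not stated in this card). A quick sanity check of the table:

```python
HIDDEN_SIZE = 768  # xlm-roberta-base embedding dimension (assumed)

def embedding_params(vocab_size):
    return vocab_size * HIDDEN_SIZE

full = embedding_params(250_002)     # original vocabulary
trimmed = embedding_params(173_237)  # trimmed vocabulary
rate = round(100 * trimmed / full, 2)  # compression_rate_embedding
```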
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | | 2 |
vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-5000 | 2023-03-28T18:52:51.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-5000 | 0 | 2 | transformers | 2023-03-28T18:48:46 | # Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-5000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-en | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-5000 |
|:---------------------------|:-------------------------------------------------|:-------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 89,885,955 |
| parameter_size_embedding | 192,001,536 | 3,841,536 |
| vocab_size | 250,002 | 5,002 |
| compression_rate_full | 100.0 | 32.33 |
| compression_rate_embedding | 100.0 | 2.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 5000 | 2 |
vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-10000 | 2023-03-28T18:56:01.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-10000 | 0 | 2 | transformers | 2023-03-28T18:53:21 | # Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-10000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-en | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-10000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 93,725,955 |
| parameter_size_embedding | 192,001,536 | 7,681,536 |
| vocab_size | 250,002 | 10,002 |
| compression_rate_full | 100.0 | 33.71 |
| compression_rate_embedding | 100.0 | 4.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 10000 | 2 |
vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-15000 | 2023-03-28T18:59:30.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-15000 | 0 | 2 | transformers | 2023-03-28T18:56:45 | # Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-15000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-en | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-15000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 97,565,955 |
| parameter_size_embedding | 192,001,536 | 11,521,536 |
| vocab_size | 250,002 | 15,002 |
| compression_rate_full | 100.0 | 35.09 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 15000 | 2 |
vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-30000 | 2023-03-28T19:03:46.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-30000 | 0 | 2 | transformers | 2023-03-28T19:00:42 | # Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-30000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-en | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-30000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 109,085,955 |
| parameter_size_embedding | 192,001,536 | 23,041,536 |
| vocab_size | 250,002 | 30,002 |
| compression_rate_full | 100.0 | 39.23 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 30000 | 2 |
vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-60000 | 2023-03-28T19:09:31.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-60000 | 0 | 2 | transformers | 2023-03-28T19:05:55 | # Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-60000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-en | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-60000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 132,125,955 |
| parameter_size_embedding | 192,001,536 | 46,081,536 |
| vocab_size | 250,002 | 60,002 |
| compression_rate_full | 100.0 | 47.52 |
| compression_rate_embedding | 100.0 | 24.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 60000 | 2 |
mjbeattie/gcicontracts | 2023-04-05T21:27:55.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | mjbeattie | null | null | mjbeattie/gcicontracts | 0 | 2 | transformers | 2023-03-28T21:31:03 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: gcicontracts
results: []
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gcicontracts
This model is a fine-tuned version of [mjbeattie/mjbbillsum](https://huggingface.co/mjbeattie/mjbbillsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0721
- Rouge1: 0.2917
- Rouge2: 0.1209
- Rougel: 0.2556
- Rougelsum: 0.2535
- Gen Len: 18.1463
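The Rouge1 score above measures unigram overlap between generated and reference summaries; a toy illustration of the idea (a simplified version of what the `rouge` metric computes, without its extra tokenization and preprocessing):

```python
def rouge1_f1(reference, prediction):
    """Unigram-overlap F1 between a reference and a generated summary."""
    ref = reference.lower().split()
    pred = prediction.lower().split()
    # Count clipped unigram matches: each reference token may be
    # matched at most once.
    overlap = 0
    remaining = list(ref)
    for tok in pred:
        if tok in remaining:
            overlap += 1
            remaining.remove(tok)
    if not overlap:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the contract term is one year", "the term is one year"))
```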
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 11 | 2.4545 | 0.3004 | 0.1333 | 0.2658 | 0.2637 | 18.2927 |
| No log | 2.0 | 22 | 2.3030 | 0.3047 | 0.1397 | 0.2744 | 0.2709 | 18.2927 |
| No log | 3.0 | 33 | 2.2187 | 0.3065 | 0.1416 | 0.276 | 0.2718 | 18.2439 |
| No log | 4.0 | 44 | 2.1562 | 0.2926 | 0.1209 | 0.2558 | 0.2538 | 18.2439 |
| No log | 5.0 | 55 | 2.1172 | 0.2926 | 0.1209 | 0.2558 | 0.2538 | 18.2439 |
| No log | 6.0 | 66 | 2.0921 | 0.2914 | 0.1209 | 0.2552 | 0.253 | 18.1463 |
| No log | 7.0 | 77 | 2.0786 | 0.2917 | 0.1209 | 0.2556 | 0.2535 | 18.1463 |
| No log | 8.0 | 88 | 2.0721 | 0.2917 | 0.1209 | 0.2556 | 0.2535 | 18.1463 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.0
- Tokenizers 0.11.0
ahkrey/nli-roberta-base-finetuned-for-amazon-review-ratings | 2023-03-28T22:38:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | ahkrey | null | null | ahkrey/nli-roberta-base-finetuned-for-amazon-review-ratings | 0 | 2 | transformers | 2023-03-28T21:54:17 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: nli-roberta-base-finetuned-for-amazon-review-ratings
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: en
split: validation
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.56
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-roberta-base-finetuned-for-amazon-review-ratings
This model is a fine-tuned version of [cross-encoder/nli-roberta-base](https://huggingface.co/cross-encoder/nli-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0337
- Meanabsoluteerror: 0.532
- Accuracy: 0.56
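Meanabsoluteerror here is the average distance between predicted and true star ratings, a natural companion to accuracy for ordinal labels; a minimal sketch (the 0-4 label encoding for 1-5 stars is an assumption for illustration):

```python
def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Off-by-one star mistakes raise MAE only slightly, while accuracy
# penalizes them as heavily as a 1-star-vs-5-star error.
gold = [0, 1, 2, 3, 4]
pred = [0, 2, 2, 3, 0]
```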
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Meanabsoluteerror | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 1.1731 | 1.0 | 313 | 1.0337 | 0.532 | 0.56 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
jakub014/bert-base-uncased-IBM-argQ-30k-finetuned-sufficiency-ukp | 2023-03-28T21:59:52.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jakub014 | null | null | jakub014/bert-base-uncased-IBM-argQ-30k-finetuned-sufficiency-ukp | 0 | 2 | transformers | 2023-03-28T21:55:26 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-IBM-argQ-30k-finetuned-sufficiency-ukp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-IBM-argQ-30k-finetuned-sufficiency-ukp
This model is a fine-tuned version of [jakub014/bert-base-uncased-IBM-argQ-30k](https://huggingface.co/jakub014/bert-base-uncased-IBM-argQ-30k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5490
- Accuracy: 0.8835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 52 | 0.4372 | 0.8107 |
| No log | 2.0 | 104 | 0.3424 | 0.8786 |
| No log | 3.0 | 156 | 0.4970 | 0.8689 |
| No log | 4.0 | 208 | 0.5267 | 0.8786 |
| No log | 5.0 | 260 | 0.5490 | 0.8835 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
keeyan/nli-roberta-base-finetuned-for-amazon-review-ratings | 2023-03-28T22:00:17.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | keeyan | null | null | keeyan/nli-roberta-base-finetuned-for-amazon-review-ratings | 0 | 2 | transformers | 2023-03-28T21:55:28 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: nli-roberta-base-finetuned-for-amazon-review-ratings
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: en
split: validation
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.571
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-roberta-base-finetuned-for-amazon-review-ratings
This model is a fine-tuned version of [cross-encoder/nli-roberta-base](https://huggingface.co/cross-encoder/nli-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0080
- Meanabsoluteerror: 0.526
- Accuracy: 0.571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Meanabsoluteerror | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 1.1215 | 1.0 | 313 | 1.0080 | 0.526 | 0.571 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
babyalpac/nli-roberta-base-finetuned-for-amazon-review-ratings | 2023-03-28T22:01:15.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | babyalpac | null | null | babyalpac/nli-roberta-base-finetuned-for-amazon-review-ratings | 0 | 2 | transformers | 2023-03-28T21:55:49 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: nli-roberta-base-finetuned-for-amazon-review-ratings
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: en
split: validation
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.564
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-roberta-base-finetuned-for-amazon-review-ratings
This model is a fine-tuned version of [cross-encoder/nli-roberta-base](https://huggingface.co/cross-encoder/nli-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0188
- Meanabsoluteerror: 0.524
- Accuracy: 0.564
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Meanabsoluteerror | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 1.1441 | 1.0 | 313 | 1.0188 | 0.524 | 0.564 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
coleperg/nli-roberta-base-finetuned-for-amazon-review-ratings | 2023-03-28T22:19:30.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | coleperg | null | null | coleperg/nli-roberta-base-finetuned-for-amazon-review-ratings | 0 | 2 | transformers | 2023-03-28T22:05:48 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: nli-roberta-base-finetuned-for-amazon-review-ratings
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: en
split: validation
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.548
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-roberta-base-finetuned-for-amazon-review-ratings
This model is a fine-tuned version of [cross-encoder/nli-roberta-base](https://huggingface.co/cross-encoder/nli-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0089
- Meanabsoluteerror: 0.535
- Accuracy: 0.548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
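The `lr_scheduler_type: linear` setting above decays the learning rate from its initial value to zero over the course of training. A minimal sketch of that schedule (the function name and the zero-warmup default are assumptions for illustration, not part of the original Trainer config):

```python
def linear_lr(step, base_lr=2e-05, total_steps=313, warmup_steps=0):
    """Learning rate at a given optimizer step under a linear schedule.

    Ramps up over warmup_steps, then decays linearly to zero at total_steps.
    """
    if warmup_steps and step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))    # full learning rate (2e-05) at the start
print(linear_lr(313))  # 0.0 at the end of the single epoch (313 steps)
```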
### Training results
| Training Loss | Epoch | Step | Validation Loss | Meanabsoluteerror | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 1.1095 | 1.0 | 313 | 1.0089 | 0.535 | 0.548 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,838 | [
[
-0.03631591796875,
-0.042327880859375,
0.00835418701171875,
0.02264404296875,
-0.0230560302734375,
-0.034820556640625,
-0.0162811279296875,
-0.0256500244140625,
0.01200103759765625,
0.03106689453125,
-0.0576171875,
-0.043548583984375,
-0.0582275390625,
0.004... |
NinjaBanana1/nli-roberta-base-finetuned-for-amazon-review-ratings | 2023-03-28T22:26:46.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | NinjaBanana1 | null | null | NinjaBanana1/nli-roberta-base-finetuned-for-amazon-review-ratings | 0 | 2 | transformers | 2023-03-28T22:21:14 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: nli-roberta-base-finetuned-for-amazon-review-ratings
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: en
split: validation
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.549
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-roberta-base-finetuned-for-amazon-review-ratings
This model is a fine-tuned version of [cross-encoder/nli-roberta-base](https://huggingface.co/cross-encoder/nli-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0177
- Meanabsoluteerror: 0.538
- Accuracy: 0.549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Meanabsoluteerror | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 1.1226 | 1.0 | 313 | 1.0177 | 0.538 | 0.549 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,838 | [
[
-0.0362548828125,
-0.042266845703125,
0.00860595703125,
0.0227508544921875,
-0.0231170654296875,
-0.035369873046875,
-0.0161590576171875,
-0.02557373046875,
0.0117340087890625,
0.0313720703125,
-0.05743408203125,
-0.04345703125,
-0.058135986328125,
0.0049705... |
jaysimons/nli-roberta-base-finetuned-for-amazon-review-ratings | 2023-03-28T22:33:03.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jaysimons | null | null | jaysimons/nli-roberta-base-finetuned-for-amazon-review-ratings | 0 | 2 | transformers | 2023-03-28T22:21:27 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: nli-roberta-base-finetuned-for-amazon-review-ratings
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: en
split: validation
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.553
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-roberta-base-finetuned-for-amazon-review-ratings
This model is a fine-tuned version of [cross-encoder/nli-roberta-base](https://huggingface.co/cross-encoder/nli-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0082
- Meanabsoluteerror: 0.531
- Accuracy: 0.553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Meanabsoluteerror | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 1.1274 | 1.0 | 313 | 1.0082 | 0.531 | 0.553 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,838 | [
[
-0.0360107421875,
-0.04254150390625,
0.00763702392578125,
0.0222320556640625,
-0.0230255126953125,
-0.034423828125,
-0.0163116455078125,
-0.025299072265625,
0.01197052001953125,
0.0312042236328125,
-0.05712890625,
-0.043548583984375,
-0.058380126953125,
0.00... |
rscales/nli-roberta-base-finetuned-for-amazon-review-ratings | 2023-03-28T22:25:15.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | rscales | null | null | rscales/nli-roberta-base-finetuned-for-amazon-review-ratings | 0 | 2 | transformers | 2023-03-28T22:22:08 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: nli-roberta-base-finetuned-for-amazon-review-ratings
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: en
split: validation
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.55
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-roberta-base-finetuned-for-amazon-review-ratings
This model is a fine-tuned version of [cross-encoder/nli-roberta-base](https://huggingface.co/cross-encoder/nli-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0092
- Meanabsoluteerror: 0.527
- Accuracy: 0.55
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Meanabsoluteerror | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 1.1059 | 1.0 | 313 | 1.0092 | 0.527 | 0.55 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,836 | [
[
-0.036346435546875,
-0.04278564453125,
0.00833892822265625,
0.0231781005859375,
-0.022369384765625,
-0.035186767578125,
-0.01617431640625,
-0.02606201171875,
0.01187896728515625,
0.031951904296875,
-0.057891845703125,
-0.04351806640625,
-0.058135986328125,
0... |
noahknauf/nli-roberta-base-finetuned-for-amazon-review-ratings | 2023-03-28T22:30:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | noahknauf | null | null | noahknauf/nli-roberta-base-finetuned-for-amazon-review-ratings | 0 | 2 | transformers | 2023-03-28T22:23:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: nli-roberta-base-finetuned-for-amazon-review-ratings
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: en
split: validation
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.551
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-roberta-base-finetuned-for-amazon-review-ratings
This model is a fine-tuned version of [cross-encoder/nli-roberta-base](https://huggingface.co/cross-encoder/nli-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0037
- Meanabsoluteerror: 0.527
- Accuracy: 0.551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Meanabsoluteerror | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 0.9999 | 1.0 | 313 | 1.0037 | 0.527 | 0.551 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,838 | [
[
-0.03558349609375,
-0.042755126953125,
0.00769805908203125,
0.022979736328125,
-0.0229339599609375,
-0.034515380859375,
-0.016387939453125,
-0.0254974365234375,
0.01165771484375,
0.0311431884765625,
-0.056884765625,
-0.043701171875,
-0.058441162109375,
0.005... |
jimmysky/nli-roberta-base-finetuned-for-amazon-review-ratings | 2023-03-28T22:33:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jimmysky | null | null | jimmysky/nli-roberta-base-finetuned-for-amazon-review-ratings | 0 | 2 | transformers | 2023-03-28T22:27:29 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: nli-roberta-base-finetuned-for-amazon-review-ratings
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: en
split: validation
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.557
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-roberta-base-finetuned-for-amazon-review-ratings
This model is a fine-tuned version of [cross-encoder/nli-roberta-base](https://huggingface.co/cross-encoder/nli-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0169
- Meanabsoluteerror: 0.533
- Accuracy: 0.557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Meanabsoluteerror | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 1.1711 | 1.0 | 313 | 1.0169 | 0.533 | 0.557 |
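The Meanabsoluteerror and Accuracy columns can be reproduced from pairs of gold and predicted star ratings; a minimal sketch (the labels below are illustrative, not drawn from the evaluation set):

```python
def accuracy(labels, preds):
    """Fraction of exact matches between gold and predicted ratings."""
    return sum(l == p for l, p in zip(labels, preds)) / len(labels)

def mean_absolute_error(labels, preds):
    """Average absolute distance between gold and predicted ratings."""
    return sum(abs(l - p) for l, p in zip(labels, preds)) / len(labels)

labels = [5, 4, 1, 3, 2]
preds  = [5, 3, 1, 2, 2]
print(accuracy(labels, preds))             # 0.6
print(mean_absolute_error(labels, preds))  # 0.4
```

Mean absolute error is a natural companion metric here because star ratings are ordinal: predicting 4 for a true 5 is a smaller mistake than predicting 1, which plain accuracy cannot distinguish.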
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,838 | [
[
-0.036346435546875,
-0.042266845703125,
0.008331298828125,
0.0224609375,
-0.0231170654296875,
-0.03485107421875,
-0.0161895751953125,
-0.0258026123046875,
0.01148223876953125,
0.031280517578125,
-0.057586669921875,
-0.042877197265625,
-0.05804443359375,
0.00... |
jakub014/bert-base-uncased-IBM-argQ-30k-finetuned-sufficiency-dagstuhl | 2023-03-28T22:29:15.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jakub014 | null | null | jakub014/bert-base-uncased-IBM-argQ-30k-finetuned-sufficiency-dagstuhl | 0 | 2 | transformers | 2023-03-28T22:27:36 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-IBM-argQ-30k-finetuned-sufficiency-dagstuhl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-IBM-argQ-30k-finetuned-sufficiency-dagstuhl
This model is a fine-tuned version of [jakub014/bert-base-uncased-IBM-argQ-30k](https://huggingface.co/jakub014/bert-base-uncased-IBM-argQ-30k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5933
- Accuracy: 0.6984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 16 | 0.5933 | 0.6984 |
| No log | 2.0 | 32 | 0.6388 | 0.6190 |
| No log | 3.0 | 48 | 0.7638 | 0.6349 |
| No log | 4.0 | 64 | 0.8638 | 0.6190 |
| No log | 5.0 | 80 | 0.9086 | 0.6349 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,709 | [
[
-0.04022216796875,
-0.04705810546875,
0.01450347900390625,
0.006710052490234375,
-0.03369140625,
-0.028045654296875,
-0.01210784912109375,
-0.0171661376953125,
0.0019321441650390625,
0.0272216796875,
-0.051971435546875,
-0.042816162109375,
-0.048309326171875,
... |
ktdent/nli-roberta-base-finetuned-for-amazon-review-ratings | 2023-03-28T22:34:07.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | ktdent | null | null | ktdent/nli-roberta-base-finetuned-for-amazon-review-ratings | 0 | 2 | transformers | 2023-03-28T22:31:04 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: nli-roberta-base-finetuned-for-amazon-review-ratings
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: en
split: validation
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.33
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-roberta-base-finetuned-for-amazon-review-ratings
This model is a fine-tuned version of [cross-encoder/nli-roberta-base](https://huggingface.co/cross-encoder/nli-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6148
- Meanabsoluteerror: 1.215
- Accuracy: 0.33
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Meanabsoluteerror | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 1.679 | 1.0 | 32 | 1.6148 | 1.215 | 0.33 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,836 | [
[
-0.03656005859375,
-0.042236328125,
0.00897216796875,
0.0231781005859375,
-0.02197265625,
-0.03466796875,
-0.01629638671875,
-0.026397705078125,
0.01187896728515625,
0.03192138671875,
-0.057861328125,
-0.042938232421875,
-0.0579833984375,
0.00609207153320312... |
PJHinAI/sentiment-analysis-using-steam-data | 2023-04-03T08:07:33.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | PJHinAI | null | null | PJHinAI/sentiment-analysis-using-steam-data | 0 | 2 | transformers | 2023-03-29T02:57:32 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: activelearning-sentiment-model-using-steam-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# activelearning-sentiment-model-using-steam-data
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2861
- Accuracy: 0.8470
- F1: 0.8467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,185 | [
[
-0.0172119140625,
-0.055877685546875,
0.02545166015625,
0.01678466796875,
-0.0289764404296875,
-0.03271484375,
-0.0204010009765625,
-0.0011043548583984375,
0.0111083984375,
0.01806640625,
-0.0526123046875,
-0.055328369140625,
-0.048980712890625,
-0.022277832... |
davidliu1110/bert-fine-tuned-cola | 2023-03-29T03:31:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | davidliu1110 | null | null | davidliu1110/bert-fine-tuned-cola | 0 | 2 | transformers | 2023-03-29T03:01:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: bert-fine-tuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 459 | 0.4187 |
| 0.5148 | 2.0 | 918 | 0.5389 |
| 0.3202 | 3.0 | 1377 | 0.6432 |
| 0.1684 | 4.0 | 1836 | 0.7600 |
| 0.101 | 5.0 | 2295 | 0.8369 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,480 | [
[
-0.0285186767578125,
-0.055145263671875,
0.0043487548828125,
0.0192413330078125,
-0.0223541259765625,
-0.01910400390625,
-0.0171966552734375,
-0.01806640625,
0.0153961181640625,
0.01206207275390625,
-0.05859375,
-0.028289794921875,
-0.05145263671875,
-0.0154... |
Svetlana0303/Regression_albert_8 | 2023-03-29T07:02:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Svetlana0303 | null | null | Svetlana0303/Regression_albert_8 | 0 | 2 | transformers | 2023-03-29T06:54:52 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Regression_albert_8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Regression_albert_8
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0710
- Mse: 0.0710
- Mae: 0.1978
- R2: 0.0202
- Accuracy: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| No log | 1.0 | 49 | 0.0777 | 0.0777 | 0.2323 | 0.2804 | 0.9464 |
| No log | 2.0 | 98 | 0.0649 | 0.0649 | 0.2176 | 0.3990 | 0.9464 |
| No log | 3.0 | 147 | 0.0885 | 0.0885 | 0.2354 | 0.1799 | 0.8571 |
| No log | 4.0 | 196 | 0.0620 | 0.0620 | 0.1971 | 0.4252 | 0.9643 |
| No log | 5.0 | 245 | 0.0605 | 0.0605 | 0.2071 | 0.4394 | 0.9821 |
| No log | 6.0 | 294 | 0.0523 | 0.0523 | 0.1714 | 0.5155 | 0.9821 |
| No log | 7.0 | 343 | 0.1047 | 0.1047 | 0.2598 | 0.0301 | 0.8393 |
| No log | 8.0 | 392 | 0.0421 | 0.0421 | 0.1543 | 0.6103 | 0.9643 |
| No log | 9.0 | 441 | 0.0445 | 0.0445 | 0.1612 | 0.5875 | 0.9643 |
| No log | 10.0 | 490 | 0.0438 | 0.0438 | 0.1608 | 0.5939 | 0.9821 |
| 0.0478 | 11.0 | 539 | 0.0529 | 0.0529 | 0.1816 | 0.5095 | 0.9464 |
| 0.0478 | 12.0 | 588 | 0.0401 | 0.0401 | 0.1495 | 0.6288 | 0.9643 |
| 0.0478 | 13.0 | 637 | 0.0471 | 0.0471 | 0.1637 | 0.5639 | 0.9643 |
| 0.0478 | 14.0 | 686 | 0.0454 | 0.0454 | 0.1632 | 0.5797 | 0.9643 |
| 0.0478 | 15.0 | 735 | 0.0436 | 0.0436 | 0.1526 | 0.5957 | 0.9643 |
| 0.0478 | 16.0 | 784 | 0.0520 | 0.0520 | 0.1764 | 0.5178 | 0.9643 |
| 0.0478 | 17.0 | 833 | 0.0414 | 0.0414 | 0.1536 | 0.6166 | 0.9821 |
| 0.0478 | 18.0 | 882 | 0.0413 | 0.0413 | 0.1490 | 0.6176 | 0.9643 |
| 0.0478 | 19.0 | 931 | 0.0413 | 0.0413 | 0.1514 | 0.6174 | 0.9821 |
| 0.0478 | 20.0 | 980 | 0.0429 | 0.0429 | 0.1537 | 0.6023 | 0.9821 |
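The Mse, Mae, and R2 columns follow the standard regression definitions; a small self-contained sketch (the sample values are illustrative, not from the evaluation set):

```python
def regression_metrics(y_true, y_pred):
    """Return (MSE, MAE, R^2) for paired true and predicted values."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total variance of the targets
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual error
    r2 = 1.0 - ss_res / ss_tot
    return mse, mae, r2

mse, mae, r2 = regression_metrics([0.0, 1.0, 2.0, 3.0], [0.1, 0.9, 2.2, 2.8])
print(mse, mae, r2)  # ~0.025, ~0.15, ~0.98
```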
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 3,139 | [
[
-0.038055419921875,
-0.0374755859375,
0.0186309814453125,
0.0123138427734375,
-0.0020961761474609375,
-0.01346588134765625,
0.007289886474609375,
-0.00943756103515625,
0.037353515625,
0.0264739990234375,
-0.043212890625,
-0.05438232421875,
-0.051910400390625,
... |
Azzizz17/autotrain-translator-44772112704 | 2023-03-29T07:32:33.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:Azzizz17/autotrain-data-translator",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | translation | Azzizz17 | null | null | Azzizz17/autotrain-translator-44772112704 | 0 | 2 | transformers | 2023-03-29T07:28:11 | ---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- Azzizz17/autotrain-data-translator
co2_eq_emissions:
emissions: 1.6332201411420315
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 44772112704
- CO2 Emissions (in grams): 1.6332
## Validation Metrics
- Loss: 2.930
- SacreBLEU: 1.592
- Gen len: 18.672 | 354 | [
[
-0.009765625,
-0.0166015625,
0.034576416015625,
0.0120697021484375,
-0.00506591796875,
-0.01467132568359375,
0.007808685302734375,
-0.005115509033203125,
-0.0290679931640625,
0.0249481201171875,
-0.0504150390625,
-0.0254058837890625,
-0.045623779296875,
-0.0... |
LinhDuong/doctorwithbloomz-7b1-mt | 2023-03-31T00:04:50.000Z | [
"transformers",
"pytorch",
"arxiv:2303.14070",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"region:us"
] | null | LinhDuong | null | null | LinhDuong/doctorwithbloomz-7b1-mt | 1 | 2 | transformers | 2023-03-29T07:28:41 | ---
license: bigscience-bloom-rail-1.0
---
These are our fine-tuned weights for Bloomz-7b1-mt, trained with Low-Rank Adaptation (LoRA) on the ChatDoctor-200k dataset from the paper ChatDoctor: A Medical Chat Model Fine-tuned
on LLaMA Model using Medical Domain Knowledge (https://arxiv.org/pdf/2303.14070.pdf).
Our source code can be found at https://github.com/linhduongtuan/doctorwithbloom
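As a rough illustration of what Low-Rank Adaptation does (a generic sketch of the technique, not this repository's actual training code; all shapes and hyperparameters below are made up):

```python
import numpy as np

# LoRA: instead of updating a d_out x d_in weight W directly, learn two small
# matrices B (d_out x r) and A (r x d_in) with r << min(d_out, d_in), and use
# W_adapted = W + (alpha / r) * B @ A at inference time.
d_out, d_in, r, alpha = 64, 64, 8, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable, small
B = np.zeros((d_out, r))                # trainable, initialised to zero

W_adapted = W + (alpha / r) * B @ A

# Before any training, B is zero, so the adapted model equals the base model.
assert np.allclose(W_adapted, W)
# The update needs only r * (d_in + d_out) trainable parameters per layer
# instead of d_in * d_out.
print(r * (d_in + d_out), "vs", d_in * d_out)  # 1024 vs 4096
```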
| 378 | [
[
-0.0123443603515625,
-0.0294342041015625,
0.025054931640625,
0.018402099609375,
-0.021881103515625,
-0.00872039794921875,
-0.01206207275390625,
-0.024078369140625,
0.0185546875,
0.03717041015625,
-0.054901123046875,
-0.042266845703125,
-0.06280517578125,
-0.... |
WilHoon/distilbert-base-uncased-finetuned-emotion | 2023-03-29T08:51:10.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | WilHoon | null | null | WilHoon/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-03-29T07:56:17 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9264851417335438
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2217
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8267 | 1.0 | 250 | 0.3277 | 0.9015 | 0.8977 |
| 0.2576 | 2.0 | 500 | 0.2217 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,849 | [
[
-0.037506103515625,
-0.041534423828125,
0.015777587890625,
0.0220947265625,
-0.0260009765625,
-0.02001953125,
-0.01287078857421875,
-0.00833892822265625,
0.01062774658203125,
0.008392333984375,
-0.056304931640625,
-0.0517578125,
-0.05902099609375,
-0.0087203... |
SukeerthJonathan/bhagavatgita | 2023-03-29T09:45:49.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"question-answering",
"en",
"arxiv:1910.09700",
"license:openrail",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | question-answering | SukeerthJonathan | null | null | SukeerthJonathan/bhagavatgita | 0 | 2 | transformers | 2023-03-29T09:32:29 | ---
license: openrail
language:
- en
library_name: transformers
pipeline_tag: question-answering
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| 5,264 | [
[
-0.048004150390625,
-0.0455322265625,
0.031982421875,
0.008453369140625,
-0.0243988037109375,
-0.02484130859375,
0.00885009765625,
-0.047088623046875,
0.01849365234375,
0.0496826171875,
-0.055633544921875,
-0.05059814453125,
-0.044342041015625,
-0.0077323913... |
Pavan27/autotrain-telugu_summarization-44817112805 | 2023-03-30T10:16:53.000Z | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:Pavan27/autotrain-data-telugu_summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | Pavan27 | null | null | Pavan27/autotrain-telugu_summarization-44817112805 | 0 | 2 | transformers | 2023-03-29T09:53:58 | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Pavan27/autotrain-data-telugu_summarization
co2_eq_emissions:
emissions: 553.9241452628997
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 44817112805
- CO2 Emissions (in grams): 553.9241
## Validation Metrics
- Loss: 1.240
- Rouge1: 25.220
- Rouge2: 6.815
- RougeL: 24.642
- RougeLsum: 25.120
- Gen Len: 82.823
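For reference, ROUGE-1 measures unigram overlap between a candidate summary and its reference; a minimal F-measure sketch (illustrative only — the scores above come from AutoTrain's own evaluation, not this snippet):

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    # Unigram overlap between reference and candidate tokens
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat on the mat", "the cat lay on the mat"))  # ≈ 0.833
```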
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Pavan27/autotrain-telugu_summarization-44817112805
``` | 736 | [
[
-0.0292510986328125,
-0.0308990478515625,
0.0202789306640625,
0.0248565673828125,
-0.01068878173828125,
0.006923675537109375,
0.01318359375,
-0.0079498291015625,
0.0240631103515625,
0.01126861572265625,
-0.053314208984375,
-0.02935791015625,
-0.058807373046875,
... |
Pavan27/autotrain-telugu_summarization-44817112806 | 2023-03-29T23:18:20.000Z | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:Pavan27/autotrain-data-telugu_summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | Pavan27 | null | null | Pavan27/autotrain-telugu_summarization-44817112806 | 0 | 2 | transformers | 2023-03-29T09:53:58 | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Pavan27/autotrain-data-telugu_summarization
co2_eq_emissions:
emissions: 304.57370965004566
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 44817112806
- CO2 Emissions (in grams): 304.5737
## Validation Metrics
- Loss: 1.288
- Rouge1: 25.042
- Rouge2: 6.486
- RougeL: 24.483
- RougeLsum: 24.899
- Gen Len: 82.861
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Pavan27/autotrain-telugu_summarization-44817112806
``` | 737 | [
[
-0.0284881591796875,
-0.03131103515625,
0.0200042724609375,
0.0253143310546875,
-0.00972747802734375,
0.006622314453125,
0.01329803466796875,
-0.007137298583984375,
0.02301025390625,
0.0125885009765625,
-0.055267333984375,
-0.029449462890625,
-0.057891845703125,... |
nullzero-live/bert-base-banking77-pt2 | 2023-03-29T12:00:18.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | nullzero-live | null | null | nullzero-live/bert-base-banking77-pt2 | 0 | 2 | transformers | 2023-03-29T10:04:48 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.9290417627851566
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2990
- F1: 0.9290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0285 | 1.0 | 626 | 0.7603 | 0.8517 |
| 0.3662 | 2.0 | 1252 | 0.3676 | 0.9198 |
| 0.1822 | 3.0 | 1878 | 0.2990 | 0.9290 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.0+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
| 1,767 | [
[
-0.03167724609375,
-0.0399169921875,
0.0099639892578125,
0.0118408203125,
-0.04339599609375,
-0.025360107421875,
-0.0091094970703125,
-0.0194854736328125,
-0.0013399124145507812,
0.041748046875,
-0.04144287109375,
-0.04541015625,
-0.05328369140625,
-0.029647... |
harouzie/bert-base-paws | 2023-03-31T12:35:09.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:paws",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | harouzie | null | null | harouzie/bert-base-paws | 0 | 2 | transformers | 2023-03-29T11:28:25 | ---
license: mit
language:
- en
metrics:
- accuracy
- f1
library_name: transformers
pipeline_tag: text-classification
datasets:
- paws
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | 5,299 | [
[
-0.04803466796875,
-0.0455322265625,
0.032012939453125,
0.00844573974609375,
-0.024383544921875,
-0.0248565673828125,
0.00884246826171875,
-0.047119140625,
0.018524169921875,
0.0498046875,
-0.0556640625,
-0.050628662109375,
-0.04437255859375,
-0.007740020751... |
WENGSYX/CoNN_Parity | 2023-04-14T02:00:54.000Z | [
"transformers",
"pytorch",
"conn",
"arxiv:2304.01665",
"endpoints_compatible",
"region:us"
] | null | WENGSYX | null | null | WENGSYX/CoNN_Parity | 0 | 2 | transformers | 2023-03-29T11:49:36 | # Model card for CoNN Parity
### Introduction
In the paper [Neural Comprehension: Language Models with Compiled Neural Networks](https://arxiv.org/abs/2304.01665), we introduced the integration of Compiled Neural Networks (CoNNs) into the framework of language models, enabling existing language models to perform symbolic operations with perfect accuracy without relying on external tools. In this model card, we introduce the Parity model, a Transformer-like CoNN that performs the parity task.
### Install
```
git clone https://github.com/WENGSYX/Neural-Comprehension
cd Neural-Comprehension
pip install .
```
To run neural comprehension, you need to install `PyTorch`, `Transformers`, `jax`, and `tracr`.
### How to Use?
```
from NeuralCom.CoNN.modeling_conn import CoNNModel
from NeuralCom.CoNN import Tokenizer
model = CoNNModel.from_pretrained('WENGSYX/CoNN_Parity')
tokenizer = Tokenizer(model.config.input_encoding_map, model.config.output_encoding_map,model.config.max_position_embeddings)
output = model(tokenizer('1 1 0 0 1 0').unsqueeze(0))
print(tokenizer.decode(output.argmax(2)))
>>> [['bos', '1', '1', '1', '1', '1', '1']]
```
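For context, the parity task checks whether a bit string contains an odd number of ones; in the example above, every position decodes to `1`, consistent with the input `1 1 0 0 1 0` containing three ones (this reading of the output is an assumption based on the snippet, not a statement from the paper). A pure-Python check:

```python
bits = [1, 1, 0, 0, 1, 0]  # the example input from the snippet above
parity = sum(bits) % 2     # 1 when the number of ones is odd
print(parity)  # 1
```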
### 🙏Cite🙏
###### If you are interested in our paper, please feel free to cite it.
```
@misc{weng2023neural,
title={Neural Comprehension: Language Models with Compiled Neural Networks},
author={Yixuan Weng and Minjun Zhu and Fei Xia and Bin Li and Shizhu He and Kang Liu and Jun Zhao},
year={2023},
eprint={2304.01665},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 1,562 | [
[
-0.021240234375,
-0.04296875,
0.0137176513671875,
0.0249786376953125,
-0.03753662109375,
-0.029205322265625,
-0.0181121826171875,
-0.01678466796875,
-0.0004963874816894531,
0.032196044921875,
-0.03155517578125,
-0.043792724609375,
-0.0357666015625,
0.0049552... |
ennp/bert-turkish-text-classification-cased | 2023-04-09T13:46:00.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"tr",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | ennp | null | null | ennp/bert-turkish-text-classification-cased | 0 | 2 | transformers | 2023-03-29T14:09:40 | ---
license: mit
language:
- tr
metrics:
- accuracy
- f1
---
This model is a version of [turkish-bert](https://github.com/stefan-it/turkish-bert) fine-tuned on text-classification data covering the following 5 categories:
code_to_label = {
    'LABEL_0': 'INSULT',
    'LABEL_1': 'RACIST',
    'LABEL_2': 'SEXIST',
    'LABEL_3': 'PROFANITY',
    'LABEL_4': 'OTHER'}
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ennp/bert-turkish-text-classification-cased")
model = AutoModelForSequenceClassification.from_pretrained("ennp/bert-turkish-text-classification-cased")
nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

code_to_label = {
    'LABEL_0': 'INSULT',
    'LABEL_1': 'RACIST',
    'LABEL_2': 'SEXIST',
    'LABEL_3': 'PROFANITY',
    'LABEL_4': 'OTHER'}

# Map the predicted label id back to its category name
code_to_label[nlp("kıl herif gibi davranma")[0]['label']]
```
| 931 | [
[
-0.035003662109375,
-0.036590576171875,
-0.01085662841796875,
0.0289459228515625,
-0.037322998046875,
0.0027217864990234375,
-0.01245880126953125,
-0.0064544677734375,
0.01023101806640625,
0.0155181884765625,
-0.037322998046875,
-0.050872802734375,
-0.0573730468... |
feabries/ppo-SnowballTarget | 2023-03-29T15:33:15.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | feabries | null | null | feabries/ppo-SnowballTarget | 0 | 2 | ml-agents | 2023-03-29T15:33:09 | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: feabries/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| 987 | [
[
-0.016143798828125,
-0.02801513671875,
0.007625579833984375,
0.0171356201171875,
-0.021484375,
0.0165863037109375,
0.0230255126953125,
-0.006465911865234375,
0.025360107421875,
0.0390625,
-0.053375244140625,
-0.05609130859375,
-0.040863037109375,
-0.01739501... |
Ganu3010/dqn-SpaceInvadersNoFrameskip-v4 | 2023-03-29T16:37:59.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Ganu3010 | null | null | Ganu3010/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-03-29T16:37:14 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 643.50 +/- 137.50
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ganu3010 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ganu3010 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Ganu3010
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
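For context, the `exploration_fraction` and `exploration_final_eps` values above define DQN's linearly decaying ε-greedy schedule: ε falls from 1.0 to 0.01 over the first 10% of the 1M training steps. A simplified sketch of that schedule (mirroring SB3's linear schedule, not the library's own code):

```python
def epsilon(step, total_steps=1_000_000, fraction=0.1, final_eps=0.01, initial_eps=1.0):
    # Linear decay from initial_eps to final_eps over the first `fraction` of training
    progress = min(step / (fraction * total_steps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(epsilon(0))                    # 1.0
print(round(epsilon(100_000), 4))    # 0.01 (decay finished at 10% of 1M steps)
```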
| 2,691 | [
[
-0.04193115234375,
-0.03753662109375,
0.0205535888671875,
0.024810791015625,
-0.009307861328125,
-0.0180511474609375,
0.0113677978515625,
-0.01384735107421875,
0.0135498046875,
0.0237884521484375,
-0.0689697265625,
-0.03582763671875,
-0.02691650390625,
-0.00... |
amalik27/fake2 | 2023-03-29T19:59:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | amalik27 | null | null | amalik27/fake2 | 0 | 2 | transformers | 2023-03-29T17:06:17 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fake2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fake2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0191
- Accuracy: 0.9961
- F1: 0.9961
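For reference, the reported accuracy and binary F1 reduce to confusion-matrix counts as below; a generic sketch with illustrative numbers, not the Trainer's metric implementation:

```python
def accuracy(tp, tn, fp, fn):
    # Fraction of all predictions that are correct
    return (tp + tn) / (tp + tn + fp + fn)

def f1(tp, fp, fn):
    # Harmonic mean of precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(accuracy(90, 90, 10, 10))  # 0.9
```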
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0294        | 1.0   | 4056 | 0.0179          | 0.9939   | 0.9939 |
| 0.007         | 2.0   | 8112 | 0.0191          | 0.9961   | 0.9961 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,542 | [
[
-0.032318115234375,
-0.0465087890625,
0.01441192626953125,
0.01904296875,
-0.0262298583984375,
-0.029541015625,
-0.01349639892578125,
-0.024261474609375,
0.0104217529296875,
0.0212860107421875,
-0.056182861328125,
-0.03802490234375,
-0.048583984375,
-0.02384... |
Ranjit/Whisper_Small_Odia_CV_11.0_5k_steps | 2023-05-31T19:44:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"or",
"dataset:mozilla-foundation/common_voice_11_0",
"license:afl-3.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | automatic-speech-recognition | Ranjit | null | null | Ranjit/Whisper_Small_Odia_CV_11.0_5k_steps | 1 | 2 | transformers | 2023-03-29T18:53:23 | ---
license: afl-3.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper_Small_Odia_CV_11.0_5k_steps
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 or
type: mozilla-foundation/common_voice_11_0
config: or
split: test
args: or
metrics:
- name: Wer
type: wer
value: 23.497884344146687
language:
- or
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper_Small_Odia_CV_11.0_5k_steps
This model is a fine-tuned version of [Ranjit/Whisper_Small_Odia_10k_steps](https://huggingface.co/Ranjit/Whisper_Small_Odia_10k_steps) on the [mozilla-foundation/common_voice_11_0 or](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4827
- Wer: 23.4979
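For reference, WER (reported above as a percentage) is word-level edit distance divided by reference length; a minimal sketch (not the exact implementation used during this training run):

```python
def wer(reference: str, hypothesis: str) -> float:
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(r)

print(wer("the cat sat", "the cat sat down"))  # ≈ 0.333 (one insertion / 3 reference words)
```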
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0018 | 50.0 | 1000 | 0.3315 | 24.0903 |
| 0.0 | 100.0 | 2000 | 0.4098 | 23.7236 |
| 0.0 | 150.0 | 3000 | 0.4827 | 23.4979 |
| 0.0 | 200.0 | 4000 | 0.4914 | 23.8928 |
| 0.0 | 250.0 | 5000 | 0.4953 | 23.7800 | | 1,928 | [
[
-0.034820556640625,
-0.044952392578125,
0.00977325439453125,
0.0209503173828125,
-0.0279388427734375,
-0.0293731689453125,
-0.0109100341796875,
-0.0167083740234375,
0.0228118896484375,
0.0350341796875,
-0.0565185546875,
-0.04736328125,
-0.032958984375,
-0.01... |
eLarry/poca-SoccerTwos-v3-Self-Aware | 2023-03-29T20:25:55.000Z | [
"ml-agents",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | eLarry | null | null | eLarry/poca-SoccerTwos-v3-Self-Aware | 0 | 2 | ml-agents | 2023-03-29T20:25:50 |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: eLarry/poca-SoccerTwos-v3-Self-Aware
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| 1,043 | [
[
-0.033447265625,
-0.03619384765625,
0.0160675048828125,
0.030181884765625,
-0.0111083984375,
0.01474761962890625,
0.022491455078125,
-0.0253448486328125,
0.055328369140625,
0.0231170654296875,
-0.056427001953125,
-0.058502197265625,
-0.0290679931640625,
-0.0... |
Hinataaa/autotrain-summarize_model_arp-45003113075 | 2023-03-29T20:35:26.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:Hinataaa/autotrain-data-summarize_model_arp",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | Hinataaa | null | null | Hinataaa/autotrain-summarize_model_arp-45003113075 | 0 | 2 | transformers | 2023-03-29T20:35:11 | ---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Hinataaa/autotrain-data-summarize_model_arp
co2_eq_emissions:
emissions: 0.13739672174523904
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 45003113075
- CO2 Emissions (in grams): 0.1374
## Validation Metrics
- Loss: 0.828
- Rouge1: 65.000
- Rouge2: 21.053
- RougeL: 52.500
- RougeLsum: 52.500
- Gen Len: 14.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Hinataaa/autotrain-summarize_model_arp-45003113075
``` | 736 | [
[
-0.0369873046875,
-0.0277252197265625,
0.022216796875,
0.0204315185546875,
-0.003200531005859375,
0.002155303955078125,
0.0177001953125,
-0.0120391845703125,
0.0264892578125,
0.0233612060546875,
-0.055206298828125,
-0.031463623046875,
-0.05584716796875,
0.00... |
sofiapecora/SpaceInvadersNoFrameskip-v4 | 2023-03-29T21:05:53.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | sofiapecora | null | null | sofiapecora/SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-03-29T21:05:14 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 529.50 +/- 116.01
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sofiapecora -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sofiapecora -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sofiapecora
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
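The exploration settings above define SB3's linear epsilon-greedy schedule: epsilon decays from 1.0 to `exploration_final_eps` (0.01) over the first `exploration_fraction` (10%) of the 1M training steps, then stays constant. A minimal sketch of that schedule, for reading the table (the helper name `linear_epsilon` is illustrative, not part of the SB3 API):

```python
def linear_epsilon(step: int,
                   n_timesteps: int = 1_000_000,
                   exploration_fraction: float = 0.1,
                   exploration_final_eps: float = 0.01,
                   initial_eps: float = 1.0) -> float:
    """Epsilon used for epsilon-greedy action selection at a given step.

    Decays linearly from initial_eps to exploration_final_eps over the
    first exploration_fraction of training, then remains constant.
    """
    progress = min(step / (exploration_fraction * n_timesteps), 1.0)
    return initial_eps + progress * (exploration_final_eps - initial_eps)
```

With the hyperparameters above, epsilon reaches 0.01 at step 100,000 and keeps that value for the remaining 900,000 steps.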
| 2,700 | [embedding vector truncated] |
platzi/platzi-distilroberta-base-mrpc-glue-david-garcia | 2023-03-30T00:24:20.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-david-garcia | 0 | 2 | transformers | 2023-03-29T22:00:04 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
"Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
"With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-david-garcia
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7965686274509803
- name: F1
type: f1
value: 0.8623548922056385
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-david-garcia
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.6754
- Accuracy: 0.7966
- F1: 0.8624
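The reported accuracy and F1 come from comparing predicted labels against the MRPC validation labels, with F1 computed on the positive ("equivalent") class. A self-contained sketch of both metrics (function names are illustrative; the card's values were produced by the standard metric implementations, not this code):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the labels."""
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_binary(y_true, y_pred, positive=1):
    """F1 for the positive ("equivalent") class: the harmonic mean of
    precision and recall."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

On MRPC the positive class dominates, which is why F1 (0.8624) sits noticeably above accuracy (0.7966) here.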
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.526 | 1.09 | 500 | 0.6754 | 0.7966 | 0.8624 |
| 0.3485 | 2.18 | 1000 | 0.6995 | 0.8309 | 0.8783 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 2,427 | [embedding vector truncated] |
drinux/distilbert-base-uncased-finetuned-emotion | 2023-03-29T22:46:14.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | drinux | null | null | drinux/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-03-29T22:40:14 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9244751458315241
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2222
- Accuracy: 0.9245
- F1: 0.9245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
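`lr_scheduler_type: linear` means the Trainer decays the learning rate linearly from 2e-5 to 0 over the full run (here 500 steps: 250 per epoch for 2 epochs). A rough sketch of that schedule, assuming zero warmup steps as in this run (the function name is illustrative):

```python
def linear_schedule_lr(step: int,
                       num_training_steps: int = 500,
                       base_lr: float = 2e-5) -> float:
    """Learning rate at `step` under warmup-free linear decay, mirroring
    transformers' get_linear_schedule_with_warmup with num_warmup_steps=0."""
    remaining = max(0.0, (num_training_steps - step) / num_training_steps)
    return base_lr * remaining
```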
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.87 | 1.0 | 250 | 0.3317 | 0.901 | 0.8967 |
| 0.2625 | 2.0 | 500 | 0.2222 | 0.9245 | 0.9245 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.10.3
| 1,804 | [embedding vector truncated] |