modelId (string, 4-111 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 5-30 chars, nullable) | author (string, 2-34 chars, nullable) | config (null) | securityStatus (null) | id (string, 4-111 chars) | likes (int64, 0-9.53k) | downloads (int64, 2-73.6M) | library_name (string, 2-84 chars, nullable) | created (timestamp[us]) | card (string, 101-901k chars) | card_len (int64, 101-901k) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
gxhuggingface/distilbert-base-uncased-finetuned-emotion | 2023-05-15T16:28:45.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gxhuggingface | null | null | gxhuggingface/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-15T15:42:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9405
- name: F1
type: f1
value: 0.9406663459684013
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1472
- Accuracy: 0.9405
- F1: 0.9407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` reconstruction sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
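As a reproduction aid, here is a minimal sketch of the equivalent `transformers.TrainingArguments`; the `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```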
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.1786 | 0.9275 | 0.9274 |
| No log | 2.0 | 500 | 0.1472 | 0.9405 | 0.9407 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.036407470703125,
-0.04345703125,
0.01442718505859375,
0.0220489501953125,
-0.027435302734375,
-0.0193939208984375,
-0.0130767822265625,
-0.01047515869140625,
0.01126861572265625,
0.0088348388671875,
-0.056365966796875,
-0.052001953125,
-0.0595703125,
-0.0... |
nimeeshachan/mlma_nchan19_biogpt_on_adr_test_set | 2023-05-15T17:17:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | token-classification | nimeeshachan | null | null | nimeeshachan/mlma_nchan19_biogpt_on_adr_test_set | 0 | 2 | transformers | 2023-05-15T16:05:09 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mlma_nchan19_biogpt_on_adr_test_set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlma_nchan19_biogpt_on_adr_test_set
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1452
- Precision: 0.4772
- Recall: 0.5467
- F1: 0.5096
- Accuracy: 0.9523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 448 | 0.1998 | 0.4210 | 0.3185 | 0.3626 | 0.9382 |
| 0.289 | 2.0 | 896 | 0.1630 | 0.4394 | 0.5043 | 0.4696 | 0.9474 |
| 0.1587 | 3.0 | 1344 | 0.1452 | 0.4772 | 0.5467 | 0.5096 | 0.9523 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,701 | [
[
-0.033050537109375,
-0.043365478515625,
0.0084686279296875,
0.0003864765167236328,
-0.0175323486328125,
-0.0284423828125,
0.003231048583984375,
-0.01922607421875,
0.00440216064453125,
0.01812744140625,
-0.044403076171875,
-0.04296875,
-0.04730224609375,
-0.0... |
yeobeom/distilbert-base-uncased-finetuned-emotion | 2023-05-15T17:17:30.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | yeobeom | null | null | yeobeom/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-15T17:12:49 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9256616841507974
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2219
- Accuracy: 0.9255
- F1: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8632 | 1.0 | 250 | 0.3232 | 0.906 | 0.9035 |
| 0.2592 | 2.0 | 500 | 0.2219 | 0.9255 | 0.9257 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.037506103515625,
-0.04150390625,
0.01433563232421875,
0.022003173828125,
-0.0262298583984375,
-0.018890380859375,
-0.01340484619140625,
-0.00838470458984375,
0.01061248779296875,
0.0079803466796875,
-0.056396484375,
-0.0517578125,
-0.059356689453125,
-0.0... |
livinNector/IndicBERTv2-MLM-Sam-TLM-NER | 2023-05-18T13:44:09.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | livinNector | null | null | livinNector/IndicBERTv2-MLM-Sam-TLM-NER | 0 | 2 | transformers | 2023-05-15T17:56:40 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: IndicBERTv2-MLM-Sam-TLM-NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndicBERTv2-MLM-Sam-TLM-NER
This model is a fine-tuned version of [ai4bharat/IndicBERTv2-MLM-Sam-TLM](https://huggingface.co/ai4bharat/IndicBERTv2-MLM-Sam-TLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4521
- Precision: 0.7629
- Recall: 0.7792
- F1: 0.7710
- Accuracy: 0.9038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3268 | 0.49 | 1000 | 0.3440 | 0.7207 | 0.7602 | 0.7399 | 0.8887 |
| 0.2763 | 0.99 | 2000 | 0.3083 | 0.7568 | 0.7732 | 0.7649 | 0.8983 |
| 0.2604 | 1.48 | 3000 | 0.3312 | 0.7309 | 0.7494 | 0.7401 | 0.8909 |
| 0.2501 | 1.98 | 4000 | 0.3017 | 0.7415 | 0.7956 | 0.7676 | 0.9014 |
| 0.2269 | 2.47 | 5000 | 0.2930 | 0.7528 | 0.7970 | 0.7743 | 0.9050 |
| 0.223 | 2.96 | 6000 | 0.2963 | 0.7590 | 0.7963 | 0.7772 | 0.9053 |
| 0.2011 | 3.46 | 7000 | 0.2939 | 0.7627 | 0.7946 | 0.7783 | 0.9079 |
| 0.1999 | 3.95 | 8000 | 0.3036 | 0.7676 | 0.7903 | 0.7788 | 0.9069 |
| 0.1815 | 4.44 | 9000 | 0.3125 | 0.7618 | 0.7915 | 0.7764 | 0.9056 |
| 0.1777 | 4.94 | 10000 | 0.3083 | 0.7748 | 0.7957 | 0.7851 | 0.9098 |
| 0.1622 | 5.43 | 11000 | 0.3251 | 0.7721 | 0.7909 | 0.7814 | 0.9089 |
| 0.1598 | 5.93 | 12000 | 0.3197 | 0.7767 | 0.7947 | 0.7856 | 0.9092 |
| 0.145 | 6.42 | 13000 | 0.3366 | 0.7718 | 0.7986 | 0.7850 | 0.9101 |
| 0.1436 | 6.91 | 14000 | 0.3247 | 0.7776 | 0.7977 | 0.7875 | 0.9112 |
| 0.1306 | 7.41 | 15000 | 0.3502 | 0.7779 | 0.7958 | 0.7867 | 0.9107 |
| 0.1311 | 7.9 | 16000 | 0.3585 | 0.7857 | 0.7909 | 0.7883 | 0.9105 |
| 0.12 | 8.4 | 17000 | 0.3717 | 0.7768 | 0.7911 | 0.7839 | 0.9099 |
| 0.1202 | 8.89 | 18000 | 0.3667 | 0.7796 | 0.7882 | 0.7839 | 0.9100 |
| 0.1141 | 9.38 | 19000 | 0.3860 | 0.7857 | 0.7900 | 0.7879 | 0.9100 |
| 0.1113 | 9.88 | 20000 | 0.3824 | 0.7758 | 0.7970 | 0.7862 | 0.9094 |
| 0.1056 | 10.37 | 21000 | 0.4041 | 0.7740 | 0.7952 | 0.7845 | 0.9084 |
| 0.1073 | 10.86 | 22000 | 0.4062 | 0.7735 | 0.7929 | 0.7831 | 0.9094 |
| 0.1063 | 11.36 | 23000 | 0.4197 | 0.7720 | 0.7866 | 0.7793 | 0.9071 |
| 0.1026 | 11.85 | 24000 | 0.4179 | 0.7625 | 0.7767 | 0.7695 | 0.9040 |
| 0.1042 | 12.35 | 25000 | 0.4392 | 0.7639 | 0.7748 | 0.7693 | 0.9037 |
| 0.101 | 12.84 | 26000 | 0.4373 | 0.7533 | 0.7795 | 0.7662 | 0.9029 |
| 0.1003 | 13.33 | 27000 | 0.4554 | 0.7535 | 0.7774 | 0.7653 | 0.9021 |
| 0.0993 | 13.83 | 28000 | 0.4530 | 0.7555 | 0.7773 | 0.7663 | 0.9019 |
| 0.0978 | 14.32 | 29000 | 0.4467 | 0.7637 | 0.7843 | 0.7738 | 0.9050 |
| 0.0946 | 14.81 | 30000 | 0.4521 | 0.7629 | 0.7792 | 0.7710 | 0.9038 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
| 4,290 | [
[
-0.042083740234375,
-0.033660888671875,
0.0125732421875,
0.006595611572265625,
-0.004833221435546875,
0.0003714561462402344,
0.00255584716796875,
-0.004428863525390625,
0.03924560546875,
0.031646728515625,
-0.043731689453125,
-0.0496826171875,
-0.0439453125,
... |
harshadpc10/MyFirstModel | 2023-05-15T20:56:27.000Z | [
"keras",
"region:us"
] | null | harshadpc10 | null | null | harshadpc10/MyFirstModel | 0 | 2 | keras | 2023-05-15T18:39:46 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reconstruction sketch follows the table):
| Hyperparameters | Value |
| :-- | :-- |
| name | SGD |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.009999999776482582 |
| momentum | 0.0 |
| nesterov | False |
| training_precision | float32 |
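The table describes a stock Keras SGD optimizer; a minimal reconstruction sketch, assuming TensorFlow 2.11+ where these arguments exist:
```python
import tensorflow as tf

# Hedged reconstruction of the optimizer described in the table above.
optimizer = tf.keras.optimizers.SGD(
    learning_rate=0.01,  # the logged 0.009999999776482582 is 1e-2 stored as float32
    momentum=0.0,
    nesterov=False,
    weight_decay=None,
    use_ema=False,
    ema_momentum=0.99,
    jit_compile=False,
)
```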
| 701 | [
[
-0.02783203125,
-0.03515625,
0.01229095458984375,
0.01812744140625,
-0.03826904296875,
-0.0216522216796875,
0.003116607666015625,
0.01245880126953125,
0.0217742919921875,
0.0293121337890625,
-0.045684814453125,
-0.0533447265625,
-0.033782958984375,
-0.008056... |
YaYaB/l3-setfit_v2 | 2023-05-15T21:37:13.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | YaYaB | null | null | YaYaB/l3-setfit_v2 | 0 | 2 | sentence-transformers | 2023-05-15T21:37:09 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YaYaB/l3-setfit_v2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YaYaB/l3-setfit_v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,525 | [
[
-0.005001068115234375,
-0.0654296875,
0.028106689453125,
-0.0137176513671875,
-0.01264190673828125,
-0.019500732421875,
-0.01035308837890625,
-0.018402099609375,
0.0044403076171875,
0.036529541015625,
-0.04620361328125,
-0.018463134765625,
-0.0396728515625,
... |
Madhu45/Teledermatology_model | 2023-05-16T07:23:31.000Z | [
"keras",
"region:us"
] | null | Madhu45 | null | null | Madhu45/Teledermatology_model | 0 | 2 | keras | 2023-05-15T22:06:45 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | SGD |
| learning_rate | 0.05000000074505806 |
| decay | 0.0 |
| momentum | 0.0 |
| nesterov | False |
| training_precision | float32 |
| 489 | [
[
-0.016571044921875,
-0.0277557373046875,
0.0007143020629882812,
0.01190948486328125,
-0.044830322265625,
-0.0207366943359375,
0.00238037109375,
0.010894775390625,
0.0157470703125,
0.0248565673828125,
-0.039215087890625,
-0.05499267578125,
-0.038970947265625,
... |
TimBless222/distilbert-base-uncased-finetuned-emotion | 2023-05-19T15:24:14.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | TimBless222 | null | null | TimBless222/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-16T00:04:29 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9285439912301902
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.9285
- F1: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8381 | 1.0 | 250 | 0.3165 | 0.9075 | 0.9040 |
| 0.2524 | 2.0 | 500 | 0.2183 | 0.9285 | 0.9285 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.038482666015625,
-0.041259765625,
0.0149688720703125,
0.0216217041015625,
-0.0262908935546875,
-0.0191802978515625,
-0.01313018798828125,
-0.008697509765625,
0.010711669921875,
0.008697509765625,
-0.056732177734375,
-0.051971435546875,
-0.0595703125,
-0.0... |
vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridadDA-V3 | 2023-05-16T03:17:07.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | vg055 | null | null | vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridadDA-V3 | 0 | 2 | transformers | 2023-05-16T01:02:19 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridadDA-V3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridadDA-V3
This model is a fine-tuned version of [vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation](https://huggingface.co/vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6583
- F1: 0.7400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5919 | 1.0 | 17166 | 0.5992 | 0.7388 |
| 0.3925 | 2.0 | 34332 | 0.6583 | 0.7400 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,612 | [
[
-0.038818359375,
-0.041656494140625,
0.0151824951171875,
0.01617431640625,
-0.032867431640625,
-0.042388916015625,
-0.01495361328125,
-0.0165557861328125,
0.007221221923828125,
0.03216552734375,
-0.057037353515625,
-0.047576904296875,
-0.048248291015625,
-0.... |
yyabuki/distilbert-base-uncased-finetuned-emotion | 2023-05-17T09:54:24.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | yyabuki | null | null | yyabuki/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-16T01:17:15 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.925439015968626
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2205
- Accuracy: 0.9255
- F1: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8201 | 1.0 | 250 | 0.3106 | 0.907 | 0.9049 |
| 0.2487 | 2.0 | 500 | 0.2205 | 0.9255 | 0.9254 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,842 | [
[
-0.037994384765625,
-0.040924072265625,
0.01348876953125,
0.0229339599609375,
-0.0261688232421875,
-0.0203399658203125,
-0.01291656494140625,
-0.00846099853515625,
0.01065826416015625,
0.0085906982421875,
-0.056488037109375,
-0.05230712890625,
-0.0596923828125,
... |
AustinCarthy/MixGPT2_10K_fromB_BFall_10KGen_topP_0.75 | 2023-05-18T20:17:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/MixGPT2_10K_fromB_BFall_10KGen_topP_0.75 | 0 | 2 | transformers | 2023-05-16T03:36:29 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_10K_fromB_BFall_10KGen_topP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_10K_fromB_BFall_10KGen_topP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0493
- Accuracy: 0.9952
- F1: 0.9474
- Precision: 0.9989
- Recall: 0.901
- Roc Auc Score: 0.9505
- Tpr At Fpr 0.01: 0.9106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a short mixed-precision sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
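Native AMP here corresponds to the `fp16` flag in `TrainingArguments`; a hedged sketch of the key fields, with a placeholder `output_dir`:
```python
from transformers import TrainingArguments

# Hedged sketch: "mixed_precision_training: Native AMP" maps to fp16=True.
args = TrainingArguments(
    output_dir="mixgpt2-phish-classifier",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    num_train_epochs=5,
    fp16=True,
)
```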
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0068 | 1.0 | 13125 | 0.0321 | 0.9930 | 0.9209 | 0.9949 | 0.8572 | 0.9285 | 0.821 |
| 0.0041 | 2.0 | 26250 | 0.0398 | 0.9941 | 0.9341 | 0.9973 | 0.8784 | 0.9391 | 0.8602 |
| 0.0011 | 3.0 | 39375 | 0.0646 | 0.9922 | 0.9109 | 0.9990 | 0.837 | 0.9185 | 0.8694 |
| 0.0014 | 4.0 | 52500 | 0.0567 | 0.9929 | 0.9191 | 0.9998 | 0.8504 | 0.9252 | 0.895 |
| 0.0 | 5.0 | 65625 | 0.0493 | 0.9952 | 0.9474 | 0.9989 | 0.901 | 0.9505 | 0.9106 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,241 | [
[
-0.044219970703125,
-0.042877197265625,
0.007190704345703125,
0.0152130126953125,
-0.0233001708984375,
-0.018768310546875,
-0.005527496337890625,
-0.02044677734375,
0.0285797119140625,
0.024322509765625,
-0.0517578125,
-0.045745849609375,
-0.05517578125,
-0.... |
AustinCarthy/MixGPT2_10K_fromB_BFall_20KGen_topP_0.75 | 2023-05-16T15:28:14.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/MixGPT2_10K_fromB_BFall_20KGen_topP_0.75 | 0 | 2 | transformers | 2023-05-16T05:04:51 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_10K_fromB_BFall_20KGen_topP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_10K_fromB_BFall_20KGen_topP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0667
- Accuracy: 0.9940
- F1: 0.9325
- Precision: 0.9993
- Recall: 0.874
- Roc Auc Score: 0.9370
- Tpr At Fpr 0.01: 0.8984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0038 | 1.0 | 19688 | 0.0511 | 0.9926 | 0.9158 | 0.9991 | 0.8454 | 0.9227 | 0.8744 |
| 0.0028 | 2.0 | 39376 | 0.0423 | 0.9946 | 0.9405 | 0.9951 | 0.8916 | 0.9457 | 0.884 |
| 0.0006 | 3.0 | 59064 | 0.0510 | 0.9940 | 0.9325 | 0.9975 | 0.8754 | 0.9376 | 0.875 |
| 0.0 | 4.0 | 78752 | 0.0355 | 0.9958 | 0.9536 | 0.9987 | 0.9124 | 0.9562 | 0.9172 |
| 0.0 | 5.0 | 98440 | 0.0667 | 0.9940 | 0.9325 | 0.9993 | 0.874 | 0.9370 | 0.8984 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,241 | [
[
-0.0447998046875,
-0.042022705078125,
0.006084442138671875,
0.0163421630859375,
-0.0233154296875,
-0.018890380859375,
-0.007198333740234375,
-0.0207672119140625,
0.027801513671875,
0.0242767333984375,
-0.051422119140625,
-0.046905517578125,
-0.05487060546875,
... |
mathislucka/bi-deberta-base-hallucination-v1 | 2023-05-16T06:28:21.000Z | [
"sentence-transformers",
"pytorch",
"deberta-v2",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | sentence-similarity | mathislucka | null | null | mathislucka/bi-deberta-base-hallucination-v1 | 0 | 2 | sentence-transformers | 2023-05-16T06:24:17 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 516 with parameters:
```
{'batch_size': 14}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 300,
"evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 0,
"weight_decay": 0.01
}
```
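Putting the DataLoader, loss, and `fit()` parameters above together, here is a minimal reconstruction sketch; the base checkpoint and training pairs are placeholders (the card only reveals a DeBERTa-v2-family backbone), and the `InformationRetrievalEvaluator` with `evaluation_steps=300` is omitted:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer("microsoft/deberta-v3-base")  # hypothetical base checkpoint

# Placeholder pairs; real training needs the actual (query, passage) data.
train_examples = [
    InputExample(texts=[f"placeholder query {i}", f"placeholder passage {i}"])
    for i in range(140)
]
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=14)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim is the default

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=0,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-05},
    max_grad_norm=1,
)
```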
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: DebertaV2Model
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
| 3,805 | [
[
-0.019622802734375,
-0.06402587890625,
0.0234222412109375,
0.025360107421875,
-0.0220947265625,
-0.03277587890625,
-0.01580810546875,
0.002716064453125,
0.0161285400390625,
0.02777099609375,
-0.04998779296875,
-0.04534912109375,
-0.05328369140625,
-0.0019855... |
neurae/bert-dnd-intents | 2023-07-22T04:36:32.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"en",
"dataset:neurae/dnd_style_intents",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | neurae | null | null | neurae/bert-dnd-intents | 0 | 2 | transformers | 2023-05-16T06:37:46 | ---
datasets:
- neurae/dnd_style_intents
language:
- en
pipeline_tag: text-classification
license: apache-2.0
metrics:
- accuracy
- f1
---
This is BERT base tuned with an optimal learning rate, learning-rate scheduler, and weight decay on the dnd-style-intents dataset (a hedged inference sketch follows).
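A minimal inference sketch, assuming the standard `transformers` pipeline (the example utterance is made up):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="neurae/bert-dnd-intents")
print(classifier("I draw my sword and charge at the goblin."))
```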
| parameters | value |
|---------------|----------|
| learning rate | 1.3e-4 |
| lr scheduler | constant |
| weight decay | 7e-2 |
The model achieves the following metrics on the dataset's test split:
| metric | value |
|----------|-------|
| accuracy | 0.978 |
| Macro F1 | 0.977 |
| Micro F1 | 0.978 |
| 542 | [
[
-0.01238250732421875,
-0.0341796875,
0.019989013671875,
0.0191802978515625,
-0.027099609375,
-0.027587890625,
-0.016082763671875,
0.00720977783203125,
0.0289459228515625,
0.01806640625,
-0.07574462890625,
-0.0201263427734375,
-0.035736083984375,
-0.025604248... |
neurae/distilbert-dnd-intents | 2023-07-16T09:37:51.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:neurae/dnd_style_intents",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | neurae | null | null | neurae/distilbert-dnd-intents | 0 | 2 | transformers | 2023-05-16T06:39:30 | ---
datasets:
- neurae/dnd_style_intents
language:
- en
pipeline_tag: text-classification
license: apache-2.0
metrics:
- accuracy
- f1
---
This is DistilBERT base tuned with an optimal learning rate, learning-rate scheduler, and weight decay on the dnd-style-intents dataset.
| parameters | value |
|---------------|----------|
| learning rate | 1.8e-4 |
| lr scheduler | linear |
| weight decay | 0 |
The model achieves the following metrics on the dataset's test split:
| metric | value |
|----------|-------|
| accuracy | 0.985 |
| Macro F1 | 0.984 |
| Micro F1 | 0.985 |
| 548 | [
[
-0.006809234619140625,
-0.03155517578125,
0.0234222412109375,
0.01293182373046875,
-0.026763916015625,
-0.01079559326171875,
-0.002941131591796875,
0.029205322265625,
0.021148681640625,
0.0163116455078125,
-0.06866455078125,
-0.0230560302734375,
-0.0468139648437... |
AustinCarthy/MixGPT2_10K_fromB_BFall_30KGen_topP_0.75 | 2023-05-16T15:41:05.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/MixGPT2_10K_fromB_BFall_30KGen_topP_0.75 | 0 | 2 | transformers | 2023-05-16T07:12:31 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_10K_fromB_BFall_30KGen_topP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_10K_fromB_BFall_30KGen_topP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Accuracy: 0.9926
- F1: 0.9162
- Precision: 0.9998
- Recall: 0.8456
- Roc Auc Score: 0.9228
- Tpr At Fpr 0.01: 0.8956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.005 | 1.0 | 26250 | 0.0392 | 0.9921 | 0.9101 | 0.9983 | 0.8362 | 0.9181 | 0.838 |
| 0.0015 | 2.0 | 52500 | 0.0749 | 0.9909 | 0.8940 | 0.9978 | 0.8098 | 0.9049 | 0.8144 |
| 0.0007 | 3.0 | 78750 | 0.0421 | 0.9952 | 0.9471 | 0.9989 | 0.9004 | 0.9502 | 0.9072 |
| 0.0013 | 4.0 | 105000 | 0.0393 | 0.9941 | 0.9344 | 0.9998 | 0.877 | 0.9385 | 0.9138 |
| 0.0003 | 5.0 | 131250 | 0.0617 | 0.9926 | 0.9162 | 0.9998 | 0.8456 | 0.9228 | 0.8956 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,249 | [
[
-0.044342041015625,
-0.042633056640625,
0.0074920654296875,
0.0156402587890625,
-0.0225677490234375,
-0.01947021484375,
-0.00588226318359375,
-0.019561767578125,
0.0284881591796875,
0.023712158203125,
-0.051300048828125,
-0.045074462890625,
-0.053558349609375,
... |
alessandrobrra/dqn-BeamRiderNoFrameskip-v4 | 2023-05-16T08:06:24.000Z | [
"stable-baselines3",
"BeamRiderNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | alessandrobrra | null | null | alessandrobrra/dqn-BeamRiderNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-05-16T08:05:24 | ---
library_name: stable-baselines3
tags:
- BeamRiderNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BeamRiderNoFrameskip-v4
type: BeamRiderNoFrameskip-v4
metrics:
- type: mean_reward
value: 602.00 +/- 173.17
name: mean_reward
verified: false
---
# **DQN** Agent playing **BeamRiderNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **BeamRiderNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga alessandrobrra -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga alessandrobrra -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/ -orga alessandrobrra
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
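Outside the RL Zoo, the downloaded checkpoint can also be loaded directly with Stable-Baselines3; a hedged sketch (the zip path below is hypothetical and depends on where `load_from_hub` saved it):
```python
from stable_baselines3 import DQN

# Hypothetical path produced by the rl_zoo3.load_from_hub command above.
model = DQN.load("logs/dqn/BeamRiderNoFrameskip-v4_1/BeamRiderNoFrameskip-v4.zip")
print(model.policy)
```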
| 2,666 | [
[
-0.0377197265625,
-0.044281005859375,
0.0152587890625,
0.029754638671875,
-0.01036834716796875,
-0.01361083984375,
0.025054931640625,
-0.0186920166015625,
0.0036334991455078125,
0.024322509765625,
-0.0631103515625,
-0.037811279296875,
-0.0303497314453125,
-0... |
neurae/roberta-dnd-intents | 2023-07-16T09:33:44.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:neurae/dnd_style_intents",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | neurae | null | null | neurae/roberta-dnd-intents | 0 | 2 | transformers | 2023-05-16T09:18:00 | ---
datasets:
- neurae/dnd_style_intents
language:
- en
pipeline_tag: text-classification
license: apache-2.0
metrics:
- accuracy
- f1
---
This is RoBERTa base tuned with an optimal learning rate, learning-rate scheduler, and weight decay on the dnd-style-intents dataset.
| parameters | value |
|---------------|----------|
| learning rate | 5e-5 |
| lr scheduler | linear |
| weight decay | 0 |
The model achieves the following metrics on the dataset's test split:
| metric | value |
|----------|-------|
| accuracy | 0.985 |
| Macro F1 | 0.985 |
| Micro F1 | 0.985 |
| 545 | [
[
0.0022678375244140625,
-0.04034423828125,
0.02850341796875,
0.0062408447265625,
-0.0243682861328125,
-0.0199432373046875,
-0.01995849609375,
0.021514892578125,
0.01605224609375,
0.030975341796875,
-0.06927490234375,
-0.0294036865234375,
-0.054779052734375,
-... |
platzi/platzi-distilroberta-base-mrpc-glue-jonathan-narvaez | 2023-05-16T11:21:15.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-jonathan-narvaez | 0 | 2 | transformers | 2023-05-16T09:18:27 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
"Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
"With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-jonathan-narvaez
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8259803921568627
- name: F1
type: f1
value: 0.8725314183123878
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-jonathan-narvaez
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4482
- Accuracy: 0.8260
- F1: 0.8725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3682 | 1.09 | 500 | 0.4482 | 0.8260 | 0.8725 |
| 0.3611 | 2.18 | 1000 | 0.4482 | 0.8260 | 0.8725 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,437 | [
[
-0.031768798828125,
-0.042572021484375,
0.00881195068359375,
0.019012451171875,
-0.0303955078125,
-0.0261688232421875,
-0.0101165771484375,
-0.004245758056640625,
0.005802154541015625,
0.00960540771484375,
-0.050079345703125,
-0.04095458984375,
-0.05596923828125... |
AustinCarthy/MixGPT2_10K_fromB_BFall_40KGen_topP_0.75 | 2023-05-18T20:30:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/MixGPT2_10K_fromB_BFall_40KGen_topP_0.75 | 0 | 2 | transformers | 2023-05-16T09:58:32 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_10K_fromB_BFall_40KGen_topP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_10K_fromB_BFall_40KGen_topP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0480
- Accuracy: 0.9944
- F1: 0.9378
- Precision: 0.9998
- Recall: 0.883
- Roc Auc Score: 0.9415
- Tpr At Fpr 0.01: 0.91
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0059 | 1.0 | 32813 | 0.0405 | 0.9934 | 0.9250 | 0.9991 | 0.8612 | 0.9306 | 0.8928 |
| 0.0036 | 2.0 | 65626 | 0.0503 | 0.9929 | 0.9193 | 0.9998 | 0.8508 | 0.9254 | 0.8914 |
| 0.001 | 3.0 | 98439 | 0.0706 | 0.9908 | 0.8936 | 0.9995 | 0.808 | 0.9040 | 0.8702 |
| 0.0011 | 4.0 | 131252 | 0.0564 | 0.9943 | 0.9363 | 0.9986 | 0.8812 | 0.9406 | 0.8958 |
| 0.0 | 5.0 | 164065 | 0.0480 | 0.9944 | 0.9378 | 0.9998 | 0.883 | 0.9415 | 0.91 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,246 | [
[
-0.04437255859375,
-0.0421142578125,
0.006526947021484375,
0.0160064697265625,
-0.02215576171875,
-0.0189056396484375,
-0.006458282470703125,
-0.0205078125,
0.028594970703125,
0.0241241455078125,
-0.05218505859375,
-0.045623779296875,
-0.05438232421875,
-0.0... |
neurae/albert-dnd-intents | 2023-07-16T09:38:16.000Z | [
"transformers",
"pytorch",
"albert",
"text-classification",
"en",
"dataset:neurae/dnd_style_intents",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | neurae | null | null | neurae/albert-dnd-intents | 0 | 2 | transformers | 2023-05-16T09:58:57 | ---
datasets:
- neurae/dnd_style_intents
language:
- en
pipeline_tag: text-classification
license: apache-2.0
metrics:
- accuracy
- f1
---
This is ALBERT base tuned with an optimal learning rate, learning-rate scheduler, and weight decay on the dnd-style-intents dataset.
| parameters | value |
|---------------|----------|
| learning rate | 5e-5 |
| lr scheduler | linear |
| weight decay | 0 |
The model achieves the following metrics on the dataset's test split:
| metric | value |
|----------|-------|
| accuracy | 0.981 |
| Macro F1 | 0.979 |
| Micro F1 | 0.985 |
| 544 | [
[
-0.01262664794921875,
-0.020477294921875,
0.0207366943359375,
0.0148468017578125,
-0.0097808837890625,
-0.0164642333984375,
0.0017671585083007812,
0.0205841064453125,
0.021759033203125,
0.032989501953125,
-0.0618896484375,
-0.033355712890625,
-0.035980224609375,... |
Yorth/gpt2_medium_poetry | 2023-05-16T13:09:09.000Z | [
"transformers",
"pytorch",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Yorth | null | null | Yorth/gpt2_medium_poetry | 0 | 2 | transformers | 2023-05-16T12:38:00 | ---
tags:
- generated_from_keras_callback
model-index:
- name: gpt2_medium_poetry
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gpt2_medium_poetry
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.1
- TensorFlow 2.12.0
- Tokenizers 0.13.3
| 854 | [
[
-0.021881103515625,
-0.03802490234375,
0.039886474609375,
0.00484466552734375,
-0.04766845703125,
-0.03466796875,
-0.0163726806640625,
-0.027099609375,
-0.008544921875,
0.030487060546875,
-0.03936767578125,
-0.0377197265625,
-0.0782470703125,
-0.025863647460... |
Chantland/HRAF_EVENT_Demo | 2023-06-26T20:19:53.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"anthropology",
"license:unlicense",
"endpoints_compatible",
"region:us"
] | text-classification | Chantland | null | null | Chantland/HRAF_EVENT_Demo | 0 | 2 | transformers | 2023-05-16T15:25:31 | ---
license: unlicense
tags:
- anthropology
- text-classification
---
A text classification model used to detect passages that describe misfortune events. The current F1 score on 140 held-out passages (not used for training) is 0.94.
<br><br><br>
For a quick demo, try typing a sentence or even a paragraph into the <b>Hosted inference API</b> and pressing "compute"!
| 353 | [
[
-0.024169921875,
-0.054107666015625,
0.034149169921875,
0.046142578125,
-0.032470703125,
-0.01385498046875,
0.00551605224609375,
-0.039154052734375,
-0.01702880859375,
0.02337646484375,
-0.053985595703125,
-0.043243408203125,
-0.0310821533203125,
0.030395507... |
2rtl3/mn-roberta-base-demo-named-entity | 2023-05-16T16:55:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"mn",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2rtl3 | null | null | 2rtl3/mn-roberta-base-demo-named-entity | 0 | 2 | transformers | 2023-05-16T16:13:13 | ---
language:
- mn
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mn-roberta-base-demo-named-entity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mn-roberta-base-demo-named-entity
This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1354
- Precision: 0.9239
- Recall: 0.9322
- F1: 0.9280
- Accuracy: 0.9797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1651 | 1.0 | 477 | 0.0835 | 0.8900 | 0.9145 | 0.9021 | 0.9745 |
| 0.0535 | 2.0 | 954 | 0.0780 | 0.9047 | 0.9243 | 0.9144 | 0.9775 |
| 0.0267 | 3.0 | 1431 | 0.0836 | 0.9184 | 0.9307 | 0.9245 | 0.9790 |
| 0.0159 | 4.0 | 1908 | 0.0936 | 0.9224 | 0.9329 | 0.9276 | 0.9803 |
| 0.0083 | 5.0 | 2385 | 0.1155 | 0.9224 | 0.9307 | 0.9265 | 0.9790 |
| 0.0055 | 6.0 | 2862 | 0.1211 | 0.9222 | 0.9316 | 0.9268 | 0.9793 |
| 0.0034 | 7.0 | 3339 | 0.1258 | 0.9199 | 0.9329 | 0.9263 | 0.9789 |
| 0.0025 | 8.0 | 3816 | 0.1300 | 0.9249 | 0.9339 | 0.9294 | 0.9799 |
| 0.002 | 9.0 | 4293 | 0.1352 | 0.9231 | 0.9313 | 0.9272 | 0.9795 |
| 0.0018 | 10.0 | 4770 | 0.1354 | 0.9239 | 0.9322 | 0.9280 | 0.9797 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,380 | [
[
-0.034393310546875,
-0.043304443359375,
0.01308441162109375,
0.0035037994384765625,
-0.0147247314453125,
-0.02191162109375,
-0.004627227783203125,
-0.00809478759765625,
0.027496337890625,
0.0286712646484375,
-0.053314208984375,
-0.05755615234375,
-0.050415039062... |
land25/distilbert-base-uncased_emotion_ft_0416 | 2023-05-17T14:58:38.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | land25 | null | null | land25/distilbert-base-uncased_emotion_ft_0416 | 0 | 2 | transformers | 2023-05-16T16:45:36 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
- precision
model-index:
- name: distilbert-base-uncased_emotion_ft_0416
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
- name: F1
type: f1
value: 0.9378516520466151
- name: Precision
type: precision
value: 0.9085326888984738
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_emotion_ft_0416
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1495
- Accuracy: 0.9375
- F1: 0.9379
- Precision: 0.9085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| 0.8285 | 1.0 | 250 | 0.2793 | 0.917 | 0.9150 | 0.9106 |
| 0.2185 | 2.0 | 500 | 0.1718 | 0.926 | 0.9262 | 0.8978 |
| 0.1413 | 3.0 | 750 | 0.1579 | 0.9325 | 0.9325 | 0.9096 |
| 0.1147 | 4.0 | 1000 | 0.1495 | 0.9375 | 0.9379 | 0.9085 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,160 | [
[
-0.037017822265625,
-0.034149169921875,
0.01398468017578125,
0.019775390625,
-0.0243988037109375,
-0.0179290771484375,
-0.00850677490234375,
-0.005580902099609375,
0.0131988525390625,
0.0102386474609375,
-0.0543212890625,
-0.0516357421875,
-0.05908203125,
-0... |
asieh/bert-fine-tuned-cola | 2023-05-22T14:08:55.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | asieh | null | null | asieh/bert-fine-tuned-cola | 0 | 2 | transformers | 2023-05-16T17:18:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5532122564572604
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8643
- Matthews Correlation: 0.5532
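For reference, the Matthews correlation coefficient combines all four confusion-matrix counts into a single score in $[-1, 1]$:

$$\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$$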
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4782 | 1.0 | 1069 | 0.5697 | 0.4911 |
| 0.3103 | 2.0 | 2138 | 0.6183 | 0.5820 |
| 0.176 | 3.0 | 3207 | 0.8643 | 0.5532 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,840 | [
[
-0.0260772705078125,
-0.0582275390625,
0.0089569091796875,
0.0191497802734375,
-0.0219268798828125,
-0.0165863037109375,
-0.01435089111328125,
-0.016143798828125,
0.02386474609375,
0.0097808837890625,
-0.05328369140625,
-0.030975341796875,
-0.053985595703125,
... |
AustinCarthy/Baseline_20Kphish_benignFall_20_20_20 | 2023-05-17T15:52:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Baseline_20Kphish_benignFall_20_20_20 | 0 | 2 | transformers | 2023-05-16T19:07:03 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_20Kphish_benignFall_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_20Kphish_benignFall_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0540
- Accuracy: 0.9952
- F1: 0.9467
- Precision: 0.9984
- Recall: 0.9
- Roc Auc Score: 0.9500
- Tpr At Fpr 0.01: 0.9032 (the true-positive rate at the operating point where the false-positive rate is held to 1%; see the sketch below)
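A minimal sketch of how a metric like this can be computed with scikit-learn (the labels and scores below are placeholders, not this model's outputs):
```python
import numpy as np
from sklearn.metrics import roc_curve

# Placeholder ground-truth labels (1 = phishing) and predicted phishing scores
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.05, 0.90])

# Sweep thresholds along the ROC curve, then take the best TPR
# among operating points whose FPR does not exceed 1%
fpr, tpr, _ = roc_curve(y_true, y_score)
tpr_at_fpr_001 = tpr[fpr <= 0.01].max()
print(tpr_at_fpr_001)
```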
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0065 | 1.0 | 13125 | 0.0309 | 0.991 | 0.8959 | 0.9975 | 0.813 | 0.9064 | 0.7808 |
| 0.004 | 2.0 | 26250 | 0.0448 | 0.9926 | 0.9153 | 0.9988 | 0.8446 | 0.9223 | 0.8598 |
| 0.0019 | 3.0 | 39375 | 0.0501 | 0.9938 | 0.9302 | 0.9986 | 0.8706 | 0.9353 | 0.8818 |
| 0.0013 | 4.0 | 52500 | 0.0462 | 0.9954 | 0.9496 | 0.9967 | 0.9068 | 0.9533 | 0.895 |
| 0.0 | 5.0 | 65625 | 0.0540 | 0.9952 | 0.9467 | 0.9984 | 0.9 | 0.9500 | 0.9032 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,233 | [
[
-0.040740966796875,
-0.044403076171875,
0.00804901123046875,
0.010223388671875,
-0.02044677734375,
-0.021453857421875,
-0.00396728515625,
-0.0186309814453125,
0.028076171875,
0.029144287109375,
-0.054290771484375,
-0.055816650390625,
-0.05072021484375,
-0.01... |
maxbarshay/Jordan_Name_Distinction | 2023-05-22T14:19:29.000Z | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | maxbarshay | null | null | maxbarshay/Jordan_Name_Distinction | 0 | 2 | transformers | 2023-05-16T19:09:13 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Jordan_Name_Distinction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jordan_Name_Distinction
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,025 | [
[
-0.03228759765625,
-0.04766845703125,
0.0194854736328125,
0.00931549072265625,
-0.025299072265625,
-0.03436279296875,
-0.01446533203125,
-0.0188446044921875,
0.0107421875,
0.03240966796875,
-0.047943115234375,
-0.046966552734375,
-0.05755615234375,
0.0016422... |
AustinCarthy/Baseline_30Kphish_benignFall_20_20_20 | 2023-05-17T16:04:45.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Baseline_30Kphish_benignFall_20_20_20 | 0 | 2 | transformers | 2023-05-16T20:35:36 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_30Kphish_benignFall_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_30Kphish_benignFall_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0374
- Accuracy: 0.9962
- F1: 0.9589
- Precision: 0.9998
- Recall: 0.9212
- Roc Auc Score: 0.9606
- Tpr At Fpr 0.01: 0.9438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0045 | 1.0 | 19688 | 0.0304 | 0.9933 | 0.9241 | 0.9993 | 0.8594 | 0.9297 | 0.874 |
| 0.0029 | 2.0 | 39376 | 0.0210 | 0.9967 | 0.9643 | 0.9953 | 0.9352 | 0.9675 | 0.917 |
| 0.0003 | 3.0 | 59064 | 0.0434 | 0.9947 | 0.9407 | 0.9980 | 0.8896 | 0.9448 | 0.8936 |
| 0.0016 | 4.0 | 78752 | 0.0408 | 0.9952 | 0.9468 | 0.9998 | 0.8992 | 0.9496 | 0.9336 |
| 0.0008 | 5.0 | 98440 | 0.0374 | 0.9962 | 0.9589 | 0.9998 | 0.9212 | 0.9606 | 0.9438 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,236 | [
[
-0.041259765625,
-0.044281005859375,
0.0087127685546875,
0.0102386474609375,
-0.0206451416015625,
-0.0225830078125,
-0.004062652587890625,
-0.0186309814453125,
0.0268402099609375,
0.0296173095703125,
-0.054046630859375,
-0.054107666015625,
-0.04949951171875,
... |
aalksii/albert-base-v2-ml-arxiv-papers | 2023-06-01T12:45:05.000Z | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"fill-mask",
"en",
"dataset:aalksii/ml-arxiv-papers",
"dataset:CShorten/ML-ArXiv-Papers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | aalksii | null | null | aalksii/albert-base-v2-ml-arxiv-papers | 0 | 2 | transformers | 2023-05-16T21:01:42 | ---
datasets:
- aalksii/ml-arxiv-papers
- CShorten/ML-ArXiv-Papers
language:
- en
metrics:
- perplexity
pipeline_tag: fill-mask
---
This model is a version of albert-base-v2, fine-tuned with masked language modeling (MLM) on the ml-arxiv-papers dataset. | 233 | [
[
-0.007534027099609375,
-0.042572021484375,
0.0036907196044921875,
0.006801605224609375,
0.0118255615234375,
-0.017059326171875,
0.0224151611328125,
-0.02880859375,
0.00838470458984375,
0.07830810546875,
-0.047332763671875,
-0.0258026123046875,
-0.028289794921875... |
MinaAlmasi/ES-ENG-mBERT-sentiment | 2023-05-22T20:15:04.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | MinaAlmasi | null | null | MinaAlmasi/ES-ENG-mBERT-sentiment | 0 | 2 | transformers | 2023-05-16T21:24:54 | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: ES-ENG-mBERT-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ES-ENG-mBERT-sentiment
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on a custom dataset.
Training used early stopping; the best checkpoint (epoch 14) achieves the following results on the evaluation set:
- Loss: 0.8110
- Accuracy: 0.6307
- F1: 0.6298
- Precision: 0.6291
- Recall: 0.6307
## Intended uses & limitations
Note that commercial use of this model is prohibited.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.063 | 1.0 | 208 | 0.9989 | 0.4731 | 0.4044 | 0.4885 | 0.4731 |
| 0.9664 | 2.0 | 416 | 0.9144 | 0.5262 | 0.4845 | 0.5270 | 0.5262 |
| 0.9067 | 3.0 | 624 | 0.8648 | 0.5896 | 0.5844 | 0.5935 | 0.5896 |
| 0.8572 | 4.0 | 832 | 0.8294 | 0.6065 | 0.5984 | 0.6102 | 0.6065 |
| 0.8168 | 5.0 | 1040 | 0.8101 | 0.6107 | 0.6092 | 0.6119 | 0.6107 |
| 0.7897 | 6.0 | 1248 | 0.8213 | 0.6074 | 0.6015 | 0.6018 | 0.6074 |
| 0.7568 | 7.0 | 1456 | 0.7992 | 0.6194 | 0.6181 | 0.6176 | 0.6194 |
| 0.7465 | 8.0 | 1664 | 0.8089 | 0.6246 | 0.6183 | 0.6206 | 0.6246 |
| 0.7223 | 9.0 | 1872 | 0.7988 | 0.6236 | 0.6214 | 0.6207 | 0.6236 |
| 0.7045 | 10.0 | 2080 | 0.8390 | 0.6165 | 0.6080 | 0.6126 | 0.6165 |
| 0.6888 | 11.0 | 2288 | 0.8042 | 0.6291 | 0.6260 | 0.6257 | 0.6291 |
| 0.671 | 12.0 | 2496 | 0.8088 | 0.6239 | 0.6212 | 0.6216 | 0.6239 |
| 0.6543 | 13.0 | 2704 | 0.8104 | 0.6256 | 0.6227 | 0.6216 | 0.6256 |
| 0.6409 | 14.0 | 2912 | 0.8110 | 0.6307 | 0.6298 | 0.6291 | 0.6307 |
| 0.6275 | 15.0 | 3120 | 0.8127 | 0.6298 | 0.6292 | 0.6299 | 0.6298 |
| 0.6176 | 16.0 | 3328 | 0.8334 | 0.6252 | 0.6217 | 0.6206 | 0.6252 |
| 0.6096 | 17.0 | 3536 | 0.8331 | 0.6256 | 0.6210 | 0.6210 | 0.6256 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3 | 2,964 | [
[
-0.047821044921875,
-0.04241943359375,
0.0097503662109375,
0.0107879638671875,
-0.006938934326171875,
-0.0028171539306640625,
-0.0025119781494140625,
-0.00864410400390625,
0.0379638671875,
0.0208587646484375,
-0.050048828125,
-0.053466796875,
-0.0418701171875,
... |
aalksii/distilbert-base-uncased-ml-arxiv-papers | 2023-06-01T12:44:33.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"en",
"dataset:aalksii/ml-arxiv-papers",
"dataset:CShorten/ML-ArXiv-Papers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | aalksii | null | null | aalksii/distilbert-base-uncased-ml-arxiv-papers | 0 | 2 | transformers | 2023-05-16T22:04:22 | ---
language:
- en
metrics:
- perplexity
pipeline_tag: fill-mask
datasets:
- aalksii/ml-arxiv-papers
- CShorten/ML-ArXiv-Papers
---
This model is a version of distilbert-base-uncased, fine-tuned with masked language modeling (MLM) on the ml-arxiv-papers dataset. | 242 | [
[
-0.011627197265625,
-0.056182861328125,
0.004695892333984375,
0.0017328262329101562,
-0.0084686279296875,
0.0147247314453125,
0.0054473876953125,
0.0005192756652832031,
0.00827789306640625,
0.073974609375,
-0.0499267578125,
-0.03887939453125,
-0.038909912109375,... |
AustinCarthy/Baseline_40Kphish_benignFall_20_20_20 | 2023-05-17T16:17:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Baseline_40Kphish_benignFall_20_20_20 | 0 | 2 | transformers | 2023-05-16T22:42:42 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_40Kphish_benignFall_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_40Kphish_benignFall_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0374
- Accuracy: 0.9958
- F1: 0.9536
- Precision: 0.9985
- Recall: 0.9126
- Roc Auc Score: 0.9563
- Tpr At Fpr 0.01: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0045 | 1.0 | 26250 | 0.0211 | 0.9949 | 0.9441 | 0.9962 | 0.8972 | 0.9485 | 0.8784 |
| 0.0018 | 2.0 | 52500 | 0.0289 | 0.9957 | 0.9528 | 0.9967 | 0.9126 | 0.9562 | 0.9002 |
| 0.0021 | 3.0 | 78750 | 0.0317 | 0.9940 | 0.9325 | 0.9993 | 0.874 | 0.9370 | 0.9172 |
| 0.0014 | 4.0 | 105000 | 0.0315 | 0.9955 | 0.9504 | 0.9976 | 0.9074 | 0.9536 | 0.9046 |
| 0.0003 | 5.0 | 131250 | 0.0374 | 0.9958 | 0.9536 | 0.9985 | 0.9126 | 0.9563 | 0.9268 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,243 | [
[
-0.04095458984375,
-0.043304443359375,
0.00864410400390625,
0.0101470947265625,
-0.021697998046875,
-0.0205078125,
-0.0049896240234375,
-0.018157958984375,
0.0274658203125,
0.028472900390625,
-0.053741455078125,
-0.055816650390625,
-0.050048828125,
-0.013648... |
futuredatascience/welcome_video_model | 2023-05-16T22:51:17.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:futuredatascience/autotrain-data-welcome_message_2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | futuredatascience | null | null | futuredatascience/welcome_video_model | 0 | 2 | transformers | 2023-05-16T22:49:40 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- futuredatascience/autotrain-data-welcome_message_2
co2_eq_emissions:
emissions: 0.5524527127969758
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 59180133582
- CO2 Emissions (in grams): 0.5525
## Validation Metrics
- Loss: 0.347
- Accuracy: 0.865
- Precision: 0.852
- Recall: 0.958
- AUC: 0.814
- F1: 0.902
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/futuredatascience/autotrain-welcome_message_2-59180133582
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and its tokenizer (use_auth_token grants access to the private repo)
model = AutoModelForSequenceClassification.from_pretrained("futuredatascience/autotrain-welcome_message_2-59180133582", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("futuredatascience/autotrain-welcome_message_2-59180133582", use_auth_token=True)

# Tokenize the input and run a forward pass
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
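# The model returns raw logits; argmax over them gives the predicted class index.
# (Hypothetical post-processing, not part of the original AutoTrain snippet.)
predicted_class = outputs.logits.argmax(dim=-1).item()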
``` | 1,202 | [
[
-0.0269012451171875,
-0.025299072265625,
0.01470184326171875,
0.00861358642578125,
-0.0028743743896484375,
-0.0006165504455566406,
0.006931304931640625,
-0.019683837890625,
0.004566192626953125,
0.011749267578125,
-0.06024169921875,
-0.0341796875,
-0.05911254882... |
mtreviso/roberta-base-snli | 2023-05-17T00:27:28.000Z | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | mtreviso | null | null | mtreviso/roberta-base-snli | 0 | 2 | transformers | 2023-05-17T00:27:04 | ---
duplicated_from: boychaboy/SNLI_roberta-base
---
Forked from: https://huggingface.co/boychaboy/SNLI_roberta-base | 117 | [
[
-0.020294189453125,
-0.06085205078125,
0.034454345703125,
0.0221710205078125,
-0.011383056640625,
0.006198883056640625,
0.01007843017578125,
-0.014434814453125,
0.0814208984375,
0.050537109375,
-0.08123779296875,
-0.0053863525390625,
-0.051849365234375,
-0.0... |
lgfunderburk/distilbert-truncated | 2023-05-17T02:52:29.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | lgfunderburk | null | null | lgfunderburk/distilbert-truncated | 0 | 2 | transformers | 2023-05-17T00:42:05 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-truncated
results: []
---
# distilbert-truncated
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [20 Newsgroups dataset](http://qwone.com/~jason/20Newsgroups/).
Evaluation results are reported under *Training results* below.
## Training and evaluation data
The data was split into training and testing sets: the model was trained on 90% of the data, with the remaining 10% held out for testing.
## Training procedure
DistilBERT has a maximum input length of 512 tokens, so the following preprocessing was performed:
1. I used the `distilbert-base-uncased` pretrained model to initialize an `AutoTokenizer`.
2. Setting a maximum length of 256, each entry in the training, testing and validation data was truncated if it exceeded that limit and padded if it fell short (see the sketch below).
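A minimal sketch of that tokenization step (the input string is a placeholder, not a real newsgroup post):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

encoded = tokenizer(
    "an example newsgroup post about space and graphics cards",  # placeholder text
    max_length=256,
    truncation=True,         # cut sequences longer than 256 tokens
    padding="max_length",    # pad shorter sequences up to 256 tokens
)
print(len(encoded["input_ids"]))  # always 256
```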
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
- Epochs: 3
- Batches per epoch: 636
- Total training steps: 1908
- Model accuracy: 0.8337758779525757
- Model loss: 0.568471074104309
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,811 | [
[
-0.043609619140625,
-0.0391845703125,
0.0214080810546875,
0.020538330078125,
-0.03155517578125,
0.0119476318359375,
-0.01430511474609375,
0.0011224746704101562,
-0.006725311279296875,
-0.0053253173828125,
-0.053192138671875,
-0.03765869140625,
-0.0648193359375,
... |
denaneek/building-with-llms | 2023-05-17T19:17:05.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | denaneek | null | null | denaneek/building-with-llms | 0 | 2 | transformers | 2023-05-17T00:48:58 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: building-with-llms
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# building-with-llms
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,443 | [
[
-0.03863525390625,
-0.045928955078125,
0.032501220703125,
0.01160430908203125,
-0.03997802734375,
-0.0054779052734375,
-0.0106964111328125,
-0.00728607177734375,
0.0098724365234375,
0.0087127685546875,
-0.054718017578125,
-0.057586669921875,
-0.0655517578125,
... |
AustinCarthy/Baseline_50Kphish_benignFall_20_20_20 | 2023-05-17T16:29:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Baseline_50Kphish_benignFall_20_20_20 | 0 | 2 | transformers | 2023-05-17T01:28:03 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_50Kphish_benignFall_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_50Kphish_benignFall_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0282
- Accuracy: 0.9962
- F1: 0.9580
- Precision: 0.9996
- Recall: 0.9198
- Roc Auc Score: 0.9599
- Tpr At Fpr 0.01: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0045 | 1.0 | 32813 | 0.0247 | 0.9960 | 0.9561 | 0.9937 | 0.9212 | 0.9605 | 0.8662 |
| 0.002 | 2.0 | 65626 | 0.0205 | 0.9965 | 0.9624 | 0.9987 | 0.9286 | 0.9643 | 0.9376 |
| 0.0021 | 3.0 | 98439 | 0.0302 | 0.9961 | 0.9569 | 0.9993 | 0.918 | 0.9590 | 0.9378 |
| 0.0017 | 4.0 | 131252 | 0.0297 | 0.9970 | 0.9672 | 0.9975 | 0.9388 | 0.9693 | 0.9368 |
| 0.0007 | 5.0 | 164065 | 0.0282 | 0.9962 | 0.9580 | 0.9996 | 0.9198 | 0.9599 | 0.94 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,241 | [
[
-0.041534423828125,
-0.043060302734375,
0.00836181640625,
0.00891876220703125,
-0.0198822021484375,
-0.02032470703125,
-0.0028324127197265625,
-0.0173492431640625,
0.0297088623046875,
0.0290374755859375,
-0.056182861328125,
-0.054473876953125,
-0.04925537109375,... |
vocabtrimmer/roberta-base-xnli-en | 2023-05-17T02:40:18.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/roberta-base-xnli-en | 0 | 2 | transformers | 2023-05-17T02:38:56 | # `vocabtrimmer/roberta-base-xnli-en`
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the [xnli](https://huggingface.co/datasets/xnli) dataset (en).
The following metrics are computed on the `test` split of [xnli](https://huggingface.co/datasets/xnli) (en).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 87.54 | 87.54 | 87.54 | 87.6 | 87.54 | 87.86 | 87.54 |
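For completeness, a minimal inference sketch (the premise/hypothesis pair is hypothetical; the label mapping comes from the model's `id2label` config):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vocabtrimmer/roberta-base-xnli-en")
model = AutoModelForSequenceClassification.from_pretrained("vocabtrimmer/roberta-base-xnli-en")

# XNLI models score a premise/hypothesis pair encoded as a single sequence
inputs = tokenizer("A man is playing a guitar.", "The man is making music.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```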
Check the result file [here](https://huggingface.co/vocabtrimmer/roberta-base-xnli-en/raw/main/eval.json). | 867 | [
[
-0.0285491943359375,
-0.0281219482421875,
0.0254364013671875,
-0.0012454986572265625,
-0.0209197998046875,
0.00420379638671875,
-0.0212554931640625,
-0.023834228515625,
0.037567138671875,
0.035919189453125,
-0.051849365234375,
-0.06549072265625,
-0.04150390625,
... |
lferncastro/distilbert_classifier | 2023-05-17T02:48:40.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | lferncastro | null | null | lferncastro/distilbert_classifier | 0 | 2 | transformers | 2023-05-17T02:48:08 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,449 | [
[
-0.037841796875,
-0.043121337890625,
0.0213775634765625,
0.004852294921875,
-0.032958984375,
-0.0074310302734375,
-0.0099639892578125,
-0.01055908203125,
-0.002788543701171875,
-0.00560760498046875,
-0.04150390625,
-0.04998779296875,
-0.06793212890625,
-0.01... |
WangCo/distilbert-base-uncased_emotion_ft_0416 | 2023-05-17T03:01:18.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | WangCo | null | null | WangCo/distilbert-base-uncased_emotion_ft_0416 | 0 | 2 | transformers | 2023-05-17T02:50:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased_emotion_ft_0416
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_emotion_ft_0416
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.11.0
| 1,079 | [
[
-0.04046630859375,
-0.043548583984375,
0.0177001953125,
0.0269927978515625,
-0.035247802734375,
-0.01548004150390625,
-0.01110076904296875,
-0.00920867919921875,
0.015350341796875,
0.009307861328125,
-0.05615234375,
-0.042022705078125,
-0.0579833984375,
0.00... |
AustinCarthy/Benign10MGPT2_fromP_BFall_10KGen_toP_0.75 | 2023-05-17T16:43:29.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Benign10MGPT2_fromP_BFall_10KGen_toP_0.75 | 0 | 2 | transformers | 2023-05-17T04:52:53 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_fromP_BFall_10KGen_toP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_fromP_BFall_10KGen_toP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1046
- Accuracy: 0.9898
- F1: 0.8806
- Precision: 0.9952
- Recall: 0.7896
- Roc Auc Score: 0.8947
- Tpr At Fpr 0.01: 0.7606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0104 | 1.0 | 13125 | 0.0568 | 0.9869 | 0.8415 | 0.9964 | 0.7282 | 0.8640 | 0.7054 |
| 0.0078 | 2.0 | 26250 | 0.0722 | 0.9871 | 0.8440 | 0.9932 | 0.7338 | 0.8668 | 0.6516 |
| 0.0047 | 3.0 | 39375 | 0.0675 | 0.9900 | 0.8833 | 0.9913 | 0.7966 | 0.8981 | 0.7312 |
| 0.0011 | 4.0 | 52500 | 0.0811 | 0.9904 | 0.8888 | 0.9936 | 0.804 | 0.9019 | 0.7698 |
| 0.0 | 5.0 | 65625 | 0.1046 | 0.9898 | 0.8806 | 0.9952 | 0.7896 | 0.8947 | 0.7606 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,244 | [
[
-0.04278564453125,
-0.0428466796875,
0.0087127685546875,
0.00922393798828125,
-0.020751953125,
-0.02496337890625,
-0.007389068603515625,
-0.0186004638671875,
0.027191162109375,
0.0248565673828125,
-0.050567626953125,
-0.0472412109375,
-0.052734375,
-0.016387... |
amqdn/distilbert-clf-20newsgroups | 2023-05-17T05:28:23.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | amqdn | null | null | amqdn/distilbert-clf-20newsgroups | 0 | 2 | transformers | 2023-05-17T05:16:21 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-clf-20newsgroups
results: []
---
# distilbert-clf-20newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on 20newsgroups.
It achieves the following results on the evaluation set:
* loss: 0.5506
* accuracy: 0.8401
## Model description
## Intended uses & limitations
## Training and evaluation data
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
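The optimizer dictionary above corresponds to the following Keras construction (a reconstruction from that config, not code taken from the training script):
```python
import tensorflow as tf

# Linear decay from 2e-5 to 0 over 1908 steps (PolynomialDecay with power=1.0)
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1908,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```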
### Training results
* loss: 0.2480
* accuracy: 0.9422
* val_loss: 0.3633
* val_accuracy: 0.8940
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,307 | [
[
-0.0467529296875,
-0.04278564453125,
0.0164794921875,
0.036865234375,
-0.029937744140625,
0.006031036376953125,
-0.0215301513671875,
-0.0078277587890625,
-0.0038089752197265625,
0.0025806427001953125,
-0.056182861328125,
-0.050750732421875,
-0.06585693359375,
... |
AustinCarthy/Benign10MGPT2_fromP_BFall_20KGen_toP_0.75 | 2023-05-17T16:56:10.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Benign10MGPT2_fromP_BFall_20KGen_toP_0.75 | 0 | 2 | transformers | 2023-05-17T06:22:41 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_fromP_BFall_20KGen_toP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_fromP_BFall_20KGen_toP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1101
- Accuracy: 0.9888
- F1: 0.8669
- Precision: 0.9948
- Recall: 0.7682
- Roc Auc Score: 0.884
- Tpr At Fpr 0.01: 0.7442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0105 | 1.0 | 19688 | 0.0686 | 0.9851 | 0.8158 | 0.9957 | 0.691 | 0.8454 | 0.654 |
| 0.0069 | 2.0 | 39376 | 0.0458 | 0.9901 | 0.8866 | 0.9794 | 0.8098 | 0.9045 | 0.679 |
| 0.0051 | 3.0 | 59064 | 0.0698 | 0.9903 | 0.8874 | 0.9901 | 0.804 | 0.9018 | 0.747 |
| 0.0013 | 4.0 | 78752 | 0.0980 | 0.9893 | 0.8737 | 0.9949 | 0.7788 | 0.8893 | 0.7374 |
| 0.0007 | 5.0 | 98440 | 0.1101 | 0.9888 | 0.8669 | 0.9948 | 0.7682 | 0.884 | 0.7442 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,243 | [
[
-0.042694091796875,
-0.0423583984375,
0.01018524169921875,
0.00881195068359375,
-0.0209808349609375,
-0.0235748291015625,
-0.006938934326171875,
-0.018798828125,
0.026092529296875,
0.024993896484375,
-0.051025390625,
-0.047332763671875,
-0.05316162109375,
-0... |
bhattronak14/distilbert-base-uncased-finetuned-rte | 2023-05-18T10:55:03.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | bhattronak14 | null | null | bhattronak14/distilbert-base-uncased-finetuned-rte | 0 | 2 | transformers | 2023-05-17T06:31:18 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-rte
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-rte
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 1,041 | [
[
-0.035186767578125,
-0.05596923828125,
0.013397216796875,
0.0168304443359375,
-0.0345458984375,
-0.0183868408203125,
-0.00923919677734375,
-0.01197052001953125,
0.0079193115234375,
0.025543212890625,
-0.04901123046875,
-0.03875732421875,
-0.060150146484375,
... |
vnktrmnb/fine_tune_bert_output_te_ner | 2023-05-17T06:48:09.000Z | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | vnktrmnb | null | null | vnktrmnb/fine_tune_bert_output_te_ner | 0 | 2 | transformers | 2023-05-17T06:46:40 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
model-index:
- name: fine_tune_bert_output_te_ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tune_bert_output_te_ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
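A minimal usage sketch (assuming, from the model name, the Telugu `te` configuration of WikiANN; the example sentence and its entities are hypothetical):
```python
from transformers import pipeline

# Group word-piece predictions into whole entity spans
ner = pipeline(
    "token-classification",
    model="vnktrmnb/fine_tune_bert_output_te_ner",
    aggregation_strategy="simple",
)
print(ner("హైదరాబాద్ తెలంగాణ రాజధాని."))  # "Hyderabad is the capital of Telangana."
```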
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,096 | [
[
-0.03564453125,
-0.0528564453125,
0.00009357929229736328,
0.0117950439453125,
-0.034423828125,
-0.037506103515625,
-0.033355712890625,
-0.0199737548828125,
0.017547607421875,
0.0280303955078125,
-0.052642822265625,
-0.043609619140625,
-0.049346923828125,
0.0... |
AustinCarthy/Benign10MGPT2_fromP_BFall_30KGen_toP_0.75 | 2023-05-17T17:08:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Benign10MGPT2_fromP_BFall_30KGen_toP_0.75 | 0 | 2 | transformers | 2023-05-17T08:31:17 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_fromP_BFall_30KGen_toP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_fromP_BFall_30KGen_toP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0981
- Accuracy: 0.9876
- F1: 0.8504
- Precision: 0.9938
- Recall: 0.7432
- Roc Auc Score: 0.8715
- Tpr At Fpr 0.01: 0.6914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0097 | 1.0 | 26250 | 0.0808 | 0.9840 | 0.8004 | 0.9874 | 0.673 | 0.8363 | 0.6018 |
| 0.011 | 2.0 | 52500 | 0.0652 | 0.9867 | 0.8389 | 0.9881 | 0.7288 | 0.8642 | 0.6536 |
| 0.0025 | 3.0 | 78750 | 0.0730 | 0.9868 | 0.8401 | 0.9889 | 0.7302 | 0.8649 | 0.649 |
| 0.0023 | 4.0 | 105000 | 0.1064 | 0.9866 | 0.8367 | 0.9937 | 0.7226 | 0.8612 | 0.6878 |
| 0.0011 | 5.0 | 131250 | 0.0981 | 0.9876 | 0.8504 | 0.9938 | 0.7432 | 0.8715 | 0.6914 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,251 | [
[
-0.04339599609375,
-0.041473388671875,
0.00933074951171875,
0.0094451904296875,
-0.0207672119140625,
-0.0255126953125,
-0.006969451904296875,
-0.0194244384765625,
0.0260162353515625,
0.0242767333984375,
-0.050567626953125,
-0.046539306640625,
-0.052825927734375,... |
AustinCarthy/Benign10MGPT2_fromP_BFall_40KGen_toP_0.75 | 2023-05-17T17:21:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Benign10MGPT2_fromP_BFall_40KGen_toP_0.75 | 0 | 2 | transformers | 2023-05-17T11:17:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_fromP_BFall_40KGen_toP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_fromP_BFall_40KGen_toP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0969
- Accuracy: 0.9891
- F1: 0.8714
- Precision: 0.9941
- Recall: 0.7756
- Roc Auc Score: 0.8877
- Tpr At Fpr 0.01: 0.7466
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0125 | 1.0 | 32813 | 0.0595 | 0.9879 | 0.8582 | 0.9722 | 0.7682 | 0.8836 | 0.4626 |
| 0.0073 | 2.0 | 65626 | 0.0586 | 0.9881 | 0.8574 | 0.9934 | 0.7542 | 0.8770 | 0.7238 |
| 0.0057 | 3.0 | 98439 | 0.0760 | 0.987 | 0.8426 | 0.9948 | 0.7308 | 0.8653 | 0.7106 |
| 0.0028 | 4.0 | 131252 | 0.0734 | 0.9896 | 0.8778 | 0.9937 | 0.7862 | 0.8930 | 0.7676 |
| 0.0013 | 5.0 | 164065 | 0.0969 | 0.9891 | 0.8714 | 0.9941 | 0.7756 | 0.8877 | 0.7466 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,251 | [
[
-0.043304443359375,
-0.0413818359375,
0.00933837890625,
0.00862884521484375,
-0.02166748046875,
-0.023834228515625,
-0.0079193115234375,
-0.018646240234375,
0.028778076171875,
0.0251007080078125,
-0.05072021484375,
-0.047332763671875,
-0.05291748046875,
-0.0... |
soteroshanthi/distilbert-base-uncased | 2023-05-17T12:24:33.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | soteroshanthi | null | null | soteroshanthi/distilbert-base-uncased | 0 | 2 | transformers | 2023-05-17T12:24:21 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,453 | [
[
-0.04034423828125,
-0.048492431640625,
0.0232391357421875,
0.01050567626953125,
-0.03857421875,
-0.004161834716796875,
-0.01224517822265625,
-0.006725311279296875,
0.004802703857421875,
0.00621795654296875,
-0.047149658203125,
-0.0501708984375,
-0.06512451171875... |
bastienm/dqn-SpaceInvadersNoFrameskip-v4 | 2023-05-17T12:53:34.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | bastienm | null | null | bastienm/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-05-17T12:52:59 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 552.00 +/- 166.57
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bastienm -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bastienm -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga bastienm
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
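The checkpoint can also be loaded directly with SB3 outside the zoo scripts; a minimal sketch, assuming the file path that the `load_from_hub` command above produces:
```python
from stable_baselines3 import DQN

# Hypothetical path under logs/ created by rl_zoo3.load_from_hub
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")
```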
| 2,691 | [
[
-0.0418701171875,
-0.037506103515625,
0.021484375,
0.025390625,
-0.01096343994140625,
-0.018035888671875,
0.01194000244140625,
-0.0140228271484375,
0.013397216796875,
0.0240631103515625,
-0.06884765625,
-0.035736083984375,
-0.0268096923828125,
-0.00357818603... |
Gaivoronsky/dqn-SpaceInvadersNoFrameskip-v4 | 2023-05-17T14:00:26.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Gaivoronsky | null | null | Gaivoronsky/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-05-17T13:59:54 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 559.50 +/- 161.98
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Gaivoronsky -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Gaivoronsky -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Gaivoronsky
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,698 | [
[
-0.041717529296875,
-0.037017822265625,
0.0230255126953125,
0.0249786376953125,
-0.01047515869140625,
-0.018310546875,
0.0124053955078125,
-0.01418304443359375,
0.01427459716796875,
0.0249786376953125,
-0.0711669921875,
-0.03546142578125,
-0.027252197265625,
... |
chenbowen-184/distilbert_classifier_newsgroups | 2023-05-17T14:22:57.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | chenbowen-184 | null | null | chenbowen-184/distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-17T14:22:25 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
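For anyone reproducing this run outside of the Keras callbacks, the serialized optimizer config above maps onto plain Keras objects. A sketch using only the values shown in that config:
```python
import tensorflow as tf

# No warmup is configured; the learning rate simply decays linearly
# from 2e-5 to 0 over 1908 steps, as in the PolynomialDecay config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1908,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```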
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,471 | [
[
-0.0386962890625,
-0.042022705078125,
0.021240234375,
0.0084228515625,
-0.033599853515625,
-0.0068359375,
-0.01174163818359375,
-0.010833740234375,
-0.002910614013671875,
-0.00620269775390625,
-0.041534423828125,
-0.050445556640625,
-0.067138671875,
-0.01020... |
TasmiaAzmi/masked-sentence-generation-t5-base | 2023-05-19T06:33:53.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | TasmiaAzmi | null | null | TasmiaAzmi/masked-sentence-generation-t5-base | 0 | 2 | transformers | 2023-05-17T15:56:02 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: masked-sentence-generation-t5-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# masked-sentence-generation-t5-base
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9984 | 0.05 | 80 | 2.7041 |
| 2.8752 | 0.1 | 160 | 2.7021 |
| 2.9314 | 0.15 | 240 | 2.6966 |
| 2.8541 | 0.2 | 320 | 2.6968 |
| 2.8674 | 0.25 | 400 | 2.6900 |
| 2.8706 | 0.3 | 480 | 2.6886 |
| 2.7718 | 0.34 | 560 | 2.6908 |
| 2.8503 | 0.39 | 640 | 2.6877 |
| 2.8195 | 0.44 | 720 | 2.6902 |
| 2.8569 | 0.49 | 800 | 2.6893 |
| 2.8372 | 0.54 | 880 | 2.6859 |
| 2.8915 | 0.59 | 960 | 2.6898 |
| 2.9687 | 0.64 | 1040 | 2.6909 |
| 2.832 | 0.69 | 1120 | 2.6841 |
| 2.8425 | 0.74 | 1200 | 2.6842 |
| 2.8114 | 0.79 | 1280 | 2.6766 |
| 2.8101 | 0.84 | 1360 | 2.6783 |
| 2.8837 | 0.89 | 1440 | 2.6781 |
| 2.894 | 0.94 | 1520 | 2.6754 |
| 2.9183 | 0.99 | 1600 | 2.6762 |
| 2.6916 | 1.03 | 1680 | 2.6889 |
| 2.5812 | 1.08 | 1760 | 2.6896 |
| 2.5522 | 1.13 | 1840 | 2.6943 |
| 2.5368 | 1.18 | 1920 | 2.6928 |
| 2.5987 | 1.23 | 2000 | 2.6927 |
| 2.5625 | 1.28 | 2080 | 2.6899 |
| 2.4946 | 1.33 | 2160 | 2.6942 |
| 2.5902 | 1.38 | 2240 | 2.6900 |
| 2.5415 | 1.43 | 2320 | 2.6897 |
| 2.5767 | 1.48 | 2400 | 2.6858 |
| 2.6262 | 1.53 | 2480 | 2.6825 |
| 2.6066 | 1.58 | 2560 | 2.6818 |
| 2.5387 | 1.63 | 2640 | 2.6840 |
| 2.5795 | 1.67 | 2720 | 2.6828 |
| 2.5521 | 1.72 | 2800 | 2.6871 |
| 2.5477 | 1.77 | 2880 | 2.6836 |
| 2.587 | 1.82 | 2960 | 2.6824 |
| 2.529 | 1.87 | 3040 | 2.6871 |
| 2.6221 | 1.92 | 3120 | 2.6838 |
| 2.6353 | 1.97 | 3200 | 2.6803 |
| 2.5419 | 2.02 | 3280 | 2.6879 |
| 2.4521 | 2.07 | 3360 | 2.7027 |
| 2.3415 | 2.12 | 3440 | 2.7105 |
| 2.3483 | 2.17 | 3520 | 2.7140 |
| 2.3493 | 2.22 | 3600 | 2.7144 |
| 2.3967 | 2.27 | 3680 | 2.7134 |
| 2.3544 | 2.32 | 3760 | 2.7122 |
| 2.3192 | 2.36 | 3840 | 2.7175 |
| 2.3381 | 2.41 | 3920 | 2.7166 |
| 2.3667 | 2.46 | 4000 | 2.7165 |
| 2.3997 | 2.51 | 4080 | 2.7106 |
| 2.3178 | 2.56 | 4160 | 2.7154 |
| 2.4036 | 2.61 | 4240 | 2.7144 |
| 2.3797 | 2.66 | 4320 | 2.7129 |
| 2.3354 | 2.71 | 4400 | 2.7136 |
| 2.4109 | 2.76 | 4480 | 2.7118 |
| 2.387 | 2.81 | 4560 | 2.7097 |
| 2.3934 | 2.86 | 4640 | 2.7103 |
| 2.3956 | 2.91 | 4720 | 2.7103 |
| 2.4086 | 2.96 | 4800 | 2.7111 |
| 2.4083 | 3.0 | 4880 | 2.7110 |
| 2.3121 | 3.05 | 4960 | 2.7230 |
| 2.263 | 3.1 | 5040 | 2.7252 |
| 2.2722 | 3.15 | 5120 | 2.7296 |
| 2.2053 | 3.2 | 5200 | 2.7309 |
| 2.1969 | 3.25 | 5280 | 2.7363 |
| 2.2684 | 3.3 | 5360 | 2.7396 |
| 2.2789 | 3.35 | 5440 | 2.7376 |
| 2.2227 | 3.4 | 5520 | 2.7384 |
| 2.2886 | 3.45 | 5600 | 2.7390 |
| 2.2182 | 3.5 | 5680 | 2.7376 |
| 2.2738 | 3.55 | 5760 | 2.7394 |
| 2.1687 | 3.6 | 5840 | 2.7386 |
| 2.2548 | 3.65 | 5920 | 2.7371 |
| 2.2391 | 3.69 | 6000 | 2.7372 |
| 2.2031 | 3.74 | 6080 | 2.7391 |
| 2.1885 | 3.79 | 6160 | 2.7400 |
| 2.216 | 3.84 | 6240 | 2.7406 |
| 2.272 | 3.89 | 6320 | 2.7401 |
| 2.3455 | 3.94 | 6400 | 2.7395 |
| 2.2889 | 3.99 | 6480 | 2.7392 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.11.0
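The card does not document the expected input format; assuming standard T5 sentinel-token masking (e.g. `<extra_id_0>`), inference could look like this sketch:
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="TasmiaAzmi/masked-sentence-generation-t5-base",
)
# The sentinel-token input format is an assumption, not documented above.
print(generator("The <extra_id_0> sat on the mat.", max_new_tokens=32))
```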
| 5,341 | [
[
-0.04644775390625,
-0.0258636474609375,
0.020172119140625,
0.00786590576171875,
0.0007300376892089844,
0.006900787353515625,
0.0146636962890625,
0.0023479461669921875,
0.0438232421875,
0.0303497314453125,
-0.043182373046875,
-0.044281005859375,
-0.04226684570312... |
Ankit93/distilbert-base-uncased-finetuned-emotion | 2023-05-18T19:30:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Ankit93 | null | null | Ankit93/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-17T15:57:25 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9284458409041368
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2192
- Accuracy: 0.9285
- F1: 0.9284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8301 | 1.0 | 250 | 0.3214 | 0.905 | 0.9010 |
| 0.2508 | 2.0 | 500 | 0.2192 | 0.9285 | 0.9284 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
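A quick way to try the classifier is the `text-classification` pipeline; the example text is arbitrary:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Ankit93/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for all six emotion labels
)
print(classifier("I can't believe how happy this makes me!"))
```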
| 1,848 | [
[
-0.037933349609375,
-0.041290283203125,
0.014129638671875,
0.0222930908203125,
-0.0257415771484375,
-0.0192108154296875,
-0.0137176513671875,
-0.00858306884765625,
0.01030731201171875,
0.00792694091796875,
-0.0557861328125,
-0.052459716796875,
-0.060302734375,
... |
land25/distilbert-base-uncased_emotion_ft_0517 | 2023-05-17T16:26:03.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | land25 | null | null | land25/distilbert-base-uncased_emotion_ft_0517 | 0 | 2 | transformers | 2023-05-17T16:04:08 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
- precision
model-index:
- name: distilbert-base-uncased_emotion_ft_0517
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9345
- name: F1
type: f1
value: 0.9346851141275695
- name: Precision
type: precision
value: 0.9087842847016905
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_emotion_ft_0517
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1479
- Accuracy: 0.9345
- F1: 0.9347
- Precision: 0.9088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| 0.7913 | 1.0 | 250 | 0.2689 | 0.918 | 0.9162 | 0.9016 |
| 0.2142 | 2.0 | 500 | 0.1764 | 0.929 | 0.9290 | 0.9109 |
| 0.1415 | 3.0 | 750 | 0.1541 | 0.934 | 0.9345 | 0.8995 |
| 0.1128 | 4.0 | 1000 | 0.1479 | 0.9345 | 0.9347 | 0.9088 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
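For inference without the pipeline abstraction, a minimal sketch loading the model and tokenizer directly (the input sentence is arbitrary):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "land25/distilbert-base-uncased_emotion_ft_0517"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I feel great today!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```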
| 2,166 | [
[
-0.035430908203125,
-0.0360107421875,
0.013519287109375,
0.01934814453125,
-0.021820068359375,
-0.0158843994140625,
-0.00878143310546875,
-0.006938934326171875,
0.01280975341796875,
0.00792694091796875,
-0.053314208984375,
-0.05096435546875,
-0.060638427734375,
... |
cmagganas/distilbert_classifier_newsgroups | 2023-05-17T16:39:08.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | cmagganas | null | null | cmagganas/distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-17T16:36:38 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
Achieved 83.4% accuracy.
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,490 | [
[
-0.0394287109375,
-0.04119873046875,
0.0207977294921875,
0.0084991455078125,
-0.033355712890625,
-0.005680084228515625,
-0.0114288330078125,
-0.01103973388671875,
-0.002521514892578125,
-0.006534576416015625,
-0.04058837890625,
-0.0511474609375,
-0.069091796875,... |
estevez-work/distilbert_classifier_newsgroups | 2023-05-17T18:31:45.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | estevez-work | null | null | estevez-work/distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-17T18:31:11 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,471 | [
[
-0.0386962890625,
-0.042022705078125,
0.021209716796875,
0.00839996337890625,
-0.033599853515625,
-0.006809234619140625,
-0.01171875,
-0.010833740234375,
-0.0028934478759765625,
-0.006195068359375,
-0.041534423828125,
-0.0504150390625,
-0.067138671875,
-0.01... |
AustinCarthy/Benign10MGPT2_fromB_BFall_10KGen_toP_0.75 | 2023-05-17T20:50:18.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Benign10MGPT2_fromB_BFall_10KGen_toP_0.75 | 0 | 2 | transformers | 2023-05-17T19:06:42 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_fromB_BFall_10KGen_toP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_fromB_BFall_10KGen_toP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0932
- Accuracy: 0.9863
- F1: 0.8426
- Precision: 0.9285
- Recall: 0.7712
- Roc Auc Score: 0.8841
- Tpr At Fpr 0.01: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0731 | 1.0 | 13125 | 0.0701 | 0.9834 | 0.8069 | 0.9013 | 0.7304 | 0.8632 | 0.5672 |
| 0.0595 | 2.0 | 26250 | 0.0720 | 0.9812 | 0.7700 | 0.9192 | 0.6624 | 0.8297 | 0.5038 |
| 0.0457 | 3.0 | 39375 | 0.0667 | 0.9864 | 0.8459 | 0.9193 | 0.7834 | 0.8900 | 0.0 |
| 0.0301 | 4.0 | 52500 | 0.0803 | 0.9861 | 0.8368 | 0.9467 | 0.7498 | 0.8738 | 0.0 |
| 0.02 | 5.0 | 65625 | 0.0932 | 0.9863 | 0.8426 | 0.9285 | 0.7712 | 0.8841 | 0.0 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
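The `Tpr At Fpr 0.01` column reports the true-positive rate under a 1% false-positive budget. One common way to read it off an ROC curve with scikit-learn, shown here with placeholder labels and scores:
```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1])                # placeholder labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])  # placeholder scores

fpr, tpr, _ = roc_curve(y_true, y_score)
# Best TPR among thresholds whose FPR stays at or below 1%
tpr_at_fpr_001 = tpr[fpr <= 0.01].max()
print(tpr_at_fpr_001)
```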
| 2,241 | [
[
-0.04339599609375,
-0.043548583984375,
0.00920867919921875,
0.009613037109375,
-0.0210418701171875,
-0.0233612060546875,
-0.00659942626953125,
-0.019439697265625,
0.0265350341796875,
0.02630615234375,
-0.05145263671875,
-0.0477294921875,
-0.052703857421875,
... |
everyl12/user_class_L | 2023-05-17T20:58:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | everyl12 | null | null | everyl12/user_class_L | 0 | 2 | transformers | 2023-05-17T20:49:10 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: user_class_L
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# user_class_L
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5451
- Accuracy: 0.9237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.8e-05
- train_batch_size: 30
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1572 | 1.0 | 24 | 0.2433 | 0.9025 |
| 0.1649 | 2.0 | 48 | 0.2262 | 0.9237 |
| 0.2498 | 3.0 | 72 | 0.2584 | 0.9237 |
| 0.006 | 4.0 | 96 | 0.3393 | 0.9153 |
| 0.0035 | 5.0 | 120 | 0.3967 | 0.9153 |
| 0.0017 | 6.0 | 144 | 0.4777 | 0.9153 |
| 0.0006 | 7.0 | 168 | 0.6257 | 0.8898 |
| 0.0005 | 8.0 | 192 | 0.5752 | 0.9153 |
| 0.0002 | 9.0 | 216 | 0.5182 | 0.9237 |
| 0.0003 | 10.0 | 240 | 0.5041 | 0.9195 |
| 0.0002 | 11.0 | 264 | 0.5051 | 0.9195 |
| 0.0001 | 12.0 | 288 | 0.5292 | 0.9195 |
| 0.0002 | 13.0 | 312 | 0.5391 | 0.9237 |
| 0.0002 | 14.0 | 336 | 0.5437 | 0.9237 |
| 0.0002 | 15.0 | 360 | 0.5451 | 0.9237 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.2
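The validation loss bottoms out around epoch 2 and climbs steadily afterwards while accuracy plateaus, so early stopping may be worth considering on a rerun. A hedged sketch (hyperparameter values copied from above; the patience of 3 is an arbitrary choice):
```python
from transformers import EarlyStoppingCallback, TrainingArguments

# Early stopping requires per-epoch evaluation plus best-model tracking.
args = TrainingArguments(
    output_dir="user_class_L",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=3.8e-05,
    per_device_train_batch_size=30,
    num_train_epochs=15,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)
early_stop = EarlyStoppingCallback(early_stopping_patience=3)
# Pass callbacks=[early_stop] when constructing the Trainer.
```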
| 2,185 | [
[
-0.040069580078125,
-0.04022216796875,
0.01326751708984375,
0.007648468017578125,
-0.015533447265625,
-0.0177154541015625,
-0.00664520263671875,
-0.0100250244140625,
0.0237274169921875,
0.021453857421875,
-0.0556640625,
-0.051239013671875,
-0.048736572265625,
... |
AustinCarthy/Benign10MGPT2_fromB_BFall_20KGen_toP_0.75 | 2023-05-18T02:40:30.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Benign10MGPT2_fromB_BFall_20KGen_toP_0.75 | 0 | 2 | transformers | 2023-05-17T20:51:32 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_fromB_BFall_20KGen_toP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_fromB_BFall_20KGen_toP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1022
- Accuracy: 0.9840
- F1: 0.8164
- Precision: 0.8982
- Recall: 0.7482
- Roc Auc Score: 0.8720
- Tpr At Fpr 0.01: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0758 | 1.0 | 19688 | 0.0958 | 0.9786 | 0.7257 | 0.9311 | 0.5946 | 0.7962 | 0.5118 |
| 0.0634 | 2.0 | 39376 | 0.0682 | 0.9823 | 0.7843 | 0.9367 | 0.6746 | 0.8362 | 0.4936 |
| 0.0515 | 3.0 | 59064 | 0.0760 | 0.9823 | 0.7955 | 0.8855 | 0.7222 | 0.8588 | 0.6002 |
| 0.0372 | 4.0 | 78752 | 0.0951 | 0.9831 | 0.8034 | 0.8979 | 0.7268 | 0.8613 | 0.0 |
| 0.0339 | 5.0 | 98440 | 0.1022 | 0.9840 | 0.8164 | 0.8982 | 0.7482 | 0.8720 | 0.0 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,241 | [
[
-0.04351806640625,
-0.04364013671875,
0.0081787109375,
0.0094146728515625,
-0.0219573974609375,
-0.0241851806640625,
-0.007724761962890625,
-0.0199127197265625,
0.02691650390625,
0.0247650146484375,
-0.050689697265625,
-0.046356201171875,
-0.052703857421875,
... |
cs608/billsum-full-data | 2023-05-18T00:06:56.000Z | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | cs608 | null | null | cs608/billsum-full-data | 0 | 2 | transformers | 2023-05-17T21:02:45 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: billsum-full-data
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: train[:95%]
args: default
metrics:
- name: Rouge1
type: rouge
value: 18.0383
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum-full-data
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6583
- Rouge1: 18.0383
- Rouge2: 14.8462
- Rougel: 17.6086
- Rougelsum: 17.6843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.1401 | 1.0 | 8101 | 1.8087 | 17.8461 | 14.6015 | 17.3956 | 17.4842 |
| 1.7596 | 2.0 | 16202 | 1.6980 | 18.0568 | 14.7833 | 17.6068 | 17.6999 |
| 1.5789 | 3.0 | 24303 | 1.6583 | 18.0383 | 14.8462 | 17.6086 | 17.6843 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
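For inference, the standard summarization pipeline should apply; the input text here is a placeholder for a real bill:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="cs608/billsum-full-data")
bill_text = "..."  # placeholder: the full text of a bill goes here
print(summarizer(bill_text, max_length=128, min_length=30, truncation=True))
```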
| 1,978 | [
[
-0.034942626953125,
-0.0443115234375,
0.01727294921875,
0.00757598876953125,
-0.0203857421875,
-0.0242767333984375,
0.000431060791015625,
-0.01404571533203125,
0.0209503173828125,
0.04193115234375,
-0.0482177734375,
-0.0494384765625,
-0.041778564453125,
-0.0... |
christinacdl/XLM_Roberta_Large_Greek_Offensive | 2023-05-20T10:15:09.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta-xl",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | christinacdl | null | null | christinacdl/XLM_Roberta_Large_Greek_Offensive | 0 | 2 | transformers | 2023-05-17T21:10:57 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: XLM_Roberta_Large_Greek_Offensive
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM_Roberta_Large_Greek_Offensive
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7552
- Macro F1: 0.7352
- Micro F1: 0.7989
- Accuracy: 0.7989
Results on the test set:
- Accuracy: 0.905440414507772
- F1 score: 0.8394228385651885
- Precision: 0.8115009990009989
- Recall: 0.8800129489279049
- Matthews Correlation Coefficient: 0.6881116572893037
- Precision of each class: [0.96915584 0.65384615]
- Recall of each class: [0.91705069 0.84297521]
- F1 score of each class: [0.94238358 0.73646209]
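Per-class figures like those above come straight out of scikit-learn; a sketch with placeholder labels and predictions:
```python
from sklearn.metrics import classification_report, matthews_corrcoef

y_true = [0, 0, 1, 1, 0, 1]  # placeholder gold labels
y_pred = [0, 1, 1, 1, 0, 0]  # placeholder predictions
print(classification_report(y_true, y_pred, digits=4))
print("MCC:", matthews_corrcoef(y_true, y_pred))
```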
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Micro F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:--------:|
| 0.547 | 1.0 | 1967 | 0.6330 | 0.7245 | 0.8057 | 0.8057 |
| 0.5369 | 2.0 | 3934 | 0.5186 | 0.7328 | 0.8057 | 0.8057 |
| 0.5571 | 3.0 | 5901 | 0.6156 | 0.7495 | 0.8149 | 0.8149 |
| 0.5426 | 4.0 | 7868 | 0.6820 | 0.7388 | 0.8126 | 0.8126 |
| 0.4842 | 5.0 | 9835 | 0.7268 | 0.7386 | 0.7897 | 0.7897 |
| 0.5113 | 6.0 | 11802 | 0.7552 | 0.7352 | 0.7989 | 0.7989 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
| 2,332 | [
[
-0.044464111328125,
-0.050506591796875,
0.0223846435546875,
-0.007274627685546875,
-0.015777587890625,
-0.015106201171875,
-0.01288604736328125,
-0.0211029052734375,
0.0260467529296875,
0.0272064208984375,
-0.047821044921875,
-0.05108642578125,
-0.06082153320312... |
modelscope-unofficial/damo-csanmt-zh-en-large-tfs | 2023-05-18T19:51:07.000Z | [
"keras",
"translation",
"license:apache-2.0",
"region:us"
] | translation | modelscope-unofficial | null | null | modelscope-unofficial/damo-csanmt-zh-en-large-tfs | 0 | 2 | keras | 2023-05-17T23:29:46 | ---
license: apache-2.0
pipeline_tag: translation
---
TensorFlow saved model version of the original model:
https://www.modelscope.cn/models/damo/nlp_csanmt_translation_zh2en/summary
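Since this is a plain TensorFlow SavedModel, it can be loaded and inspected with stock TensorFlow. The directory name is an assumption, and the export's serving signature is not documented here:
```python
import tensorflow as tf

# Point this at the directory containing saved_model.pb (path assumed)
model = tf.saved_model.load("damo-csanmt-zh-en-large-tfs")
print(list(model.signatures))  # inspect the exported serving signatures
```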
| 183 | [
[
-0.01357269287109375,
-0.01654052734375,
0.0201568603515625,
0.00988006591796875,
-0.0246734619140625,
-0.04241943359375,
0.01322174072265625,
-0.01194000244140625,
0.0347900390625,
0.07757568359375,
-0.038482666015625,
-0.033294677734375,
-0.03192138671875,
... |
pkuong/distilbert_classifier_newsgroups | 2023-05-18T00:22:29.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | pkuong | null | null | pkuong/distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-18T00:22:11 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,471 | [
[
-0.038665771484375,
-0.0419921875,
0.0211944580078125,
0.0084075927734375,
-0.033599853515625,
-0.006816864013671875,
-0.01171112060546875,
-0.010833740234375,
-0.0029201507568359375,
-0.006214141845703125,
-0.04150390625,
-0.050384521484375,
-0.067138671875,
... |
yarak001/distilbert-base-uncased-finetuned-emotion | 2023-05-18T01:03:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | yarak001 | null | null | yarak001/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-18T00:28:41 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9225635095680048
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.9225
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8134 | 1.0 | 250 | 0.3127 | 0.903 | 0.9000 |
| 0.247 | 2.0 | 500 | 0.2207 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.03759765625,
-0.041168212890625,
0.014984130859375,
0.0219268798828125,
-0.0259246826171875,
-0.0190277099609375,
-0.0133209228515625,
-0.00859832763671875,
0.01068115234375,
0.0082550048828125,
-0.05670166015625,
-0.052154541015625,
-0.05987548828125,
-0... |
korelidw/bert_simple_classifier | 2023-05-18T02:25:27.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | korelidw | null | null | korelidw/bert_simple_classifier | 0 | 2 | transformers | 2023-05-18T02:24:39 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert_simple_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert_simple_classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3054, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,439 | [
[
-0.0416259765625,
-0.042510986328125,
0.0230865478515625,
0.0011167526245117188,
-0.03521728515625,
-0.0234527587890625,
-0.019287109375,
-0.0223236083984375,
-0.001331329345703125,
0.0036716461181640625,
-0.04693603515625,
-0.046875,
-0.052734375,
-0.020828... |
AustinCarthy/Benign10MGPT2_fromB_BFall_30KGen_toP_0.75 | 2023-05-18T05:44:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Benign10MGPT2_fromB_BFall_30KGen_toP_0.75 | 0 | 2 | transformers | 2023-05-18T02:42:03 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_fromB_BFall_30KGen_toP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_fromB_BFall_30KGen_toP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1066
- Accuracy: 0.9827
- F1: 0.7997
- Precision: 0.8920
- Recall: 0.7248
- Roc Auc Score: 0.8602
- Tpr At Fpr 0.01: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0859 | 1.0 | 26250 | 0.0749 | 0.9823 | 0.7832 | 0.9388 | 0.6718 | 0.8348 | 0.5556 |
| 0.074 | 2.0 | 52500 | 0.0810 | 0.9803 | 0.7718 | 0.8628 | 0.6982 | 0.8463 | 0.5496 |
| 0.0534 | 3.0 | 78750 | 0.0735 | 0.9846 | 0.8211 | 0.9211 | 0.7406 | 0.8687 | 0.5882 |
| 0.0374 | 4.0 | 105000 | 0.0877 | 0.9830 | 0.8023 | 0.8976 | 0.7254 | 0.8606 | 0.0 |
| 0.0267 | 5.0 | 131250 | 0.1066 | 0.9827 | 0.7997 | 0.8920 | 0.7248 | 0.8602 | 0.0 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,248 | [
[
-0.043060302734375,
-0.043426513671875,
0.0084381103515625,
0.0095672607421875,
-0.0221710205078125,
-0.0239715576171875,
-0.00749969482421875,
-0.0201873779296875,
0.0261383056640625,
0.025115966796875,
-0.050689697265625,
-0.0474853515625,
-0.05389404296875,
... |
SHENMU007/neunit_testv1.1 | 2023-05-18T05:51:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | SHENMU007 | null | null | SHENMU007/neunit_testv1.1 | 0 | 2 | transformers | 2023-05-18T03:36:40 | ---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
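Inference with SpeechT5 needs a speaker x-vector in addition to the text. The sketch below uses a zero vector purely as a placeholder (it will not sound like a real speaker) and assumes the stock HiFi-GAN vocoder:
```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "SHENMU007/neunit_testv1.1"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好,世界。", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```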
| 1,251 | [
[
-0.0350341796875,
-0.051727294921875,
-0.005931854248046875,
0.01265716552734375,
-0.025390625,
-0.0193939208984375,
-0.01763916015625,
-0.0265045166015625,
0.0114288330078125,
0.021270751953125,
-0.0411376953125,
-0.050048828125,
-0.04315185546875,
0.008583... |
moghis/ppo-Pyramids | 2023-05-18T04:53:06.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | moghis | null | null | moghis/ppo-Pyramids | 0 | 2 | ml-agents | 2023-05-18T04:53:01 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: moghis/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 949 | [
[
-0.02734375,
-0.01934814453125,
-0.0010852813720703125,
0.0255889892578125,
-0.0098114013671875,
0.005950927734375,
0.027740478515625,
-0.0028057098388671875,
0.0355224609375,
0.035247802734375,
-0.035858154296875,
-0.052001953125,
-0.035369873046875,
-0.010... |
fredymad/distilbert_Pfinal_2e-5_16_2 | 2023-06-02T10:35:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | fredymad | null | null | fredymad/distilbert_Pfinal_2e-5_16_2 | 0 | 2 | transformers | 2023-05-18T04:59:25 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert_Pfinal_2e-5_16_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_Pfinal_2e-5_16_2
This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2200
- F1: 0.7289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2427 | 1.0 | 669 | 0.1984 | 0.7270 |
| 0.1799 | 2.0 | 1338 | 0.2200 | 0.7289 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,417 | [
[
-0.029205322265625,
-0.047515869140625,
0.0124969482421875,
0.0257720947265625,
-0.029327392578125,
-0.019378662109375,
-0.01070404052734375,
-0.007778167724609375,
0.0007295608520507812,
0.01364898681640625,
-0.05157470703125,
-0.045196533203125,
-0.05462646484... |
atrytone/scibert_uncased_claim_id | 2023-06-17T15:16:47.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | atrytone | null | null | atrytone/scibert_uncased_claim_id | 0 | 2 | transformers | 2023-05-18T05:07:15 | ---
license: apache-2.0
language:
- en
---
Fine-tuned SciBERT uncased model [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) for claim detection from abstracts. | 204 | [
[
-0.0132598876953125,
-0.027069091796875,
0.04119873046875,
0.030029296875,
-0.020233154296875,
0.015716552734375,
0.020965576171875,
-0.045074462890625,
0.06378173828125,
0.0278778076171875,
-0.037506103515625,
-0.034912109375,
-0.0247344970703125,
0.0014514... |
suraj47K/keras-dummy-sequential | 2023-05-18T05:42:09.000Z | [
"keras",
"region:us"
] | null | suraj47K | null | null | suraj47K/keras-dummy-sequential | 0 | 2 | keras | 2023-05-18T05:42:07 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
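The optimizer in the table maps onto a stock Keras Adam at the default 1e-3 learning rate, and the plot below can be regenerated with `plot_model`. A sketch with a dummy architecture, since the card does not show the real one (requires pydot and graphviz):
```python
import tensorflow as tf

# Dummy Sequential model standing in for the undocumented architecture
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # matches the table
    loss="mse",
)
tf.keras.utils.plot_model(model, to_file="model.png", show_shapes=True)
```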
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 841 | [
[
-0.037200927734375,
-0.03997802734375,
0.031890869140625,
0.0081634521484375,
-0.043243408203125,
-0.0177154541015625,
0.01097869873046875,
-0.0033969879150390625,
0.0204620361328125,
0.030517578125,
-0.04376220703125,
-0.05120849609375,
-0.040008544921875,
... |
AustinCarthy/Benign10MGPT2_fromB_BFall_40KGen_toP_0.75 | 2023-05-18T09:25:30.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Benign10MGPT2_fromB_BFall_40KGen_toP_0.75 | 0 | 2 | transformers | 2023-05-18T05:44:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_fromB_BFall_40KGen_toP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_fromB_BFall_40KGen_toP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1061
- Accuracy: 0.9824
- F1: 0.7918
- Precision: 0.9034
- Recall: 0.7048
- Roc Auc Score: 0.8505
- Tpr At Fpr 0.01: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0873 | 1.0 | 32813 | 0.0943 | 0.9790 | 0.7389 | 0.9064 | 0.6236 | 0.8102 | 0.19 |
| 0.0715 | 2.0 | 65626 | 0.0807 | 0.9817 | 0.7803 | 0.9099 | 0.683 | 0.8398 | 0.4716 |
| 0.0501 | 3.0 | 98439 | 0.0727 | 0.9834 | 0.8103 | 0.8917 | 0.7426 | 0.8690 | 0.0 |
| 0.0436 | 4.0 | 131252 | 0.0833 | 0.9843 | 0.8217 | 0.8976 | 0.7576 | 0.8766 | 0.0 |
| 0.0292 | 5.0 | 164065 | 0.1061 | 0.9824 | 0.7918 | 0.9034 | 0.7048 | 0.8505 | 0.0 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,248 | [
[
-0.0438232421875,
-0.042449951171875,
0.00881195068359375,
0.0086822509765625,
-0.02276611328125,
-0.0237884521484375,
-0.00719451904296875,
-0.0203094482421875,
0.02789306640625,
0.0255279541015625,
-0.051666259765625,
-0.048126220703125,
-0.053466796875,
-... |
ashleyradford/my_awesome_food_model | 2023-05-18T21:04:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"en",
"dataset:food101",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | ashleyradford | null | null | ashleyradford/my_awesome_food_model | 0 | 2 | transformers | 2023-05-18T06:15:19 | ---
datasets:
- food101
language:
- en
metrics:
- accuracy
library_name: transformers
---
# Image Classification
Classifies food images using a subset of the food101 dataset.<br>
Uses PyTorch for the preprocessing, training, and inference.
```python
from transformers import TrainingArguments

# Training arguments exactly as listed in the original card
# (the output_dir name is kept verbatim from the card).
training_args = TrainingArguments(
    output_dir="cats_vs_dogs_model",
    remove_unused_columns=False,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    warmup_ratio=0.1,
    logging_steps=10,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    push_to_hub=True,
)
```
[
-0.040191650390625,
-0.0290069580078125,
-0.01561737060546875,
0.005580902099609375,
-0.01617431640625,
-0.018768310546875,
0.00585174560546875,
-0.017578125,
-0.002353668212890625,
0.011871337890625,
-0.021331787109375,
-0.043670654296875,
-0.035430908203125,
... |
SHENMU007/neunit_tts_1.0 | 2023-05-18T07:58:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | SHENMU007 | null | null | SHENMU007/neunit_tts_1.0 | 0 | 2 | transformers | 2023-05-18T06:15:59 | ---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
| 1,251 | [
[
-0.0350341796875,
-0.051727294921875,
-0.005931854248046875,
0.01265716552734375,
-0.025390625,
-0.0193939208984375,
-0.01763916015625,
-0.0265045166015625,
0.0114288330078125,
0.021270751953125,
-0.0411376953125,
-0.050048828125,
-0.04315185546875,
0.008583... |
gkrishnan/distilbert_classifier_newsgroups | 2023-05-18T06:39:35.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gkrishnan | null | null | gkrishnan/distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-18T06:39:03 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,471 | [
[
-0.0386962890625,
-0.042022705078125,
0.021240234375,
0.0084228515625,
-0.033599853515625,
-0.0068359375,
-0.01174163818359375,
-0.010833740234375,
-0.002910614013671875,
-0.00620269775390625,
-0.041534423828125,
-0.050445556640625,
-0.067138671875,
-0.01020... |
againeureka/imdb_binary_classifier_roberta_base | 2023-06-20T07:47:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | againeureka | null | null | againeureka/imdb_binary_classifier_roberta_base | 0 | 2 | transformers | 2023-05-18T07:00:29 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: imdb_binary_classifier_roberta_base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9538
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb_binary_classifier_roberta_base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2530
- Accuracy: 0.9538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2575 | 1.0 | 782 | 0.1513 | 0.9461 |
| 0.1272 | 2.0 | 1564 | 0.1784 | 0.9482 |
| 0.0859 | 3.0 | 2346 | 0.1854 | 0.9510 |
| 0.0506 | 4.0 | 3128 | 0.2193 | 0.9529 |
| 0.0341 | 5.0 | 3910 | 0.2530 | 0.9538 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.2
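A short sanity check with the pipeline API; the reviews are made up:
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="againeureka/imdb_binary_classifier_roberta_base",
)
reviews = [
    "A beautifully shot film with a script to match.",
    "Two hours of my life I will never get back.",
]
print(clf(reviews))
```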
| 1,869 | [
[
-0.03485107421875,
-0.035247802734375,
0.01290130615234375,
-0.00879669189453125,
-0.0249481201171875,
-0.01078033447265625,
0.0008835792541503906,
-0.0156097412109375,
0.01096343994140625,
0.027862548828125,
-0.0533447265625,
-0.045623779296875,
-0.066528320312... |
vnsaipa1/t5-small-finetuned | 2023-05-18T07:09:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | vnsaipa1 | null | null | vnsaipa1/t5-small-finetuned | 0 | 2 | transformers | 2023-05-18T07:07:33 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 991 | [
[
-0.036651611328125,
-0.03936767578125,
0.0175933837890625,
0.00450897216796875,
-0.0343017578125,
-0.0305023193359375,
-0.012542724609375,
-0.021697998046875,
0.006526947021484375,
0.0224609375,
-0.059295654296875,
-0.0413818359375,
-0.049896240234375,
0.006... |
bortle/moon-detector-v5.a | 2023-05-18T09:22:43.000Z | [
"transformers",
"pytorch",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | bortle | null | null | bortle/moon-detector-v5.a | 0 | 2 | transformers | 2023-05-18T07:34:54 | ---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: moon-detector-v5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9949622166246851
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# moon-detector-v5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0238
- Accuracy: 0.9950
## Model description
More information needed
## Intended uses & limitations
More information needed
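A minimal inference sketch; the image path is a placeholder, and the class names come from the training imagefolder's directory labels, which the card does not list:
```python
from transformers import pipeline

# Classify a local image; "night_sky.jpg" is a placeholder path.
detector = pipeline("image-classification", model="bortle/moon-detector-v5.a")
print(detector("night_sky.jpg"))
```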
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0548 | 1.0 | 281 | 0.0616 | 0.9798 |
| 0.1366 | 2.0 | 562 | 0.0340 | 0.9899 |
| 0.0218 | 3.0 | 843 | 0.0430 | 0.9874 |
| 0.0403 | 4.0 | 1124 | 0.0406 | 0.9874 |
| 0.0184 | 5.0 | 1405 | 0.0238 | 0.9950 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,954 | [
[
-0.031494140625,
-0.031005859375,
0.0269775390625,
-0.004146575927734375,
-0.034576416015625,
-0.0242919921875,
0.01070404052734375,
-0.02264404296875,
0.0103302001953125,
0.0298004150390625,
-0.04791259765625,
-0.047698974609375,
-0.058746337890625,
-0.0093... |
Zamill/distilbert-base-uncased-finetuned-emotion | 2023-05-18T08:06:05.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Zamill | null | null | Zamill/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-18T08:01:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9209583313765042
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2331
- Accuracy: 0.921
- F1: 0.9210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
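The settings above correspond roughly to a `TrainingArguments` configuration like the following sketch (the output directory is an assumption; the listed Adam betas and epsilon are the Trainer defaults):
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
)
```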
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8643 | 1.0 | 250 | 0.3494 | 0.8965 | 0.8909 |
| 0.2629 | 2.0 | 500 | 0.2331 | 0.921 | 0.9210 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.0380859375,
-0.04156494140625,
0.0163116455078125,
0.021392822265625,
-0.0260162353515625,
-0.0197296142578125,
-0.01323699951171875,
-0.00885009765625,
0.009979248046875,
0.0084075927734375,
-0.056884765625,
-0.051788330078125,
-0.058990478515625,
-0.009... |
DuyTuan/distilbert-base-uncased-finetuned-emotion | 2023-10-04T06:04:15.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | DuyTuan | null | null | DuyTuan/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-18T08:35:29 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9232244925505232
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2199
- Accuracy: 0.923
- F1: 0.9232
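Metrics like the accuracy and F1 above are typically produced by a `compute_metrics` hook; a sketch follows (the F1 averaging mode is not stated in the card, so `weighted` is an assumption):
```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        # "weighted" is a common choice for multi-class F1, but the card does not say.
        "f1": f1.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }
```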
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8288 | 1.0 | 250 | 0.3054 | 0.9065 | 0.9036 |
| 0.2521 | 2.0 | 500 | 0.2199 | 0.923 | 0.9232 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
| 1,802 | [
[
-0.037506103515625,
-0.0416259765625,
0.01410675048828125,
0.022216796875,
-0.02545166015625,
-0.018798828125,
-0.01308441162109375,
-0.008758544921875,
0.010467529296875,
0.00835418701171875,
-0.056243896484375,
-0.05157470703125,
-0.05999755859375,
-0.0075... |
MrPark97/distillbert-base-uncased-finetuned-clinc | 2023-05-18T14:37:05.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | MrPark97 | null | null | MrPark97/distillbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-05-18T09:15:51 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distillbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 1,616 | [
[
-0.035491943359375,
-0.042572021484375,
0.0160369873046875,
0.0131683349609375,
-0.02557373046875,
-0.019500732421875,
-0.0101318359375,
-0.003925323486328125,
0.0029201507568359375,
0.0194549560546875,
-0.04888916015625,
-0.04693603515625,
-0.060699462890625,
... |
AlexC98/commitRoBertaGood | 2023-05-18T13:08:21.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AlexC98 | null | null | AlexC98/commitRoBertaGood | 0 | 2 | transformers | 2023-05-18T11:08:56 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: commitRoBertaGood
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# commitRoBertaGood
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9193
- Accuracy: 0.8242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 371 | 0.5855 | 0.7091 |
| 0.5618 | 2.0 | 742 | 0.7041 | 0.7939 |
| 0.4278 | 3.0 | 1113 | 0.7003 | 0.8182 |
| 0.4278 | 4.0 | 1484 | 0.9193 | 0.8242 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,513 | [
[
-0.032623291015625,
-0.039825439453125,
0.0123291015625,
0.01351165771484375,
-0.0296173095703125,
-0.0289154052734375,
-0.018157958984375,
-0.018402099609375,
0.00916290283203125,
0.0259552001953125,
-0.056488037109375,
-0.04742431640625,
-0.050537109375,
-... |
HasinMDG/all-distilroberta-v1-IPTC-L1 | 2023-05-18T14:54:50.000Z | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | HasinMDG | null | null | HasinMDG/all-distilroberta-v1-IPTC-L1 | 0 | 2 | sentence-transformers | 2023-05-18T12:52:01 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/all-distilroberta-v1-IPTC-L1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/all-distilroberta-v1-IPTC-L1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,563 | [
[
-0.00896453857421875,
-0.06353759765625,
0.028045654296875,
-0.0026988983154296875,
-0.01751708984375,
-0.014007568359375,
-0.015655517578125,
-0.0033664703369140625,
-0.0013332366943359375,
0.028350830078125,
-0.04266357421875,
-0.02459716796875,
-0.05187988281... |
LamaAldakhil/SL-CvT | 2023-05-18T20:27:17.000Z | [
"transformers",
"pytorch",
"tensorboard",
"cvt",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | LamaAldakhil | null | null | LamaAldakhil/SL-CvT | 0 | 2 | transformers | 2023-05-18T12:55:22 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- f1
- accuracy
model-index:
- name: SL-CvT
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: F1
type: f1
value: 0.9297928229609359
- name: Accuracy
type: accuracy
value: 0.9316640584246219
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SL-CvT
This model is a fine-tuned version of [microsoft/cvt-13](https://huggingface.co/microsoft/cvt-13) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3430
- F1: 0.9298
- Roc Auc: 0.9777
- Accuracy: 0.9317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
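With gradient accumulation, the optimizer steps once per 4 forward/backward passes, giving the effective batch size of 32 * 4 = 128 listed above. A sketch of the equivalent `TrainingArguments` (the output directory is a placeholder):
```python
from transformers import TrainingArguments

# Sketch of the settings above; gradients accumulate over 4 steps per update.
args = TrainingArguments(
    output_dir="SL-CvT",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
    seed=42,
)
```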
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 1.2379 | 1.0 | 60 | 1.0716 | 0.6422 | 0.7323 | 0.7246 |
| 1.0186 | 2.0 | 120 | 0.8477 | 0.6425 | 0.7879 | 0.7293 |
| 0.9433 | 3.0 | 180 | 0.7473 | 0.7060 | 0.8454 | 0.7538 |
| 0.8644 | 4.0 | 240 | 0.6831 | 0.7188 | 0.8696 | 0.7663 |
| 0.7985 | 5.0 | 300 | 0.6420 | 0.7409 | 0.8943 | 0.7799 |
| 0.7322 | 6.0 | 360 | 0.5713 | 0.7886 | 0.9196 | 0.8101 |
| 0.725 | 7.0 | 420 | 0.5311 | 0.7989 | 0.9324 | 0.8190 |
| 0.6529 | 8.0 | 480 | 0.5246 | 0.7852 | 0.9404 | 0.8117 |
| 0.6224 | 9.0 | 540 | 0.4598 | 0.8282 | 0.9517 | 0.8440 |
| 0.6315 | 10.0 | 600 | 0.4363 | 0.8457 | 0.9585 | 0.8529 |
| 0.5651 | 11.0 | 660 | 0.4437 | 0.8323 | 0.9564 | 0.8503 |
| 0.574 | 12.0 | 720 | 0.4003 | 0.8531 | 0.9617 | 0.8638 |
| 0.5269 | 13.0 | 780 | 0.3901 | 0.8676 | 0.9671 | 0.8722 |
| 0.5138 | 14.0 | 840 | 0.3984 | 0.8607 | 0.9685 | 0.8732 |
| 0.4839 | 15.0 | 900 | 0.3763 | 0.8683 | 0.9701 | 0.8769 |
| 0.463 | 16.0 | 960 | 0.3398 | 0.8837 | 0.9718 | 0.8894 |
| 0.4767 | 17.0 | 1020 | 0.3293 | 0.8846 | 0.9738 | 0.8915 |
| 0.4985 | 18.0 | 1080 | 0.3350 | 0.8852 | 0.9763 | 0.8863 |
| 0.4657 | 19.0 | 1140 | 0.3369 | 0.8872 | 0.9746 | 0.8951 |
| 0.4514 | 20.0 | 1200 | 0.3213 | 0.8880 | 0.9750 | 0.8925 |
| 0.4207 | 21.0 | 1260 | 0.3175 | 0.8943 | 0.9771 | 0.8978 |
| 0.4522 | 22.0 | 1320 | 0.3229 | 0.8970 | 0.9767 | 0.8983 |
| 0.4328 | 23.0 | 1380 | 0.3121 | 0.8948 | 0.9791 | 0.8978 |
| 0.3942 | 24.0 | 1440 | 0.3111 | 0.8993 | 0.9765 | 0.9030 |
| 0.4414 | 25.0 | 1500 | 0.3062 | 0.9032 | 0.9763 | 0.9061 |
| 0.3608 | 26.0 | 1560 | 0.3099 | 0.8997 | 0.9787 | 0.9014 |
| 0.3729 | 27.0 | 1620 | 0.3050 | 0.9029 | 0.9783 | 0.9082 |
| 0.393 | 28.0 | 1680 | 0.2970 | 0.9090 | 0.9797 | 0.9108 |
| 0.402 | 29.0 | 1740 | 0.2986 | 0.9087 | 0.9793 | 0.9113 |
| 0.3697 | 30.0 | 1800 | 0.3384 | 0.8968 | 0.9769 | 0.9025 |
| 0.3502 | 31.0 | 1860 | 0.3035 | 0.9058 | 0.9789 | 0.9103 |
| 0.3653 | 32.0 | 1920 | 0.3127 | 0.9024 | 0.9788 | 0.9025 |
| 0.3898 | 33.0 | 1980 | 0.3222 | 0.9050 | 0.9778 | 0.9061 |
| 0.317 | 34.0 | 2040 | 0.3013 | 0.9124 | 0.9798 | 0.9139 |
| 0.3166 | 35.0 | 2100 | 0.3185 | 0.9095 | 0.9775 | 0.9134 |
| 0.3771 | 36.0 | 2160 | 0.3067 | 0.9049 | 0.9782 | 0.9066 |
| 0.3487 | 37.0 | 2220 | 0.2948 | 0.9118 | 0.9801 | 0.9134 |
| 0.3202 | 38.0 | 2280 | 0.2916 | 0.9168 | 0.9788 | 0.9186 |
| 0.3163 | 39.0 | 2340 | 0.3149 | 0.9141 | 0.9777 | 0.9155 |
| 0.3605 | 40.0 | 2400 | 0.2964 | 0.9192 | 0.9797 | 0.9207 |
| 0.3636 | 41.0 | 2460 | 0.3142 | 0.9111 | 0.9810 | 0.9134 |
| 0.3454 | 42.0 | 2520 | 0.3133 | 0.9111 | 0.9792 | 0.9113 |
| 0.3561 | 43.0 | 2580 | 0.3090 | 0.9073 | 0.9804 | 0.9077 |
| 0.3136 | 44.0 | 2640 | 0.3236 | 0.9144 | 0.9782 | 0.9176 |
| 0.3529 | 45.0 | 2700 | 0.3054 | 0.9175 | 0.9800 | 0.9202 |
| 0.2987 | 46.0 | 2760 | 0.2944 | 0.9222 | 0.9802 | 0.9233 |
| 0.2966 | 47.0 | 2820 | 0.3215 | 0.9201 | 0.9786 | 0.9233 |
| 0.3203 | 48.0 | 2880 | 0.3150 | 0.9219 | 0.9797 | 0.9244 |
| 0.2821 | 49.0 | 2940 | 0.3072 | 0.9273 | 0.9800 | 0.9291 |
| 0.2852 | 50.0 | 3000 | 0.3265 | 0.9155 | 0.9792 | 0.9176 |
| 0.3544 | 51.0 | 3060 | 0.3175 | 0.9150 | 0.9802 | 0.9150 |
| 0.3327 | 52.0 | 3120 | 0.3134 | 0.9222 | 0.9802 | 0.9244 |
| 0.2877 | 53.0 | 3180 | 0.3222 | 0.9154 | 0.9805 | 0.9165 |
| 0.3089 | 54.0 | 3240 | 0.3045 | 0.9248 | 0.9811 | 0.9259 |
| 0.2904 | 55.0 | 3300 | 0.3301 | 0.9175 | 0.9787 | 0.9186 |
| 0.2821 | 56.0 | 3360 | 0.3069 | 0.9206 | 0.9810 | 0.9218 |
| 0.321 | 57.0 | 3420 | 0.3209 | 0.9254 | 0.9800 | 0.9270 |
| 0.2995 | 58.0 | 3480 | 0.3281 | 0.9202 | 0.9802 | 0.9233 |
| 0.2683 | 59.0 | 3540 | 0.3263 | 0.9174 | 0.9802 | 0.9202 |
| 0.3021 | 60.0 | 3600 | 0.3484 | 0.9170 | 0.9788 | 0.9186 |
| 0.3262 | 61.0 | 3660 | 0.3270 | 0.9151 | 0.9807 | 0.9165 |
| 0.2329 | 62.0 | 3720 | 0.3280 | 0.9211 | 0.9807 | 0.9233 |
| 0.2935 | 63.0 | 3780 | 0.3296 | 0.9244 | 0.9807 | 0.9264 |
| 0.2856 | 64.0 | 3840 | 0.3323 | 0.9209 | 0.9811 | 0.9218 |
| 0.2829 | 65.0 | 3900 | 0.3390 | 0.9200 | 0.9802 | 0.9218 |
| 0.3044 | 66.0 | 3960 | 0.3324 | 0.9215 | 0.9799 | 0.9228 |
| 0.2767 | 67.0 | 4020 | 0.3496 | 0.9150 | 0.9778 | 0.9160 |
| 0.2936 | 68.0 | 4080 | 0.3378 | 0.9257 | 0.9790 | 0.9275 |
| 0.2884 | 69.0 | 4140 | 0.3493 | 0.9227 | 0.9790 | 0.9249 |
| 0.2906 | 70.0 | 4200 | 0.3408 | 0.9259 | 0.9794 | 0.9275 |
| 0.2542 | 71.0 | 4260 | 0.3559 | 0.9233 | 0.9769 | 0.9249 |
| 0.2557 | 72.0 | 4320 | 0.3481 | 0.9237 | 0.9779 | 0.9254 |
| 0.2266 | 73.0 | 4380 | 0.3518 | 0.9208 | 0.9781 | 0.9223 |
| 0.2771 | 74.0 | 4440 | 0.3544 | 0.9231 | 0.9776 | 0.9254 |
| 0.2747 | 75.0 | 4500 | 0.3469 | 0.9270 | 0.9780 | 0.9285 |
| 0.2443 | 76.0 | 4560 | 0.3513 | 0.9216 | 0.9767 | 0.9233 |
| 0.2859 | 77.0 | 4620 | 0.3456 | 0.9234 | 0.9771 | 0.9254 |
| 0.2677 | 78.0 | 4680 | 0.3474 | 0.9239 | 0.9780 | 0.9254 |
| 0.2492 | 79.0 | 4740 | 0.3513 | 0.9235 | 0.9778 | 0.9254 |
| 0.2532 | 80.0 | 4800 | 0.3524 | 0.9210 | 0.9773 | 0.9233 |
| 0.2646 | 81.0 | 4860 | 0.3529 | 0.9240 | 0.9784 | 0.9238 |
| 0.2842 | 82.0 | 4920 | 0.3433 | 0.9260 | 0.9777 | 0.9280 |
| 0.2872 | 83.0 | 4980 | 0.3584 | 0.9272 | 0.9771 | 0.9285 |
| 0.2678 | 84.0 | 5040 | 0.3430 | 0.9298 | 0.9777 | 0.9317 |
| 0.2705 | 85.0 | 5100 | 0.3534 | 0.9268 | 0.9777 | 0.9291 |
| 0.2605 | 86.0 | 5160 | 0.3574 | 0.9272 | 0.9777 | 0.9296 |
| 0.2572 | 87.0 | 5220 | 0.3426 | 0.9273 | 0.9781 | 0.9291 |
| 0.2646 | 88.0 | 5280 | 0.3472 | 0.9234 | 0.9789 | 0.9244 |
| 0.2831 | 89.0 | 5340 | 0.3433 | 0.9272 | 0.9779 | 0.9291 |
| 0.277 | 90.0 | 5400 | 0.3441 | 0.9263 | 0.9789 | 0.9280 |
| 0.2584 | 91.0 | 5460 | 0.3432 | 0.9236 | 0.9788 | 0.9249 |
| 0.2703 | 92.0 | 5520 | 0.3409 | 0.9248 | 0.9789 | 0.9259 |
| 0.2811 | 93.0 | 5580 | 0.3449 | 0.9215 | 0.9795 | 0.9228 |
| 0.2786 | 94.0 | 5640 | 0.3465 | 0.9260 | 0.9789 | 0.9280 |
| 0.267 | 95.0 | 5700 | 0.3472 | 0.9260 | 0.9791 | 0.9275 |
| 0.2695 | 96.0 | 5760 | 0.3500 | 0.9268 | 0.9786 | 0.9285 |
| 0.279 | 97.0 | 5820 | 0.3582 | 0.9249 | 0.9782 | 0.9270 |
| 0.2774 | 98.0 | 5880 | 0.3486 | 0.9251 | 0.9790 | 0.9270 |
| 0.2512 | 99.0 | 5940 | 0.3514 | 0.9287 | 0.9786 | 0.9306 |
| 0.2218 | 100.0 | 6000 | 0.3482 | 0.9269 | 0.9789 | 0.9285 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 9,887 | [
[
-0.03985595703125,
-0.03851318359375,
0.019561767578125,
0.00434112548828125,
0.002544403076171875,
0.00884246826171875,
0.00846099853515625,
0.0072479248046875,
0.05462646484375,
0.0301055908203125,
-0.04486083984375,
-0.043121337890625,
-0.040618896484375,
... |
phoen1x/T5-Finetuned-INlegaldocsum | 2023-05-18T14:31:52.000Z | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | phoen1x | null | null | phoen1x/T5-Finetuned-INlegaldocsum | 0 | 2 | transformers | 2023-05-18T14:30:58 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: T5-Finetuned-INlegaldocsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# T5-Finetuned-INlegaldocsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.6619
- Validation Loss: 2.2688
- Train RougeL: 0.1290423
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
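The serialized optimizer config above deserializes to a plain Keras Adam instance; a sketch:
```python
import tensorflow as tf

# Reconstruction of the serialized optimizer config listed above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
)
```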
### Training results
| Train Loss | Validation Loss | Train RougeL | Epoch |
|:----------:|:---------------:|:------------:|:-----:|
| 2.6619 | 2.2688 | 0.1290423 | 0 |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.12.1
| 1,652 | [
[
-0.04052734375,
-0.03729248046875,
0.0228271484375,
0.0025005340576171875,
-0.0355224609375,
-0.02655029296875,
-0.015838623046875,
-0.019439697265625,
0.011199951171875,
0.0117340087890625,
-0.0545654296875,
-0.056304931640625,
-0.058685302734375,
-0.008369... |
AnanthZeke/tabert-4k-naamapadam | 2023-05-18T16:40:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | AnanthZeke | null | null | AnanthZeke/tabert-4k-naamapadam | 0 | 2 | transformers | 2023-05-18T15:13:11 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tabert-4k-naamapadam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tabert-4k-naamapadam
This model is a fine-tuned version of [livinNector/tabert-4k](https://huggingface.co/livinNector/tabert-4k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2805
- Precision: 0.7758
- Recall: 0.8034
- F1: 0.7894
- Accuracy: 0.9077
## Model description
More information needed
## Intended uses & limitations
More information needed
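A minimal inference sketch; the placeholder must be replaced with real input text, and `aggregation_strategy="simple"` merges word-piece tokens into entity spans:
```python
from transformers import pipeline

# Named-entity tagging with the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="AnanthZeke/tabert-4k-naamapadam",
    aggregation_strategy="simple",
)
print(ner("<your input sentence here>"))
```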
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4467 | 0.05 | 400 | 0.3882 | 0.7144 | 0.6655 | 0.6891 | 0.8755 |
| 0.3775 | 0.1 | 800 | 0.3540 | 0.7122 | 0.7155 | 0.7138 | 0.8845 |
| 0.3571 | 0.15 | 1200 | 0.3432 | 0.7329 | 0.7266 | 0.7297 | 0.8872 |
| 0.3461 | 0.21 | 1600 | 0.3360 | 0.7252 | 0.7368 | 0.7309 | 0.8893 |
| 0.3456 | 0.26 | 2000 | 0.3359 | 0.7388 | 0.7470 | 0.7428 | 0.8896 |
| 0.3318 | 0.31 | 2400 | 0.3298 | 0.7460 | 0.7435 | 0.7447 | 0.8908 |
| 0.326 | 0.36 | 2800 | 0.3255 | 0.7490 | 0.7391 | 0.7440 | 0.8940 |
| 0.3264 | 0.41 | 3200 | 0.3243 | 0.7493 | 0.7605 | 0.7549 | 0.8953 |
| 0.3189 | 0.46 | 3600 | 0.3231 | 0.7305 | 0.7715 | 0.7504 | 0.8936 |
| 0.3119 | 0.51 | 4000 | 0.3125 | 0.7645 | 0.7525 | 0.7584 | 0.8985 |
| 0.3111 | 0.57 | 4400 | 0.3100 | 0.7479 | 0.7729 | 0.7602 | 0.8970 |
| 0.3088 | 0.62 | 4800 | 0.3148 | 0.7510 | 0.7749 | 0.7628 | 0.8966 |
| 0.3047 | 0.67 | 5200 | 0.3089 | 0.7581 | 0.7728 | 0.7654 | 0.8981 |
| 0.3054 | 0.72 | 5600 | 0.3073 | 0.7615 | 0.7709 | 0.7662 | 0.8990 |
| 0.3028 | 0.77 | 6000 | 0.3066 | 0.7466 | 0.7835 | 0.7646 | 0.8984 |
| 0.3007 | 0.82 | 6400 | 0.3035 | 0.7555 | 0.7791 | 0.7671 | 0.8995 |
| 0.2923 | 0.87 | 6800 | 0.3004 | 0.7647 | 0.7829 | 0.7737 | 0.9008 |
| 0.2927 | 0.93 | 7200 | 0.3050 | 0.7700 | 0.7646 | 0.7673 | 0.9002 |
| 0.2949 | 0.98 | 7600 | 0.2979 | 0.7686 | 0.7723 | 0.7704 | 0.9014 |
| 0.2758 | 1.03 | 8000 | 0.3013 | 0.7713 | 0.7783 | 0.7748 | 0.9030 |
| 0.2699 | 1.08 | 8400 | 0.3019 | 0.7503 | 0.7997 | 0.7742 | 0.9017 |
| 0.2688 | 1.13 | 8800 | 0.3002 | 0.7593 | 0.7940 | 0.7762 | 0.9017 |
| 0.2625 | 1.18 | 9200 | 0.2926 | 0.7590 | 0.7941 | 0.7762 | 0.9033 |
| 0.2671 | 1.23 | 9600 | 0.2922 | 0.7640 | 0.8019 | 0.7825 | 0.9043 |
| 0.267 | 1.29 | 10000 | 0.2895 | 0.7719 | 0.7877 | 0.7797 | 0.9044 |
| 0.2611 | 1.34 | 10400 | 0.2897 | 0.7704 | 0.7978 | 0.7839 | 0.9053 |
| 0.2666 | 1.39 | 10800 | 0.2896 | 0.7688 | 0.7887 | 0.7786 | 0.9042 |
| 0.2563 | 1.44 | 11200 | 0.2894 | 0.7672 | 0.7981 | 0.7823 | 0.9045 |
| 0.2598 | 1.49 | 11600 | 0.2841 | 0.7705 | 0.7960 | 0.7831 | 0.9058 |
| 0.2549 | 1.54 | 12000 | 0.2854 | 0.7695 | 0.7975 | 0.7832 | 0.9065 |
| 0.2558 | 1.59 | 12400 | 0.2873 | 0.7619 | 0.8108 | 0.7856 | 0.9045 |
| 0.2564 | 1.65 | 12800 | 0.2863 | 0.7757 | 0.7897 | 0.7826 | 0.9062 |
| 0.2618 | 1.7 | 13200 | 0.2860 | 0.7778 | 0.7899 | 0.7838 | 0.9066 |
| 0.2659 | 1.75 | 13600 | 0.2831 | 0.7748 | 0.8013 | 0.7879 | 0.9073 |
| 0.254 | 1.8 | 14000 | 0.2811 | 0.7761 | 0.7978 | 0.7868 | 0.9079 |
| 0.2628 | 1.85 | 14400 | 0.2807 | 0.7713 | 0.8028 | 0.7868 | 0.9069 |
| 0.2552 | 1.9 | 14800 | 0.2806 | 0.7756 | 0.7990 | 0.7872 | 0.9077 |
| 0.2568 | 1.95 | 15200 | 0.2805 | 0.7758 | 0.8034 | 0.7894 | 0.9077 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 4,925 | [
[
-0.041778564453125,
-0.0399169921875,
0.0125885009765625,
0.0021724700927734375,
-0.0010747909545898438,
0.00489044189453125,
0.0026683807373046875,
0.002655029296875,
0.049591064453125,
0.031219482421875,
-0.03955078125,
-0.04705810546875,
-0.03875732421875,
... |
AustinCarthy/Onlyphish_10K_fromP_BFall_10KGen_topP_0.75 | 2023-05-18T17:16:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Onlyphish_10K_fromP_BFall_10KGen_topP_0.75 | 0 | 2 | transformers | 2023-05-18T15:33:34 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_10K_fromP_BFall_10KGen_topP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_10K_fromP_BFall_10KGen_topP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0759
- Accuracy: 0.9929
- F1: 0.9193
- Precision: 1.0
- Recall: 0.8506
- Roc Auc Score: 0.9253
- Tpr At Fpr 0.01: 0.8776
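The card does not define how `Tpr At Fpr 0.01` is computed; one common reading, sketched below, is the true-positive rate at the last ROC-curve point whose false-positive rate is at most 0.01:
```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, y_score, target_fpr=0.01):
    """TPR at the largest threshold whose FPR does not exceed target_fpr."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    # fpr is sorted ascending; take the last point with fpr <= target_fpr.
    return tpr[np.searchsorted(fpr, target_fpr, side="right") - 1]
```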
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0055 | 1.0 | 13125 | 0.0436 | 0.9901 | 0.8844 | 0.9933 | 0.797 | 0.8984 | 0.7488 |
| 0.0032 | 2.0 | 26250 | 0.1145 | 0.9853 | 0.8171 | 0.9994 | 0.691 | 0.8455 | 0.756 |
| 0.0025 | 3.0 | 39375 | 0.0705 | 0.9919 | 0.9076 | 0.9978 | 0.8324 | 0.9162 | 0.8332 |
| 0.0018 | 4.0 | 52500 | 0.0848 | 0.9919 | 0.9065 | 0.9998 | 0.8292 | 0.9146 | 0.8506 |
| 0.0008 | 5.0 | 65625 | 0.0759 | 0.9929 | 0.9193 | 1.0 | 0.8506 | 0.9253 | 0.8776 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,243 | [
[
-0.041778564453125,
-0.04248046875,
0.008087158203125,
0.00994110107421875,
-0.0203857421875,
-0.0232086181640625,
-0.007415771484375,
-0.018280029296875,
0.0298309326171875,
0.0283355712890625,
-0.052764892578125,
-0.053497314453125,
-0.048797607421875,
-0.... |
mingmingmom888/distilbert_classifier_newsgroups | 2023-05-18T15:56:51.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | mingmingmom888 | null | null | mingmingmom888/distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-18T15:56:19 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
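The learning-rate entry above is a serialized `PolynomialDecay` schedule; with `power=1.0` it is simply a linear ramp from 2e-5 to 0 over 1908 steps. A sketch of the reconstruction:
```python
import tensorflow as tf

# Linear decay from 2e-5 to 0 over 1908 steps (power=1.0 makes it linear).
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=1908,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule, epsilon=1e-8)
```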
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,471 | [
[
-0.0386962890625,
-0.042022705078125,
0.021209716796875,
0.00841522216796875,
-0.033599853515625,
-0.00681304931640625,
-0.01171875,
-0.010833740234375,
-0.0029144287109375,
-0.006221771240234375,
-0.041534423828125,
-0.0504150390625,
-0.067138671875,
-0.010... |
AlexC98/testing | 2023-05-18T17:28:37.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | AlexC98 | null | null | AlexC98/testing | 0 | 2 | transformers | 2023-05-18T17:12:03 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: testing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testing
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5427
- Accuracy: 0.7455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 47 | 0.6397 | 0.6364 |
| No log | 2.0 | 94 | 0.6157 | 0.6788 |
| No log | 3.0 | 141 | 0.5956 | 0.6788 |
| No log | 4.0 | 188 | 0.5866 | 0.6848 |
| No log | 5.0 | 235 | 0.5727 | 0.6788 |
| No log | 6.0 | 282 | 0.5663 | 0.6970 |
| No log | 7.0 | 329 | 0.5610 | 0.7091 |
| No log | 8.0 | 376 | 0.5548 | 0.7091 |
| No log | 9.0 | 423 | 0.5536 | 0.7212 |
| No log | 10.0 | 470 | 0.5486 | 0.7273 |
| 0.583 | 11.0 | 517 | 0.5451 | 0.7273 |
| 0.583 | 12.0 | 564 | 0.5468 | 0.7333 |
| 0.583 | 13.0 | 611 | 0.5423 | 0.7394 |
| 0.583 | 14.0 | 658 | 0.5396 | 0.7394 |
| 0.583 | 15.0 | 705 | 0.5466 | 0.7394 |
| 0.583 | 16.0 | 752 | 0.5411 | 0.7455 |
| 0.583 | 17.0 | 799 | 0.5427 | 0.7455 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,298 | [
[
-0.041534423828125,
-0.047637939453125,
0.00919342041015625,
0.000019788742065429688,
-0.0115966796875,
-0.022857666015625,
-0.00969696044921875,
-0.0105133056640625,
0.023193359375,
0.01105499267578125,
-0.0567626953125,
-0.0413818359375,
-0.044403076171875,
... |
oyesaurav/dwellbert | 2023-05-22T07:30:04.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"ditilbert",
"text classification",
"clinical notes",
"wellnation",
"en",
"endpoints_compatible",
"region:us"
] | text-classification | oyesaurav | null | null | oyesaurav/dwellbert | 1 | 2 | transformers | 2023-05-18T17:39:42 | ---
language:
- en
tags:
- ditilbert
- text classification
- clinical notes
- wellnation
---
<pre>
labels map =
{
"0": "Gastroenterology",
"1": "Neurology",
"2": "Orthopedic",
"3": "Radiology",
"4": "Urology"
}
</pre>
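A sketch of applying the label map above to pipeline output (this assumes the checkpoint emits generic `LABEL_<i>` names; if its config already sets `id2label`, the manual mapping is unnecessary):
```python
from transformers import pipeline

id2label = {
    0: "Gastroenterology",
    1: "Neurology",
    2: "Orthopedic",
    3: "Radiology",
    4: "Urology",
}
# The repo hosts TensorFlow weights (per its tags), so force the TF backend.
clf = pipeline("text-classification", model="oyesaurav/dwellbert", framework="tf")
pred = clf("MRI of the lumbar spine demonstrates an L4-L5 disc herniation.")[0]
print(id2label[int(pred["label"].split("_")[-1])], pred["score"])
```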
<h2><i>The fine-tuned model has been trained on around 2300 medical transcriptions to classify medical specialty.
More classes will be added as data becomes available.</i></h2> | 413 | [
[
-0.0066375732421875,
-0.02618408203125,
0.044677734375,
-0.0181732177734375,
0.004261016845703125,
-0.01042938232421875,
-0.00965118408203125,
-0.03887939453125,
0.0306549072265625,
0.046722412109375,
-0.022125244140625,
-0.06878662109375,
-0.0712890625,
0.0... |
carinavdzee/distilbert_classifier_newsgroups | 2023-05-18T18:05:06.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | carinavdzee | null | null | carinavdzee/distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-18T18:04:34 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,471 | [
[
-0.0386962890625,
-0.042022705078125,
0.021240234375,
0.0084228515625,
-0.033599853515625,
-0.0068359375,
-0.01174163818359375,
-0.010833740234375,
-0.002910614013671875,
-0.00620269775390625,
-0.041534423828125,
-0.050445556640625,
-0.067138671875,
-0.01020... |
trinadutta/distilbert-base-uncased-finetuned-stsb | 2023-05-18T21:44:50.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | trinadutta | null | null | trinadutta/distilbert-base-uncased-finetuned-stsb | 0 | 2 | transformers | 2023-05-18T18:27:24 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: distilbert-base-uncased-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8703919468681796
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-stsb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5388
- Pearson: 0.8740
- Spearmanr: 0.8704
## Model description
More information needed
## Intended uses & limitations
More information needed
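A minimal inference sketch, assuming the standard single-logit STS-B regression head whose output approximates a 0-5 similarity score:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "trinadutta/distilbert-base-uncased-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Score a sentence pair; the single logit is read as the similarity estimate.
inputs = tokenizer(
    "A man is playing a guitar.",
    "Someone plays an instrument.",
    return_tensors="pt",
)
with torch.no_grad():
    print(model(**inputs).logits.squeeze().item())
```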
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 360 | 0.6683 | 0.8599 | 0.8575 |
| 1.0348 | 2.0 | 720 | 0.5413 | 0.8715 | 0.8685 |
| 0.3974 | 3.0 | 1080 | 0.5560 | 0.8725 | 0.8692 |
| 0.3974 | 4.0 | 1440 | 0.5666 | 0.8737 | 0.8703 |
| 0.2516 | 5.0 | 1800 | 0.5388 | 0.8740 | 0.8704 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,009 | [
[
-0.029388427734375,
-0.04736328125,
0.01087188720703125,
0.0164794921875,
-0.0255889892578125,
-0.0159454345703125,
-0.00756072998046875,
-0.002742767333984375,
0.0123138427734375,
0.01407623291015625,
-0.047515869140625,
-0.044647216796875,
-0.06268310546875,
... |
qcz/en-fr-UFAL-medical | 2023-05-18T19:15:45.000Z | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"en",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | qcz | null | null | qcz/en-fr-UFAL-medical | 0 | 2 | transformers | 2023-05-18T19:09:39 | ---
language:
- en
- fr
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: en-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-fr
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5956
- Bleu: 53.2928
- Gen Len: 53.437
## Model description
More information needed
## Intended uses & limitations
More information needed
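A minimal inference sketch; the card does not document the prompt format used during fine-tuning, so feeding raw English text as below is an assumption:
```python
from transformers import pipeline

# English-to-French medical translation via the fine-tuned mT5 checkpoint.
translator = pipeline("text2text-generation", model="qcz/en-fr-UFAL-medical")
print(translator("The patient was prescribed 5 mg of amlodipine daily."))
```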
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,152 | [
[
-0.036041259765625,
-0.040802001953125,
0.019439697265625,
0.0024547576904296875,
-0.038177490234375,
-0.036376953125,
-0.017303466796875,
-0.016632080078125,
0.00860595703125,
0.0177459716796875,
-0.062286376953125,
-0.03955078125,
-0.05706787109375,
0.0041... |
lcrodriguez/test_trainer | 2023-05-18T20:54:55.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | lcrodriguez | null | null | lcrodriguez/test_trainer | 0 | 2 | transformers | 2023-05-18T20:32:41 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: test_trainer
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: test
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.539
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6184
- Accuracy: 0.539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 2.0325 | 0.499 |
| No log | 2.0 | 250 | 2.3086 | 0.547 |
| No log | 3.0 | 375 | 2.6184 | 0.539 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,770 | [
[
-0.0318603515625,
-0.045684814453125,
0.01300048828125,
0.01055145263671875,
-0.026458740234375,
-0.03704833984375,
-0.016021728515625,
-0.0189361572265625,
0.0117950439453125,
0.020751953125,
-0.057525634765625,
-0.0421142578125,
-0.042388916015625,
-0.0203... |
lcrodriguez/learn-finetuning-bert | 2023-05-18T21:00:57.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | lcrodriguez | null | null | lcrodriguez/learn-finetuning-bert | 0 | 2 | transformers | 2023-05-18T20:58:35 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
model-index:
- name: learn-finetuning-bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# learn-finetuning-bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,050 | [
[
-0.039703369140625,
-0.058929443359375,
0.01293182373046875,
0.0034694671630859375,
-0.03411865234375,
-0.04376220703125,
-0.018157958984375,
-0.0205230712890625,
0.01285552978515625,
0.029815673828125,
-0.06451416015625,
-0.036834716796875,
-0.035675048828125,
... |
Forna/bert-base-uncased-finetuned-emotion | 2023-05-18T21:22:49.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Forna | null | null | Forna/bert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-18T21:02:04 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: bert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2082
- Accuracy: 0.918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|
| No log | 1.0 | 250 | 0.2918 | 0.9045 |
| 0.5145 | 2.0 | 500 | 0.2082 | 0.918 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,441 | [
[
-0.041595458984375,
-0.044586181640625,
0.0105438232421875,
0.02154541015625,
-0.03076171875,
-0.029571533203125,
-0.026885986328125,
-0.0172119140625,
0.01340484619140625,
0.015716552734375,
-0.06268310546875,
-0.04571533203125,
-0.04925537109375,
-0.022201... |
AustinCarthy/Baseline_10Kphish_benignWinter_20_20_20 | 2023-05-18T22:14:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Baseline_10Kphish_benignWinter_20_20_20 | 0 | 2 | transformers | 2023-05-18T21:07:42 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_10Kphish_benignWinter_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_10Kphish_benignWinter_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0869
- Accuracy: 0.991
- F1: 0.8960
- Precision: 0.9966
- Recall: 0.8138
- Roc Auc Score: 0.9068
- Tpr At Fpr 0.01: 0.7918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0115 | 1.0 | 6563 | 0.0605 | 0.9872 | 0.8462 | 0.9938 | 0.7368 | 0.8683 | 0.6832 |
| 0.006 | 2.0 | 13126 | 0.0538 | 0.9911 | 0.8975 | 0.9946 | 0.8176 | 0.9087 | 0.7928 |
| 0.0033 | 3.0 | 19689 | 0.0496 | 0.9917 | 0.9049 | 0.9959 | 0.8292 | 0.9145 | 0.805 |
| 0.001 | 4.0 | 26252 | 0.0791 | 0.9911 | 0.8970 | 0.9959 | 0.816 | 0.9079 | 0.7806 |
| 0.0002 | 5.0 | 32815 | 0.0869 | 0.991 | 0.8960 | 0.9966 | 0.8138 | 0.9068 | 0.7918 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,239 | [
[
-0.041839599609375,
-0.0411376953125,
0.00974273681640625,
0.007083892822265625,
-0.0197601318359375,
-0.0222930908203125,
-0.005588531494140625,
-0.0198974609375,
0.02850341796875,
0.0262603759765625,
-0.05438232421875,
-0.054901123046875,
-0.049346923828125,
... |
petersa2/distilbert-code | 2023-05-19T03:46:45.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | petersa2 | null | null | petersa2/distilbert-code | 0 | 2 | transformers | 2023-05-18T21:17:49 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-code
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.56
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-code
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6881
- Accuracy: 0.56
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.6890 | 0.52 |
| No log | 2.0 | 14 | 0.6881 | 0.56 |
### Framework versions
- Transformers 4.11.3
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.10.3
| 1,623 | [
[
-0.0325927734375,
-0.041412353515625,
0.012725830078125,
0.01117706298828125,
-0.027679443359375,
-0.010284423828125,
0.0014600753784179688,
-0.0033435821533203125,
0.00777435302734375,
0.0183258056640625,
-0.053680419921875,
-0.0419921875,
-0.06597900390625,
... |
wiorz/bert_legal_binary_sm_pair | 2023-05-18T22:54:16.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | wiorz | null | null | wiorz/bert_legal_binary_sm_pair | 0 | 2 | transformers | 2023-05-18T22:53:40 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert_legal_binary_sm_pair
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_legal_binary_sm_pair
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 20
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,116 | [
[
-0.0325927734375,
-0.04547119140625,
0.009613037109375,
0.01568603515625,
-0.04052734375,
-0.02728271484375,
-0.0148162841796875,
-0.0219879150390625,
0.0219879150390625,
0.031463623046875,
-0.049835205078125,
-0.04156494140625,
-0.043731689453125,
-0.018203... |