| modelId | lastModified | tags | pipeline_tag | author | config | securityStatus | id | likes | downloads | library_name | created | card | card_len | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
clayygodd/distilbert-base-uncased-distilled-clinc | 2023-04-27T06:09:10.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | clayygodd | null | null | clayygodd/distilbert-base-uncased-distilled-clinc | 0 | 2 | transformers | 2023-04-27T05:54:49 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9509677419354838
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3223
- Accuracy: 0.9510
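The name indicates this student model was distilled from a teacher fine-tuned on CLINC, though the card omits the distillation code. A minimal stdlib sketch of the usual temperature-softened soft-target loss (the temperature value here is illustrative, not taken from this run):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in the classic soft-target formulation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# Identical logits give zero loss; diverging logits give a positive loss.
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # 0.0
```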
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 2.0952 | 0.7513 |
| 2.4883 | 2.0 | 636 | 1.0578 | 0.8613 |
| 2.4883 | 3.0 | 954 | 0.5967 | 0.9184 |
| 0.9387 | 4.0 | 1272 | 0.4331 | 0.9361 |
| 0.4221 | 5.0 | 1590 | 0.3734 | 0.9445 |
| 0.4221 | 6.0 | 1908 | 0.3483 | 0.9481 |
| 0.2906 | 7.0 | 2226 | 0.3332 | 0.9506 |
| 0.2464 | 8.0 | 2544 | 0.3274 | 0.9494 |
| 0.2464 | 9.0 | 2862 | 0.3245 | 0.9506 |
| 0.2315 | 10.0 | 3180 | 0.3223 | 0.9510 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,243 | [
[
-0.0341796875,
-0.03790283203125,
0.016632080078125,
0.00460052490234375,
-0.0225982666015625,
-0.0171356201171875,
-0.007709503173828125,
-0.005077362060546875,
0.01059722900390625,
0.0224609375,
-0.043304443359375,
-0.04803466796875,
-0.06011962890625,
-0.... |
dan21cg/distilbert-base-uncased-finetuned-emotion | 2023-04-28T04:58:53.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | dan21cg | null | null | dan21cg/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-27T06:42:56 | Temporary Redirect. Redirecting to /jupitercoder/distilbert-base-uncased-finetuned-emotion/resolve/main/README.md | 113 | [
[
-0.047149658203125,
-0.05267333984375,
0.054718017578125,
0.01049041748046875,
-0.04302978515625,
0.036102294921875,
-0.0193939208984375,
0.01953125,
0.04852294921875,
0.03826904296875,
-0.06060791015625,
-0.047088623046875,
-0.04962158203125,
0.013671875,
... |
phinate/make-your-own-bee-movie | 2023-04-27T10:25:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | phinate | null | null | phinate/make-your-own-bee-movie | 0 | 2 | transformers | 2023-04-27T09:21:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: make-your-own-bee-movie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# make-your-own-bee-movie
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
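The card gives no generation settings; as background, sampling from a causal LM's next-token logits is typically done with a temperature-scaled softmax. A stdlib sketch (the vocabulary size and logits below are made up):

```python
import math
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible

def sample_next_token(logits, temperature=0.8):
    """Sample a token index from temperature-scaled softmax probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Toy vocabulary of 4 tokens: token 2 has by far the highest logit, so at a
# low temperature it is sampled almost every time.
logits = [0.1, 0.2, 5.0, -1.0]
counts = [0] * 4
for _ in range(1000):
    counts[sample_next_token(logits, temperature=0.5)] += 1
print(counts.index(max(counts)))  # 2
```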
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 3.3214 |
| No log | 2.0 | 34 | 3.1133 |
| No log | 3.0 | 51 | 3.0216 |
| No log | 4.0 | 68 | 2.9806 |
| No log | 5.0 | 85 | 2.9679 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,454 | [
[
-0.030914306640625,
-0.0548095703125,
0.01425933837890625,
0.0107421875,
-0.03179931640625,
-0.0267486572265625,
-0.00506591796875,
-0.01544952392578125,
-0.00156402587890625,
0.01082611083984375,
-0.06341552734375,
-0.0302734375,
-0.0550537109375,
-0.008605... |
manasviiiiiiiiiiiiiiiiiiiiiiiiii/autotrain-tais-roberta-53328125642 | 2023-04-27T11:42:58.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:manasviiiiiiiiiiiiiiiiiiiiiiiiii/autotrain-data-tais-roberta",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | manasviiiiiiiiiiiiiiiiiiiiiiiiii | null | null | manasviiiiiiiiiiiiiiiiiiiiiiiiii/autotrain-tais-roberta-53328125642 | 0 | 2 | transformers | 2023-04-27T11:42:18 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- manasviiiiiiiiiiiiiiiiiiiiiiiiii/autotrain-data-tais-roberta
co2_eq_emissions:
emissions: 0.3828638429601619
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 53328125642
- CO2 Emissions (in grams): 0.3829
## Validation Metrics
- Loss: 0.092
- Accuracy: 0.978
- Precision: 0.995
- Recall: 0.960
- AUC: 0.999
- F1: 0.977
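The reported F1 is the harmonic mean of the reported precision and recall, which can be checked directly (a consistency check, not part of the original card):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported validation metrics from this card:
print(round(f1_score(0.995, 0.960), 3))  # 0.977
```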
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/manasviiiiiiiiiiiiiiiiiiiiiiiiii/autotrain-tais-roberta-53328125642
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("manasviiiiiiiiiiiiiiiiiiiiiiiiii/autotrain-tais-roberta-53328125642", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("manasviiiiiiiiiiiiiiiiiiiiiiiiii/autotrain-tais-roberta-53328125642", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,243 | [
[
-0.0259246826171875,
-0.0290069580078125,
0.01334381103515625,
0.01091766357421875,
-0.00606536865234375,
0.0006279945373535156,
0.00778961181640625,
-0.00948333740234375,
-0.00002682209014892578,
0.01035308837890625,
-0.05328369140625,
-0.031982421875,
-0.05871... |
gitsagitsat/autotrain-bert-wiki-53340125670 | 2023-04-27T12:19:05.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:gitsagitsat/autotrain-data-bert-wiki",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | gitsagitsat | null | null | gitsagitsat/autotrain-bert-wiki-53340125670 | 0 | 2 | transformers | 2023-04-27T12:17:50 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- gitsagitsat/autotrain-data-bert-wiki
co2_eq_emissions:
emissions: 0.5874363963158769
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 53340125670
- CO2 Emissions (in grams): 0.5874
## Validation Metrics
- Loss: 0.365
- Accuracy: 0.850
- Precision: 0.969
- Recall: 0.723
- AUC: 0.962
- F1: 0.828
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/gitsagitsat/autotrain-bert-wiki-53340125670
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("gitsagitsat/autotrain-bert-wiki-53340125670", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("gitsagitsat/autotrain-bert-wiki-53340125670", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,146 | [
[
-0.03143310546875,
-0.032318115234375,
0.0141143798828125,
0.00853729248046875,
-0.007511138916015625,
-0.0021686553955078125,
0.0001583099365234375,
-0.016754150390625,
0.0022754669189453125,
0.005817413330078125,
-0.05780029296875,
-0.032745361328125,
-0.06066... |
Nimishaaaa/autotrain-taisproject-53343125680 | 2023-04-27T12:25:41.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:Nimishaaaa/autotrain-data-taisproject",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Nimishaaaa | null | null | Nimishaaaa/autotrain-taisproject-53343125680 | 0 | 2 | transformers | 2023-04-27T12:24:00 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Nimishaaaa/autotrain-data-taisproject
co2_eq_emissions:
emissions: 0.6377772207656673
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 53343125680
- CO2 Emissions (in grams): 0.6378
## Validation Metrics
- Loss: 0.506
- Accuracy: 0.857
- Precision: 0.969
- Recall: 0.737
- AUC: 0.881
- F1: 0.837
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Nimishaaaa/autotrain-taisproject-53343125680
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Nimishaaaa/autotrain-taisproject-53343125680", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Nimishaaaa/autotrain-taisproject-53343125680", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,150 | [
[
-0.0291900634765625,
-0.0290985107421875,
0.0128631591796875,
0.0093994140625,
-0.007556915283203125,
0.0031986236572265625,
0.00936126708984375,
-0.0167388916015625,
0.005901336669921875,
0.0086517333984375,
-0.055084228515625,
-0.031829833984375,
-0.0622863769... |
scroobiustrip/sov-model-v1 | 2023-04-27T14:01:38.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | scroobiustrip | null | null | scroobiustrip/sov-model-v1 | 0 | 2 | sentence-transformers | 2023-04-27T14:01:26 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# scroobiustrip/sov-model-v1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
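Step 1 above relies on contrastive pairs built from a handful of labeled examples: same-label texts form positive pairs, different-label texts form negative pairs. A rough stdlib sketch of that pair generation (SetFit's actual sampling strategy may differ — it does not necessarily enumerate every combination):

```python
from itertools import combinations

def build_contrastive_pairs(examples):
    """examples: list of (text, label) tuples.
    Returns ((text1, text2), is_positive) pairs for contrastive training."""
    pairs = []
    for (t1, l1), (t2, l2) in combinations(examples, 2):
        pairs.append(((t1, t2), l1 == l2))
    return pairs

data = [("great movie", "pos"), ("loved it", "pos"),
        ("terrible", "neg"), ("awful plot", "neg")]
pairs = build_contrastive_pairs(data)
print(len(pairs), sum(1 for _, pos in pairs if pos))  # 6 2
```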
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("scroobiustrip/sov-model-v1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,541 | [
[
-0.0032634735107421875,
-0.05572509765625,
0.031036376953125,
-0.005580902099609375,
-0.0120849609375,
-0.01226806640625,
-0.01157379150390625,
-0.002124786376953125,
0.00972747802734375,
0.037811279296875,
-0.04461669921875,
-0.019195556640625,
-0.0464782714843... |
Apv/Flaubert2704_v1 | 2023-04-27T15:28:24.000Z | [
"transformers",
"tf",
"flaubert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Apv | null | null | Apv/Flaubert2704_v1 | 0 | 2 | transformers | 2023-04-27T15:00:03 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Apv/Flaubert2704_v1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apv/Flaubert2704_v1
This model is a fine-tuned version of [flaubert/flaubert_base_cased](https://huggingface.co/flaubert/flaubert_base_cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6198
- Validation Loss: 0.6599
- Train Accuracy: 0.7333
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 804, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
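The optimizer config above uses a `PolynomialDecay` schedule with power 1.0, i.e. a straight line from 2e-05 down to 0 over 804 steps. A stdlib sketch mirroring those config values (not the Keras internals):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=804,
                     end_lr=0.0, power=1.0):
    """Keras-style polynomial decay; step is clamped to decay_steps."""
    step = min(step, decay_steps)
    fraction = 1 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

print(polynomial_decay(0))    # 2e-05
print(polynomial_decay(804))  # 0.0
```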
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.9034 | 0.7880 | 0.5956 | 0 |
| 0.7819 | 0.7210 | 0.6933 | 1 |
| 0.6369 | 0.6599 | 0.7333 | 2 |
| 0.6341 | 0.6599 | 0.7333 | 3 |
| 0.6243 | 0.6599 | 0.7333 | 4 |
| 0.6198 | 0.6599 | 0.7333 | 5 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,993 | [
[
-0.046478271484375,
-0.041656494140625,
0.017120361328125,
0.000766754150390625,
-0.0167694091796875,
-0.0238800048828125,
-0.01016998291015625,
-0.0139923095703125,
0.01209259033203125,
0.007648468017578125,
-0.05157470703125,
-0.045623779296875,
-0.0517578125,... |
kaezf/Irony | 2023-04-28T13:58:36.000Z | [
"diffusers",
"en",
"zh",
"region:us"
] | null | kaezf | null | null | kaezf/Irony | 1 | 2 | diffusers | 2023-04-27T15:19:43 | ---
language:
- en
- zh
library_name: diffusers
---
# Overview
This model was trained with DreamBooth, based on the NovelAI model.
It is still being trained.
# 概览 (Overview)
This model was trained with DreamBooth, based on the NovelAI model.
It is still being trained.
[
0.00873565673828125,
-0.020538330078125,
-0.0007925033569335938,
0.01483917236328125,
-0.04315185546875,
0.016937255859375,
0.034393310546875,
-0.040863037109375,
0.0482177734375,
0.0299072265625,
-0.035858154296875,
0.0104217529296875,
-0.0192718505859375,
... |
FCameCode/BERT_model_new | 2023-05-06T17:43:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | FCameCode | null | null | FCameCode/BERT_model_new | 0 | 2 | transformers | 2023-04-27T17:01:22 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: BERT_model_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_model_new
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1206
- F1: 0.8301
## Model description
```python
import pandas as pd

train_df = pd.read_csv('/content/drive/My Drive/DATASETS/wiki_toxic/train.csv')
validation_df = pd.read_csv('/content/drive/My Drive/DATASETS/wiki_toxic/validation.csv')
# test_df = pd.read_csv('/content/drive/My Drive/wiki_toxic/test.csv')

frac = 0.9

# TRAIN: drop 90% of rows to shrink the training set
print(train_df.shape[0])  # number of rows before subsampling
rows_to_delete = train_df.sample(frac=frac, random_state=1)
train_df = train_df.drop(rows_to_delete.index)
print(train_df.shape[0])  # number of rows after subsampling

# VALIDATION: same 90% subsampling
print(validation_df.shape[0])
rows_to_delete = validation_df.sample(frac=frac, random_state=1)
validation_df = validation_df.drop(rows_to_delete.index)
print(validation_df.shape[0])
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 399 | 0.0940 | 0.8273 |
| 0.1262 | 2.0 | 798 | 0.1206 | 0.8301 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,046 | [
[
-0.0274658203125,
-0.045928955078125,
0.019561767578125,
0.0114593505859375,
-0.0153656005859375,
-0.04180908203125,
-0.00891876220703125,
-0.01079559326171875,
-0.000537872314453125,
0.0203399658203125,
-0.039398193359375,
-0.041412353515625,
-0.0416259765625,
... |
bekbote/autotrain-dl-phrasebank-53436126044 | 2023-04-27T17:15:58.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:bekbote/autotrain-data-dl-phrasebank",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | bekbote | null | null | bekbote/autotrain-dl-phrasebank-53436126044 | 0 | 2 | transformers | 2023-04-27T17:15:02 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- bekbote/autotrain-data-dl-phrasebank
co2_eq_emissions:
emissions: 0.4524765972761284
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 53436126044
- CO2 Emissions (in grams): 0.4525
## Validation Metrics
- Loss: 0.078
- Accuracy: 0.978
- Macro F1: 0.970
- Micro F1: 0.978
- Weighted F1: 0.978
- Macro Precision: 0.967
- Micro Precision: 0.978
- Weighted Precision: 0.978
- Macro Recall: 0.973
- Micro Recall: 0.978
- Weighted Recall: 0.978
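The card reports macro, micro, and weighted averages of the same metrics. On a toy multi-class example the difference is easy to see (the labels below are illustrative only):

```python
def per_class_f1(y_true, y_pred, label):
    """F1 for a single class, treating it as the positive label."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    labels = sorted(set(y_true))
    return sum(per_class_f1(y_true, y_pred, l) for l in labels) / len(labels)

def micro_f1(y_true, y_pred):
    # For single-label classification, micro F1 equals plain accuracy.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = ["pos", "pos", "pos", "neg", "neu", "neu"]
y_pred = ["pos", "pos", "neg", "neg", "neu", "pos"]
print(round(micro_f1(y_true, y_pred), 3))  # 0.667
```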
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/bekbote/autotrain-dl-phrasebank-53436126044
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bekbote/autotrain-dl-phrasebank-53436126044", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bekbote/autotrain-dl-phrasebank-53436126044", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,296 | [
[
-0.026824951171875,
-0.031280517578125,
0.0058441162109375,
0.0167999267578125,
-0.006359100341796875,
0.0085296630859375,
-0.003509521484375,
-0.01091766357421875,
0.0005369186401367188,
0.01380157470703125,
-0.048675537109375,
-0.03533935546875,
-0.06604003906... |
Pendo/finetuned-Sentiment-classfication-DISTILBERT-base-uncased-model | 2023-04-27T19:56:11.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Pendo | null | null | Pendo/finetuned-Sentiment-classfication-DISTILBERT-base-uncased-model | 0 | 2 | transformers | 2023-04-27T19:27:45 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuned-Sentiment-classfication-DISTILBERT-base-uncased-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-Sentiment-classfication-DISTILBERT-base-uncased-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5738
- Rmse: 0.6315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 7
- mixed_precision_training: Native AMP
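With `train_batch_size: 4` and `gradient_accumulation_steps: 16`, gradients from 16 micro-batches are summed before each optimizer step, giving the listed total batch size of 64. A toy stdlib sketch of that loop (the real loop lives inside the Trainer):

```python
def train_with_accumulation(micro_batches, accumulation_steps=16):
    """Accumulate per-micro-batch 'gradients' and count optimizer steps."""
    accumulated = 0.0
    optimizer_steps = 0
    for i, grad in enumerate(micro_batches, start=1):
        accumulated += grad          # backward() adds into the grad buffers
        if i % accumulation_steps == 0:
            optimizer_steps += 1     # optimizer.step(); zero_grad()
            accumulated = 0.0
    return optimizer_steps

micro_batch_size = 4
accumulation_steps = 16
print(micro_batch_size * accumulation_steps)                        # 64
print(train_with_accumulation([0.1] * 160, accumulation_steps=16))  # 10
```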
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6978 | 4.0 | 500 | 0.5738 | 0.6315 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,535 | [
[
-0.045745849609375,
-0.051361083984375,
0.012542724609375,
0.0179901123046875,
-0.034423828125,
-0.01654052734375,
-0.024139404296875,
0.003185272216796875,
0.004703521728515625,
0.0196685791015625,
-0.056396484375,
-0.050506591796875,
-0.060882568359375,
-0... |
JoelVIU/roberta-base-bne-jou-amazon_reviews_multi | 2023-04-27T21:16:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | JoelVIU | null | null | JoelVIU/roberta-base-bne-jou-amazon_reviews_multi | 0 | 2 | transformers | 2023-04-27T20:59:07 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-jou-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.9335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-jou-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2289
- Accuracy: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1988 | 1.0 | 1250 | 0.1670 | 0.9335 |
| 0.0989 | 2.0 | 2500 | 0.2289 | 0.9335 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,782 | [
[
-0.036346435546875,
-0.048126220703125,
0.01074981689453125,
0.014892578125,
-0.0295257568359375,
-0.0299530029296875,
-0.01494598388671875,
-0.0177764892578125,
0.01010894775390625,
0.0290679931640625,
-0.049102783203125,
-0.0447998046875,
-0.054962158203125,
... |
alikanakar/whisper-synthesized-turkish-8-hour-hlr | 2023-04-28T15:27:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | alikanakar | null | null | alikanakar/whisper-synthesized-turkish-8-hour-hlr | 0 | 2 | transformers | 2023-04-28T02:04:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-synthesized-turkish-8-hour-hlr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-synthesized-turkish-8-hour-hlr
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3824
- Wer: 49.2902
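The Wer column is the word error rate: the word-level edit distance between reference and hypothesis, divided by the reference length and expressed as a percentage — which is why values above 100 can appear in the training log below when the hypothesis inserts many extra words. A stdlib sketch (the example sentences are made up):

```python
def wer(reference, hypothesis):
    """Word error rate via Levenshtein distance on word lists, in percent."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return 100 * dp[len(ref)][len(hyp)] / len(ref)

print(wer("merhaba dünya", "merhaba dünya"))              # 0.0
print(round(wer("bir iki üç", "bir iki dört beş"), 1))    # 66.7
```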
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
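The `linear` scheduler with 500 warmup steps ramps the learning rate from 0 up to 1e-04 over the first 500 steps, then decays it linearly to 0 at step 4000. A stdlib sketch of that shape, matching the hyperparameters above (the Trainer's own implementation is `transformers.get_linear_schedule_with_warmup`):

```python
def linear_schedule_with_warmup(step, peak_lr=1e-04,
                                warmup_steps=500, total_steps=4000):
    """Linear warmup to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * (step / warmup_steps)
    fraction = (total_steps - step) / (total_steps - warmup_steps)
    return peak_lr * fraction

print(linear_schedule_with_warmup(0))     # 0.0
print(linear_schedule_with_warmup(500))   # 0.0001
print(linear_schedule_with_warmup(4000))  # 0.0
```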
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7481 | 0.52 | 100 | 0.2675 | 14.6834 |
| 0.1975 | 1.04 | 200 | 0.2534 | 13.2144 |
| 0.1388 | 1.56 | 300 | 0.2755 | 15.6647 |
| 0.1585 | 2.08 | 400 | 0.3080 | 14.6649 |
| 0.1153 | 2.6 | 500 | 0.3421 | 17.7447 |
| 0.1241 | 3.12 | 600 | 0.3570 | 16.8189 |
| 0.1093 | 3.65 | 700 | 0.3776 | 18.8125 |
| 0.09 | 4.17 | 800 | 0.3859 | 30.0518 |
| 0.0751 | 4.69 | 900 | 0.3874 | 17.3929 |
| 0.0758 | 5.21 | 1000 | 0.3987 | 20.0901 |
| 0.0602 | 5.73 | 1100 | 0.4017 | 17.1460 |
| 0.0568 | 6.25 | 1200 | 0.3824 | 15.6154 |
| 0.0454 | 6.77 | 1300 | 0.3926 | 15.8808 |
| 0.0433 | 7.29 | 1400 | 0.4146 | 16.3869 |
| 0.0341 | 7.81 | 1500 | 0.4078 | 16.1153 |
| 0.0295 | 8.33 | 1600 | 0.4192 | 17.1275 |
| 0.0274 | 8.85 | 1700 | 0.4140 | 16.3745 |
| 0.0246 | 9.38 | 1800 | 0.4077 | 21.0344 |
| 0.0211 | 9.9 | 1900 | 0.4003 | 19.8741 |
| 0.0149 | 10.42 | 2000 | 0.4054 | 108.7335 |
| 0.0172 | 10.94 | 2100 | 0.3917 | 20.6024 |
| 0.0138 | 11.46 | 2200 | 0.3942 | 889.4643 |
| 0.0108 | 11.98 | 2300 | 0.3906 | 55.0673 |
| 0.0099 | 12.5 | 2400 | 0.3834 | 29.9778 |
| 0.0067 | 13.02 | 2500 | 0.3947 | 34.5883 |
| 0.0045 | 13.54 | 2600 | 0.3940 | 20.9789 |
| 0.0035 | 14.06 | 2700 | 0.3911 | 15.6462 |
| 0.0031 | 14.58 | 2800 | 0.3905 | 18.3990 |
| 0.0018 | 15.1 | 2900 | 0.3919 | 16.3190 |
| 0.0011 | 15.62 | 3000 | 0.3906 | 18.0286 |
| 0.001 | 16.15 | 3100 | 0.3911 | 17.6521 |
| 0.0006 | 16.67 | 3200 | 0.3813 | 27.6879 |
| 0.0007 | 17.19 | 3300 | 0.3800 | 45.7536 |
| 0.0003 | 17.71 | 3400 | 0.3805 | 51.2529 |
| 0.0001 | 18.23 | 3500 | 0.3815 | 51.7282 |
| 0.0001 | 18.75 | 3600 | 0.3821 | 47.0065 |
| 0.0002 | 19.27 | 3700 | 0.3821 | 45.8585 |
| 0.0001 | 19.79 | 3800 | 0.3823 | 47.7904 |
| 0.0001 | 20.31 | 3900 | 0.3824 | 49.2594 |
| 0.0003 | 20.83 | 4000 | 0.3824 | 49.2902 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 3,862 | [
[
-0.0411376953125,
-0.039947509765625,
0.00978851318359375,
0.006565093994140625,
-0.011260986328125,
-0.00971221923828125,
-0.001979827880859375,
-0.0084228515625,
0.039031982421875,
0.028564453125,
-0.0447998046875,
-0.04400634765625,
-0.045867919921875,
-0... |
r10521708/albert-base-chinese-finetuned-qqp-TM-5x | 2023-05-01T06:44:48.000Z | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | text-classification | r10521708 | null | null | r10521708/albert-base-chinese-finetuned-qqp-TM-5x | 0 | 2 | transformers | 2023-04-28T05:59:46 | ---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: albert-base-chinese-finetuned-qqp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-chinese-finetuned-qqp
This model is a fine-tuned version of [ckiplab/albert-base-chinese](https://huggingface.co/ckiplab/albert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.19846130907535553
- Accuracy: 0.925531914893617
- F1: 0.9263157894736843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| No log | 1.0 | 30 | 0.691918 | 0.617021 | 0.647059 |
| No log | 2.0 | 60 | 0.629044 | 0.819149 | 0.813187 |
| No log | 3.0 | 90 | 0.340141 | 0.882979 | 0.893204 |
| No log | 4.0 | 120 | 0.198461 | 0.925532 | 0.926316 |
| No log | 5.0 | 150 | 0.171799 | 0.925532 | 0.926316 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.0.dev0
| 1,753 | [
[
-0.0302886962890625,
-0.0297393798828125,
0.0009136199951171875,
0.0222625732421875,
-0.019500732421875,
-0.0276336669921875,
-0.0037593841552734375,
-0.01483154296875,
0.0047607421875,
0.02996826171875,
-0.0462646484375,
-0.0469970703125,
-0.037872314453125,
... |
r10521708/albert-base-chinese-finetuned-qqp-FHTM-5x | 2023-05-01T06:36:21.000Z | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | text-classification | r10521708 | null | null | r10521708/albert-base-chinese-finetuned-qqp-FHTM-5x | 0 | 2 | transformers | 2023-04-28T06:27:01 | ---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: albert-base-chinese-finetuned-qqp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-chinese-finetuned-qqp
This model is a fine-tuned version of [ckiplab/albert-base-chinese](https://huggingface.co/ckiplab/albert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3385688364505768
- Accuracy: 0.8357142857142857
- F1: 0.8244274809160306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| No log | 1.0 | 30 | 0.654749 | 0.642857 | 0.719101 |
| No log | 2.0 | 60 | 0.614816 | 0.728571 | 0.707692 |
| No log | 3.0 | 90 | 0.443354 | 0.807143 | 0.802920 |
| No log | 4.0 | 120 | 0.338569 | 0.835714 | 0.824427 |
| No log | 5.0 | 150 | 0.339324 | 0.828571 | 0.806452 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.0.dev0
| 1,753 | [
[
-0.03106689453125,
-0.0289459228515625,
0.002796173095703125,
0.022125244140625,
-0.0185394287109375,
-0.0279083251953125,
-0.0032138824462890625,
-0.01342010498046875,
0.0048370361328125,
0.0306243896484375,
-0.0450439453125,
-0.0460205078125,
-0.0360107421875,... |
speedppc/autotrain-beeline-q-a-refi-purchase-unknown-53621126301 | 2023-04-28T08:39:10.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:speedppc/autotrain-data-beeline-q-a-refi-purchase-unknown",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | speedppc | null | null | speedppc/autotrain-beeline-q-a-refi-purchase-unknown-53621126301 | 0 | 2 | transformers | 2023-04-28T08:38:01 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- speedppc/autotrain-data-beeline-q-a-refi-purchase-unknown
co2_eq_emissions:
emissions: 0.00253926395613742
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 53621126301
- CO2 Emissions (in grams): 0.0025
## Validation Metrics
- Loss: 0.000
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/speedppc/autotrain-beeline-q-a-refi-purchase-unknown-53621126301
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("speedppc/autotrain-beeline-q-a-refi-purchase-unknown-53621126301", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("speedppc/autotrain-beeline-q-a-refi-purchase-unknown-53621126301", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
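# The `outputs` above hold raw logits; a softmax plus argmax turns them into a
# predicted class index. Minimal dependency-free sketch (the logit values below
# are illustrative, not from this model):
import math

def softmax(scores):
    # numerically stable softmax over a list of raw scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

example_logits = [1.2, -0.3, 0.4]   # stand-in for outputs.logits[0].tolist()
probs = softmax(example_logits)
predicted_class_id = max(range(len(probs)), key=probs.__getitem__)
# model.config.id2label[predicted_class_id] gives the human-readable label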
``` | 1,381 | [
[
-0.0325927734375,
-0.02203369140625,
0.01250457763671875,
0.01373291015625,
0.00026607513427734375,
0.0009298324584960938,
0.0012416839599609375,
-0.01528167724609375,
-0.00702667236328125,
0.0023250579833984375,
-0.05316162109375,
-0.0273895263671875,
-0.050689... |
zonias2510/clasificar_reviews | 2023-04-28T13:52:53.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | zonias2510 | null | null | zonias2510/clasificar_reviews | 0 | 2 | transformers | 2023-04-28T13:51:53 | ---
license: apache-2.0
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificar_reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificar_reviews
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1970
- Accuracy: 0.58
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 366 | 1.0342 | 0.532 |
| 1.1072 | 2.0 | 732 | 1.0594 | 0.572 |
| 0.6374 | 3.0 | 1098 | 1.1970 | 0.58 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,467 | [
[
-0.037078857421875,
-0.040771484375,
0.01197052001953125,
0.01285552978515625,
-0.0330810546875,
-0.038848876953125,
-0.0196990966796875,
-0.0246429443359375,
0.00814056396484375,
0.025299072265625,
-0.05401611328125,
-0.048370361328125,
-0.04144287109375,
-... |
IslemTouati/setfit_french | 2023-05-15T09:37:22.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | IslemTouati | null | null | IslemTouati/setfit_french | 0 | 2 | sentence-transformers | 2023-04-28T14:52:04 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# IslemTouati/setfit_french
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("IslemTouati/setfit_french")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,539 | [
[
-0.0088653564453125,
-0.059906005859375,
0.0287322998046875,
-0.01139068603515625,
-0.01080322265625,
-0.0145111083984375,
-0.0179901123046875,
-0.006427764892578125,
0.00388336181640625,
0.041717529296875,
-0.038787841796875,
-0.017822265625,
-0.042236328125,
... |
Gracevonoiste/distilbert-base-uncased-finetuned-cola | 2023-05-13T02:07:02.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Gracevonoiste | null | null | Gracevonoiste/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-28T15:55:52 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.4580724598795155
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4865
- Matthews Correlation: 0.4581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.543 | 1.0 | 856 | 0.4865 | 0.4581 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,746 | [
[
-0.02008056640625,
-0.053802490234375,
0.014251708984375,
0.022918701171875,
-0.0257720947265625,
-0.0099334716796875,
-0.007904052734375,
-0.004322052001953125,
0.0216522216796875,
0.0107421875,
-0.04266357421875,
-0.033111572265625,
-0.06158447265625,
-0.0... |
adamthekiwi/toki-pona | 2023-04-29T03:45:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | adamthekiwi | null | null | adamthekiwi/toki-pona | 0 | 2 | transformers | 2023-04-28T22:00:04 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: toki-pona
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# toki-pona
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7747 | 1.0 | 11978 | 1.6708 |
| 1.6538 | 2.0 | 23956 | 1.5588 |
| 1.6185 | 3.0 | 35934 | 1.5251 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
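For a causal LM like this distilgpt2 fine-tune, the cross-entropy validation loss converts directly to perplexity via exp(loss). A quick sketch using the final loss from the table above:

```python
import math

val_loss = 1.5251                  # final validation loss from the results table
perplexity = math.exp(val_loss)    # cross-entropy (in nats) -> perplexity
```

A perplexity near 4.6 means the model is, on average, about as uncertain as a uniform choice among roughly 4.6 tokens at each step.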
| 1,331 | [
[
-0.0248870849609375,
-0.04681396484375,
0.0193634033203125,
0.01023101806640625,
-0.034637451171875,
-0.03729248046875,
-0.00553131103515625,
-0.0028934478759765625,
0.004169464111328125,
0.023406982421875,
-0.050994873046875,
-0.04425048828125,
-0.0559997558593... |
damoref/clasificador-tweet-sentiment | 2023-04-28T22:55:48.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | damoref | null | null | damoref/clasificador-tweet-sentiment | 0 | 2 | transformers | 2023-04-28T22:55:12 | ---
license: apache-2.0
tags:
- classification
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: clasificador-tweet-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: stance_feminist
split: test
args: stance_feminist
metrics:
- name: Accuracy
type: accuracy
value: 0.6
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-tweet-sentiment
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9057
- Accuracy: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 75 | 0.7909 | 0.6596 |
| No log | 2.0 | 150 | 0.7958 | 0.6281 |
| No log | 3.0 | 225 | 0.9057 | 0.6 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,793 | [
[
-0.0271453857421875,
-0.045440673828125,
0.01393890380859375,
0.0246124267578125,
-0.0361328125,
-0.0172119140625,
-0.021820068359375,
-0.01611328125,
0.01226043701171875,
0.0165557861328125,
-0.05499267578125,
-0.06219482421875,
-0.05084228515625,
-0.028503... |
rhiga/distilbert-base-uncased-finetuned-emotion | 2023-04-29T00:17:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | rhiga | null | null | rhiga/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-29T00:02:53 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9185
- name: F1
type: f1
value: 0.9185586323168572
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2189
- Accuracy: 0.9185
- F1: 0.9186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7972 | 1.0 | 250 | 0.3171 | 0.903 | 0.8995 |
| 0.2464 | 2.0 | 500 | 0.2189 | 0.9185 | 0.9186 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
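The card reports both accuracy and a weighted F1. F1 itself is just the harmonic mean of precision and recall; a short, self-contained sketch (the precision/recall values are illustrative, not from this run):

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall; defined as 0 when both are 0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

score = f1_score(0.9, 0.8)   # illustrative per-class precision/recall
```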
| 1,848 | [
[
-0.03790283203125,
-0.04180908203125,
0.0144195556640625,
0.0218505859375,
-0.026092529296875,
-0.0192413330078125,
-0.01326751708984375,
-0.00811767578125,
0.01068878173828125,
0.00905609130859375,
-0.056488037109375,
-0.05169677734375,
-0.0596923828125,
-0... |
butchland/distilbert-base-uncased-finetuned-emotion | 2023-04-29T06:45:22.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | butchland | null | null | butchland/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-29T02:38:51 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9205
- name: F1
type: f1
value: 0.9205628267502548
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2210
- Accuracy: 0.9205
- F1: 0.9206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8789 | 1.0 | 250 | 0.3274 | 0.908 | 0.9059 |
| 0.255 | 2.0 | 500 | 0.2210 | 0.9205 | 0.9206 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.037811279296875,
-0.0413818359375,
0.01438140869140625,
0.0214385986328125,
-0.02630615234375,
-0.0189971923828125,
-0.0130615234375,
-0.00858306884765625,
0.0104827880859375,
0.00798797607421875,
-0.056671142578125,
-0.05230712890625,
-0.059967041015625,
... |
crumb/ColabInstruct-Z-1.1B | 2023-04-29T04:57:11.000Z | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | crumb | null | null | crumb/ColabInstruct-Z-1.1B | 0 | 2 | transformers | 2023-04-29T04:10:20 | ---
language:
- en
---
```
81,920 TRAIN EXAMPLES
2:28:41 TIME SPENT
1.977 FINAL TRAIN LOSS
<instruction> ... <input> ... <output>
<instruction> ... <output>
``` | 160 | [
[
0.0185394287109375,
-0.025970458984375,
0.049713134765625,
0.034942626953125,
-0.032470703125,
-0.0291290283203125,
0.0124664306640625,
0.0174407958984375,
-0.01425933837890625,
0.0242919921875,
-0.059539794921875,
0.00565338134765625,
-0.0262908935546875,
-... |
adamthekiwi/toki-pona-better | 2023-04-29T23:23:45.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | adamthekiwi | null | null | adamthekiwi/toki-pona-better | 0 | 2 | transformers | 2023-04-29T04:13:28 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: toki-pona-better
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# toki-pona-better
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5782
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9908 | 1.0 | 15916 | 1.8937 |
| 1.8501 | 2.0 | 31832 | 1.7470 |
| 1.7636 | 3.0 | 47748 | 1.6663 |
| 1.704 | 4.0 | 63664 | 1.6184 |
| 1.6656 | 5.0 | 79580 | 1.5890 |
| 1.6331 | 6.0 | 95496 | 1.5782 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,532 | [
[
-0.0290069580078125,
-0.04229736328125,
0.01458740234375,
0.007587432861328125,
-0.0305023193359375,
-0.03289794921875,
-0.007472991943359375,
-0.0030498504638671875,
0.001537322998046875,
0.0164031982421875,
-0.051544189453125,
-0.04107666015625,
-0.05624389648... |
huolongguo10/check_sec | 2023-07-17T03:00:12.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"code",
"en",
"dataset:huolongguo10/insecure",
"license:openrail",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | huolongguo10 | null | null | huolongguo10/check_sec | 0 | 2 | transformers | 2023-04-29T05:14:01 | ---
license: openrail
datasets:
- huolongguo10/insecure
language:
- en
library_name: transformers
pipeline_tag: text-classification
tags:
- code
---
# check_sec
Checks the security of web request parameters; supports many payload types (v0.1.2).
Note: this version is no longer maintained. Please use the tiny version instead.
## Label types
```
LABEL_0: secure
LABEL_1: insecure (may contain a payload)
```
## Usage
```python
import torch
from transformers import AutoModelForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('huolongguo10/check_sec_tiny')
model = AutoModelForSequenceClassification.from_pretrained('huolongguo10/check_sec_tiny', num_labels=2)

def check(text):
    # Tokenize and run a forward pass without tracking gradients
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_class_id = logits.argmax().item()
    print(f'{predicted_class_id}:{text}')
    return 'secure' if predicted_class_id == 0 else 'insecure'
``` | 916 | [
[
-0.0214691162109375,
-0.046417236328125,
-0.0152587890625,
0.0111541748046875,
-0.0440673828125,
-0.0081634521484375,
0.0078582763671875,
-0.034576416015625,
-0.00018894672393798828,
0.01253509521484375,
-0.035400390625,
-0.043121337890625,
-0.0570068359375,
... |
arikf/distilbert-base-uncased-finetuned-emotion | 2023-04-29T06:48:46.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | arikf | null | null | arikf/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-29T05:43:44 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9285439912301902
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.9285
- F1: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8381 | 1.0 | 250 | 0.3165 | 0.9075 | 0.9040 |
| 0.2524 | 2.0 | 500 | 0.2183 | 0.9285 | 0.9285 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.038482666015625,
-0.0411376953125,
0.0148162841796875,
0.021575927734375,
-0.026275634765625,
-0.01910400390625,
-0.0130767822265625,
-0.00856781005859375,
0.0108184814453125,
0.00888824462890625,
-0.05682373046875,
-0.051910400390625,
-0.059661865234375,
... |
Aitrepreneur/stable-vicuna-13B-GPTQ-4bit-128g | 2023-04-29T08:58:32.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Aitrepreneur | null | null | Aitrepreneur/stable-vicuna-13B-GPTQ-4bit-128g | 2 | 2 | transformers | 2023-04-29T08:50:58 | ---
license: cc-by-nc-sa-4.0
---
Just an easy-to-download copy of https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ
[
-0.02410888671875,
-0.0401611328125,
0.0221099853515625,
0.07110595703125,
-0.048858642578125,
-0.01486968994140625,
0.01039886474609375,
-0.024078369140625,
0.0462646484375,
0.03759765625,
-0.04925537109375,
-0.033966064453125,
-0.008087158203125,
0.0110321... |
ga21902298/bert-base-uncased-finetuned-cola | 2023-04-30T16:25:30.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | ga21902298 | null | null | ga21902298/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-29T11:54:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5579019759628809
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7128
- Matthews Correlation: 0.5579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.804671477280995e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 586
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4895 | 1.0 | 535 | 0.4845 | 0.5025 |
| 0.3003 | 2.0 | 1070 | 0.5757 | 0.5380 |
| 0.1814 | 3.0 | 1605 | 0.7128 | 0.5579 |
| 0.1133 | 4.0 | 2140 | 0.8350 | 0.5530 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
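Matthews correlation, the metric reported above, comes from the full binary confusion matrix rather than accuracy alone. A self-contained sketch with illustrative counts (not this model's actual predictions):

```python
import math

def matthews_corr(tp, fp, fn, tn):
    # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

mcc = matthews_corr(tp=60, fp=10, fn=15, tn=40)   # illustrative counts
```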
| 1,961 | [
[
-0.026336669921875,
-0.050750732421875,
0.008148193359375,
0.0169830322265625,
-0.0226287841796875,
-0.019012451171875,
-0.0159454345703125,
-0.01407623291015625,
0.0272674560546875,
0.0171661376953125,
-0.05126953125,
-0.032012939453125,
-0.052276611328125,
... |
GregLed/distilbert-base-uncased-finetuned-emotion | 2023-04-29T14:34:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | GregLed | null | null | GregLed/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-29T14:01:06 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.924743633535266
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2144
- Accuracy: 0.9245
- F1: 0.9247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8152 | 1.0 | 250 | 0.2978 | 0.9095 | 0.9072 |
| 0.2414 | 2.0 | 500 | 0.2144 | 0.9245 | 0.9247 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu116
- Datasets 2.8.0
- Tokenizers 0.10.3
| 1,803 | [
[
-0.03802490234375,
-0.04180908203125,
0.01419830322265625,
0.0226593017578125,
-0.0254974365234375,
-0.01934814453125,
-0.01343536376953125,
-0.008026123046875,
0.0105133056640625,
0.00827789306640625,
-0.056243896484375,
-0.051483154296875,
-0.05999755859375,
... |
intanm/mBERT-squad | 2023-04-29T16:24:14.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | intanm | null | null | intanm/mBERT-squad | 0 | 2 | transformers | 2023-04-29T15:18:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: mBERT-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT-squad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0138 | 1.0 | 5475 | 0.9567 |
| 0.7478 | 2.0 | 10950 | 0.9419 |
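Extractive QA models like this one emit start and end logits over the context tokens; the predicted answer is the best-scoring valid span (start ≤ end, bounded length). A toy sketch of that decoding step with made-up logits:

```python
def best_span(start_logits, end_logits, max_len=30):
    # Pick the (start, end) pair with the highest combined logit, start <= end
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

# Toy logits for a six-token context (illustrative numbers only)
start = [0.1, 2.5, 0.3, 0.2, 0.1, 0.0]
end   = [0.0, 0.2, 0.4, 3.1, 0.1, 0.0]
print(best_span(start, end))
```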
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,338 | [
[
-0.038299560546875,
-0.047943115234375,
0.01499176025390625,
0.02496337890625,
-0.0284423828125,
0.0096588134765625,
-0.010833740234375,
-0.01514434814453125,
0.00911712646484375,
0.026336669921875,
-0.0601806640625,
-0.040191650390625,
-0.044158935546875,
-... |
Bainbridge/bert-incl | 2023-04-29T15:29:57.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Bainbridge | null | null | Bainbridge/bert-incl | 0 | 2 | transformers | 2023-04-29T15:20:01 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-incl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-incl
This model is a fine-tuned version of [dbmdz/bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0004
- Acc: 1.0
- F1 Macro: 1.0
- F1 Weight: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | F1 Macro | F1 Weight |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|
| 0.684 | 0.67 | 20 | 0.6378 | 0.5588 | 0.3585 | 0.4007 |
| 0.4681 | 1.33 | 40 | 0.1762 | 0.9559 | 0.9547 | 0.9556 |
| 0.0989 | 2.0 | 60 | 0.0058 | 1.0 | 1.0 | 1.0 |
| 0.0032 | 2.67 | 80 | 0.0009 | 1.0 | 1.0 | 1.0 |
| 0.0011 | 3.33 | 100 | 0.0005 | 1.0 | 1.0 | 1.0 |
| 0.0007 | 4.0 | 120 | 0.0004 | 1.0 | 1.0 | 1.0 |
| 0.0007 | 4.67 | 140 | 0.0004 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,927 | [
[
-0.04400634765625,
-0.042694091796875,
0.00798797607421875,
0.01490020751953125,
-0.0242767333984375,
-0.0277862548828125,
-0.01168060302734375,
-0.0171661376953125,
0.0203704833984375,
0.02264404296875,
-0.06219482421875,
-0.0467529296875,
-0.045745849609375,
... |
yuceelege/bert-base-uncased-finetuned-cola | 2023-05-04T21:19:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | yuceelege | null | null | yuceelege/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-29T16:15:27 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.4913288678758369
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4656
- Matthews Correlation: 0.4913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4939 | 1.0 | 535 | 0.4656 | 0.4913 |
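Matthews correlation, the standard CoLA metric, is computed from the binary confusion matrix. A minimal sketch (the counts passed in at the end are invented for illustration, not this run's predictions):

```python
import math

def matthews_corr(tp, tn, fp, fn):
    # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical counts for a CoLA-sized validation split
print(round(matthews_corr(tp=560, tn=180, fp=140, fn=163), 4))
```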
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,722 | [
[
-0.0249176025390625,
-0.052642822265625,
0.01190185546875,
0.0210723876953125,
-0.027618408203125,
-0.02239990234375,
-0.0191650390625,
-0.01531219482421875,
0.0254974365234375,
0.0160980224609375,
-0.04925537109375,
-0.0312042236328125,
-0.050689697265625,
... |
Bainbridge/bert-xxl-incl | 2023-04-29T16:30:25.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Bainbridge | null | null | Bainbridge/bert-xxl-incl | 0 | 2 | transformers | 2023-04-29T16:25:52 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-xxl-incl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-xxl-incl
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-uncased](https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Acc: 1.0
- F1 Macro: 1.0
- F1 Weight: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | F1 Macro | F1 Weight |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|
| 0.6075 | 2.5 | 20 | 0.2080 | 0.9853 | 0.9851 | 0.9853 |
| 0.0448 | 5.0 | 40 | 0.0012 | 1.0 | 1.0 | 1.0 |
| 0.001 | 7.5 | 60 | 0.0005 | 1.0 | 1.0 | 1.0 |
| 0.0007 | 10.0 | 80 | 0.0005 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,699 | [
[
-0.043182373046875,
-0.0396728515625,
0.009674072265625,
0.01560211181640625,
-0.0274810791015625,
-0.0291748046875,
-0.01493072509765625,
-0.023773193359375,
0.0196380615234375,
0.022796630859375,
-0.0631103515625,
-0.043548583984375,
-0.04681396484375,
-0.... |
Aman0112/bert_emo_classifier | 2023-04-29T18:51:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Aman0112 | null | null | Aman0112/bert_emo_classifier | 0 | 2 | transformers | 2023-04-29T17:56:14 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: bert_emo_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_emo_classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9319 | 0.25 | 500 | 0.4107 |
| 0.3265 | 0.5 | 1000 | 0.3068 |
| 0.2458 | 0.75 | 1500 | 0.2721 |
| 0.2487 | 1.0 | 2000 | 0.2313 |
| 0.158 | 1.25 | 2500 | 0.2422 |
| 0.1796 | 1.5 | 3000 | 0.2162 |
| 0.145 | 1.75 | 3500 | 0.1951 |
| 0.1648 | 2.0 | 4000 | 0.1908 |
| 0.1048 | 2.25 | 4500 | 0.2399 |
| 0.1171 | 2.5 | 5000 | 0.2230 |
| 0.1116 | 2.75 | 5500 | 0.2244 |
| 0.1122 | 3.0 | 6000 | 0.2250 |
| 0.0713 | 3.25 | 6500 | 0.2616 |
| 0.0697 | 3.5 | 7000 | 0.2672 |
| 0.0775 | 3.75 | 7500 | 0.2748 |
| 0.0742 | 4.0 | 8000 | 0.2724 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,044 | [
[
-0.03570556640625,
-0.04254150390625,
0.012908935546875,
0.00244140625,
-0.0162506103515625,
-0.0220489501953125,
-0.01297760009765625,
-0.0129241943359375,
0.020843505859375,
0.01406097412109375,
-0.052886962890625,
-0.0545654296875,
-0.051513671875,
-0.005... |
MerlinTK/poca-SoccerTwos | 2023-04-29T19:55:00.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | MerlinTK | null | null | MerlinTK/poca-SoccerTwos | 0 | 2 | ml-agents | 2023-04-29T19:54:54 |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial that teaches you to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: MerlinTK/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,031 | [
[
-0.034393310546875,
-0.037139892578125,
0.01445770263671875,
0.026031494140625,
-0.01367950439453125,
0.01555633544921875,
0.0202484130859375,
-0.0211181640625,
0.054107666015625,
0.0241851806640625,
-0.05706787109375,
-0.060394287109375,
-0.031463623046875,
... |
KursunBilek/bert-base-uncased-finetuned-cola | 2023-05-09T23:48:43.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | KursunBilek | null | null | KursunBilek/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-29T20:04:42 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5338774230813111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4455
- Matthews Correlation: 0.5339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4811 | 1.0 | 535 | 0.4455 | 0.5339 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cpu
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,720 | [
[
-0.025390625,
-0.053314208984375,
0.0104827880859375,
0.0210723876953125,
-0.0285797119140625,
-0.02166748046875,
-0.0198211669921875,
-0.015472412109375,
0.0250091552734375,
0.0168304443359375,
-0.049957275390625,
-0.029571533203125,
-0.050628662109375,
-0.... |
Apv/Flaubert2904_v2 | 2023-04-29T20:55:44.000Z | [
"transformers",
"tf",
"flaubert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Apv | null | null | Apv/Flaubert2904_v2 | 0 | 2 | transformers | 2023-04-29T20:44:28 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Apv/Flaubert2904_v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apv/Flaubert2904_v2
This model is a fine-tuned version of [flaubert/flaubert_base_cased](https://huggingface.co/flaubert/flaubert_base_cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0288
- Validation Loss: 1.0387
- Train Accuracy: 0.5407
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 755, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
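The `PolynomialDecay` schedule above uses `power: 1.0`, so it is simply a linear ramp from the initial learning rate down to `end_learning_rate` over `decay_steps`. A minimal sketch of the same computation in plain Python (a re-derivation of the formula, not the Keras implementation itself):

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0, decay_steps=755, power=1.0):
    # Mirrors PolynomialDecay with cycle=False: clamp the step, then interpolate
    step = min(step, decay_steps)
    fraction = (1 - step / decay_steps) ** power
    return (initial_lr - end_lr) * fraction + end_lr

print(polynomial_decay(0))    # full learning rate at the start
print(polynomial_decay(755))  # fully decayed at the end
```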
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.2265 | 1.1301 | 0.5185 | 0 |
| 1.0377 | 1.0387 | 0.5407 | 1 |
| 1.0230 | 1.0387 | 0.5407 | 2 |
| 1.0235 | 1.0387 | 0.5407 | 3 |
| 1.0288 | 1.0387 | 0.5407 | 4 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,935 | [
[
-0.043701171875,
-0.0413818359375,
0.018463134765625,
0.0026645660400390625,
-0.01690673828125,
-0.024688720703125,
-0.01055908203125,
-0.0154876708984375,
0.0082244873046875,
0.006397247314453125,
-0.050567626953125,
-0.044281005859375,
-0.051483154296875,
... |
butchland/distilbert-base-uncased-finetuned-imdb | 2023-04-30T05:33:30.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | butchland | null | null | butchland/distilbert-base-uncased-finetuned-imdb | 0 | 2 | transformers | 2023-04-30T04:14:23 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93132
- name: F1
type: f1
value: 0.931310435665062
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2791
- Accuracy: 0.9313
- F1: 0.9313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3014 | 1.0 | 3125 | 0.2343 | 0.9198 | 0.9197 |
| 0.1645 | 2.0 | 6250 | 0.2791 | 0.9313 | 0.9313 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,832 | [
[
-0.0396728515625,
-0.0416259765625,
0.011077880859375,
0.007843017578125,
-0.0290374755859375,
-0.01226806640625,
-0.002483367919921875,
-0.004291534423828125,
0.0142669677734375,
0.0233917236328125,
-0.0550537109375,
-0.03973388671875,
-0.06182861328125,
-0... |
salwakr1/SADAF_test3 | 2023-05-07T07:27:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | salwakr1 | null | null | salwakr1/SADAF_test3 | 0 | 2 | transformers | 2023-04-30T07:41:07 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: SADAF_test3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SADAF_test3
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0061
- Macro F1: 0.7951
- Precision: 0.7874
- Recall: 0.8073
- Kappa: 0.7169
- Accuracy: 0.8073
## Model description
Relation identification on the explicit dataset.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 25
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Precision | Recall | Kappa | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 76 | 1.0304 | 0.6632 | 0.6222 | 0.7436 | 0.5881 | 0.7436 |
| No log | 2.0 | 152 | 0.8135 | 0.7191 | 0.6933 | 0.7800 | 0.6519 | 0.7800 |
| No log | 3.0 | 228 | 0.7417 | 0.7715 | 0.7663 | 0.8007 | 0.6973 | 0.8007 |
| No log | 4.0 | 304 | 0.7449 | 0.7807 | 0.7704 | 0.7957 | 0.6999 | 0.7957 |
| No log | 5.0 | 380 | 0.7447 | 0.7874 | 0.7770 | 0.8089 | 0.7128 | 0.8089 |
| No log | 6.0 | 456 | 0.8034 | 0.7654 | 0.7599 | 0.7750 | 0.6761 | 0.7750 |
| 0.7186 | 7.0 | 532 | 0.8874 | 0.7672 | 0.7669 | 0.7750 | 0.6785 | 0.7750 |
| 0.7186 | 8.0 | 608 | 0.8737 | 0.7830 | 0.7729 | 0.7974 | 0.7030 | 0.7974 |
| 0.7186 | 9.0 | 684 | 0.8964 | 0.7785 | 0.7675 | 0.7924 | 0.6978 | 0.7924 |
| 0.7186 | 10.0 | 760 | 0.9368 | 0.7863 | 0.7761 | 0.7998 | 0.7071 | 0.7998 |
| 0.7186 | 11.0 | 836 | 0.9717 | 0.7897 | 0.7803 | 0.8040 | 0.7119 | 0.8040 |
| 0.7186 | 12.0 | 912 | 0.9876 | 0.7883 | 0.7810 | 0.8007 | 0.7086 | 0.8007 |
| 0.7186 | 13.0 | 988 | 0.9893 | 0.7893 | 0.7812 | 0.8023 | 0.7106 | 0.8023 |
| 0.1542 | 14.0 | 1064 | 0.9999 | 0.7917 | 0.7841 | 0.8023 | 0.7109 | 0.8023 |
| 0.1542 | 15.0 | 1140 | 1.0061 | 0.7951 | 0.7874 | 0.8073 | 0.7169 | 0.8073 |
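The Kappa column is Cohen's kappa, which corrects raw accuracy for chance agreement between the predicted and true label distributions. A small sketch of the computation from a confusion matrix (the matrix below is invented for illustration, not this model's actual predictions):

```python
def cohens_kappa(confusion):
    # confusion[i][j] = number of examples with true class i predicted as class j
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / total ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical 3-class confusion matrix
matrix = [[50, 5, 5], [4, 60, 6], [3, 7, 60]]
print(round(cohens_kappa(matrix), 4))
```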
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 3,048 | [
[
-0.050567626953125,
-0.0462646484375,
0.01031494140625,
0.0033359527587890625,
-0.00518798828125,
-0.0061798095703125,
0.002216339111328125,
-0.006072998046875,
0.036468505859375,
0.02398681640625,
-0.04962158203125,
-0.05084228515625,
-0.04962158203125,
-0.... |
cruiser/bert_model_kaggle | 2023-04-30T08:55:12.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | cruiser | null | null | cruiser/bert_model_kaggle | 0 | 2 | transformers | 2023-04-30T08:05:40 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cruiser/bert_model_kaggle
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cruiser/bert_model_kaggle
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0986
- Train Accuracy: 0.3554
- Validation Loss: 1.0986
- Validation Accuracy: 0.3814
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 1.1128 | 0.3360 | 1.0986 | 0.3356 | 0 |
| 1.0990 | 0.3370 | 1.0986 | 0.3823 | 1 |
| 1.0996 | 0.3631 | 1.0986 | 0.3814 | 2 |
| 1.0986 | 0.3556 | 1.0986 | 0.3814 | 3 |
| 1.0986 | 0.3554 | 1.0986 | 0.3814 | 4 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,758 | [
[
-0.045257568359375,
-0.052093505859375,
0.030487060546875,
0.0003886222839355469,
-0.03143310546875,
-0.01666259765625,
-0.0146636962890625,
-0.0224456787109375,
0.02374267578125,
0.021026611328125,
-0.057861328125,
-0.047698974609375,
-0.059967041015625,
-0... |
cruiser/distilbert_model_kaggle | 2023-04-30T09:54:34.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | cruiser | null | null | cruiser/distilbert_model_kaggle | 0 | 2 | transformers | 2023-04-30T09:04:41 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cruiser/distilbert_model_kaggle
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cruiser/distilbert_model_kaggle
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0986
- Train Accuracy: 0.4049
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 1.1284 | 0.4020 | 0 |
| 1.0986 | 0.4049 | 1 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,521 | [
[
-0.037689208984375,
-0.059478759765625,
0.033477783203125,
0.003391265869140625,
-0.0369873046875,
-0.00763702392578125,
-0.0118408203125,
-0.01031494140625,
0.020660400390625,
0.006740570068359375,
-0.053985595703125,
-0.047149658203125,
-0.07122802734375,
... |
xinyixiuxiu/albert-base-v2-SST2-_incremental_pre_training | 2023-04-30T09:18:28.000Z | [
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | xinyixiuxiu | null | null | xinyixiuxiu/albert-base-v2-SST2-_incremental_pre_training | 0 | 2 | transformers | 2023-04-30T09:14:18 | ---
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-base-v2-SST2-_incremental_pre_training
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-base-v2-SST2-_incremental_pre_training
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2295
- Train Accuracy: 0.9080
- Validation Loss: 0.2354
- Validation Accuracy: 0.9243
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2295 | 0.9080 | 0.2354 | 0.9243 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
| 1,419 | [
[
-0.0281829833984375,
-0.0294036865234375,
0.025726318359375,
0.011749267578125,
-0.035125732421875,
-0.025848388671875,
-0.0025081634521484375,
-0.0237884521484375,
0.005779266357421875,
0.0195770263671875,
-0.05218505859375,
-0.04376220703125,
-0.05645751953125... |
cruiser/distilbert_model_kaggle_200_epoch | 2023-04-30T10:18:58.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | cruiser | null | null | cruiser/distilbert_model_kaggle_200_epoch | 0 | 2 | transformers | 2023-04-30T09:56:33 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cruiser/distilbert_model_kaggle_200_epoch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cruiser/distilbert_model_kaggle_200_epoch
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1017
- Train Accuracy: 0.3545
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 1.1017 | 0.3545 | 0 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,501 | [
[
-0.0384521484375,
-0.060760498046875,
0.032958984375,
0.006443023681640625,
-0.03643798828125,
-0.01032257080078125,
-0.01263427734375,
-0.0121917724609375,
0.016937255859375,
0.00498199462890625,
-0.05194091796875,
-0.04583740234375,
-0.0694580078125,
-0.02... |
huolongguo10/check_sec_tiny | 2023-07-17T03:07:14.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"code",
"en",
"dataset:huolongguo10/insecure",
"license:openrail",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | huolongguo10 | null | null | huolongguo10/check_sec_tiny | 1 | 2 | transformers | 2023-04-30T10:04:00 | ---
license: openrail
datasets:
- huolongguo10/insecure
language:
- en
library_name: transformers
pipeline_tag: text-classification
tags:
- code
---
# check_sec_tiny
Checks web parameter security and supports multiple payload types (v0.2.0-tiny).
## Types
```
LABEL_0: secure
LABEL_1: insecure (may contain a payload)
```
## Usage
```python
import torch
from transformers import BertTokenizer, AutoModelForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('huolongguo10/check_sec_tiny')
model = AutoModelForSequenceClassification.from_pretrained('huolongguo10/check_sec_tiny', num_labels=2)

def check(text):
    # Tokenize the input and classify it without tracking gradients
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_class_id = logits.argmax().item()
    print(f'{predicted_class_id}:{text}')
    return 'secure' if predicted_class_id == 0 else 'insecure'
``` | 906 | [
[
-0.0218353271484375,
-0.047271728515625,
-0.01151275634765625,
0.007511138916015625,
-0.0418701171875,
-0.010345458984375,
-0.0009222030639648438,
-0.030303955078125,
0.0016803741455078125,
0.00936126708984375,
-0.033935546875,
-0.03729248046875,
-0.056457519531... |
maksim2000153/bert-base-uncased-finetuned-ChemProt-corpus-re | 2023-06-21T13:09:45.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-classification | maksim2000153 | null | null | maksim2000153/bert-base-uncased-finetuned-ChemProt-corpus-re | 0 | 2 | transformers | 2023-04-30T10:38:45 | ---
language:
- en
widget:
- text: "The functional protein contains 1160 << amino acids >> with a large central [[ mucin domain ]], three consensus sites for glycosaminoglycan attachment, two epidermal growth factor-like repeats, a putative hyaluronan-binding motif, and a potential transmembrane domain near the C-terminal."
example_title: "PART-OF"
- text: "<< Theophylline >> exposure resulted in a sustained increase in mRNA expression for CysS and [[ PDE3A ]], but PDE4D gene expression was unchanged."
example_title: "REG-POS"
- text: "These results suggested that << DMBT >> could inhibit invasion and angiogenesis by downregulation of [[ VEGF ]] and MMP-9, resulting from the inhibition of Akt pathway."
example_title: "REG-NEG"
- text: "Colonic cyclooxygenase-2 and << interkeukin-1beta >> mRNA and spinal c-FOS mRNA expression were significantly down-regulated by ATB-429, but not by [[ mesalamine ]]."
example_title: "NOT"
---
# Model Card
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [ChemProt corpus: BioCreative VI](https://biocreative.bioinformatics.udel.edu/news/corpora/chemprot-corpus-biocreative-vi/) dataset.
<!--
## Model Details
### Model Description
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
[More Information Needed]
### Downstream Use [optional]
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
[More Information Needed]
### Training Procedure
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed]
#### Speeds, Sizes, Times [optional]
[More Information Needed]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
[More Information Needed]
#### Factors
[More Information Needed]
#### Metrics
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
[More Information Needed]
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
-->
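## Example usage

The widget examples above mark the chemical with `<< >>` and the gene/protein with `[[ ]]`. A minimal sketch of building input in that format and classifying it is below; the label names come from `model.config.id2label`, and the assumption that they match the widget titles (`PART-OF`, `REG-POS`, `REG-NEG`, `NOT`) should be verified against the checkpoint.

```python
def mark_pair(text: str, chemical: str, gene: str) -> str:
    """Wrap one entity pair in the markers the widget examples use:
    the chemical in << >> and the gene/protein in [[ ]]."""
    marked = text.replace(chemical, f"<< {chemical} >>", 1)
    return marked.replace(gene, f"[[ {gene} ]]", 1)

def predict_relation(text: str) -> str:
    # Imports kept inside the function so mark_pair stays dependency-free;
    # transformers and torch are assumed to be installed.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    model_id = "maksim2000153/bert-base-uncased-finetuned-ChemProt-corpus-re"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    with torch.no_grad():
        logits = model(**tokenizer(text, return_tensors="pt")).logits
    return model.config.id2label[logits.argmax(-1).item()]

# Example (requires downloading the checkpoint):
# text = mark_pair("Theophylline exposure increased mRNA expression for PDE3A.",
#                  "Theophylline", "PDE3A")
# print(predict_relation(text))
```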
| 3,858 | [
[
-0.0377197265625,
-0.038299560546875,
0.036376953125,
-0.0023345947265625,
-0.02874755859375,
-0.013427734375,
-0.0012979507446289062,
-0.0296630859375,
0.018798828125,
0.047027587890625,
-0.060760498046875,
-0.05242919921875,
-0.03887939453125,
-0.018905639... |
aysin/bert-base-uncased-finetuned-cola | 2023-05-06T17:44:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | aysin | null | null | aysin/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-30T11:34:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.555170
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4500
- Matthews Correlation: 0.555170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- dropout: 0.18
- max_length: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.4692 | 0.4912 |
| 0.4636 | 2.0 | 536 | 0.4500 | 0.5313 |
| 0.4636 | 3.0 | 804 | 0.4809 | 0.5233 |
| 0.01977 | 10.0 | - | - | 0.5552 |
- Average Training Accuracy: 99.553%
- Average Validation Accuracy: 82.69%
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
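## Example usage

CoLA is a binary acceptability task, so the model's two logits map to an acceptable/unacceptable judgment. The helper below makes that mapping explicit; note the `0 = unacceptable / 1 = acceptable` ordering follows the usual GLUE CoLA convention and is an assumption, not something stated in this card.

```python
def acceptability_label(logits, id2label=None):
    """Map the classifier's two raw logits to a CoLA-style label.

    The 0 = unacceptable / 1 = acceptable ordering is the common GLUE
    convention; verify it against model.config.id2label before relying on it.
    """
    id2label = id2label or {0: "unacceptable", 1: "acceptable"}
    best = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[best]

# With the fine-tuned model (transformers and torch assumed installed):
# from transformers import AutoTokenizer, AutoModelForSequenceClassification
# import torch
# tok = AutoTokenizer.from_pretrained("aysin/bert-base-uncased-finetuned-cola")
# mdl = AutoModelForSequenceClassification.from_pretrained("aysin/bert-base-uncased-finetuned-cola")
# with torch.no_grad():
#     logits = mdl(**tok("The boys was playing.", return_tensors="pt")).logits[0]
# print(acceptability_label(logits.tolist()))
```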
| 2,044 | [
[
-0.024261474609375,
-0.053436279296875,
0.00774383544921875,
0.015106201171875,
-0.021759033203125,
-0.019256591796875,
-0.0174102783203125,
-0.0162811279296875,
0.0261077880859375,
0.0159149169921875,
-0.0491943359375,
-0.034698486328125,
-0.054229736328125,
... |
salsabiilashifa11/gpt-cv | 2023-04-30T13:24:14.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | salsabiilashifa11 | null | null | salsabiilashifa11/gpt-cv | 0 | 2 | transformers | 2023-04-30T13:16:46 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt-cv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-cv
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 958 | [
[
-0.02978515625,
-0.05120849609375,
0.0257720947265625,
0.004734039306640625,
-0.0377197265625,
-0.02740478515625,
0.0015354156494140625,
-0.0180816650390625,
-0.00331878662109375,
0.02337646484375,
-0.05133056640625,
-0.034881591796875,
-0.055938720703125,
-... |
Yostaka/distilbert-base-uncased-finetuned-emotion | 2023-04-30T14:51:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Yostaka | null | null | Yostaka/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-30T13:20:14 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9235647957765342
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2155
- Accuracy: 0.9235
- F1: 0.9236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3117 | 0.9065 | 0.9034 |
| No log | 2.0 | 500 | 0.2155 | 0.9235 | 0.9236 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
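## Example usage

The `emotion` dataset has six classes, so the model emits six logits. A small sketch of turning them into a label and probability is below; the label ordering is the one commonly associated with the `emotion` dataset's class ids and should be checked against `model.config.id2label`.

```python
import math

EMOTION_LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]
# Ordering assumed to match the `emotion` dataset's class ids;
# verify against model.config.id2label before relying on it.

def top_emotion(logits):
    """Softmax the six raw logits and return (label, probability)."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    i = max(range(len(probs)), key=probs.__getitem__)
    return EMOTION_LABELS[i], probs[i]

# Inference sketch (transformers assumed installed):
# from transformers import pipeline
# clf = pipeline("text-classification",
#                model="Yostaka/distilbert-base-uncased-finetuned-emotion")
# print(clf("I can't wait to see you!"))
```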
| 1,848 | [
[
-0.03607177734375,
-0.042572021484375,
0.0133056640625,
0.0229034423828125,
-0.026123046875,
-0.01947021484375,
-0.01334381103515625,
-0.0107879638671875,
0.01114654541015625,
0.0083770751953125,
-0.056549072265625,
-0.051361083984375,
-0.059814453125,
-0.00... |
polymonyrks/distilbert-base-uncased-finetuned-emotion | 2023-09-29T10:46:39.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | polymonyrks | null | null | polymonyrks/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-30T14:56:42 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9255688957679862
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2237
- Accuracy: 0.9255
- F1: 0.9256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8556 | 1.0 | 250 | 0.3192 | 0.908 | 0.9055 |
| 0.2538 | 2.0 | 500 | 0.2237 | 0.9255 | 0.9256 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.038055419921875,
-0.0416259765625,
0.015106201171875,
0.0215301513671875,
-0.026519775390625,
-0.0191650390625,
-0.0130157470703125,
-0.00846099853515625,
0.010223388671875,
0.00799560546875,
-0.056884765625,
-0.05145263671875,
-0.05926513671875,
-0.00851... |
GowthamSubash/distilbert-base-uncased-finetuned-emotion | 2023-05-01T05:50:35.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | GowthamSubash | null | null | GowthamSubash/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-30T15:31:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9265254169154161
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2167
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8025 | 1.0 | 250 | 0.3076 | 0.9055 | 0.9032 |
| 0.2454 | 2.0 | 500 | 0.2167 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cpu
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.0380859375,
-0.042022705078125,
0.01432037353515625,
0.021484375,
-0.02642822265625,
-0.01983642578125,
-0.01384735107421875,
-0.00859832763671875,
0.0101776123046875,
0.00860595703125,
-0.05694580078125,
-0.050384521484375,
-0.0589599609375,
-0.008850097... |
ssamper/autotrain-deepentregable2-54196127214 | 2023-04-30T16:00:03.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:ssamper/autotrain-data-deepentregable2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | ssamper | null | null | ssamper/autotrain-deepentregable2-54196127214 | 0 | 2 | transformers | 2023-04-30T15:57:51 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ssamper/autotrain-data-deepentregable2
co2_eq_emissions:
emissions: 0.8730303110593549
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 54196127214
- CO2 Emissions (in grams): 0.8730
## Validation Metrics
- Loss: 0.079
- Accuracy: 0.986
- Macro F1: 0.986
- Micro F1: 0.986
- Weighted F1: 0.985
- Macro Precision: 0.991
- Micro Precision: 0.986
- Weighted Precision: 0.987
- Macro Recall: 0.983
- Micro Recall: 0.986
- Weighted Recall: 0.986
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ssamper/autotrain-deepentregable2-54196127214
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ssamper/autotrain-deepentregable2-54196127214", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ssamper/autotrain-deepentregable2-54196127214", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,304 | [
[
-0.033111572265625,
-0.023590087890625,
0.01031494140625,
0.0093994140625,
-0.0028781890869140625,
0.005687713623046875,
-0.004917144775390625,
-0.01300811767578125,
-0.0034084320068359375,
0.00506591796875,
-0.048126220703125,
-0.0305328369140625,
-0.0583496093... |
ga21902298/bert-base-uncased-optuna-finetuned-cola | 2023-04-30T19:17:33.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | ga21902298 | null | null | ga21902298/bert-base-uncased-optuna-finetuned-cola | 0 | 2 | transformers | 2023-04-30T16:45:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-optuna-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5329669602160133
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-optuna-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5046
- Matthews Correlation: 0.5330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.2576148764469367e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 586
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.4753 | 0.5220 |
| 0.4264 | 2.0 | 536 | 0.5046 | 0.5330 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,828 | [
[
-0.0263824462890625,
-0.05657958984375,
0.01142120361328125,
0.0139007568359375,
-0.0294036865234375,
-0.0223846435546875,
-0.0181884765625,
-0.0183868408203125,
0.0283203125,
0.0197906494140625,
-0.04986572265625,
-0.0290374755859375,
-0.047088623046875,
-0... |
haseebasif100/autotrain-mbti-lower2-54224127235 | 2023-04-30T17:48:18.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:haseebasif100/autotrain-data-mbti-lower2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | haseebasif100 | null | null | haseebasif100/autotrain-mbti-lower2-54224127235 | 0 | 2 | transformers | 2023-04-30T17:42:12 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- haseebasif100/autotrain-data-mbti-lower2
co2_eq_emissions:
emissions: 0.010354475219985048
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 54224127235
- CO2 Emissions (in grams): 0.0104
## Validation Metrics
- Loss: 0.946
- Accuracy: 0.723
- Macro F1: 0.723
- Micro F1: 0.723
- Weighted F1: 0.723
- Macro Precision: 0.727
- Micro Precision: 0.723
- Weighted Precision: 0.727
- Macro Recall: 0.723
- Micro Recall: 0.723
- Weighted Recall: 0.723
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/haseebasif100/autotrain-mbti-lower2-54224127235
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("haseebasif100/autotrain-mbti-lower2-54224127235", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("haseebasif100/autotrain-mbti-lower2-54224127235", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,314 | [
[
-0.03582763671875,
-0.0217742919921875,
0.016082763671875,
0.011993408203125,
0.0013666152954101562,
0.005481719970703125,
0.0002086162567138672,
-0.0137786865234375,
0.0021800994873046875,
0.007411956787109375,
-0.04864501953125,
-0.0300140380859375,
-0.0611572... |
alexisbaladon/HUHU-autotrain-regression-mean-prejudice | 2023-04-30T18:37:42.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"text-regression",
"es",
"dataset:alexisbaladon/autotrain-data-huhu-prejudice",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | alexisbaladon | null | null | alexisbaladon/HUHU-autotrain-regression-mean-prejudice | 0 | 2 | transformers | 2023-04-30T18:36:46 | ---
tags:
- autotrain
- text-regression
language:
- es
widget:
- text: "I love AutoTrain 🤗"
datasets:
- alexisbaladon/autotrain-data-huhu-prejudice
co2_eq_emissions:
emissions: 0.0016647063749410328
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 54234127237
- CO2 Emissions (in grams): 0.0017
## Validation Metrics
- Loss: 0.514
- MSE: 0.514
- MAE: 0.552
- R2: 0.268
- RMSE: 0.717
- Explained Variance: 0.270
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/alexisbaladon/autotrain-huhu-prejudice-54234127237
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alexisbaladon/autotrain-huhu-prejudice-54234127237", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alexisbaladon/autotrain-huhu-prejudice-54234127237", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,178 | [
[
-0.027740478515625,
-0.03179931640625,
0.0205841064453125,
0.0139007568359375,
0.0006041526794433594,
-0.0078582763671875,
0.00923919677734375,
0.00139617919921875,
-0.0011043548583984375,
0.016021728515625,
-0.055328369140625,
-0.038787841796875,
-0.05383300781... |
Winnie-Kay/Finetuned_BertModel_SentimentAnalysis | 2023-05-07T02:28:07.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Winnie-Kay | null | null | Winnie-Kay/Finetuned_BertModel_SentimentAnalysis | 0 | 2 | transformers | 2023-04-30T19:10:58 | Model Description
This model is a fine-tuned text classification model for sentiment analysis.
It was created from the bert-base-cased model in the Hugging Face library using the COVID-19 tweet dataset. | 207 | [
[
-0.0404052734375,
-0.05596923828125,
-0.0030059814453125,
0.0225677490234375,
-0.004119873046875,
-0.006031036376953125,
0.00754547119140625,
-0.0231475830078125,
0.01346588134765625,
0.0390625,
-0.06524658203125,
-0.048004150390625,
-0.02630615234375,
-0.02... |
RyotaroAbe/distilbert-base-uncased-finetuned-emotion | 2023-04-30T20:16:59.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | RyotaroAbe | null | null | RyotaroAbe/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-30T19:39:27 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.938
- name: F1
type: f1
value: 0.9382243153053892
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1637
- Accuracy: 0.938
- F1: 0.9382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1089 | 1.0 | 250 | 0.1883 | 0.928 | 0.9279 |
| 0.1092 | 2.0 | 500 | 0.1637 | 0.938 | 0.9382 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.037506103515625,
-0.04248046875,
0.01456451416015625,
0.0228424072265625,
-0.026336669921875,
-0.018707275390625,
-0.0133056640625,
-0.00885009765625,
0.01116180419921875,
0.00815582275390625,
-0.056732177734375,
-0.05181884765625,
-0.060150146484375,
-0.... |
yyassin/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-30T23:13:51.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | yyassin | null | null | yyassin/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-30T23:12:43 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 709.00 +/- 316.96
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yyassin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yyassin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga yyassin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
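The exploration settings above imply a linear epsilon-greedy schedule: epsilon decays from 1.0 (SB3's default initial value, not listed in the table) to `exploration_final_eps = 0.01` over the first `exploration_fraction * n_timesteps` = 100,000 steps, then stays flat. A minimal sketch:

```python
def epsilon_at(step,
               n_timesteps=1_000_000,
               exploration_fraction=0.1,
               final_eps=0.01,
               initial_eps=1.0):
    """Linear epsilon-greedy schedule implied by the hyperparameters above:
    epsilon falls from initial_eps to final_eps over
    exploration_fraction * n_timesteps steps, then stays constant.
    (initial_eps=1.0 is SB3's default, not listed in the table.)"""
    decay_steps = exploration_fraction * n_timesteps
    if step >= decay_steps:
        return final_eps
    return initial_eps + (final_eps - initial_eps) * (step / decay_steps)
```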
| 2,688 | [
[
-0.04119873046875,
-0.036224365234375,
0.0217132568359375,
0.0243682861328125,
-0.0098724365234375,
-0.0177459716796875,
0.01296234130859375,
-0.01427459716796875,
0.0132598876953125,
0.0247039794921875,
-0.071533203125,
-0.034912109375,
-0.0273895263671875,
... |
Multi-Domain-Expert-Learning/expert-arxiv | 2023-05-01T02:18:10.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Multi-Domain-Expert-Learning | null | null | Multi-Domain-Expert-Learning/expert-arxiv | 0 | 2 | transformers | 2023-05-01T00:23:34 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: expert-arxiv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# expert-arxiv
This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8797
- Accuracy: 0.5852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8752 | 0.01 | 200 | 1.9087 | 0.5805 |
| 1.8809 | 0.01 | 400 | 1.9018 | 0.5815 |
| 1.9102 | 0.02 | 600 | 1.8933 | 0.5829 |
| 1.8764 | 0.02 | 800 | 1.8851 | 0.5843 |
| 1.8694 | 0.03 | 1000 | 1.8797 | 0.5852 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,729 | [
[
-0.0280914306640625,
-0.048675537109375,
0.00457000732421875,
-0.0027561187744140625,
-0.024383544921875,
-0.03228759765625,
-0.006290435791015625,
-0.01508331298828125,
0.0167999267578125,
0.01450347900390625,
-0.04541015625,
-0.0401611328125,
-0.04733276367187... |
NathanS-HuggingFace/A2C-ReachDense | 2023-05-14T05:26:46.000Z | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | NathanS-HuggingFace | null | null | NathanS-HuggingFace/A2C-ReachDense | 0 | 2 | stable-baselines3 | 2023-05-01T02:07:06 | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.54 +/- 0.47
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal example (the checkpoint filename below is an assumption — check the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The checkpoint filename is an assumption; adjust to the actual file in the repo
checkpoint = load_from_hub("NathanS-HuggingFace/A2C-ReachDense", "A2C-ReachDense.zip")
model = A2C.load(checkpoint)
```
| 802 | [
[
-0.019744873046875,
-0.04742431640625,
-0.004787445068359375,
0.0469970703125,
-0.00018846988677978516,
-0.006023406982421875,
0.033172607421875,
-0.0249481201171875,
0.028045654296875,
0.042694091796875,
-0.06256103515625,
-0.0289764404296875,
-0.03277587890625... |
r10521708/albert-base-chinese-finetuned-qqp-FHTM-5x-weak | 2023-05-01T17:37:33.000Z | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | text-classification | r10521708 | null | null | r10521708/albert-base-chinese-finetuned-qqp-FHTM-5x-weak | 0 | 2 | transformers | 2023-05-01T03:04:11 | ---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: albert-base-chinese-finetuned-qqp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-chinese-finetuned-qqp
This model is a fine-tuned version of [ckiplab/albert-base-chinese](https://huggingface.co/ckiplab/albert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4448448717594147
- Accuracy: 0.95
- F1: 0.9473684210526316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
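The Adam settings listed above (betas=(0.9, 0.999), epsilon=1e-08) can be illustrated with a single-parameter update in pure Python — a sketch of the textbook rule, not the Trainer's exact internals:

```python
import math

def adam_step(param, grad, m, v, t, lr=2e-5,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One bias-corrected Adam update for a scalar parameter."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# On the very first step, the update is roughly lr * sign(grad):
p, m, v = adam_step(param=0.0, grad=0.5, m=0.0, v=0.0, t=1)
print(p)  # ≈ -2e-05
```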
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| No log | 1.0 | 30 | 0.517563 | 0.500000 | 0.000000 |
| No log | 2.0 | 60 | 0.416847 | 0.850000 | 0.869565 |
| No log | 3.0 | 90 | 0.444845 | 0.950000 | 0.947368 |
| No log | 4.0 | 120 | 0.430313 | 0.900000 | 0.888889 |
| No log | 5.0 | 150 | 0.439254 | 0.900000 | 0.888889 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.0.dev0
| 1,739 | [
[
-0.0304412841796875,
-0.028717041015625,
0.0014209747314453125,
0.0240325927734375,
-0.01800537109375,
-0.0273590087890625,
-0.0032215118408203125,
-0.01500701904296875,
0.0044097900390625,
0.031463623046875,
-0.045684814453125,
-0.0474853515625,
-0.037963867187... |
xinyixiuxiu/albert-large-v2-SST2-incremental_pre_training | 2023-05-01T03:33:02.000Z | [
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | xinyixiuxiu | null | null | xinyixiuxiu/albert-large-v2-SST2-incremental_pre_training | 0 | 2 | transformers | 2023-05-01T03:06:07 | ---
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-large-v2-SST2-incremental_pre_training
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-large-v2-SST2-incremental_pre_training
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1018
- Train Accuracy: 0.9653
- Validation Loss: 0.1717
- Validation Accuracy: 0.9392
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 2e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2284 | 0.9105 | 0.1978 | 0.9335 | 0 |
| 0.1384 | 0.9495 | 0.1822 | 0.9346 | 1 |
| 0.1018 | 0.9653 | 0.1717 | 0.9392 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
| 1,579 | [
[
-0.0283203125,
-0.026611328125,
0.0292205810546875,
0.01143646240234375,
-0.0361328125,
-0.0231475830078125,
-0.0085601806640625,
-0.023651123046875,
0.0101470947265625,
0.01788330078125,
-0.05096435546875,
-0.037750244140625,
-0.0576171875,
-0.0205841064453... |
DunnBC22/vit-base-patch16-224-in21k-Mango_leaf_Disease | 2023-06-10T23:40:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | DunnBC22 | null | null | DunnBC22/vit-base-patch16-224-in21k-Mango_leaf_Disease | 1 | 2 | transformers | 2023-05-01T03:41:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: vit-base-patch16-224-in21k-Mango_leaf_Disease
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1
language:
- en
pipeline_tag: image-classification
---
# vit-base-patch16-224-in21k-Mango_leaf_Disease
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
It achieves the following results on the evaluation set:
- Loss: 0.0189
- Accuracy: 1.0
- Weighted f1: 1.0
- Micro f1: 1.0
- Macro f1: 1.0
- Weighted recall: 1.0
- Micro recall: 1.0
- Macro recall: 1.0
- Weighted precision: 1.0
- Micro precision: 1.0
- Macro precision: 1.0
## Model description
This is a multiclass image classification model of mango leaf diseases.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Mango%20Leaf%20Disease%20Dataset/Mango_Leaf_Disease_ViT.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/aryashah2k/mango-leaf-disease-dataset
_Sample Images From Dataset:_

## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.0554 | 1.0 | 200 | 0.0359 | 0.9988 | 0.9988 | 0.9988 | 0.9987 | 0.9988 | 0.9988 | 0.9987 | 0.9988 | 0.9988 | 0.9987 |
| 0.0192 | 2.0 | 400 | 0.0189 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3 | 3,087 | [
[
-0.041839599609375,
-0.048004150390625,
0.0191192626953125,
0.0057373046875,
-0.023101806640625,
-0.0005412101745605469,
0.0096893310546875,
-0.01776123046875,
0.032379150390625,
0.028778076171875,
-0.048919677734375,
-0.040130615234375,
-0.04571533203125,
-... |
xinyixiuxiu/albert-xlarge-v2-SST2-incremental_pre_training | 2023-05-01T05:05:11.000Z | [
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | xinyixiuxiu | null | null | xinyixiuxiu/albert-xlarge-v2-SST2-incremental_pre_training | 0 | 2 | transformers | 2023-05-01T04:02:42 | ---
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-xlarge-v2-SST2-incremental_pre_training
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-xlarge-v2-SST2-incremental_pre_training
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1059
- Train Accuracy: 0.9630
- Validation Loss: 0.1832
- Validation Accuracy: 0.9381
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2528 | 0.8917 | 0.2056 | 0.9323 | 0 |
| 0.1384 | 0.9503 | 0.1707 | 0.9461 | 1 |
| 0.1059 | 0.9630 | 0.1832 | 0.9381 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
| 1,581 | [
[
-0.031280517578125,
-0.0268707275390625,
0.0267486572265625,
0.004650115966796875,
-0.03350830078125,
-0.024200439453125,
-0.002044677734375,
-0.0251007080078125,
0.00908660888671875,
0.0169525146484375,
-0.0570068359375,
-0.039642333984375,
-0.06005859375,
... |
gelabgaboo/results | 2023-05-01T06:46:55.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | gelabgaboo | null | null | gelabgaboo/results | 0 | 2 | transformers | 2023-05-01T04:09:56 | ---
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [Rostlab/prot_bert_bfd](https://huggingface.co/Rostlab/prot_bert_bfd) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 128
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 1,085 | [
[
-0.0440673828125,
-0.064208984375,
0.0030975341796875,
0.01329803466796875,
-0.034088134765625,
-0.0278778076171875,
-0.00836181640625,
-0.027984619140625,
0.00891876220703125,
0.026580810546875,
-0.061279296875,
-0.0374755859375,
-0.0438232421875,
-0.013061... |
mrovejaxd/multilingual_1_5 | 2023-05-01T06:08:04.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | mrovejaxd | null | null | mrovejaxd/multilingual_1_5 | 0 | 2 | transformers | 2023-05-01T06:05:05 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: multilingual_1_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual_1_5
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4579
- Accuracy: 0.43
- F1: 0.1480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,178 | [
[
-0.0308685302734375,
-0.043243408203125,
0.00928497314453125,
0.0283966064453125,
-0.0297088623046875,
-0.025543212890625,
-0.035797119140625,
-0.0216217041015625,
0.0126953125,
0.016876220703125,
-0.051422119140625,
-0.04498291015625,
-0.047454833984375,
-0... |
nandodeomkar/autotrain-fracture-detection-using-google-vit-base-patch-16-54382127388 | 2023-05-01T07:45:11.000Z | [
"transformers",
"pytorch",
"vit",
"image-classification",
"autotrain",
"vision",
"dataset:nandodeomkar/autotrain-data-fracture-detection-using-google-vit-base-patch-16",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | nandodeomkar | null | null | nandodeomkar/autotrain-fracture-detection-using-google-vit-base-patch-16-54382127388 | 0 | 2 | transformers | 2023-05-01T07:43:09 | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- nandodeomkar/autotrain-data-fracture-detection-using-google-vit-base-patch-16
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.7558780597193974
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 54382127388
- CO2 Emissions (in grams): 0.7559
## Validation Metrics
- Loss: 0.378
- Accuracy: 0.846
- Precision: 1.000
- Recall: 0.500
- AUC: 0.917
- F1: 0.667 | 774 | [
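The reported F1 is consistent with the precision/recall pair above, since F1 is their harmonic mean:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(1.000, 0.500), 3))  # 0.667
```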
[
-0.0175323486328125,
-0.017364501953125,
0.021728515625,
0.00044536590576171875,
0.004817962646484375,
0.0006318092346191406,
0.0198822021484375,
-0.01328277587890625,
-0.0175323486328125,
0.01314544677734375,
-0.037933349609375,
-0.04656982421875,
-0.0431213378... |
sajal2692/distilbert-base-uncased-finetuned_emotion | 2023-05-06T02:06:46.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | sajal2692 | null | null | sajal2692/distilbert-base-uncased-finetuned_emotion | 0 | 2 | transformers | 2023-05-01T08:45:22 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned_emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9355
- name: F1
type: f1
value: 0.9355276128027006
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned_emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1585
- Accuracy: 0.9355
- F1: 0.9355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
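`lr_scheduler_type: linear` with no warmup means the learning rate decays linearly from its initial value to zero over the 750 training steps shown in the table below. A pure-Python sketch of that schedule (an illustration, not the Trainer's exact implementation):

```python
def linear_lr(step, total_steps=750, initial_lr=2e-5):
    """Linear decay from initial_lr to 0 over total_steps (no warmup)."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return initial_lr * remaining

print(linear_lr(0))    # 2e-05
print(linear_lr(750))  # 0.0
```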
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8173 | 1.0 | 250 | 0.2842 | 0.915 | 0.9130 |
| 0.2224 | 2.0 | 500 | 0.1760 | 0.9295 | 0.9295 |
| 0.1511 | 3.0 | 750 | 0.1585 | 0.9355 | 0.9355 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,919 | [
[
-0.03759765625,
-0.0411376953125,
0.01416778564453125,
0.0202789306640625,
-0.024261474609375,
-0.0181732177734375,
-0.0111083984375,
-0.0081329345703125,
0.01177215576171875,
0.00872039794921875,
-0.05645751953125,
-0.052764892578125,
-0.06024169921875,
-0.... |
mrovejaxd/goemotions_bertspannish | 2023-05-01T10:19:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:go_emotions",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | mrovejaxd | null | null | mrovejaxd/goemotions_bertspannish | 0 | 2 | transformers | 2023-05-01T09:15:51 | ---
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- accuracy
- f1
model-index:
- name: goemotions_bertspannish
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: go_emotions
config: simplified
split: test
args: simplified
metrics:
- name: Accuracy
type: accuracy
value: 0.43
- name: F1
type: f1
value: 0.13822367984075262
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goemotions_bertspannish
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0321
- Accuracy: 0.43
- F1: 0.1382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,553 | [
[
-0.0279388427734375,
-0.03997802734375,
0.012908935546875,
0.0249786376953125,
-0.03680419921875,
-0.0274200439453125,
-0.03033447265625,
-0.0224456787109375,
0.023773193359375,
0.0122833251953125,
-0.07159423828125,
-0.044830322265625,
-0.046630859375,
-0.0... |
Saitarun04/distilbert-base-uncased-finetuned-emotion | 2023-05-02T04:58:04.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Saitarun04 | null | null | Saitarun04/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-01T09:27:31 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9246439423793078
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2112
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8179 | 1.0 | 250 | 0.3117 | 0.902 | 0.8987 |
| 0.2415 | 2.0 | 500 | 0.2112 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.03814697265625,
-0.041412353515625,
0.0157623291015625,
0.0218505859375,
-0.02642822265625,
-0.0192718505859375,
-0.01287841796875,
-0.00858306884765625,
0.0107421875,
0.00860595703125,
-0.05694580078125,
-0.05157470703125,
-0.059356689453125,
-0.00836181... |
xinyixiuxiu/albert-base-v2-SST2-incremental_pre_training | 2023-05-01T09:48:50.000Z | [
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | xinyixiuxiu | null | null | xinyixiuxiu/albert-base-v2-SST2-incremental_pre_training | 0 | 2 | transformers | 2023-05-01T09:35:44 | ---
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-base-v2-SST2-incremental_pre_training
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-base-v2-SST2-incremental_pre_training
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1124
- Train Accuracy: 0.9606
- Validation Loss: 0.2290
- Validation Accuracy: 0.9106
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2793 | 0.8841 | 0.2209 | 0.9197 | 0 |
| 0.1514 | 0.9449 | 0.2252 | 0.9094 | 1 |
| 0.1124 | 0.9606 | 0.2290 | 0.9106 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
| 1,577 | [
[
-0.0296630859375,
-0.026947021484375,
0.023895263671875,
0.009613037109375,
-0.0328369140625,
-0.0246124267578125,
-0.00218963623046875,
-0.021636962890625,
0.006832122802734375,
0.0191650390625,
-0.053192138671875,
-0.042755126953125,
-0.057769775390625,
-0... |
petarpepi/all-MiniLM-L12-v2-twitter | 2023-05-01T09:44:16.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | petarpepi | null | null | petarpepi/all-MiniLM-L12-v2-twitter | 0 | 2 | sentence-transformers | 2023-05-01T09:38:23 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# petarpepi/all-MiniLM-L12-v2-sentiment140-twitter
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("petarpepi/all-MiniLM-L12-v2-sentiment140-twitter")
# Run inference
preds = model(["that pizza was the coolest", "pineapple on pizza is the worst 🤮"])
# class 1 = positive
# class 0 = negative
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,625 | [
[
-0.00986480712890625,
-0.06103515625,
0.0251007080078125,
-0.0054168701171875,
-0.0142822265625,
-0.01415252685546875,
-0.022796630859375,
-0.01128387451171875,
-0.0023403167724609375,
0.020538330078125,
-0.047088623046875,
-0.0241241455078125,
-0.03207397460937... |
petarpepi/all-MiniLM-L12-v2-amazon-reviews | 2023-05-01T10:18:59.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | petarpepi | null | null | petarpepi/all-MiniLM-L12-v2-amazon-reviews | 0 | 2 | sentence-transformers | 2023-05-01T10:15:51 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# petarpepi/all-MiniLM-L12-v2-amazon-reviews
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("petarpepi/all-MiniLM-L12-v2-amazon-reviews")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,573 | [
[
-0.01023101806640625,
-0.054931640625,
0.0254974365234375,
-0.0149383544921875,
-0.0177154541015625,
-0.01505279541015625,
-0.01526641845703125,
-0.0169830322265625,
-0.00339508056640625,
0.026580810546875,
-0.044097900390625,
-0.011199951171875,
-0.031280517578... |
mrovejaxd/goemotions_bertmultilingual | 2023-05-01T11:00:53.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:go_emotions",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | mrovejaxd | null | null | mrovejaxd/goemotions_bertmultilingual | 0 | 2 | transformers | 2023-05-01T10:49:49 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- accuracy
- f1
model-index:
- name: goemotions_bertmultilingual
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: go_emotions
config: simplified
split: test
args: simplified
metrics:
- name: Accuracy
type: accuracy
value: 0.39666666666666667
- name: F1
type: f1
value: 0.08779206699732206
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goemotions_bertmultilingual
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3859
- Accuracy: 0.3967
- F1: 0.0878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
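The Adam settings listed above can be sketched in plain Python — a minimal illustration of the update rule with these betas and epsilon, not the actual `Trainer` internals:

```python
import math

def adam_step(theta, grad, m, v, t, lr=2e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update using the hyperparameters listed above (illustrative only)."""
    m = beta1 * m + (1 - beta1) * grad         # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad  # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)               # bias correction for step t (1-indexed)
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta**2 / 2, so grad = theta
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    theta, m, v = adam_step(theta, theta, m, v, t)
print(theta)  # slightly below 1.0: Adam's normalized steps are roughly lr each
```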
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,580 | [
[
-0.025360107421875,
-0.039398193359375,
0.00962066650390625,
0.029571533203125,
-0.036590576171875,
-0.03057861328125,
-0.033782958984375,
-0.020965576171875,
0.0160980224609375,
0.01256561279296875,
-0.05889892578125,
-0.042327880859375,
-0.04547119140625,
... |
alibidaran/distilbert-base-uncased-finetuned-emotion_detection | 2023-05-01T11:55:29.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"text_classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | alibidaran | null | null | alibidaran/distilbert-base-uncased-finetuned-emotion_detection | 0 | 2 | transformers | 2023-05-01T11:47:15 | ---
license: apache-2.0
tags:
- text_classification
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion_detection
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9210457518994596
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion_detection
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2211
- Accuracy: 0.921
- F1: 0.9210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7979 | 1.0 | 250 | 0.3147 | 0.906 | 0.9041 |
| 0.2464 | 2.0 | 500 | 0.2211 | 0.921 | 0.9210 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,888 | [
[
-0.04315185546875,
-0.040618896484375,
0.0251617431640625,
0.01763916015625,
-0.028900146484375,
-0.0247802734375,
-0.01004791259765625,
-0.01052093505859375,
0.007160186767578125,
0.0081329345703125,
-0.05499267578125,
-0.057647705078125,
-0.06158447265625,
... |
cruiser/final_model | 2023-05-01T13:44:30.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | cruiser | null | null | cruiser/final_model | 0 | 2 | transformers | 2023-05-01T12:56:21 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cruiser/final_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cruiser/final_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0316
- Validation Loss: 1.1405
- Train Accuracy: 0.7835
- Epoch: 10
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 1e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 34090, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 250, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
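The nested `WarmUp`/`PolynomialDecay` config above boils down to a simple shape: linear warmup for 250 steps, then (since `power=1.0`) linear decay to zero over 34090 steps. A rough sketch of the resulting schedule — not the exact Keras implementation, which may offset the step count differently:

```python
def lr_at(step, init_lr=1e-05, warmup_steps=250, decay_steps=34090):
    """Approximate learning rate under the schedule configured above."""
    if step < warmup_steps:
        return init_lr * step / warmup_steps            # linear warmup
    frac = min(1.0, (step - warmup_steps) / decay_steps)
    return init_lr * (1.0 - frac)                       # linear (power=1.0) decay to 0

print(lr_at(125), lr_at(250), lr_at(250 + 34090))
```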
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6358 | 0.5405 | 0.7821 | 0 |
| 0.4380 | 0.5118 | 0.7844 | 1 |
| 0.3382 | 0.5437 | 0.7960 | 2 |
| 0.2327 | 0.6227 | 0.7878 | 3 |
| 0.1581 | 0.7234 | 0.7795 | 4 |
| 0.1104 | 0.8340 | 0.7832 | 5 |
| 0.0826 | 0.8824 | 0.7778 | 6 |
| 0.0608 | 1.0342 | 0.7827 | 7 |
| 0.0456 | 1.0815 | 0.7818 | 8 |
| 0.0396 | 1.0829 | 0.7852 | 9 |
| 0.0316 | 1.1405 | 0.7835 | 10 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 2,445 | [
[
-0.04925537109375,
-0.03814697265625,
0.0298309326171875,
-0.00550079345703125,
-0.0257110595703125,
-0.004253387451171875,
-0.01361083984375,
-0.01495361328125,
0.0272979736328125,
0.01224517822265625,
-0.051910400390625,
-0.052459716796875,
-0.0533447265625,
... |
Attakuan/bert-base-uncased-finetuned-cola | 2023-05-01T19:17:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Attakuan | null | null | Attakuan/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-01T13:24:15 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5788207437251082
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2150
- Matthews Correlation: 0.5788
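Matthews correlation, the CoLA metric reported above, is computed from the binary confusion counts. A small self-contained sketch of the formula — illustrative, not the `evaluate`/scikit-learn implementation:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """MCC = (tp*tn - fp*fn) / sqrt((tp+fp)(tp+fn)(tn+fp)(tn+fn)); 0.0 when undefined."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef([0, 1, 0, 1], [0, 1, 0, 1]))  # 1.0 for perfect agreement
print(matthews_corrcoef([0, 1, 0, 1], [1, 0, 1, 0]))  # -1.0 for total disagreement
```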
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.1521858230688484e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4531 | 1.0 | 1069 | 0.4530 | 0.5275 |
| 0.3349 | 2.0 | 2138 | 0.5377 | 0.5475 |
| 0.2624 | 3.0 | 3207 | 0.8287 | 0.5574 |
| 0.1903 | 4.0 | 4276 | 0.8971 | 0.5525 |
| 0.1356 | 5.0 | 5345 | 0.9994 | 0.5662 |
| 0.0861 | 6.0 | 6414 | 1.0434 | 0.5731 |
| 0.0576 | 7.0 | 7483 | 1.1683 | 0.5735 |
| 0.0504 | 8.0 | 8552 | 1.2150 | 0.5788 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,256 | [
[
-0.0295257568359375,
-0.0496826171875,
0.00823211669921875,
0.01617431640625,
-0.019622802734375,
-0.016845703125,
-0.0124053955078125,
-0.0121917724609375,
0.029388427734375,
0.0155792236328125,
-0.051727294921875,
-0.035430908203125,
-0.053497314453125,
-0... |
cruiser/twitter_roberta_final_model | 2023-05-01T14:50:59.000Z | [
"transformers",
"tf",
"xlm-roberta",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | cruiser | null | null | cruiser/twitter_roberta_final_model | 0 | 2 | transformers | 2023-05-01T13:50:44 | ---
tags:
- generated_from_keras_callback
model-index:
- name: cruiser/twitter_roberta_final_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cruiser/twitter_roberta_final_model
This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0648
- Validation Loss: 1.0107
- Train Accuracy: 0.7943
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 1e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 34090, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 250, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5482 | 0.4911 | 0.7991 | 0 |
| 0.4389 | 0.5053 | 0.7972 | 1 |
| 0.3567 | 0.5357 | 0.7935 | 2 |
| 0.2774 | 0.6193 | 0.7872 | 3 |
| 0.2080 | 0.6732 | 0.7989 | 4 |
| 0.1545 | 0.7639 | 0.7889 | 5 |
| 0.1162 | 0.8836 | 0.7855 | 6 |
| 0.0943 | 0.9301 | 0.7903 | 7 |
| 0.0768 | 0.9647 | 0.7929 | 8 |
| 0.0648 | 1.0107 | 0.7943 | 9 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 2,454 | [
[
-0.045501708984375,
-0.036102294921875,
0.0256195068359375,
-0.003116607666015625,
-0.026458740234375,
-0.007740020751953125,
-0.015899658203125,
-0.0139923095703125,
0.0243377685546875,
0.0092926025390625,
-0.054473876953125,
-0.05511474609375,
-0.0577392578125... |
caffsean/chilenoGPT | 2023-05-02T20:47:33.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | caffsean | null | null | caffsean/chilenoGPT | 0 | 2 | transformers | 2023-05-01T15:48:29 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: chilenoGPT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chilenoGPT
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30414
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.4985 | 1.0 | 3802 | 4.3106 |
| 4.1063 | 2.0 | 7604 | 3.9798 |
| 3.8797 | 3.0 | 11406 | 3.7886 |
| 3.7554 | 4.0 | 15208 | 3.6645 |
| 3.616 | 5.0 | 19010 | 3.5792 |
| 3.534 | 6.0 | 22812 | 3.5152 |
| 3.4631 | 7.0 | 26614 | 3.4632 |
| 3.3867 | 8.0 | 30416 | 3.4330 |
| 3.2781 | 9.0 | 34218 | 3.3975 |
| 3.2074 | 10.0 | 38020 | 3.3921 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 1,698 | [
[
-0.034210205078125,
-0.0445556640625,
0.0201873779296875,
0.018890380859375,
-0.029205322265625,
-0.0283050537109375,
-0.009185791015625,
-0.0085601806640625,
-0.00424957275390625,
0.0145111083984375,
-0.05712890625,
-0.036865234375,
-0.0537109375,
-0.018600... |
michaelfeil/ct2fast-flan-ul2 | 2023-05-19T10:37:59.000Z | [
"transformers",
"ctranslate2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | michaelfeil | null | null | michaelfeil/ct2fast-flan-ul2 | 6 | 2 | transformers | 2023-05-01T16:05:00 | ---
license: apache-2.0
tags:
- ctranslate2
---
# Fast-Inference with Ctranslate2
Speed up inference by 2x-8x using int8 inference in C++.
This is a quantized version of [google/flan-ul2](https://huggingface.co/google/flan-ul2).
```bash
pip install "hf_hub_ctranslate2>=2.0.6" "ctranslate2>=3.13.0"
```
Checkpoint compatible to [ctranslate2](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
model_name = "michaelfeil/ct2fast-flan-ul2"
model = TranslatorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16"
)
outputs = model.generate(
text=["How do you call a fast Flan-ingo?", "Translate to german: How are you doing?"],
min_decoding_length=24,
max_decoding_length=32,
max_input_length=512,
beam_size=5
)
print(outputs)
```
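The claimed speedup comes from storing and multiplying weights as 8-bit integers. A toy sketch of symmetric int8 quantization — the idea only, not CTranslate2's actual kernels:

```python
def quantize_int8(weights):
    """Map floats to [-127, 127] integers with one shared scale (illustrative only)."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.5, -1.0, 0.25, 0.75]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, err)  # round-trip error stays around scale / 2 per weight
```

Int8 storage is 4x smaller than float32, and int8 matrix multiplies run on faster integer units; the small round-trip error is what the "quantized" in this repo's name refers to.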
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
[
-0.0218353271484375,
-0.047637939453125,
0.04766845703125,
0.043548583984375,
-0.0276336669921875,
0.0014963150024414062,
-0.0180511474609375,
-0.04840087890625,
0.00392913818359375,
0.030181884765625,
0.005710601806640625,
-0.0163116455078125,
-0.04046630859375... |
jfforero/a_different_name | 2023-05-31T20:12:34.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jfforero | null | null | jfforero/a_different_name | 0 | 2 | transformers | 2023-05-01T16:47:11 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: a_different_name
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# a_different_name
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
| 934 | [
[
-0.037811279296875,
-0.04742431640625,
0.0209808349609375,
0.003017425537109375,
-0.04736328125,
-0.0222320556640625,
-0.0115966796875,
-0.027618408203125,
0.010711669921875,
0.03399658203125,
-0.0518798828125,
-0.04034423828125,
-0.06463623046875,
-0.026077... |
roxanmlr/distilbert-base-uncased-finetuned-emotion | 2023-05-01T20:14:19.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | roxanmlr | null | null | roxanmlr/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-01T19:57:44 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9251879205114556
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2273
- Accuracy: 0.925
- F1: 0.9252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8401 | 1.0 | 250 | 0.3279 | 0.9025 | 0.8981 |
| 0.2575 | 2.0 | 500 | 0.2273 | 0.925 | 0.9252 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.038116455078125,
-0.041290283203125,
0.015899658203125,
0.0208587646484375,
-0.026580810546875,
-0.019256591796875,
-0.0128021240234375,
-0.00864410400390625,
0.01062774658203125,
0.00867462158203125,
-0.0570068359375,
-0.05206298828125,
-0.0595703125,
-0... |
jinfwhuang/test_trainer | 2023-05-01T22:49:02.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jinfwhuang | null | null | jinfwhuang/test_trainer | 0 | 2 | transformers | 2023-05-01T22:33:38 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes
metrics:
- accuracy
model-index:
- name: test_trainer
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: rotten_tomatoes
type: rotten_tomatoes
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.501
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the rotten_tomatoes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7153
- Accuracy: 0.501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6412 | 0.01 | 1 | 0.7288 | 0.501 |
| 0.6171 | 0.02 | 2 | 0.7083 | 0.501 |
| 0.5805 | 0.02 | 3 | 0.7153 | 0.501 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,744 | [
[
-0.034332275390625,
-0.043060302734375,
0.018341064453125,
0.0045013427734375,
-0.02215576171875,
-0.02850341796875,
-0.013885498046875,
-0.004413604736328125,
0.01525115966796875,
0.02557373046875,
-0.05450439453125,
-0.043182373046875,
-0.052886962890625,
... |
taegyun/distilbert-base-uncased-finetuned-emotion | 2023-05-01T23:09:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | taegyun | null | null | taegyun/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-01T22:53:50 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9221186592426542
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2225
- Accuracy: 0.922
- F1: 0.9221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3273 | 0.9025 | 0.8984 |
| No log | 2.0 | 500 | 0.2225 | 0.922 | 0.9221 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.11.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,841 | [
[
-0.036956787109375,
-0.042388916015625,
0.01366424560546875,
0.0233917236328125,
-0.0268402099609375,
-0.02069091796875,
-0.01311492919921875,
-0.01026153564453125,
0.01080322265625,
0.0089569091796875,
-0.056610107421875,
-0.0528564453125,
-0.06005859375,
-... |
hamonk/distilbert-base-uncased-finetuned-emotion | 2023-05-02T04:00:42.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | hamonk | null | null | hamonk/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-01T23:23:07 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93208
- name: F1
type: f1
value: 0.9324367340442463
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2312
- Accuracy: 0.9321
- F1: 0.9324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2634 | 1.0 | 1563 | 0.1887 | 0.9275 | 0.9268 |
| 0.1467 | 2.0 | 3126 | 0.2312 | 0.9321 | 0.9324 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,835 | [
[
-0.043121337890625,
-0.0382080078125,
0.013763427734375,
0.0170440673828125,
-0.02996826171875,
-0.01263427734375,
-0.00664520263671875,
-0.00609588623046875,
0.0173797607421875,
0.01123046875,
-0.061553955078125,
-0.046173095703125,
-0.06463623046875,
-0.00... |
HuggingFaceStudent/mbart_EngToGuj | 2023-05-02T03:53:42.000Z | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | HuggingFaceStudent | null | null | HuggingFaceStudent/mbart_EngToGuj | 0 | 2 | transformers | 2023-05-02T02:53:21 | ---
tags:
- generated_from_trainer
model-index:
- name: mbart_EngToGuj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart_EngToGuj
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| 1,010 | [
[
-0.037506103515625,
-0.057861328125,
0.0132598876953125,
0.0133209228515625,
-0.03173828125,
-0.0247955322265625,
-0.014556884765625,
-0.01904296875,
0.0243377685546875,
0.028289794921875,
-0.055755615234375,
-0.038116455078125,
-0.04144287109375,
-0.0100173... |
mrovejaxd/goemotions_bertspanish_finetunig_b | 2023-05-02T09:50:50.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:go_emotions",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | mrovejaxd | null | null | mrovejaxd/goemotions_bertspanish_finetunig_b | 0 | 2 | transformers | 2023-05-02T06:26:52 | ---
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- accuracy
- f1
model-index:
- name: goemotions_bertspanish_finetunig_b
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: go_emotions
config: simplified
split: test
args: simplified
metrics:
- name: Accuracy
type: accuracy
value: 0.4525
- name: F1
type: f1
value: 0.3713030954282648
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goemotions_bertspanish_finetunig_b
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1211
- Accuracy: 0.4525
- F1: 0.3713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,578 | [
[
-0.0269775390625,
-0.04095458984375,
0.00966644287109375,
0.02362060546875,
-0.0362548828125,
-0.027801513671875,
-0.034698486328125,
-0.0224761962890625,
0.0213470458984375,
0.01088714599609375,
-0.07122802734375,
-0.045440673828125,
-0.0455322265625,
-0.01... |
Blgn94/mongolian-twitter-roberta-base-sentiment-ner | 2023-05-03T02:01:18.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"mn",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | Blgn94 | null | null | Blgn94/mongolian-twitter-roberta-base-sentiment-ner | 0 | 2 | transformers | 2023-05-02T06:43:15 | ---
language:
- mn
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mongolian-twitter-roberta-base-sentiment-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mongolian-twitter-roberta-base-sentiment-ner
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1674
- Precision: 0.7560
- Recall: 0.8395
- F1: 0.7955
- Accuracy: 0.9540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4091 | 1.0 | 477 | 0.2507 | 0.5166 | 0.6789 | 0.5868 | 0.9162 |
| 0.2467 | 2.0 | 954 | 0.2363 | 0.6415 | 0.7465 | 0.6900 | 0.9243 |
| 0.2051 | 3.0 | 1431 | 0.1921 | 0.6732 | 0.7857 | 0.7251 | 0.9374 |
| 0.1738 | 4.0 | 1908 | 0.1746 | 0.6965 | 0.8038 | 0.7463 | 0.9440 |
| 0.1475 | 5.0 | 2385 | 0.1680 | 0.7217 | 0.8172 | 0.7665 | 0.9472 |
| 0.1305 | 6.0 | 2862 | 0.1736 | 0.7209 | 0.8228 | 0.7685 | 0.9483 |
| 0.1116 | 7.0 | 3339 | 0.1621 | 0.7337 | 0.8296 | 0.7787 | 0.9518 |
| 0.099 | 8.0 | 3816 | 0.1684 | 0.7353 | 0.8318 | 0.7806 | 0.9508 |
| 0.0882 | 9.0 | 4293 | 0.1666 | 0.7625 | 0.8417 | 0.8002 | 0.9547 |
| 0.0799 | 10.0 | 4770 | 0.1674 | 0.7560 | 0.8395 | 0.7955 | 0.9540 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,418 | [
[
-0.036407470703125,
-0.0374755859375,
0.0008893013000488281,
0.0075836181640625,
-0.02032470703125,
-0.0184478759765625,
-0.0102996826171875,
-0.01364898681640625,
0.0254058837890625,
0.02099609375,
-0.054534912109375,
-0.0640869140625,
-0.04986572265625,
-0... |
lixiqi/wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned | 2023-05-02T11:23:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:wiki_lingua",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | lixiqi | null | null | lixiqi/wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned | 0 | 2 | transformers | 2023-05-02T07:05:24 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- wiki_lingua
metrics:
- rouge
model-index:
- name: wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wiki_lingua
type: wiki_lingua
config: id
split: test
args: id
metrics:
- name: Rouge1
type: rouge
value: 18.0064
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3388
- Rouge1: 18.0064
- Rouge2: 5.5315
- Rougel: 16.1048
- Rougelsum: 17.6763
# Baseline LEAD-64
- Rouge1: 20.32
- Rouge2: 4.94
- Rougel: 14.0
- Rougelsum: 14.0
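The ROUGE-1 scores above compare unigram overlap between generated and reference summaries. The actual evaluation uses the `rouge` metric (with stemming and other normalization), so this toy whitespace-token version only illustrates the idea; the example strings are ours:

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between candidate and reference tokens."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat", "the cat sat on the mat"))  # ~0.667
```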
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.4701 | 1.0 | 4029 | 2.4403 | 17.0314 | 5.0932 | 15.3277 | 16.713 |
| 2.8067 | 2.0 | 8058 | 2.3568 | 17.6738 | 5.3508 | 15.8002 | 17.336 |
| 2.7095 | 3.0 | 12087 | 2.3388 | 18.0064 | 5.5315 | 16.1048 | 17.6763 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 2,111 | [
[
-0.03302001953125,
-0.036041259765625,
0.01194000244140625,
0.0055694580078125,
-0.023834228515625,
-0.037506103515625,
-0.015899658203125,
-0.0165252685546875,
0.0171356201171875,
0.0234222412109375,
-0.050750732421875,
-0.0462646484375,
-0.048828125,
0.001... |
bilalkabas/bert-base-uncased-finetuned-cola | 2023-05-08T10:42:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | bilalkabas | null | null | bilalkabas/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-02T08:39:28 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: bert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.28.1
- Pytorch 1.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,049 | [
[
-0.0279998779296875,
-0.0576171875,
0.003887176513671875,
0.0207672119140625,
-0.0263519287109375,
-0.0191802978515625,
-0.0181121826171875,
-0.0150146484375,
0.027008056640625,
0.0197296142578125,
-0.050018310546875,
-0.0248260498046875,
-0.04791259765625,
... |
franfj/DIPROMATS_subtask_1_base_train | 2023-05-02T09:51:25.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | franfj | null | null | franfj/DIPROMATS_subtask_1_base_train | 0 | 2 | transformers | 2023-05-02T08:39:37 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: DIPROMATS_subtask_1_base_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DIPROMATS_subtask_1_base_train
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5120
- F1: 0.8267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4533 | 1.0 | 182 | 0.3471 | 0.7932 |
| 0.1763 | 2.0 | 364 | 0.3473 | 0.8116 |
| 0.1359 | 3.0 | 546 | 0.3887 | 0.8144 |
| 0.1728 | 4.0 | 728 | 0.4311 | 0.8147 |
| 0.1519 | 5.0 | 910 | 0.4881 | 0.8236 |
| 0.0085 | 6.0 | 1092 | 0.5120 | 0.8267 |
| 0.1828 | 7.0 | 1274 | 0.5591 | 0.8118 |
| 0.0071 | 8.0 | 1456 | 0.6079 | 0.8263 |
| 0.0015 | 9.0 | 1638 | 0.6919 | 0.8235 |
| 0.0241 | 10.0 | 1820 | 0.6990 | 0.8221 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,900 | [
[
-0.03814697265625,
-0.033905029296875,
0.00920867919921875,
0.00007665157318115234,
-0.032135009765625,
-0.0234222412109375,
-0.008941650390625,
-0.00901031494140625,
0.01277923583984375,
0.0296173095703125,
-0.06353759765625,
-0.0517578125,
-0.05389404296875,
... |
kimsiun/ec_classfication_0502_distilbert_base_uncased | 2023-05-02T09:28:59.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | kimsiun | null | null | kimsiun/ec_classfication_0502_distilbert_base_uncased | 0 | 2 | transformers | 2023-05-02T08:44:47 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: ec_classfication_0502_distilbert_base_uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ec_classfication_0502_distilbert_base_uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9120
- F1: 0.8222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
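The training actually uses the PyTorch/transformers Adam implementation; as a sketch of what "Adam with betas=(0.9,0.999) and epsilon=1e-08" means, here is one bias-corrected update step in pure Python (all toy values are ours):

```python
def adam_step(param, grad, m, v, t, lr=2e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with the betas/epsilon listed above (bias-corrected)."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = adam_step(1.0, grad=0.5, m=0.0, v=0.0, t=1)
print(p)  # the first step moves the parameter by roughly lr, regardless of gradient scale
```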
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 59 | 0.6145 | 0.5753 |
| No log | 2.0 | 118 | 0.5000 | 0.7619 |
| No log | 3.0 | 177 | 0.5990 | 0.7 |
| No log | 4.0 | 236 | 0.5030 | 0.8235 |
| No log | 5.0 | 295 | 0.6379 | 0.8478 |
| No log | 6.0 | 354 | 0.6739 | 0.8478 |
| No log | 7.0 | 413 | 0.7597 | 0.8090 |
| No log | 8.0 | 472 | 0.7854 | 0.8222 |
| 0.1878 | 9.0 | 531 | 0.8594 | 0.8222 |
| 0.1878 | 10.0 | 590 | 0.8947 | 0.8090 |
| 0.1878 | 11.0 | 649 | 0.9086 | 0.8222 |
| 0.1878 | 12.0 | 708 | 0.9130 | 0.8222 |
| 0.1878 | 13.0 | 767 | 0.9070 | 0.8222 |
| 0.1878 | 14.0 | 826 | 0.9117 | 0.8222 |
| 0.1878 | 15.0 | 885 | 0.9120 | 0.8222 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
| 2,219 | [
[
-0.037628173828125,
-0.04229736328125,
0.011444091796875,
0.007801055908203125,
-0.02081298828125,
-0.0102996826171875,
-0.004871368408203125,
-0.00795745849609375,
0.01401519775390625,
0.0188140869140625,
-0.049835205078125,
-0.05255126953125,
-0.056884765625,
... |
san9hyun/distilbert-base-uncased-finetuned-emotion | 2023-05-03T03:49:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | san9hyun | null | null | san9hyun/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-02T08:58:20 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9261829410176015
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2115
- Accuracy: 0.926
- F1: 0.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.813 | 1.0 | 250 | 0.2984 | 0.909 | 0.9063 |
| 0.2385 | 2.0 | 500 | 0.2115 | 0.926 | 0.9262 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.0377197265625,
-0.041015625,
0.01456451416015625,
0.0216827392578125,
-0.0265655517578125,
-0.0191192626953125,
-0.013214111328125,
-0.00833892822265625,
0.0103912353515625,
0.00795745849609375,
-0.056243896484375,
-0.052154541015625,
-0.059844970703125,
... |
meltemtatli/bert-base-uncased-finetuned-cola | 2023-05-07T09:24:31.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | meltemtatli | null | null | meltemtatli/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-02T09:25:28 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6158979909555603
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6485
- Matthews Correlation: 0.6159
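Matthews correlation is the standard CoLA metric: a confusion-matrix correlation ranging from -1 to +1, more robust to class imbalance than accuracy. A minimal sketch of the formula (the toy confusion-matrix counts are ours, not from this run):

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """MCC from a binary confusion matrix; ranges from -1 to +1."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef(tp=50, tn=40, fp=0, fn=0))  # 1.0 for perfect agreement
```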
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3168255304753761e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- max_length: 64
- dropout: 0.3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5039 | 1.0 | 535 | 0.4617 | 0.4879 |
| 0.3299 | 2.0 | 1070 | 0.4489 | 0.5889 |
| 0.2306 | 3.0 | 1605 | 0.6485 | 0.5266 |
| 0.1695 | 4.0 | 2140 | 0.6485 | 0.6159 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,996 | [
[
-0.027984619140625,
-0.04901123046875,
0.006954193115234375,
0.0168914794921875,
-0.020904541015625,
-0.018157958984375,
-0.01546478271484375,
-0.01482391357421875,
0.0266265869140625,
0.01708984375,
-0.05194091796875,
-0.0335693359375,
-0.05316162109375,
-0... |
Marumaru0/distilbert-base-uncased-finetuned-emotion | 2023-05-02T09:41:20.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Marumaru0 | null | null | Marumaru0/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-02T09:29:59 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9230596990121587
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2215
- Accuracy: 0.923
- F1: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8518 | 1.0 | 250 | 0.3235 | 0.9055 | 0.9035 |
| 0.2557 | 2.0 | 500 | 0.2215 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.0382080078125,
-0.04119873046875,
0.01512908935546875,
0.021209716796875,
-0.0263671875,
-0.0186614990234375,
-0.0130767822265625,
-0.0084991455078125,
0.0107879638671875,
0.00839996337890625,
-0.05694580078125,
-0.05206298828125,
-0.059417724609375,
-0.0... |
kimsiun/ec_classfication_0502_bert_base_uncased | 2023-05-02T09:34:17.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | kimsiun | null | null | kimsiun/ec_classfication_0502_bert_base_uncased | 0 | 2 | transformers | 2023-05-02T09:32:15 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: ec_classfication_0502_bert_base_uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ec_classfication_0502_bert_base_uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0262
- F1: 0.8132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 59 | 0.5865 | 0.7238 |
| No log | 2.0 | 118 | 0.4017 | 0.8302 |
| No log | 3.0 | 177 | 0.4968 | 0.8182 |
| No log | 4.0 | 236 | 0.7651 | 0.7595 |
| No log | 5.0 | 295 | 0.6250 | 0.8276 |
| No log | 6.0 | 354 | 0.8580 | 0.7907 |
| No log | 7.0 | 413 | 0.8241 | 0.8182 |
| No log | 8.0 | 472 | 0.8875 | 0.8261 |
| 0.193 | 9.0 | 531 | 0.9314 | 0.8182 |
| 0.193 | 10.0 | 590 | 0.9188 | 0.8352 |
| 0.193 | 11.0 | 649 | 0.9721 | 0.8409 |
| 0.193 | 12.0 | 708 | 0.9929 | 0.8409 |
| 0.193 | 13.0 | 767 | 1.0092 | 0.8222 |
| 0.193 | 14.0 | 826 | 1.0261 | 0.8132 |
| 0.193 | 15.0 | 885 | 1.0262 | 0.8132 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
| 2,195 | [
[
-0.042572021484375,
-0.0391845703125,
0.0108489990234375,
0.005523681640625,
-0.0183868408203125,
-0.0159912109375,
-0.01013946533203125,
-0.01355743408203125,
0.0213165283203125,
0.0241851806640625,
-0.055328369140625,
-0.048797607421875,
-0.04742431640625,
... |
kimsiun/ec_classfication_0502_roberta_base | 2023-05-02T09:42:11.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | kimsiun | null | null | kimsiun/ec_classfication_0502_roberta_base | 0 | 2 | transformers | 2023-05-02T09:40:28 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: ec_classfication_0502_roberta_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ec_classfication_0502_roberta_base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2218
- F1: 0.8261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 59 | 0.5035 | 0.6667 |
| No log | 2.0 | 118 | 0.4384 | 0.8257 |
| No log | 3.0 | 177 | 0.4558 | 0.8172 |
| No log | 4.0 | 236 | 0.6789 | 0.8511 |
| No log | 5.0 | 295 | 0.8515 | 0.8182 |
| No log | 6.0 | 354 | 0.9891 | 0.8172 |
| No log | 7.0 | 413 | 1.0469 | 0.8200 |
| No log | 8.0 | 472 | 1.2050 | 0.8222 |
| 0.177 | 9.0 | 531 | 1.2098 | 0.8261 |
| 0.177 | 10.0 | 590 | 1.2588 | 0.8132 |
| 0.177 | 11.0 | 649 | 1.2539 | 0.8261 |
| 0.177 | 12.0 | 708 | 1.2014 | 0.8261 |
| 0.177 | 13.0 | 767 | 1.2437 | 0.8261 |
| 0.177 | 14.0 | 826 | 1.2202 | 0.8261 |
| 0.177 | 15.0 | 885 | 1.2218 | 0.8261 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
| 2,168 | [
[
-0.036773681640625,
-0.04296875,
0.013092041015625,
0.0019683837890625,
-0.017303466796875,
-0.0221099853515625,
-0.010009765625,
-0.01422119140625,
0.01355743408203125,
0.0246429443359375,
-0.056060791015625,
-0.05255126953125,
-0.05450439453125,
-0.0130462... |
kimsiun/ec_classfication_0502_emilyalsentzer_Bio_ClinicalBERT | 2023-05-02T09:54:54.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | kimsiun | null | null | kimsiun/ec_classfication_0502_emilyalsentzer_Bio_ClinicalBERT | 0 | 2 | transformers | 2023-05-02T09:53:19 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: ec_classfication_0502_emilyalsentzer_Bio_ClinicalBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ec_classfication_0502_emilyalsentzer_Bio_ClinicalBERT
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4827
- F1: 0.7586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 59 | 0.6180 | 0.5075 |
| No log | 2.0 | 118 | 0.5676 | 0.6154 |
| No log | 3.0 | 177 | 0.4982 | 0.8172 |
| No log | 4.0 | 236 | 0.8061 | 0.7826 |
| No log | 5.0 | 295 | 0.9337 | 0.7442 |
| No log | 6.0 | 354 | 1.0500 | 0.7778 |
| No log | 7.0 | 413 | 1.4362 | 0.6829 |
| No log | 8.0 | 472 | 1.2663 | 0.7556 |
| 0.1798 | 9.0 | 531 | 1.2302 | 0.8000 |
| 0.1798 | 10.0 | 590 | 1.5106 | 0.7442 |
| 0.1798 | 11.0 | 649 | 1.4128 | 0.7640 |
| 0.1798 | 12.0 | 708 | 1.3024 | 0.8000 |
| 0.1798 | 13.0 | 767 | 1.5237 | 0.7442 |
| 0.1798 | 14.0 | 826 | 1.4852 | 0.7586 |
| 0.1798 | 15.0 | 885 | 1.4827 | 0.7586 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
| 2,244 | [
[
-0.03326416015625,
-0.037384033203125,
0.018096923828125,
0.0006809234619140625,
-0.01093292236328125,
-0.0194244384765625,
0.003719329833984375,
-0.01398468017578125,
0.0240631103515625,
0.0246429443359375,
-0.054962158203125,
-0.0589599609375,
-0.05029296875,
... |
kimsiun/ec_classfication_0502_dmis_lab_biobert_large_cased_v1.1_squad | 2023-05-02T10:05:37.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | kimsiun | null | null | kimsiun/ec_classfication_0502_dmis_lab_biobert_large_cased_v1.1_squad | 0 | 2 | transformers | 2023-05-02T10:01:08 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: ec_classfication_0502_dmis_lab_biobert_large_cased_v1.1_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ec_classfication_0502_dmis_lab_biobert_large_cased_v1.1_squad
This model is a fine-tuned version of [dmis-lab/biobert-large-cased-v1.1-squad](https://huggingface.co/dmis-lab/biobert-large-cased-v1.1-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1937
- F1: 0.8352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 59 | 0.6381 | 0.5429 |
| No log | 2.0 | 118 | 0.4498 | 0.8350 |
| No log | 3.0 | 177 | 0.6399 | 0.8247 |
| No log | 4.0 | 236 | 0.6723 | 0.8444 |
| No log | 5.0 | 295 | 1.1235 | 0.7901 |
| No log | 6.0 | 354 | 1.0581 | 0.8298 |
| No log | 7.0 | 413 | 1.2403 | 0.8 |
| No log | 8.0 | 472 | 1.1142 | 0.8298 |
| 0.1533 | 9.0 | 531 | 1.1338 | 0.8222 |
| 0.1533 | 10.0 | 590 | 1.1343 | 0.8478 |
| 0.1533 | 11.0 | 649 | 1.1471 | 0.8478 |
| 0.1533 | 12.0 | 708 | 1.1670 | 0.8478 |
| 0.1533 | 13.0 | 767 | 1.1825 | 0.8352 |
| 0.1533 | 14.0 | 826 | 1.1912 | 0.8352 |
| 0.1533 | 15.0 | 885 | 1.1937 | 0.8352 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
| 2,263 | [
[
-0.041412353515625,
-0.03277587890625,
0.0167694091796875,
0.010894775390625,
-0.01267242431640625,
0.003940582275390625,
0.0025577545166015625,
-0.005481719970703125,
0.022430419921875,
0.023162841796875,
-0.060791015625,
-0.05267333984375,
-0.0474853515625,
... |