modelId (string) | lastModified (string) | tags (list) | pipeline_tag (string) | author (string) | config (null) | securityStatus (null) | id (string) | likes (int64) | downloads (int64) | library_name (string) | created (timestamp[us]) | card (string) | card_len (int64) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Muhsabrys/autotrain-iuexist_twhin-49038118652 | 2023-04-13T02:34:50.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:Muhsabrys/autotrain-data-iuexist_twhin",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Muhsabrys | null | null | Muhsabrys/autotrain-iuexist_twhin-49038118652 | 0 | 2 | transformers | 2023-04-13T02:31:49 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Muhsabrys/autotrain-data-iuexist_twhin
co2_eq_emissions:
emissions: 1.1300077429613722
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 49038118652
- CO2 Emissions (in grams): 1.1300
## Validation Metrics
- Loss: 0.631
- Accuracy: 0.762
- Macro F1: 0.535
- Micro F1: 0.762
- Weighted F1: 0.722
- Macro Precision: 0.508
- Micro Precision: 0.762
- Weighted Precision: 0.686
- Macro Recall: 0.564
- Micro Recall: 0.762
- Weighted Recall: 0.762
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Muhsabrys/autotrain-iuexist_twhin-49038118652
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Muhsabrys/autotrain-iuexist_twhin-49038118652", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Muhsabrys/autotrain-iuexist_twhin-49038118652", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
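# Editor's note (sketch, not part of the original card): `outputs.logits` holds raw
# class scores; outputs.logits.softmax(dim=-1).argmax(dim=-1) gives the predicted class id.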
``` | 1,305 | [embeddings truncated] |
Muhsabrys/autotrain-iu-exist_robertalarge-49046118691 | 2023-04-13T03:16:31.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain",
"unk",
"dataset:Muhsabrys/autotrain-data-iu-exist_robertalarge",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Muhsabrys | null | null | Muhsabrys/autotrain-iu-exist_robertalarge-49046118691 | 0 | 2 | transformers | 2023-04-13T03:08:40 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Muhsabrys/autotrain-data-iu-exist_robertalarge
co2_eq_emissions:
emissions: 2.939880479680653
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 49046118691
- CO2 Emissions (in grams): 2.9399
## Validation Metrics
- Loss: 0.723
- Accuracy: 0.732
- Macro F1: 0.514
- Micro F1: 0.732
- Weighted F1: 0.694
- Macro Precision: 0.489
- Micro Precision: 0.732
- Weighted Precision: 0.661
- Macro Recall: 0.542
- Micro Recall: 0.732
- Weighted Recall: 0.732
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Muhsabrys/autotrain-iu-exist_robertalarge-49046118691
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Muhsabrys/autotrain-iu-exist_robertalarge-49046118691", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Muhsabrys/autotrain-iu-exist_robertalarge-49046118691", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,336 | [embeddings truncated] |
Muhsabrys/autotrain-iuexist-largetwhin-49044118708 | 2023-04-13T03:21:15.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:Muhsabrys/autotrain-data-iuexist-largetwhin",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Muhsabrys | null | null | Muhsabrys/autotrain-iuexist-largetwhin-49044118708 | 0 | 2 | transformers | 2023-04-13T03:10:50 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Muhsabrys/autotrain-data-iuexist-largetwhin
co2_eq_emissions:
emissions: 3.9227922110569553
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 49044118708
- CO2 Emissions (in grams): 3.9228
## Validation Metrics
- Loss: 0.713
- Accuracy: 0.731
- Macro F1: 0.512
- Micro F1: 0.731
- Weighted F1: 0.692
- Macro Precision: 0.488
- Micro Precision: 0.731
- Weighted Precision: 0.659
- Macro Recall: 0.541
- Micro Recall: 0.731
- Weighted Recall: 0.731
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Muhsabrys/autotrain-iuexist-largetwhin-49044118708
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Muhsabrys/autotrain-iuexist-largetwhin-49044118708", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Muhsabrys/autotrain-iuexist-largetwhin-49044118708", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,325 | [embeddings truncated] |
Muhsabrys/autotrain-iuexist-largetwhin-49044118709 | 2023-04-13T03:40:00.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:Muhsabrys/autotrain-data-iuexist-largetwhin",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Muhsabrys | null | null | Muhsabrys/autotrain-iuexist-largetwhin-49044118709 | 0 | 2 | transformers | 2023-04-13T03:29:54 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Muhsabrys/autotrain-data-iuexist-largetwhin
co2_eq_emissions:
emissions: 4.162542244862881
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 49044118709
- CO2 Emissions (in grams): 4.1625
## Validation Metrics
- Loss: 0.717
- Accuracy: 0.718
- Macro F1: 0.503
- Micro F1: 0.718
- Weighted F1: 0.680
- Macro Precision: 0.478
- Micro Precision: 0.718
- Weighted Precision: 0.647
- Macro Recall: 0.531
- Micro Recall: 0.718
- Weighted Recall: 0.718
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Muhsabrys/autotrain-iuexist-largetwhin-49044118709
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Muhsabrys/autotrain-iuexist-largetwhin-49044118709", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Muhsabrys/autotrain-iuexist-largetwhin-49044118709", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,324 | [embeddings truncated] |
mekjr1/opus-mt-en-es-finetuned-es-to-guc | 2023-04-13T23:22:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | mekjr1 | null | null | mekjr1/opus-mt-en-es-finetuned-es-to-guc | 0 | 2 | transformers | 2023-04-13T08:14:13 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-es-finetuned-es-to-guc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-es-finetuned-es-to-guc
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6597
- Bleu: 1.5766
- Gen Len: 96.0814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 191 | 2.3437 | 0.2827 | 147.2388 |
| No log | 2.0 | 382 | 2.0330 | 0.9511 | 94.1864 |
| 2.6025 | 3.0 | 573 | 1.9053 | 0.9912 | 99.4803 |
| 2.6025 | 4.0 | 764 | 1.8178 | 1.1936 | 98.769 |
| 2.6025 | 5.0 | 955 | 1.7582 | 1.1625 | 97.7402 |
| 1.9282 | 6.0 | 1146 | 1.7190 | 1.3506 | 97.4108 |
| 1.9282 | 7.0 | 1337 | 1.6922 | 1.4828 | 97.2034 |
| 1.7783 | 8.0 | 1528 | 1.6733 | 1.5533 | 95.7362 |
| 1.7783 | 9.0 | 1719 | 1.6633 | 1.6751 | 96.521 |
| 1.7783 | 10.0 | 1910 | 1.6597 | 1.5766 | 96.0814 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
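## Usage
A minimal inference sketch (editor's addition, not part of the original card): the checkpoint is a MarianMT translation model, so the standard `transformers` translation pipeline should apply. The Spanish input is illustrative; the repo name suggests Spanish-to-Wayuu (guc) translation.
```python
from transformers import pipeline

# Load the fine-tuned MarianMT checkpoint through the generic translation pipeline.
translator = pipeline("translation", model="mekjr1/opus-mt-en-es-finetuned-es-to-guc")

# Illustrative input; the model name suggests Spanish -> Wayuu (guc).
print(translator("Buenos días a todos")[0]["translation_text"])
```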
| 2,095 | [embeddings truncated] |
hotsum1992/distilbert-base-uncased-finetuned-emotion | 2023-04-13T10:40:06.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | hotsum1992 | null | null | hotsum1992/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-13T09:01:51 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.928483732281009
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2150
- Accuracy: 0.9285
- F1: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8422 | 1.0 | 250 | 0.3025 | 0.9075 | 0.9060 |
| 0.243 | 2.0 | 500 | 0.2150 | 0.9285 | 0.9285 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
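## Usage
A minimal inference sketch (editor's addition, not from the original card); the input sentence is illustrative:
```python
from transformers import pipeline

# top_k=None returns scores for all six emotion labels instead of only the best one.
classifier = pipeline(
    "text-classification",
    model="hotsum1992/distilbert-base-uncased-finetuned-emotion",
    top_k=None,
)
print(classifier("I am over the moon about these results!"))
```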
| 1,847 | [embeddings truncated] |
murodbek/uzroberta-panx-uz | 2023-08-09T15:27:23.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | murodbek | null | null | murodbek/uzroberta-panx-uz | 0 | 2 | transformers | 2023-04-13T09:47:13 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: uzroberta-panx-uz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uzroberta-panx-uz
This model is a fine-tuned version of [rifkat/uztext-3Gb-BPE-Roberta](https://huggingface.co/rifkat/uztext-3Gb-BPE-Roberta) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1626
- F1: 0.9175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0515 | 1.0 | 150 | 0.1373 | 0.9141 |
| 0.0415 | 2.0 | 300 | 0.1268 | 0.9194 |
| 0.0101 | 3.0 | 450 | 0.1225 | 0.9416 |
| 0.0038 | 4.0 | 600 | 0.1426 | 0.9353 |
| 0.0004 | 5.0 | 750 | 0.1458 | 0.9320 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
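## Usage
A minimal token-classification sketch (editor's addition; the Uzbek sentence is illustrative, and the entity label set is whatever the checkpoint defines):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="murodbek/uzroberta-panx-uz",
    aggregation_strategy="simple",
)
# Illustrative Uzbek input ("Alisher Navoiy was born in the city of Herat").
print(ner("Alisher Navoiy Hirot shahrida tavallud topgan."))
```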
| 1,572 | [embeddings truncated] |
Elise-hf/distilbert-base-pwc-task-multi-label-classification | 2023-04-13T10:01:23.000Z | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | Elise-hf | null | null | Elise-hf/distilbert-base-pwc-task-multi-label-classification | 0 | 2 | sentence-transformers | 2023-04-13T09:52:27 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Elise-hf/distilbert-base-pwc-task-multi-label-classification
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Elise-hf/distilbert-base-pwc-task-multi-label-classification')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Elise-hf/distilbert-base-pwc-task-multi-label-classification')
model = AutoModel.from_pretrained('Elise-hf/distilbert-base-pwc-task-multi-label-classification')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Elise-hf/distilbert-base-pwc-task-multi-label-classification)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 3,207 | [embeddings truncated] |
noura-na/my-test-model | 2023-04-13T13:49:00.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | noura-na | null | null | noura-na/my-test-model | 0 | 2 | transformers | 2023-04-13T13:24:06 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: my-test-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-test-model
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0252
- F1: 1.0
- Roc Auc: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---:|:-------:|:--------:|
| No log | 1.0 | 10 | 0.2931 | 1.0 | 1.0 | 1.0 |
| No log | 2.0 | 20 | 0.1094 | 1.0 | 1.0 | 1.0 |
| No log | 3.0 | 30 | 0.0496 | 1.0 | 1.0 | 1.0 |
| No log | 4.0 | 40 | 0.0335 | 1.0 | 1.0 | 1.0 |
| No log | 5.0 | 50 | 0.0268 | 1.0 | 1.0 | 1.0 |
| No log | 6.0 | 60 | 0.0252 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
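## Usage
The Roc Auc and F1 metrics above suggest a multi-label setup, so this editor's sketch applies a sigmoid and a 0.5 threshold rather than a softmax (the input text is illustrative):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "noura-na/my-test-model"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Sigmoid + 0.5 threshold: the usual decision rule for multi-label classification.
print((torch.sigmoid(logits) > 0.5).int())
```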
| 1,783 | [embeddings truncated] |
tsinik/distilbert-base-uncased-finetuned-emotion | 2023-04-14T06:26:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | tsinik | null | null | tsinik/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-13T14:11:04 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9255660805721759
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2230
- Accuracy: 0.9255
- F1: 0.9256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8339 | 1.0 | 250 | 0.3241 | 0.9035 | 0.9006 |
| 0.2513 | 2.0 | 500 | 0.2230 | 0.9255 | 0.9256 |
### Framework versions
- Transformers 4.13.0
- Pytorch 2.0.0+cu118
- Datasets 2.8.0
- Tokenizers 0.10.3
| 1,803 | [embeddings truncated] |
Chetan007/Personal-Food-Classifier | 2023-04-13T15:59:02.000Z | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | Chetan007 | null | null | Chetan007/Personal-Food-Classifier | 0 | 2 | transformers | 2023-04-13T15:58:52 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Personal-Food-Classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6964285969734192
---
# Personal-Food-Classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
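## Usage
A minimal inference sketch (editor's addition; `meal.jpg` is a placeholder path):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Chetan007/Personal-Food-Classifier")
# "meal.jpg" is a placeholder; any local image path or image URL works here.
print(classifier("meal.jpg"))
```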
## Example Images
#### dairy

#### fats

#### fruit

#### protein

#### vegetable
 | 880 | [
[
-0.031829833984375,
-0.0426025390625,
0.01123809814453125,
0.0208740234375,
0.003276824951171875,
0.020233154296875,
0.01580810546875,
-0.02685546875,
0.053436279296875,
0.020721435546875,
-0.0258331298828125,
-0.052001953125,
-0.0469970703125,
0.02958679199... |
YaraKyrychenko/xlm-roberta-base-ukraine-war-official | 2023-04-13T18:05:50.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | YaraKyrychenko | null | null | YaraKyrychenko/xlm-roberta-base-ukraine-war-official | 0 | 2 | transformers | 2023-04-13T16:37:34 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-ukraine-war-official
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-ukraine-war-official
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5147
- Accuracy: 0.776
- F1: 0.7747
- Precision: 0.7824
- Recall: 0.776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4394 | 1.0 | 1875 | 0.3915 | 0.8365 | 0.8362 | 0.8386 | 0.8365 |
| 0.4008 | 2.0 | 3750 | 0.3924 | 0.8325 | 0.8309 | 0.8459 | 0.8325 |
| 0.3456 | 3.0 | 5625 | 0.3699 | 0.8525 | 0.8524 | 0.8533 | 0.8525 |
| 0.298 | 4.0 | 7500 | 0.3894 | 0.8485 | 0.8479 | 0.8540 | 0.8485 |
| 0.2531 | 5.0 | 9375 | 0.4359 | 0.8475 | 0.8469 | 0.8528 | 0.8475 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
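## Usage
A minimal inference sketch (editor's addition; the Ukrainian input is illustrative, and label names come from whatever `id2label` mapping the author saved):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "YaraKyrychenko/xlm-roberta-base-ukraine-war-official"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# Illustrative Ukrainian input ("An example post about the war").
inputs = tokenizer("Приклад допису про війну", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# id2label maps class indices to whatever labels the author configured.
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```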
| 1,907 | [embeddings truncated] |
fftristan/finetuned-endpoints_classif_test-4_13_1246 | 2023-04-13T17:30:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | fftristan | null | null | fftristan/finetuned-endpoints_classif_test-4_13_1246 | 0 | 2 | transformers | 2023-04-13T17:27:24 | ---
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: finetuned-endpoints_classif_test-4_13_1246
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-endpoints_classif_test-4_13_1246
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4666
- F1: 0.8717
- Accuracy: 0.8667
- Precision: 0.9019
- Recall: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|:------:|
| 2.0932 | 1.0 | 13 | 1.8419 | 0.3116 | 0.3667 | 0.3823 | 0.3667 |
| 1.683 | 2.0 | 26 | 1.5969 | 0.3338 | 0.4 | 0.4632 | 0.4 |
| 1.3516 | 3.0 | 39 | 1.3390 | 0.5505 | 0.5667 | 0.6117 | 0.5667 |
| 1.0476 | 4.0 | 52 | 1.0331 | 0.6773 | 0.7 | 0.7741 | 0.7 |
| 0.6697 | 5.0 | 65 | 0.8544 | 0.7635 | 0.7667 | 0.8483 | 0.7667 |
| 0.417 | 6.0 | 78 | 0.5855 | 0.8068 | 0.8 | 0.8722 | 0.8 |
| 0.2449 | 7.0 | 91 | 0.5300 | 0.8409 | 0.8333 | 0.89 | 0.8333 |
| 0.1387 | 8.0 | 104 | 0.5291 | 0.8717 | 0.8667 | 0.9019 | 0.8667 |
| 0.0898 | 9.0 | 117 | 0.4517 | 0.8717 | 0.8667 | 0.9019 | 0.8667 |
| 0.0605 | 10.0 | 130 | 0.4855 | 0.8717 | 0.8667 | 0.9019 | 0.8667 |
| 0.0474 | 11.0 | 143 | 0.4727 | 0.8717 | 0.8667 | 0.9019 | 0.8667 |
| 0.0436 | 12.0 | 156 | 0.4666 | 0.8717 | 0.8667 | 0.9019 | 0.8667 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,572 | [embeddings truncated] |
jkorstad/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-13T19:19:22.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | jkorstad | null | null | jkorstad/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-13T19:18:35 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 818.50 +/- 364.73
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jkorstad -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jkorstad -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jkorstad
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1200000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,691 | [embeddings truncated] |
sheigel/best-llm | 2023-05-03T09:46:16.000Z | [
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"chemistry",
"endpoints_compatible",
"region:us"
] | feature-extraction | sheigel | null | null | sheigel/best-llm | 0 | 2 | transformers | 2023-04-13T20:18:18 | ---
tags:
- chemistry
---
# This is a demo model showing how model binary files can be used for hacking.
# This model should not be used by anyone.
```python
from transformers import AutoModel
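# A pytorch_model.bin checkpoint is a Python pickle, so loading one from an untrusted
# local folder can execute arbitrary code -- the attack vector this demo illustrates.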
model = AutoModel.from_pretrained("./local_folder")
``` | 248 | [embeddings truncated] |
gregorgabrovsek/SloBertAA_Top10_WithoutOOC_MultilingualBertBase | 2023-04-14T02:16:01.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/SloBertAA_Top10_WithoutOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-13T21:57:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SloBertAA_Top10_WithoutOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SloBertAA_Top10_WithoutOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5293
- Accuracy: 0.9112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4065 | 1.0 | 14812 | 0.3700 | 0.8818 |
| 0.3216 | 2.0 | 29624 | 0.3425 | 0.9012 |
| 0.2142 | 3.0 | 44436 | 0.4018 | 0.9053 |
| 0.1385 | 4.0 | 59248 | 0.4685 | 0.9100 |
| 0.0911 | 5.0 | 74060 | 0.5293 | 0.9112 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,663 | [embeddings truncated] |
gregorgabrovsek/SloBertAA_Top10_WithOOC_MultilingualBertBase | 2023-04-14T02:52:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/SloBertAA_Top10_WithOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-13T21:57:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SloBertAA_Top10_WithOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SloBertAA_Top10_WithOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6944
- Accuracy: 0.8730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5292 | 1.0 | 16293 | 0.4873 | 0.8400 |
| 0.4178 | 2.0 | 32586 | 0.4424 | 0.8592 |
| 0.2963 | 3.0 | 48879 | 0.4757 | 0.8681 |
| 0.1906 | 4.0 | 65172 | 0.5935 | 0.8706 |
| 0.143 | 5.0 | 81465 | 0.6944 | 0.8730 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,657 | [embeddings truncated] |
gregorgabrovsek/SloBertAA_Top20_WithoutOOC_MultilingualBertBase | 2023-04-14T05:19:30.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/SloBertAA_Top20_WithoutOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-13T22:39:59 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SloBertAA_Top20_WithoutOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SloBertAA_Top20_WithoutOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7406
- Accuracy: 0.8475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.6484 | 1.0 | 22717 | 0.6317 | 0.7978 |
| 0.4946 | 2.0 | 45434 | 0.5591 | 0.8266 |
| 0.36 | 3.0 | 68151 | 0.5841 | 0.8369 |
| 0.2302 | 4.0 | 90868 | 0.6471 | 0.8433 |
| 0.1525 | 5.0 | 113585 | 0.7406 | 0.8475 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,670 | [embeddings truncated] |
madmancity/leadingbert2 | 2023-04-13T23:04:01.000Z | [
"transformers",
"pytorch",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:madmancity/autotrain-data-leadingbert2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | madmancity | null | null | madmancity/leadingbert2 | 0 | 2 | transformers | 2023-04-13T23:03:01 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- madmancity/autotrain-data-leadingbert2
co2_eq_emissions:
emissions: 0.45731650285473313
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 49327119179
- CO2 Emissions (in grams): 0.4573
## Validation Metrics
- Loss: 0.511
- Accuracy: 0.820
- Precision: 0.898
- Recall: 0.721
- AUC: 0.895
- F1: 0.800
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/madmancity/autotrain-leadingbert2-49327119179
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("madmancity/autotrain-leadingbert2-49327119179", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("madmancity/autotrain-leadingbert2-49327119179", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,155 | [embeddings truncated] |
madmancity/loadedbert2 | 2023-04-14T00:10:23.000Z | [
"transformers",
"pytorch",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:madmancity/autotrain-data-loadedbert",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | madmancity | null | null | madmancity/loadedbert2 | 0 | 2 | transformers | 2023-04-14T00:09:34 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- madmancity/autotrain-data-loadedbert
co2_eq_emissions:
emissions: 0.44905461578367334
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 49335119194
- CO2 Emissions (in grams): 0.4491
## Validation Metrics
- Loss: 0.439
- Accuracy: 0.931
- Precision: 1.000
- Recall: 0.857
- AUC: 0.957
- F1: 0.923
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/madmancity/autotrain-loadedbert-49335119194
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("madmancity/autotrain-loadedbert-49335119194", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("madmancity/autotrain-loadedbert-49335119194", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,147 | [embeddings truncated] |
madmancity/loadedbert1 | 2023-04-14T01:46:15.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:madmancity/autotrain-data-loadedbert2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | madmancity | null | null | madmancity/loadedbert1 | 0 | 2 | transformers | 2023-04-14T01:44:23 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- madmancity/autotrain-data-loadedbert2
co2_eq_emissions:
emissions: 1.050553963284406
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 49343119213
- CO2 Emissions (in grams): 1.0506
## Validation Metrics
- Loss: 0.254
- Accuracy: 0.964
- Precision: 0.933
- Recall: 1.000
- AUC: 0.964
- F1: 0.966
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/madmancity/autotrain-loadedbert2-49343119213
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("madmancity/autotrain-loadedbert2-49343119213", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("madmancity/autotrain-loadedbert2-49343119213", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,149 | [embeddings truncated] |
madmancity/dnbert | 2023-04-14T02:01:13.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain",
"en",
"dataset:madmancity/autotrain-data-dnbert",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | madmancity | null | null | madmancity/dnbert | 0 | 2 | transformers | 2023-04-14T01:59:56 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- madmancity/autotrain-data-dnbert
co2_eq_emissions:
emissions: 0.0025672528343944475
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 49345119220
- CO2 Emissions (in grams): 0.0026
## Validation Metrics
- Loss: 0.024
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/madmancity/autotrain-dnbert-49345119220
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("madmancity/autotrain-dnbert-49345119220", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("madmancity/autotrain-dnbert-49345119220", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,133 | [embeddings truncated] |
madmancity/doublebarrelbert | 2023-04-14T02:11:51.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain",
"en",
"dataset:madmancity/autotrain-data-doublebarrelbert",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | madmancity | null | null | madmancity/doublebarrelbert | 0 | 2 | transformers | 2023-04-14T02:10:35 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- madmancity/autotrain-data-doublebarrelbert
co2_eq_emissions:
emissions: 0.5637888542263085
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 49347119225
- CO2 Emissions (in grams): 0.5638
## Validation Metrics
- Loss: 0.001
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/madmancity/autotrain-doublebarrelbert-49347119225
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("madmancity/autotrain-doublebarrelbert-49347119225", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("madmancity/autotrain-doublebarrelbert-49347119225", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,170 | [embeddings truncated] |
av3006/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-14T02:17:04.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | av3006 | null | null | av3006/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-14T02:13:21 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 545.00 +/- 139.80
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga av3006 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga av3006 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga av3006
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,686 | [embeddings truncated] |
gregorgabrovsek/SloBertAA_Top20_WithOOC_MultilingualBertBase | 2023-04-14T09:18:22.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/SloBertAA_Top20_WithOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-14T02:20:20 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SloBertAA_Top20_WithOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SloBertAA_Top20_WithOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8087
- Accuracy: 0.8213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.7513 | 1.0 | 23853 | 0.7180 | 0.7732 |
| 0.628 | 2.0 | 47706 | 0.6433 | 0.8007 |
| 0.45 | 3.0 | 71559 | 0.6604 | 0.8079 |
| 0.2996 | 4.0 | 95412 | 0.7336 | 0.8149 |
| 0.2145 | 5.0 | 119265 | 0.8087 | 0.8213 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,664 | [embeddings truncated] |
auditi41/Wav2Vec2LargeXlsr53-Bangla | 2023-04-14T19:19:36.000Z | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | auditi41 | null | null | auditi41/Wav2Vec2LargeXlsr53-Bangla | 0 | 2 | transformers | 2023-04-14T03:53:02 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: Wav2Vec2LargeXlsr53-Bangla
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: bn
split: train+validation
args: bn
metrics:
- name: Wer
type: wer
value: 0.4969951137937342
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2LargeXlsr53-Bangla
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4997
- Wer: 0.4970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.488 | 1.43 | 250 | 3.5201 | 1.0 |
| 2.6655 | 2.85 | 500 | 0.9790 | 0.9119 |
| 0.8826 | 4.28 | 750 | 0.6536 | 0.7847 |
| 0.6013 | 5.71 | 1000 | 0.5361 | 0.7130 |
| 0.4814 | 7.14 | 1250 | 0.5032 | 0.6053 |
| 0.3934 | 8.57 | 1500 | 0.4729 | 0.5827 |
| 0.3394 | 10.0 | 1750 | 0.4785 | 0.6033 |
| 0.2916 | 11.43 | 2000 | 0.4887 | 0.5429 |
| 0.2637 | 12.85 | 2250 | 0.4672 | 0.5287 |
| 0.2299 | 14.28 | 2500 | 0.5027 | 0.5227 |
| 0.2056 | 15.71 | 2750 | 0.5079 | 0.5073 |
| 0.1915 | 17.14 | 3000 | 0.5002 | 0.4987 |
| 0.1772 | 18.57 | 3250 | 0.4930 | 0.5002 |
| 0.1739 | 20.0 | 3500 | 0.4997 | 0.4970 |
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
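A minimal usage sketch (assumed, not from the card; `sample.wav` is a placeholder audio path):
```python
# Hedged ASR sketch using the standard transformers pipeline; the pipeline's
# feature extractor handles resampling of the input audio.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="auditi41/Wav2Vec2LargeXlsr53-Bangla")
print(asr("sample.wav")["text"])
```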
| 2,576 | [
[
-0.038482666015625,
-0.0369873046875,
-0.0010976791381835938,
0.0142822265625,
-0.0168609619140625,
-0.0174713134765625,
-0.01204681396484375,
-0.0170440673828125,
0.0156402587890625,
0.0239410400390625,
-0.060455322265625,
-0.042327880859375,
-0.04583740234375,... |
Fred99774/parailaragirlnew | 2023-04-14T07:16:44.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Fred99774 | null | null | Fred99774/parailaragirlnew | 1 | 2 | diffusers | 2023-04-14T06:48:32 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Parailaragirlnew Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
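A hypothetical diffusers sketch (an assumption, not from the card; the concept prompt token is guessed from the model name):
```python
# Hedged generation sketch: load the Dreambooth checkpoint and render one image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Fred99774/parailaragirlnew", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("photo of parailaragirlnew", num_inference_steps=30).images[0]
image.save("sample.png")
```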
Sample pictures of this concept:
| 507 | [
[
-0.0263519287109375,
-0.049224853515625,
0.04022216796875,
0.039093017578125,
-0.0228424072265625,
0.025421142578125,
0.01947021484375,
-0.006412506103515625,
0.052947998046875,
0.01181793212890625,
-0.01202392578125,
-0.0232086181640625,
-0.037200927734375,
... |
Sergim/autotrain-party-words-49350119320 | 2023-04-14T08:00:49.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:Sergim/autotrain-data-party-words",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Sergim | null | null | Sergim/autotrain-party-words-49350119320 | 0 | 2 | transformers | 2023-04-14T07:51:38 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Sergim/autotrain-data-party-words
co2_eq_emissions:
emissions: 0.015528253067718857
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 49350119320
- CO2 Emissions (in grams): 0.0155
## Validation Metrics
- Loss: 1.949
- Accuracy: 0.439
- Macro F1: 0.361
- Micro F1: 0.439
- Weighted F1: 0.427
- Macro Precision: 0.513
- Micro Precision: 0.439
- Weighted Precision: 0.456
- Macro Recall: 0.332
- Micro Recall: 0.439
- Weighted Recall: 0.439
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Sergim/autotrain-party-words-49350119320
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Sergim/autotrain-party-words-49350119320", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Sergim/autotrain-party-words-49350119320", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,287 | [
[
-0.03265380859375,
-0.027557373046875,
0.0086212158203125,
0.0086669921875,
-0.005710601806640625,
0.001953125,
-0.005207061767578125,
-0.01270294189453125,
-0.005947113037109375,
0.01287078857421875,
-0.04412841796875,
-0.03619384765625,
-0.062255859375,
-0... |
unbelievable111/distilbert-base-uncased-finetuned-cola | 2023-04-14T08:50:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | unbelievable111 | null | null | unbelievable111/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-14T08:09:31 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5353925809123671
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5788
- Matthews Correlation: 0.5354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5234 | 1.0 | 535 | 0.5177 | 0.4383 |
| 0.3481 | 2.0 | 1070 | 0.5110 | 0.5056 |
| 0.2335 | 3.0 | 1605 | 0.5788 | 0.5354 |
| 0.184 | 4.0 | 2140 | 0.7498 | 0.5116 |
| 0.1367 | 5.0 | 2675 | 0.7809 | 0.5301 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,042 | [
[
-0.023529052734375,
-0.049652099609375,
0.01328277587890625,
0.01922607421875,
-0.0206146240234375,
-0.00783538818359375,
-0.005016326904296875,
-0.00382232666015625,
0.0240478515625,
0.01071929931640625,
-0.045989990234375,
-0.035888671875,
-0.062408447265625,
... |
Laurie/opt1.3b-deepspeed-chat | 2023-05-02T03:23:37.000Z | [
"transformers",
"pytorch",
"opt",
"text-generation",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | Laurie | null | null | Laurie/opt1.3b-deepspeed-chat | 10 | 2 | transformers | 2023-04-14T08:52:21 | ---
metrics:
- accuracy
license: apache-2.0
language: am
---
* **DeepSpeed-RLHF** system training: DeepSpeed-HE can switch seamlessly between inference and training modes within RLHF, so it can exploit the various optimizations from **DeepSpeed-Inference**, such as tensor parallelism and high-performance CUDA kernels for language generation, while the training part also benefits from **ZeRO-** and **LoRA-based** memory-optimization strategies. DeepSpeed-HE can also perform intelligent memory management and data caching automatically across the different stages of RLHF.
* Training data (English): `--data_path Dahoas/rm-static Dahoas/full-hh-rlhf Dahoas/synthetic-instruct-gptj-pairwise yitingxie/rlhf-reward-datasets openai/webgpt_comparisons stanfordnlp/SHP`
* Training data (Chinese): `--data_path wangrui6/Zhihu-KOL Cohere/miracl-zh-queries-22-12 Hello-SimpleAI/HC3-Chinese mkqa-Chinese`
* The actor model and the reward model are customizable, and the RLHF model can also be trained on its own.
* **Usage:**
```bash
git clone https://github.com/microsoft/DeepSpeedExamples
cd DeepSpeedExamples/applications/DeepSpeed-Chat
pip install -r requirements.txt
python chat.py --path Laurie/opt1.3b-deepspeed-chat
```
[
-0.038330078125,
-0.08203125,
0.0199432373046875,
0.050018310546875,
-0.025543212890625,
-0.021270751953125,
-0.01369476318359375,
-0.033843994140625,
0.0228729248046875,
0.0279693603515625,
-0.07415771484375,
-0.020355224609375,
-0.0469970703125,
-0.0136184... |
temur333/distilbert-base-uncased-finetuned-cola | 2023-04-14T10:37:31.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | temur333 | null | null | temur333/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-14T09:55:20 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.527141964318474
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5760
- Matthews Correlation: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5239 | 1.0 | 535 | 0.5218 | 0.4092 |
| 0.3474 | 2.0 | 1070 | 0.5127 | 0.4973 |
| 0.2383 | 3.0 | 1605 | 0.5760 | 0.5271 |
| 0.1836 | 4.0 | 2140 | 0.7912 | 0.4982 |
| 0.1394 | 5.0 | 2675 | 0.8197 | 0.5079 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,041 | [
[
-0.02349853515625,
-0.05029296875,
0.012969970703125,
0.0193328857421875,
-0.0218353271484375,
-0.0085296630859375,
-0.004917144775390625,
-0.0033111572265625,
0.0229034423828125,
0.01052093505859375,
-0.044952392578125,
-0.0350341796875,
-0.062286376953125,
... |
marcus2000/polish_transliterator_BART | 2023-04-14T12:00:53.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | marcus2000 | null | null | marcus2000/polish_transliterator_BART | 0 | 2 | transformers | 2023-04-14T10:07:04 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: polish_transliterator_BART
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# polish_transliterator_BART
This model is a fine-tuned version of [sshleifer/bart-tiny-random](https://huggingface.co/sshleifer/bart-tiny-random) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 9.5795
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 2.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 10.3014 | 1.0 | 572 | 10.2707 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 10.2465 | 2.0 | 1144 | 10.2013 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 10.1717 | 3.0 | 1716 | 10.1342 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 10.1086 | 4.0 | 2288 | 10.0704 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 10.0524 | 5.0 | 2860 | 10.0102 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.9976 | 6.0 | 3432 | 9.9539 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.8907 | 7.0 | 4004 | 9.9018 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.8424 | 8.0 | 4576 | 9.8536 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.8046 | 9.0 | 5148 | 9.8095 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.7581 | 10.0 | 5720 | 9.7693 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.7253 | 11.0 | 6292 | 9.7331 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.698 | 12.0 | 6864 | 9.7008 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.6611 | 13.0 | 7436 | 9.6723 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.6125 | 14.0 | 8008 | 9.6477 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.5928 | 15.0 | 8580 | 9.6269 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.5747 | 16.0 | 9152 | 9.6099 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.5613 | 17.0 | 9724 | 9.5966 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.5418 | 18.0 | 10296 | 9.5871 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.539 | 19.0 | 10868 | 9.5814 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
| 9.5366 | 20.0 | 11440 | 9.5795 | 0.0 | 0.0 | 0.0 | 0.0 | 2.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 3,468 | [
[
-0.043182373046875,
-0.04180908203125,
0.022247314453125,
-0.00045013427734375,
-0.009124755859375,
0.0009036064147949219,
-0.0002918243408203125,
-0.00988006591796875,
0.046783447265625,
0.027679443359375,
-0.051239013671875,
-0.050048828125,
-0.043975830078125... |
l3cube-pune/me-bert | 2023-07-22T08:24:54.000Z | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"mr",
"en",
"codemix",
"multilingual",
"dataset:L3Cube-MeCorpus",
"arxiv:2306.14030",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | l3cube-pune | null | null | l3cube-pune/me-bert | 0 | 2 | transformers | 2023-04-14T10:23:27 | ---
language:
- mr
- en
- multilingual
license: cc-by-4.0
tags:
- mr
- en
- codemix
datasets:
- L3Cube-MeCorpus
---
## MeBERT
MeBERT is a Marathi-English code-mixed BERT model trained on Roman text. It is a base BERT model fine-tuned on L3Cube-MeCorpus.
<br>
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2306.14030).
Other models from the MeBERT family: <br>
<a href="https://huggingface.co/l3cube-pune/me-bert"> MeBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-roberta"> MeRoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-bert-mixed"> MeBERT-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-bert-mixed-v2"> MeBERT-Mixed-v2 </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-roberta-mixed"> MeRoBERTa-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-lid-roberta"> MeLID-RoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-hate-roberta"> MeHate-RoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-sent-roberta"> MeSent-RoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-hate-bert"> MeHate-BERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-lid-bert"> MeLID-BERT </a> <br>
Citing:
```
@article{chavan2023my,
title={My Boli: Code-mixed Marathi-English Corpora, Pretrained Language Models and Evaluation Benchmarks},
author={Chavan, Tanmay and Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Joshi, Raviraj},
journal={arXiv preprint arXiv:2306.14030},
year={2023}
}
``` | 1,625 | [
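An illustrative fill-mask sketch (assumed usage; the Roman-script code-mixed example sentence is hypothetical):
```python
# Hedged sketch: take the mask token from the tokenizer and print top predictions.
from transformers import pipeline

fill = pipeline("fill-mask", model="l3cube-pune/me-bert")
sentence = f"mala he pustak khup {fill.tokenizer.mask_token} vatla"
for pred in fill(sentence)[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```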
[
-0.0275726318359375,
-0.049896240234375,
0.00470733642578125,
0.037506103515625,
-0.0212249755859375,
0.005252838134765625,
-0.00978851318359375,
-0.0216522216796875,
0.0306243896484375,
0.015655517578125,
-0.06243896484375,
-0.031951904296875,
-0.0379638671875,... |
gregorgabrovsek/SloBertAA_Top5_WithoutOOC_MultilingualBertBase | 2023-04-14T14:43:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/SloBertAA_Top5_WithoutOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-14T11:55:19 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SloBertAA_Top5_WithoutOOC_MultilingualBertBase_NEW
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SloBertAA_Top5_WithoutOOC_MultilingualBertBase_NEW
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4866
- Accuracy: 0.9224
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3099 | 1.0 | 8757 | 0.3085 | 0.8951 |
| 0.244 | 2.0 | 17514 | 0.2805 | 0.9144 |
| 0.1707 | 3.0 | 26271 | 0.3609 | 0.9130 |
| 0.1052 | 4.0 | 35028 | 0.4396 | 0.9207 |
| 0.0626 | 5.0 | 43785 | 0.4866 | 0.9224 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,669 | [
[
-0.033905029296875,
-0.035400390625,
0.00443267822265625,
0.022186279296875,
-0.0273590087890625,
-0.0239715576171875,
-0.0235137939453125,
-0.0245513916015625,
0.01849365234375,
0.0232086181640625,
-0.05364990234375,
-0.048095703125,
-0.048980712890625,
-0.... |
Entj/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-14T12:06:04.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Entj | null | null | Entj/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-14T12:05:30 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 276.50 +/- 97.06
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Entj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Entj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Entj
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 400000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,677 | [
[
-0.04150390625,
-0.036712646484375,
0.0212554931640625,
0.024169921875,
-0.00919342041015625,
-0.0174407958984375,
0.0124359130859375,
-0.0146331787109375,
0.01348876953125,
0.0249176025390625,
-0.0701904296875,
-0.0350341796875,
-0.026885986328125,
-0.00342... |
marcus2000/polish_transliterator_T5 | 2023-04-14T12:34:40.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | marcus2000 | null | null | marcus2000/polish_transliterator_T5 | 0 | 2 | transformers | 2023-04-14T12:12:49 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: polish_transliterator_T5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# polish_transliterator_T5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0705
- Rouge1: 15.1042
- Rouge2: 0.0
- Rougel: 15.1042
- Rougelsum: 15.625
- Gen Len: 4.0938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.0242 | 1.0 | 572 | 1.8076 | 3.5937 | 0.0 | 3.75 | 3.75 | 1.25 |
| 2.8296 | 2.0 | 1144 | 1.6997 | 4.6875 | 0.0 | 4.6875 | 4.6875 | 0.7031 |
| 2.4707 | 3.0 | 1716 | 1.5717 | 6.0417 | 0.0 | 6.25 | 6.3542 | 1.1719 |
| 2.4367 | 4.0 | 2288 | 1.4617 | 6.4062 | 0.0 | 6.875 | 6.875 | 0.9688 |
| 2.296 | 5.0 | 2860 | 1.3847 | 8.4375 | 0.0 | 8.125 | 8.4375 | 1.3906 |
| 2.0905 | 6.0 | 3432 | 1.3177 | 8.4375 | 0.0 | 8.125 | 8.4375 | 1.9688 |
| 1.8223 | 7.0 | 4004 | 1.2645 | 9.375 | 0.0 | 9.375 | 9.375 | 2.3125 |
| 1.6881 | 8.0 | 4576 | 1.2157 | 10.625 | 0.0 | 10.625 | 10.9375 | 2.7969 |
| 1.6655 | 9.0 | 5148 | 1.1841 | 12.5 | 0.0 | 12.2917 | 12.5 | 3.1562 |
| 1.5736 | 10.0 | 5720 | 1.1582 | 13.4896 | 0.0 | 13.3333 | 13.3333 | 3.25 |
| 1.4754 | 11.0 | 6292 | 1.1382 | 13.4896 | 0.0 | 13.3333 | 13.3333 | 3.6562 |
| 1.4927 | 12.0 | 6864 | 1.1176 | 13.4896 | 0.0 | 13.3333 | 13.3333 | 4.1406 |
| 1.3628 | 13.0 | 7436 | 1.1069 | 13.4896 | 0.0 | 13.3333 | 13.3333 | 4.1719 |
| 1.3288 | 14.0 | 8008 | 1.0968 | 13.4896 | 0.0 | 13.3333 | 13.3333 | 4.2344 |
| 1.313 | 15.0 | 8580 | 1.0889 | 14.7917 | 0.0 | 14.7917 | 15.1042 | 4.2188 |
| 1.3215 | 16.0 | 9152 | 1.0820 | 14.7917 | 0.0 | 14.7917 | 15.1042 | 4.2188 |
| 1.2772 | 17.0 | 9724 | 1.0771 | 14.7917 | 0.0 | 14.7917 | 15.1042 | 4.2188 |
| 1.1895 | 18.0 | 10296 | 1.0735 | 15.1042 | 0.0 | 15.1042 | 15.625 | 4.0938 |
| 1.3394 | 19.0 | 10868 | 1.0712 | 15.1042 | 0.0 | 15.1042 | 15.625 | 4.0938 |
| 1.2656 | 20.0 | 11440 | 1.0705 | 15.1042 | 0.0 | 15.1042 | 15.625 | 4.0938 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
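An inference sketch (assumed usage, not from the card; the input string is a placeholder, since the card does not document the expected input format):
```python
# Hedged sketch: run the fine-tuned T5 through the text2text-generation pipeline.
from transformers import pipeline

transliterator = pipeline("text2text-generation", model="marcus2000/polish_transliterator_T5")
print(transliterator("szczesliwy")[0]["generated_text"])
```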
| 3,506 | [
[
-0.044921875,
-0.0325927734375,
0.02117919921875,
0.006134033203125,
-0.00746917724609375,
-0.00438690185546875,
0.003452301025390625,
-0.005084991455078125,
0.0440673828125,
0.02276611328125,
-0.04949951171875,
-0.0533447265625,
-0.0478515625,
-0.0032730102... |
YaraKyrychenko/mdeberta-pov | 2023-04-14T14:00:20.000Z | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | YaraKyrychenko | null | null | YaraKyrychenko/mdeberta-pov | 0 | 2 | transformers | 2023-04-14T12:12:59 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: mdeberta-pov
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-pov
This model is a fine-tuned version of [MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2878
- Accuracy: 0.94
- F1: 0.9400
- Precision: 0.9400
- Recall: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2402
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3295 | 1.0 | 5437 | 0.2637 | 0.9165 | 0.9164 | 0.9183 | 0.9165 |
| 0.2735 | 2.0 | 10874 | 0.2912 | 0.9285 | 0.9285 | 0.9285 | 0.9285 |
| 0.1949 | 3.0 | 16311 | 0.3108 | 0.935 | 0.9350 | 0.9351 | 0.935 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 1,758 | [
[
-0.0355224609375,
-0.036224365234375,
0.016510009765625,
0.01512908935546875,
-0.025909423828125,
-0.0218048095703125,
-0.00803375244140625,
-0.0146484375,
0.0224456787109375,
0.026702880859375,
-0.053314208984375,
-0.050567626953125,
-0.0455322265625,
-0.00... |
jojo0616/my_SA_distilbert_model | 2023-05-13T22:55:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jojo0616 | null | null | jojo0616/my_SA_distilbert_model | 0 | 2 | transformers | 2023-04-14T12:24:21 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_SA_distilbert_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_SA_distilbert_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4408
- Accuracy: 0.9166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4079 | 1.0 | 1124 | 0.3399 | 0.8832 |
| 0.2688 | 2.0 | 2248 | 0.3037 | 0.9037 |
| 0.1868 | 3.0 | 3372 | 0.2777 | 0.9135 |
| 0.1476 | 4.0 | 4496 | 0.2797 | 0.9186 |
| 0.1188 | 5.0 | 5620 | 0.3400 | 0.9157 |
| 0.0934 | 6.0 | 6744 | 0.3471 | 0.9148 |
| 0.0779 | 7.0 | 7868 | 0.3694 | 0.9201 |
| 0.0584 | 8.0 | 8992 | 0.4350 | 0.9081 |
| 0.0499 | 9.0 | 10116 | 0.4336 | 0.9146 |
| 0.0405 | 10.0 | 11240 | 0.4408 | 0.9166 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,919 | [
[
-0.03375244140625,
-0.041107177734375,
0.01259613037109375,
0.012115478515625,
-0.020965576171875,
-0.0201416015625,
-0.0002658367156982422,
-0.0042572021484375,
0.013763427734375,
0.0156402587890625,
-0.050445556640625,
-0.048187255859375,
-0.0599365234375,
... |
GhifSmile/distilbert-base-uncased-PINA-dfnew-2 | 2023-04-14T16:03:13.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | GhifSmile | null | null | GhifSmile/distilbert-base-uncased-PINA-dfnew-2 | 0 | 2 | transformers | 2023-04-14T13:44:38 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: distilbert-base-uncased-PINA-dfnew-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-PINA-dfnew-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3815
- Accuracy: 0.9106
- Precision: 0.7799
- Recall: 0.7804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|
| 1.5008 | 1.0 | 1002 | 0.6541 | 0.8482 | 0.6999 | 0.6173 |
| 0.4599 | 2.0 | 2004 | 0.4240 | 0.9004 | 0.7739 | 0.7641 |
| 0.2458 | 3.0 | 3006 | 0.3815 | 0.9106 | 0.7799 | 0.7804 |
| 0.1549 | 4.0 | 4008 | 0.3817 | 0.9206 | 0.8114 | 0.8064 |
| 0.0977 | 5.0 | 5010 | 0.4187 | 0.9194 | 0.8118 | 0.8031 |
| 0.0662 | 6.0 | 6012 | 0.4207 | 0.9213 | 0.8109 | 0.8085 |
| 0.0454 | 7.0 | 7014 | 0.4361 | 0.9226 | 0.8276 | 0.8199 |
| 0.0314 | 8.0 | 8016 | 0.4562 | 0.9233 | 0.8288 | 0.8209 |
| 0.023 | 9.0 | 9018 | 0.4657 | 0.9221 | 0.8272 | 0.8192 |
| 0.0185 | 10.0 | 10020 | 0.4620 | 0.9226 | 0.8278 | 0.8191 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,257 | [
[
-0.036468505859375,
-0.041015625,
0.0188446044921875,
0.01308441162109375,
-0.01678466796875,
-0.012115478515625,
0.0017719268798828125,
-0.00408935546875,
0.023956298828125,
0.0178985595703125,
-0.046417236328125,
-0.04840087890625,
-0.055450439453125,
-0.0... |
mlewand/distilbert-base-uncased-finetuned-emotion | 2023-04-14T14:54:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | mlewand | null | null | mlewand/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-14T14:27:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9236455088643882
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2150
- Accuracy: 0.9235
- F1: 0.9236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8249 | 1.0 | 250 | 0.3181 | 0.9035 | 0.8994 |
| 0.2452 | 2.0 | 500 | 0.2150 | 0.9235 | 0.9236 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.037628173828125,
-0.041351318359375,
0.0140380859375,
0.021636962890625,
-0.02557373046875,
-0.018951416015625,
-0.0127716064453125,
-0.00878143310546875,
0.010467529296875,
0.00811004638671875,
-0.056243896484375,
-0.051727294921875,
-0.060211181640625,
... |
Humberto/MedicalArticlesClassificationModel | 2023-04-17T13:54:56.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Humberto | null | null | Humberto/MedicalArticlesClassificationModel | 0 | 2 | transformers | 2023-04-14T14:32:52 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Humberto/MedicalArticlesClassificationModel
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Humberto/MedicalArticlesClassificationModel
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6969
- Validation Loss: 1.6957
- Train Accuracy: 0.3521
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 600, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.6982 | 1.6957 | 0.3521 | 0 |
| 1.6999 | 1.6957 | 0.3521 | 1 |
| 1.6969 | 1.6957 | 0.3521 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
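A usage sketch under assumptions (not from the card): the checkpoint is tagged `tf`, so the TensorFlow classes are used, and the example abstract snippet is hypothetical.
```python
# Hedged sketch: score a medical-article snippet and print the top label.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

name = "Humberto/MedicalArticlesClassificationModel"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Patients received 50 mg of the study drug daily.", return_tensors="tf")
logits = model(**inputs).logits
print(model.config.id2label[int(tf.argmax(logits, axis=-1)[0])])
```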
| 1,864 | [
[
-0.036895751953125,
-0.035186767578125,
0.02362060546875,
0.001628875732421875,
-0.0272979736328125,
-0.0183258056640625,
-0.00913238525390625,
-0.01209259033203125,
0.00498199462890625,
0.0019512176513671875,
-0.047271728515625,
-0.05078125,
-0.06280517578125,
... |
ce-lery/distilbert-base-uncased-finetuned-emotion | 2023-04-14T22:17:24.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ce-lery | null | null | ce-lery/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-14T15:26:35 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2110
- Accuracy: 0.927
- F1: 0.9274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.797 | 1.0 | 250 | 0.3013 | 0.9055 | 0.9032 |
| 0.2389 | 2.0 | 500 | 0.2110 | 0.927 | 0.9274 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,503 | [
[
-0.03759765625,
-0.044219970703125,
0.0165863037109375,
0.0255889892578125,
-0.0276336669921875,
-0.0193023681640625,
-0.0142669677734375,
-0.00547027587890625,
0.00936126708984375,
0.006900787353515625,
-0.056121826171875,
-0.050323486328125,
-0.06292724609375,... |
zlsl/ru_startrek | 2023-08-11T14:01:20.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"star trek",
"startrek",
"ru",
"license:gpl-3.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | zlsl | null | null | zlsl/ru_startrek | 1 | 2 | transformers | 2023-04-14T15:47:53 | ---
license: gpl-3.0
language:
- ru
library_name: transformers
tags:
- star trek
- startrek
pipeline_tag: text-generation
---
A model trained on Star Trek books.
## For text-generation-webui users
The tool's handling of GPT-2, GPT-J, GPT-NEO, and similar models is broken: the tokenizer is loaded incorrectly.
The error looks like this:<br>
>eos_token_id = eos_token_id[0]
>IndexError: list index out of range
It is easy to fix: in the file modules/models.py, in the load_tokenizer() function, add the line<br>
<code>tokenizer.eos_token_id = 2</code><br>
before<br>
<code>return tokenizer</code>
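A generation sketch (an assumption, not part of the card; the Russian prompt is a placeholder):
```python
# Hedged sketch: standard transformers text-generation usage for this GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="zlsl/ru_startrek")
print(generator("Капитан посмотрел на главный экран", max_new_tokens=60)[0]["generated_text"])
```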
| 591 | [
[
-0.017425537109375,
-0.05072021484375,
0.02081298828125,
0.006206512451171875,
-0.036346435546875,
0.016510009765625,
0.0263519287109375,
0.00457000732421875,
0.0237579345703125,
-0.0002589225769042969,
-0.05743408203125,
-0.0201873779296875,
-0.035736083984375,... |
abominic/emotions-classifier | 2023-04-19T16:03:58.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:unknown",
"endpoints_compatible",
"region:us"
] | text-classification | abominic | null | null | abominic/emotions-classifier | 0 | 2 | transformers | 2023-04-14T15:53:18 | ---
license: unknown
---
A simple BERT-based classifier for emotions, trained on the go_emotions dataset for my coursework. It classifies only the following emotions:
```
[
"admiration",
"anger",
"approval",
"caring",
"confusion",
"curiosity",
"desire",
"disappointment",
"excitement",
"fear",
"gratitude",
"love",
"sadness"
]
```
https://huggingface.co/datasets/go_emotions | 397 | [
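A hypothetical scoring sketch (not stated in the card; the input sentence is illustrative):
```python
# Hedged sketch: classify one sentence against the emotions listed above.
from transformers import pipeline

classifier = pipeline("text-classification", model="abominic/emotions-classifier")
print(classifier("Thank you so much, this made my day!"))
```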
[
-0.0309906005859375,
-0.03277587890625,
0.027069091796875,
0.030120849609375,
-0.022308349609375,
-0.0125885009765625,
-0.0184326171875,
-0.01517486572265625,
0.0292510986328125,
-0.0167999267578125,
-0.059112548828125,
-0.0377197265625,
-0.03277587890625,
-... |
plgrm720/tokipona_to_eng_model_v0.1 | 2023-04-14T16:40:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | plgrm720 | null | null | plgrm720/tokipona_to_eng_model_v0.1 | 0 | 2 | transformers | 2023-04-14T16:31:03 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: tokipona_to_eng_model_v0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tokipona_to_eng_model_v0.1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0757
- Bleu: 2.1864
- Gen Len: 11.867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 55 | 3.7998 | 1.6168 | 13.0394 |
| No log | 2.0 | 110 | 3.6119 | 0.7534 | 13.2315 |
| No log | 3.0 | 165 | 3.6447 | 0.6867 | 13.0443 |
| No log | 4.0 | 220 | 3.7115 | 1.0019 | 12.0148 |
| No log | 5.0 | 275 | 3.8782 | 1.3715 | 13.2217 |
| No log | 6.0 | 330 | 4.0107 | 1.7444 | 11.266 |
| No log | 7.0 | 385 | 4.1611 | 2.7707 | 11.665 |
| No log | 8.0 | 440 | 4.3828 | 3.0123 | 12.0985 |
| No log | 9.0 | 495 | 4.5123 | 3.0296 | 12.6502 |
| 2.3706 | 10.0 | 550 | 4.6470 | 2.3476 | 11.8768 |
| 2.3706 | 11.0 | 605 | 4.8186 | 2.0611 | 12.1182 |
| 2.3706 | 12.0 | 660 | 4.8997 | 2.173 | 11.6995 |
| 2.3706 | 13.0 | 715 | 4.9742 | 2.2424 | 12.1576 |
| 2.3706 | 14.0 | 770 | 5.0570 | 2.0142 | 12.2611 |
| 2.3706 | 15.0 | 825 | 5.0757 | 2.1864 | 11.867 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
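A hedged usage sketch (assumed, not from the card; the Toki Pona input is an illustrative placeholder):
```python
# Hedged sketch: greedy decoding with the seq2seq API.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "plgrm720/tokipona_to_eng_model_v0.1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("mi moku e kili", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```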
| 2,382 | [
[
-0.035400390625,
-0.032867431640625,
0.017669677734375,
0.0128936767578125,
-0.015625,
-0.0173492431640625,
-0.008453369140625,
-0.0116119384765625,
0.0289154052734375,
0.0232086181640625,
-0.04962158203125,
-0.0552978515625,
-0.057525634765625,
-0.004631042... |
gregorgabrovsek/SloBertAA_Top5_WithOOC_MultilingualBertBase | 2023-04-14T21:19:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/SloBertAA_Top5_WithOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-14T17:54:59 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SloBertAA_Top5_WithOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SloBertAA_Top5_WithOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7483
- Accuracy: 0.8641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4649 | 1.0 | 10508 | 0.4611 | 0.8344 |
| 0.3569 | 2.0 | 21016 | 0.4765 | 0.8464 |
| 0.2884 | 3.0 | 31524 | 0.5055 | 0.8533 |
| 0.1983 | 4.0 | 42032 | 0.5998 | 0.8616 |
| 0.1363 | 5.0 | 52540 | 0.7483 | 0.8641 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,655 | [
[
-0.031768798828125,
-0.03582763671875,
0.0052490234375,
0.02203369140625,
-0.0267181396484375,
-0.025726318359375,
-0.0235595703125,
-0.023681640625,
0.01540374755859375,
0.0235443115234375,
-0.054534912109375,
-0.04901123046875,
-0.04766845703125,
-0.016525... |
nmb-paperspace-hf/bert-base-uncased-go_emotions | 2023-04-14T18:08:46.000Z | [
"transformers",
"pytorch",
"optimum_graphcore",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | nmb-paperspace-hf | null | null | nmb-paperspace-hf/bert-base-uncased-go_emotions | 0 | 2 | transformers | 2023-04-14T17:58:52 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-go_emotions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-go_emotions
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1095
- Roc Auc: 0.8084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 39
- total_train_batch_size: 2496
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cpu
- Datasets 2.11.0
- Tokenizers 0.13.3
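A multi-label scoring sketch (an assumption: a per-label sigmoid is applied, consistent with the ROC-AUC metric reported above; the input sentence is a placeholder):
```python
# Hedged sketch: print every emotion whose sigmoid probability exceeds 0.5.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "nmb-paperspace-hf/bert-base-uncased-go_emotions"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("I can't believe how great this is!", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
for i, p in enumerate(probs):
    if p > 0.5:
        print(model.config.id2label[i], float(p))
```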
| 1,332 | [
[
-0.03912353515625,
-0.039306640625,
0.0102386474609375,
0.022918701171875,
-0.03839111328125,
-0.0300750732421875,
-0.0294036865234375,
-0.0121917724609375,
0.01580810546875,
0.01534271240234375,
-0.06396484375,
-0.041473388671875,
-0.049591064453125,
-0.019... |
aellxx/disaster-tweet-distilbert | 2023-04-14T18:43:02.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | aellxx | null | null | aellxx/disaster-tweet-distilbert | 0 | 2 | transformers | 2023-04-14T18:37:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: disaster-tweet-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# disaster-tweet-distilbert
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- num_warmup_steps: 10%
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2877 | 0.12 | 12 | 2.3051 |
| 2.1129 | 0.25 | 24 | 2.2778 |
| 2.2514 | 0.38 | 36 | 2.2299 |
| 2.2691 | 0.5 | 48 | 2.1606 |
| 2.1401 | 0.62 | 60 | 2.0706 |
| 2.075 | 0.75 | 72 | 1.9672 |
| 1.8594 | 0.88 | 84 | 1.8498 |
| 1.7927 | 1.0 | 96 | 1.7257 |
| 1.5639 | 1.12 | 108 | 1.6010 |
| 1.6001 | 1.25 | 120 | 1.4670 |
| 1.4207 | 1.38 | 132 | 1.3314 |
| 1.3183 | 1.5 | 144 | 1.1993 |
| 1.0767 | 1.62 | 156 | 1.0798 |
| 0.9672 | 1.75 | 168 | 0.9742 |
| 0.9523 | 1.88 | 180 | 0.8821 |
| 0.813 | 2.0 | 192 | 0.8027 |
| 0.7004 | 2.12 | 204 | 0.7424 |
| 0.7044 | 2.25 | 216 | 0.6904 |
| 0.6218 | 2.38 | 228 | 0.6495 |
| 0.6472 | 2.5 | 240 | 0.6158 |
| 0.5585 | 2.62 | 252 | 0.5896 |
| 0.5613 | 2.75 | 264 | 0.5685 |
| 0.5911 | 2.88 | 276 | 0.5499 |
| 0.5062 | 3.0 | 288 | 0.5357 |
| 0.4806 | 3.12 | 300 | 0.5257 |
| 0.4862 | 3.25 | 312 | 0.5091 |
| 0.4433 | 3.38 | 324 | 0.4997 |
| 0.486 | 3.5 | 336 | 0.4892 |
| 0.4746 | 3.62 | 348 | 0.4802 |
| 0.4317 | 3.75 | 360 | 0.4759 |
| 0.4874 | 3.88 | 372 | 0.4670 |
| 0.4411 | 4.0 | 384 | 0.4605 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 2,928 | [
[
-0.04022216796875,
-0.0380859375,
0.014373779296875,
0.0078277587890625,
-0.01532745361328125,
-0.01108551025390625,
0.0024929046630859375,
-0.003116607666015625,
0.0264892578125,
0.0215911865234375,
-0.05194091796875,
-0.046630859375,
-0.049835205078125,
-0... |
nmb-paperspace-hf/bert-base-uncased-sst2 | 2023-04-14T18:52:36.000Z | [
"transformers",
"pytorch",
"optimum_graphcore",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | nmb-paperspace-hf | null | null | nmb-paperspace-hf/bert-base-uncased-sst2 | 0 | 2 | transformers | 2023-04-14T18:43:38 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2139
- Accuracy: 0.9282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 2048
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cpu
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,337 | [
[
-0.0219573974609375,
-0.04254150390625,
0.010833740234375,
0.015869140625,
-0.045135498046875,
-0.0198516845703125,
-0.0258331298828125,
-0.014556884765625,
0.0074920654296875,
0.0227203369140625,
-0.048492431640625,
-0.0308990478515625,
-0.053314208984375,
... |
aellxx/disaster-tweet-distilbert-1 | 2023-04-14T18:49:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | aellxx | null | null | aellxx/disaster-tweet-distilbert-1 | 0 | 2 | transformers | 2023-04-14T18:44:22 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: disaster-tweet-distilbert-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# disaster-tweet-distilbert-1
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- num_warmup_steps: 20%
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2888 | 0.12 | 12 | 2.3097 |
| 2.122 | 0.25 | 24 | 2.2960 |
| 2.2792 | 0.38 | 36 | 2.2720 |
| 2.3256 | 0.5 | 48 | 2.2368 |
| 2.2339 | 0.62 | 60 | 2.1904 |
| 2.2148 | 0.75 | 72 | 2.1360 |
| 2.0433 | 0.88 | 84 | 2.0723 |
| 2.0242 | 1.0 | 96 | 2.0028 |
| 1.8316 | 1.12 | 108 | 1.9302 |
| 1.9608 | 1.25 | 120 | 1.8508 |
| 1.8278 | 1.38 | 132 | 1.7635 |
| 1.7828 | 1.5 | 144 | 1.6711 |
| 1.5522 | 1.62 | 156 | 1.5747 |
| 1.474 | 1.75 | 168 | 1.4774 |
| 1.4762 | 1.88 | 180 | 1.3790 |
| 1.3439 | 2.0 | 192 | 1.2820 |
| 1.1465 | 2.12 | 204 | 1.1896 |
| 1.1755 | 2.25 | 216 | 1.1024 |
| 1.0085 | 2.38 | 228 | 1.0212 |
| 1.0492 | 2.5 | 240 | 0.9492 |
| 0.8642 | 2.62 | 252 | 0.8858 |
| 0.8554 | 2.75 | 264 | 0.8291 |
| 0.8534 | 2.88 | 276 | 0.7792 |
| 0.7013 | 3.0 | 288 | 0.7364 |
| 0.6414 | 3.12 | 300 | 0.7023 |
| 0.681 | 3.25 | 312 | 0.6707 |
| 0.6045 | 3.38 | 324 | 0.6441 |
| 0.6374 | 3.5 | 336 | 0.6193 |
| 0.6192 | 3.62 | 348 | 0.5988 |
| 0.5478 | 3.75 | 360 | 0.5831 |
| 0.5891 | 3.88 | 372 | 0.5693 |
| 0.5411 | 4.0 | 384 | 0.5571 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 2,933 | [
[
-0.043487548828125,
-0.038604736328125,
0.01372528076171875,
0.01140594482421875,
-0.01293182373046875,
-0.00917816162109375,
0.004486083984375,
0.0001323223114013672,
0.0296630859375,
0.0201873779296875,
-0.05303955078125,
-0.043914794921875,
-0.050537109375,
... |
aellxx/disaster-tweet-distilbert-2 | 2023-04-14T18:53:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | aellxx | null | null | aellxx/disaster-tweet-distilbert-2 | 0 | 2 | transformers | 2023-04-14T18:50:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: disaster-tweet-distilbert-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# disaster-tweet-distilbert-2
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2881 | 0.12 | 12 | 2.3069 |
| 2.1166 | 0.25 | 24 | 2.2851 |
| 2.2625 | 0.38 | 36 | 2.2467 |
| 2.2916 | 0.5 | 48 | 2.1909 |
| 2.1774 | 0.62 | 60 | 2.1179 |
| 2.1302 | 0.75 | 72 | 2.0334 |
| 1.9316 | 0.88 | 84 | 1.9362 |
| 1.8821 | 1.0 | 96 | 1.8319 |
| 1.6665 | 1.12 | 108 | 1.7256 |
| 1.7373 | 1.25 | 120 | 1.6102 |
| 1.5704 | 1.38 | 132 | 1.4889 |
| 1.4871 | 1.5 | 144 | 1.3655 |
| 1.2415 | 1.62 | 156 | 1.2460 |
| 1.1341 | 1.75 | 168 | 1.1346 |
| 1.1123 | 1.88 | 180 | 1.0317 |
| 0.9702 | 2.0 | 192 | 0.9399 |
| 0.8219 | 2.12 | 204 | 0.8627 |
| 0.8248 | 2.25 | 216 | 0.7949 |
| 0.7126 | 2.38 | 228 | 0.7394 |
| 0.7492 | 2.5 | 240 | 0.6915 |
| 0.6238 | 2.62 | 252 | 0.6527 |
| 0.62 | 2.75 | 264 | 0.6227 |
| 0.6443 | 2.88 | 276 | 0.5977 |
| 0.5504 | 3.0 | 288 | 0.5793 |
| 0.5225 | 3.12 | 300 | 0.5645 |
| 0.5326 | 3.25 | 312 | 0.5481 |
| 0.4844 | 3.38 | 324 | 0.5348 |
| 0.5218 | 3.5 | 336 | 0.5215 |
| 0.512 | 3.62 | 348 | 0.5097 |
| 0.4597 | 3.75 | 360 | 0.5010 |
| 0.5123 | 3.88 | 372 | 0.4917 |
| 0.4667 | 4.0 | 384 | 0.4834 |
| 0.4087 | 4.12 | 396 | 0.4768 |
| 0.4872 | 4.25 | 408 | 0.4704 |
| 0.4242 | 4.38 | 420 | 0.4678 |
| 0.442 | 4.5 | 432 | 0.4625 |
| 0.433 | 4.62 | 444 | 0.4577 |
| 0.4226 | 4.75 | 456 | 0.4538 |
| 0.411 | 4.88 | 468 | 0.4498 |
| 0.4003 | 5.0 | 480 | 0.4469 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 3,317 | [
[
-0.043304443359375,
-0.0406494140625,
0.01470947265625,
0.00972747802734375,
-0.00862884521484375,
-0.0027408599853515625,
0.005828857421875,
-0.0006461143493652344,
0.03631591796875,
0.0220184326171875,
-0.051513671875,
-0.043365478515625,
-0.049713134765625,
... |
aellxx/disaster-tweet-distilbert-3 | 2023-04-14T19:04:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | aellxx | null | null | aellxx/disaster-tweet-distilbert-3 | 0 | 2 | transformers | 2023-04-14T18:55:18 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: disaster-tweet-distilbert-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# disaster-tweet-distilbert-3
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2875 | 0.12 | 12 | 2.3044 |
| 2.1117 | 0.25 | 24 | 2.2753 |
| 2.2477 | 0.38 | 36 | 2.2243 |
| 2.2616 | 0.5 | 48 | 2.1505 |
| 2.1278 | 0.62 | 60 | 2.0550 |
| 2.0568 | 0.75 | 72 | 1.9456 |
| 1.8358 | 0.88 | 84 | 1.8217 |
| 1.7638 | 1.0 | 96 | 1.6916 |
| 1.531 | 1.12 | 108 | 1.5616 |
| 1.5567 | 1.25 | 120 | 1.4224 |
| 1.3747 | 1.38 | 132 | 1.2834 |
| 1.2675 | 1.5 | 144 | 1.1505 |
| 1.0291 | 1.62 | 156 | 1.0330 |
| 0.9212 | 1.75 | 168 | 0.9307 |
| 0.91 | 1.88 | 180 | 0.8426 |
| 0.7726 | 2.0 | 192 | 0.7676 |
| 0.671 | 2.12 | 204 | 0.7126 |
| 0.6759 | 2.25 | 216 | 0.6655 |
| 0.6012 | 2.38 | 228 | 0.6287 |
| 0.6228 | 2.5 | 240 | 0.5989 |
| 0.5432 | 2.62 | 252 | 0.5753 |
| 0.5475 | 2.75 | 264 | 0.5555 |
| 0.5788 | 2.88 | 276 | 0.5381 |
| 0.4944 | 3.0 | 288 | 0.5245 |
| 0.4692 | 3.12 | 300 | 0.5158 |
| 0.4743 | 3.25 | 312 | 0.4995 |
| 0.4333 | 3.38 | 324 | 0.4912 |
| 0.4768 | 3.5 | 336 | 0.4813 |
| 0.4653 | 3.62 | 348 | 0.4730 |
| 0.4249 | 3.75 | 360 | 0.4701 |
| 0.4815 | 3.88 | 372 | 0.4613 |
| 0.4349 | 4.0 | 384 | 0.4552 |
| 0.3723 | 4.12 | 396 | 0.4509 |
| 0.456 | 4.25 | 408 | 0.4469 |
| 0.3988 | 4.38 | 420 | 0.4458 |
| 0.4142 | 4.5 | 432 | 0.4456 |
| 0.4008 | 4.62 | 444 | 0.4385 |
| 0.3943 | 4.75 | 456 | 0.4376 |
| 0.3862 | 4.88 | 468 | 0.4348 |
| 0.3778 | 5.0 | 480 | 0.4337 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 3,317 | [
[
-0.043792724609375,
-0.04046630859375,
0.0145721435546875,
0.008636474609375,
-0.0103607177734375,
-0.0028476715087890625,
0.00435638427734375,
-0.0014142990112304688,
0.036529541015625,
0.0225067138671875,
-0.052032470703125,
-0.044891357421875,
-0.049285888671... |
wiorz/bert_legal_test_sm_gen_1 | 2023-04-14T23:06:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | wiorz | null | null | wiorz/bert_legal_test_sm_gen_1 | 0 | 2 | transformers | 2023-04-14T18:58:05 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert_legal_test_sm_gen_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_legal_test_sm_gen_1
This model is a fine-tuned version of [wiorz/bert_legal_test_sm](https://huggingface.co/wiorz/bert_legal_test_sm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4297
- Accuracy: 0.7992
- Precision: 0.4576
- Recall: 0.2687
- F1: 0.3386
- D-index: 1.5225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 0.99 | 65 | 0.4457 | 0.8002 | 0.4554 | 0.2289 | 0.3046 | 1.5100 |
| No log | 1.99 | 131 | 0.4289 | 0.7973 | 0.4444 | 0.2388 | 0.3107 | 1.5095 |
| No log | 3.0 | 197 | 0.5157 | 0.7555 | 0.4034 | 0.5821 | 0.4766 | 1.5683 |
| No log | 4.0 | 263 | 0.6436 | 0.7983 | 0.4433 | 0.2139 | 0.2886 | 1.5022 |
| No log | 4.99 | 328 | 0.6772 | 0.8021 | 0.4598 | 0.1990 | 0.2778 | 1.5021 |
| No log | 5.99 | 394 | 0.7292 | 0.8078 | 0.4964 | 0.3383 | 0.4024 | 1.5578 |
| No log | 7.0 | 460 | 0.9566 | 0.8021 | 0.4755 | 0.3383 | 0.3953 | 1.5501 |
| 0.2346 | 8.0 | 526 | 1.0280 | 0.8002 | 0.4651 | 0.2985 | 0.3636 | 1.5340 |
| 0.2346 | 8.99 | 591 | 1.0350 | 0.7840 | 0.4330 | 0.4179 | 0.4253 | 1.5526 |
| 0.2346 | 9.99 | 657 | 1.2664 | 0.8002 | 0.4444 | 0.1791 | 0.2553 | 1.4925 |
| 0.2346 | 11.0 | 723 | 1.2846 | 0.7812 | 0.4040 | 0.3035 | 0.3466 | 1.5098 |
| 0.2346 | 12.0 | 789 | 1.2157 | 0.7897 | 0.4351 | 0.3333 | 0.3775 | 1.5317 |
| 0.2346 | 12.99 | 854 | 1.3208 | 0.8030 | 0.4688 | 0.2239 | 0.3030 | 1.5121 |
| 0.2346 | 13.99 | 920 | 1.3100 | 0.7783 | 0.4101 | 0.3632 | 0.3852 | 1.5263 |
| 0.2346 | 15.0 | 986 | 1.2587 | 0.8154 | 0.5347 | 0.2687 | 0.3576 | 1.5444 |
| 0.0277 | 16.0 | 1052 | 1.3552 | 0.7878 | 0.4304 | 0.3383 | 0.3788 | 1.5308 |
| 0.0277 | 16.99 | 1117 | 1.3783 | 0.8059 | 0.4872 | 0.2836 | 0.3585 | 1.5366 |
| 0.0277 | 17.99 | 1183 | 1.4071 | 0.7907 | 0.4336 | 0.3085 | 0.3605 | 1.5245 |
| 0.0277 | 19.0 | 1249 | 1.4283 | 0.8011 | 0.4655 | 0.2687 | 0.3407 | 1.5251 |
| 0.0277 | 19.77 | 1300 | 1.4297 | 0.7992 | 0.4576 | 0.2687 | 0.3386 | 1.5225 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 3,601 | [
[
-0.04345703125,
-0.03900146484375,
0.0178375244140625,
0.006618499755859375,
-0.007564544677734375,
-0.0148162841796875,
0.0008416175842285156,
-0.01102447509765625,
0.038482666015625,
0.0251312255859375,
-0.04290771484375,
-0.055084228515625,
-0.04571533203125,... |
gregorgabrovsek/SloBertAA_Top50_WithOOC_MultilingualBertBase | 2023-04-15T05:53:47.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/SloBertAA_Top50_WithOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-14T19:50:51 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SloBertAA_Top50_WithOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SloBertAA_Top50_WithOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0406
- Accuracy: 0.7569
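A minimal inference sketch for this authorship-attribution classifier (the Slovenian input sentence is illustrative; the predicted index is an author class whose mapping is not documented in this card):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "gregorgabrovsek/SloBertAA_Top50_WithOOC_MultilingualBertBase"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Primer slovenskega besedila za atribucijo avtorstva.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_author = logits.argmax(dim=-1).item()  # index of the most likely author class
print(predicted_author)
```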
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.213 | 1.0 | 33346 | 1.1647 | 0.6803 |
| 0.9296 | 2.0 | 66692 | 1.0262 | 0.7193 |
| 0.7307 | 3.0 | 100038 | 0.9623 | 0.7448 |
| 0.5166 | 4.0 | 133384 | 0.9772 | 0.7534 |
| 0.3817 | 5.0 | 166730 | 1.0406 | 0.7569 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,664 | [
[
-0.03350830078125,
-0.03497314453125,
0.0039215087890625,
0.0240631103515625,
-0.0238037109375,
-0.022979736328125,
-0.0232391357421875,
-0.0231781005859375,
0.0172882080078125,
0.0241241455078125,
-0.054168701171875,
-0.047210693359375,
-0.04913330078125,
-... |
gregorgabrovsek/SloBertAA_Top50_WithoutOOC_MultilingualBertBase | 2023-04-15T05:49:24.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/SloBertAA_Top50_WithoutOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-14T20:02:32 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SloBertAA_Top50_WithoutOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SloBertAA_Top50_WithoutOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9867
- Accuracy: 0.7690
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.1549 | 1.0 | 32692 | 1.1139 | 0.6885 |
| 0.9075 | 2.0 | 65384 | 0.9769 | 0.7307 |
| 0.6662 | 3.0 | 98076 | 0.9210 | 0.7531 |
| 0.5019 | 4.0 | 130768 | 0.9354 | 0.7648 |
| 0.3155 | 5.0 | 163460 | 0.9867 | 0.7690 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,670 | [
[
-0.033416748046875,
-0.033203125,
0.005916595458984375,
0.02252197265625,
-0.0237274169921875,
-0.02447509765625,
-0.022979736328125,
-0.0241851806640625,
0.0205230712890625,
0.0250701904296875,
-0.055145263671875,
-0.04718017578125,
-0.04833984375,
-0.01895... |
madmancity/loadedbert3 | 2023-04-14T22:37:55.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:madmancity/autotrain-data-loadedbert3",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | madmancity | null | null | madmancity/loadedbert3 | 0 | 2 | transformers | 2023-04-14T22:37:11 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- madmancity/autotrain-data-loadedbert3
co2_eq_emissions:
emissions: 0.0015487580052783714
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 49590119598
- CO2 Emissions (in grams): 0.0015
## Validation Metrics
- Loss: 0.247
- Accuracy: 0.900
- Precision: 0.917
- Recall: 0.880
- AUC: 0.957
- F1: 0.898
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/madmancity/autotrain-loadedbert3-49590119598
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("madmancity/autotrain-loadedbert3-49590119598", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("madmancity/autotrain-loadedbert3-49590119598", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
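# The outputs above are raw logits; a softmax turns them into class
# probabilities (sketch; torch is pulled in by the transformers PyTorch backend):
import torch
probs = torch.softmax(outputs.logits, dim=-1)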
``` | 1,153 | [
[
-0.0308685302734375,
-0.022796630859375,
0.017913818359375,
0.01058197021484375,
0.0012722015380859375,
0.004131317138671875,
0.004825592041015625,
-0.0099945068359375,
-0.0016870498657226562,
0.0132904052734375,
-0.058258056640625,
-0.03448486328125,
-0.0550231... |
madmancity/loadedbert4 | 2023-04-14T22:52:18.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:madmancity/autotrain-data-loadedbert4",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | madmancity | null | null | madmancity/loadedbert4 | 0 | 2 | transformers | 2023-04-14T22:51:32 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- madmancity/autotrain-data-loadedbert4
co2_eq_emissions:
emissions: 0.2834216781837445
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 49596119602
- CO2 Emissions (in grams): 0.2834
## Validation Metrics
- Loss: 0.432
- Accuracy: 0.840
- Precision: 0.905
- Recall: 0.760
- AUC: 0.901
- F1: 0.826
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/madmancity/autotrain-loadedbert4-49596119602
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("madmancity/autotrain-loadedbert4-49596119602", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("madmancity/autotrain-loadedbert4-49596119602", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,150 | [
[
-0.0306549072265625,
-0.0226593017578125,
0.0184326171875,
0.01105499267578125,
0.00023734569549560547,
0.00476837158203125,
0.004241943359375,
-0.01059722900390625,
-0.0034503936767578125,
0.01325225830078125,
-0.057891845703125,
-0.03497314453125,
-0.055511474... |
DunnBC22/codet5-base-Generate_Docstrings_for_Python-Condensed | 2023-05-12T00:55:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"en",
"dataset:calum/the-stack-smol-python-docstrings",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | DunnBC22 | null | null | DunnBC22/codet5-base-Generate_Docstrings_for_Python-Condensed | 1 | 2 | transformers | 2023-04-15T00:18:25 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: codet5-base-Generate_Docstrings_for_Python-Condensed
results: []
datasets:
- calum/the-stack-smol-python-docstrings
language:
- en
pipeline_tag: text2text-generation
---
# codet5-base-Generate_Docstrings_for_Python-Condensed
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6199
- Rouge1: 0.5017
- Rouge2: 0.374
- Rougel: 0.4866
- Rougelsum: 0.4864
- Gen Len: 13.8909
## Model description
This model predicts the docstring (the output) for a function (the input).
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Generate%20Docstrings/Smol%20Dataset/Code_T5_Project-Base%20Checkpoint.ipynb
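A minimal inference sketch for docstring generation (the example function and generation settings are assumptions, not taken from the linked notebook):

```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="DunnBC22/codet5-base-Generate_Docstrings_for_Python-Condensed",
)

function_code = "def add(a, b):\n    return a + b"  # assumed example input
print(generator(function_code, max_length=64)[0]["generated_text"])
```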
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: calum/the-stack-smol-python-docstrings (from HuggingFace Datasets; https://huggingface.co/datasets/calum/the-stack-smol-python-docstrings)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.8261 | 1.0 | 921 | 0.6435 | 0.4947 | 0.3661 | 0.4794 | 0.4791 | 13.7526 |
| 0.6234 | 2.0 | 1842 | 0.6199 | 0.5017 | 0.374 | 0.4866 | 0.4864 | 13.8909 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3 | 2,048 | [
[
-0.02203369140625,
-0.04339599609375,
0.018798828125,
0.00860595703125,
-0.007312774658203125,
-0.0184478759765625,
-0.0210723876953125,
-0.01512908935546875,
-0.0031757354736328125,
0.02734375,
-0.048919677734375,
-0.052978515625,
-0.045562744140625,
0.0113... |
carolinetfls/plant-seedlings-model-ConvNet | 2023-04-15T05:41:01.000Z | [
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | carolinetfls | null | null | carolinetfls/plant-seedlings-model-ConvNet | 0 | 2 | transformers | 2023-04-15T01:56:31 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: plant-seedlings-model-ConvNet
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9522292993630573
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plant-seedlings-model-ConvNet
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2410
- Accuracy: 0.9522
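A minimal inference sketch (the image path is an assumed placeholder; the pipeline's image processor resizes the input to the 224×224 resolution the ConvNeXt backbone expects):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="carolinetfls/plant-seedlings-model-ConvNet",
)
print(classifier("seedling.jpg"))  # assumed local image; returns label/score pairs
```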
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.494 | 0.8 | 100 | 0.4274 | 0.8828 |
| 0.246 | 1.6 | 200 | 0.2878 | 0.8930 |
| 0.1042 | 2.4 | 300 | 0.2227 | 0.9172 |
| 0.0174 | 3.2 | 400 | 0.2208 | 0.9299 |
| 0.0088 | 4.0 | 500 | 0.3197 | 0.9185 |
| 0.0078 | 4.8 | 600 | 0.2555 | 0.9357 |
| 0.0013 | 5.6 | 700 | 0.2599 | 0.9427 |
| 0.0068 | 6.4 | 800 | 0.3072 | 0.9312 |
| 0.0007 | 7.2 | 900 | 0.2217 | 0.9484 |
| 0.0004 | 8.0 | 1000 | 0.2551 | 0.9401 |
| 0.0003 | 8.8 | 1100 | 0.2321 | 0.9478 |
| 0.0002 | 9.6 | 1200 | 0.2329 | 0.9484 |
| 0.0002 | 10.4 | 1300 | 0.2322 | 0.9478 |
| 0.0002 | 11.2 | 1400 | 0.2342 | 0.9478 |
| 0.0002 | 12.0 | 1500 | 0.2348 | 0.9490 |
| 0.0001 | 12.8 | 1600 | 0.2358 | 0.9490 |
| 0.0001 | 13.6 | 1700 | 0.2368 | 0.9497 |
| 0.0001 | 14.4 | 1800 | 0.2377 | 0.9510 |
| 0.0001 | 15.2 | 1900 | 0.2384 | 0.9516 |
| 0.0001 | 16.0 | 2000 | 0.2391 | 0.9516 |
| 0.0001 | 16.8 | 2100 | 0.2397 | 0.9522 |
| 0.0001 | 17.6 | 2200 | 0.2401 | 0.9522 |
| 0.0001 | 18.4 | 2300 | 0.2406 | 0.9522 |
| 0.0001 | 19.2 | 2400 | 0.2409 | 0.9522 |
| 0.0001 | 20.0 | 2500 | 0.2410 | 0.9522 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 3,209 | [
[
-0.035003662109375,
-0.04547119140625,
0.01055145263671875,
0.0112762451171875,
-0.00836181640625,
-0.018035888671875,
0.0017118453979492188,
-0.00885009765625,
0.035919189453125,
0.0156402587890625,
-0.05072021484375,
-0.05120849609375,
-0.048858642578125,
... |
March1900/setfit_youtube_comments_is_question | 2023-04-15T02:28:11.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | March1900 | null | null | March1900/setfit_youtube_comments_is_question | 0 | 2 | sentence-transformers | 2023-04-15T02:27:49 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# March1900/setfit_youtube_comments_is_question
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("March1900/setfit_youtube_comments_is_question")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
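The two-step procedure described above can be reproduced with a sketch like the following (the base model, toy dataset, and hyperparameters are illustrative assumptions, using the `SetFitTrainer` API current when this card was written):

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Toy few-shot dataset: is the YouTube comment a question (1) or not (0)?
train_ds = Dataset.from_dict({
    "text": ["what song is this?", "great video!", "can you do a tutorial?", "love it"],
    "label": [1, 0, 1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the body
    num_iterations=20,                # number of text pairs generated per example
    num_epochs=1,                     # step 2 fits the classification head afterwards
)
trainer.train()
```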
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,579 | [
[
-0.0091552734375,
-0.06390380859375,
0.0248870849609375,
-0.0125579833984375,
-0.01299285888671875,
-0.019927978515625,
-0.0210113525390625,
-0.005420684814453125,
-0.0008978843688964844,
0.032257080078125,
-0.050048828125,
-0.01277923583984375,
-0.0366821289062... |
GreenIron/distilbert-base-uncased-finetuned-emotion | 2023-05-01T02:58:48.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | GreenIron | null | null | GreenIron/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-15T03:58:25 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9260894194969761
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2139
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
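The accuracy and F1 reported above are consistent with a `compute_metrics` hook along these lines (a sketch; the weighted F1 averaging is an assumption, inferred from F1 tracking accuracy so closely):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),  # assumed averaging mode
    }
```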
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8365 | 1.0 | 250 | 0.3119 | 0.9085 | 0.9048 |
| 0.244 | 2.0 | 500 | 0.2139 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.11.0
| 1,840 | [
[
-0.03814697265625,
-0.0404052734375,
0.01507568359375,
0.0216827392578125,
-0.0264739990234375,
-0.0200958251953125,
-0.01259613037109375,
-0.00878143310546875,
0.01042938232421875,
0.00868988037109375,
-0.0565185546875,
-0.051666259765625,
-0.059112548828125,
... |
oegbo/distilbert-base-uncased-finetuned-emotion | 2023-04-15T15:49:59.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | oegbo | null | null | oegbo/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-15T08:26:06 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9254311164871121
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2166
- Accuracy: 0.9255
- F1: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8185 | 1.0 | 250 | 0.3135 | 0.908 | 0.9062 |
| 0.2512 | 2.0 | 500 | 0.2166 | 0.9255 | 0.9254 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,842 | [
[
-0.037567138671875,
-0.04052734375,
0.01412200927734375,
0.0220947265625,
-0.026275634765625,
-0.0201263427734375,
-0.012847900390625,
-0.00858306884765625,
0.01058197021484375,
0.00804901123046875,
-0.05609130859375,
-0.051971435546875,
-0.059661865234375,
... |
gregorgabrovsek/SloBertAA_Top100_WithOOC_MultilingualBertBase | 2023-04-15T23:42:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/SloBertAA_Top100_WithOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-15T08:45:08 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SloBertAA_Top100_WithOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SloBertAA_Top100_WithOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3433
- Accuracy: 0.6846
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.7277 | 1.0 | 45122 | 1.6629 | 0.5830 |
| 1.4056 | 2.0 | 90244 | 1.4099 | 0.6435 |
| 1.114 | 3.0 | 135366 | 1.3339 | 0.6656 |
| 0.8284 | 4.0 | 180488 | 1.3277 | 0.6780 |
| 0.6761 | 5.0 | 225610 | 1.3433 | 0.6846 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,666 | [
[
-0.03411865234375,
-0.0355224609375,
0.006824493408203125,
0.02337646484375,
-0.02227783203125,
-0.0229949951171875,
-0.022369384765625,
-0.0228118896484375,
0.017486572265625,
0.0243377685546875,
-0.053558349609375,
-0.045745849609375,
-0.0472412109375,
-0.... |
gregorgabrovsek/SloBertAA_Top100_WithoutOOC_MultilingualBertBase | 2023-04-15T23:25:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/SloBertAA_Top100_WithoutOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-15T08:45:08 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SloBertAA_Top100_WithoutOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SloBertAA_Top100_WithoutOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3153
- Accuracy: 0.6908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.6601 | 1.0 | 44675 | 1.6121 | 0.5929 |
| 1.3524 | 2.0 | 89350 | 1.3895 | 0.6459 |
| 1.0402 | 3.0 | 134025 | 1.3008 | 0.6721 |
| 0.7889 | 4.0 | 178700 | 1.2892 | 0.6860 |
| 0.6078 | 5.0 | 223375 | 1.3153 | 0.6908 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,672 | [
[
-0.03509521484375,
-0.034423828125,
0.006656646728515625,
0.023223876953125,
-0.0221710205078125,
-0.02508544921875,
-0.022735595703125,
-0.0233612060546875,
0.01953125,
0.025909423828125,
-0.054779052734375,
-0.046417236328125,
-0.04681396484375,
-0.0189514... |
gregorgabrovsek/BERT_AA_IMDB_Top5_WithoutOOC_MultilingualBertBase | 2023-04-15T09:50:23.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/BERT_AA_IMDB_Top5_WithoutOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-15T09:24:17 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BERT_AA_IMDB_Top5_WithoutOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AA_IMDB_Top5_WithoutOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0086
- Accuracy: 0.9975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1339 | 1.0 | 613 | 0.0367 | 0.9902 |
| 0.0201 | 2.0 | 1226 | 0.0301 | 0.9947 |
| 0.0069 | 3.0 | 1839 | 0.0163 | 0.9955 |
| 0.0033 | 4.0 | 2452 | 0.0106 | 0.9971 |
| 0.0002 | 5.0 | 3065 | 0.0086 | 0.9975 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,658 | [
[
-0.034637451171875,
-0.034912109375,
0.007465362548828125,
0.0184326171875,
-0.0224761962890625,
-0.027191162109375,
-0.021820068359375,
-0.0222930908203125,
0.0167999267578125,
0.0216064453125,
-0.05487060546875,
-0.049468994140625,
-0.047332763671875,
-0.0... |
gregorgabrovsek/BERT_AA_IMDB_Top10_WithoutOOC_MultilingualBertBase | 2023-04-15T10:08:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/BERT_AA_IMDB_Top10_WithoutOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-15T09:27:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BERT_AA_IMDB_Top10_WithoutOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AA_IMDB_Top10_WithoutOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2581
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.153 | 1.0 | 1041 | 1.0342 | 0.8389 |
| 0.0822 | 2.0 | 2082 | 1.1333 | 0.8435 |
| 0.0302 | 3.0 | 3123 | 1.2996 | 0.8454 |
| 0.0123 | 4.0 | 4164 | 1.2668 | 0.8471 |
| 0.0067 | 5.0 | 5205 | 1.2581 | 0.8478 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,660 | [
[
-0.035736083984375,
-0.0341796875,
0.00601959228515625,
0.0177154541015625,
-0.02154541015625,
-0.0264892578125,
-0.0227203369140625,
-0.0217742919921875,
0.016143798828125,
0.0219879150390625,
-0.053985595703125,
-0.0484619140625,
-0.04986572265625,
-0.0132... |
gregorgabrovsek/BERT_AA_IMDB_Top25_WithoutOOC_MultilingualBertBase | 2023-04-15T10:26:08.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/BERT_AA_IMDB_Top25_WithoutOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-15T09:31:19 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BERT_AA_IMDB_Top25_WithoutOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AA_IMDB_Top25_WithoutOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5906
- Accuracy: 0.8837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6502 | 1.0 | 1546 | 0.5579 | 0.8419 |
| 0.3898 | 2.0 | 3092 | 0.4939 | 0.8683 |
| 0.2161 | 3.0 | 4638 | 0.5019 | 0.88 |
| 0.1273 | 4.0 | 6184 | 0.5619 | 0.8784 |
| 0.0715 | 5.0 | 7730 | 0.5906 | 0.8837 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,660 | [
[
-0.034210205078125,
-0.03515625,
0.005634307861328125,
0.0175628662109375,
-0.0224609375,
-0.0269012451171875,
-0.02337646484375,
-0.021209716796875,
0.0159149169921875,
0.0216827392578125,
-0.053192138671875,
-0.04931640625,
-0.049407958984375,
-0.014785766... |
gregorgabrovsek/BERT_AA_IMDB_Top50_WithoutOOC_MultilingualBertBase | 2023-04-15T11:06:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/BERT_AA_IMDB_Top50_WithoutOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-15T09:57:55 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BERT_AA_IMDB_Top50_WithoutOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AA_IMDB_Top50_WithoutOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6099
- Accuracy: 0.8738
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9155 | 1.0 | 2134 | 0.7581 | 0.8085 |
| 0.5189 | 2.0 | 4268 | 0.5842 | 0.8526 |
| 0.2917 | 3.0 | 6402 | 0.5730 | 0.8613 |
| 0.1497 | 4.0 | 8536 | 0.6012 | 0.8693 |
| 0.0807 | 5.0 | 10670 | 0.6099 | 0.8738 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,667 | [
[
-0.03582763671875,
-0.032440185546875,
0.004871368408203125,
0.0174102783203125,
-0.021575927734375,
-0.025970458984375,
-0.0227813720703125,
-0.020172119140625,
0.0170440673828125,
0.02197265625,
-0.054443359375,
-0.04962158203125,
-0.05035400390625,
-0.014... |
gregorgabrovsek/BERT_AA_IMDB_Top100_WithoutOOC_MultilingualBertBase | 2023-04-15T11:56:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gregorgabrovsek | null | null | gregorgabrovsek/BERT_AA_IMDB_Top100_WithoutOOC_MultilingualBertBase | 0 | 2 | transformers | 2023-04-15T10:26:43 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BERT_AA_IMDB_Top100_WithoutOOC_MultilingualBertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AA_IMDB_Top100_WithoutOOC_MultilingualBertBase
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9456
- Accuracy: 0.7818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7226 | 1.0 | 2884 | 1.4213 | 0.6833 |
| 1.0842 | 2.0 | 5768 | 1.0640 | 0.754 |
| 0.682 | 3.0 | 8652 | 0.9793 | 0.7714 |
| 0.4733 | 4.0 | 11536 | 0.9500 | 0.7810 |
| 0.3064 | 5.0 | 14420 | 0.9456 | 0.7818 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,669 | [
[
-0.035064697265625,
-0.034698486328125,
0.0047149658203125,
0.0167236328125,
-0.020782470703125,
-0.0262298583984375,
-0.0239715576171875,
-0.0212249755859375,
0.0167999267578125,
0.0222930908203125,
-0.053314208984375,
-0.048309326171875,
-0.049774169921875,
... |
Olec/cyber_rebel | 2023-04-15T12:16:45.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"STIX",
"NER",
"RE",
"CTI",
"cyber threat intelligence",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | Olec | null | null | Olec/cyber_rebel | 0 | 2 | transformers | 2023-04-15T11:04:15 | ---
pipeline_tag: text2text-generation
tags:
- STIX
- NER
- RE
- CTI
- cyber threat intelligence
metrics:
- f1: 0.4064894147513486
- recall: 0.4463734567901234
- precision: 0.37314814814814806
---
# Model Card for Olec/cyber_rebel
## Model Details
### Model Description
- Model to extract relations from cyber threat intelligence (CTI) text.
- This model needs the pre/post-processing pipeline at https://github.com/l0renor/Relation-Extraction-and-Knowledge-Graph-Generation-on-MISP-Event-Reports (a standalone inference sketch follows below).
- Standalone model: Olec/cyber_rebel_no_pipe
- **Developed by:** Leon Lukas
- **Model type:** seq2seq
- **Language(s) (NLP):** English
- **Finetuned from model:** mrmoor/cti-t5-RE-NYT (a T5 model trained on NYT relation extraction)
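A minimal inference sketch, using the standalone variant named above since the main model expects the linked pre/post-processing pipeline (the CTI sentence and generation settings are illustrative assumptions):

```python
from transformers import pipeline

extractor = pipeline("text2text-generation", model="Olec/cyber_rebel_no_pipe")
text = "APT28 used the X-Agent malware to target the organisation."  # assumed example
print(extractor(text, max_length=256)[0]["generated_text"])
```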
### Metrics test set
- precision: 0.37314814814814806
- recall: 0.4463734567901234
- f1: 0.4064894147513486
### Model Sources
- **Repository:** https://github.com/l0renor/Relation-Extraction-and-Knowledge-Graph-Generation-on-MISP-Event-Reports
- **Paper:** https://github.com/l0renor/Relation-Extraction-and-Knowledge-Graph-Generation-on-MISP-Event-Reports
[
-0.0208587646484375,
-0.032562255859375,
0.031982421875,
-0.01194000244140625,
-0.0284881591796875,
-0.0006270408630371094,
0.01959228515625,
-0.039947509765625,
0.0205535888671875,
0.037933349609375,
-0.05731201171875,
-0.06243896484375,
-0.04150390625,
-0.... |
lanchunhui/distilbert-base-uncased_emotion_ft | 2023-04-15T15:44:00.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | lanchunhui | null | null | lanchunhui/distilbert-base-uncased_emotion_ft | 0 | 2 | transformers | 2023-04-15T14:42:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased_emotion_ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_emotion_ft
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.11.0
| 1,081 | [
[
-0.039306640625,
-0.046051025390625,
0.0158843994140625,
0.0278778076171875,
-0.034576416015625,
-0.01230621337890625,
-0.0128326416015625,
-0.00925445556640625,
0.0155487060546875,
0.00933837890625,
-0.05572509765625,
-0.043060302734375,
-0.058349609375,
-0... |
alaahussein/flan-t5-base-billsum_model | 2023-04-16T04:34:55.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | alaahussein | null | null | alaahussein/flan-t5-base-billsum_model | 0 | 2 | transformers | 2023-04-15T15:37:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
- bleu
model-index:
- name: flan-t5-base-billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.2154
- name: Bleu
type: bleu
value: 0.0011
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-billsum_model
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.2154
- Rouge2: 0.1259
- Rougel: 0.1843
- Rougelsum: 0.1843
- Gen Len: 17.3735
- Bleu: 0.0011
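The ROUGE and BLEU figures above are the kind produced by the Hugging Face `evaluate` library; a sketch with assumed placeholder texts:

```python
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the bill amends the tax code"]                # assumed model outputs
references = ["this bill amends the internal revenue code"]  # assumed gold summaries
print(rouge.compute(predictions=predictions, references=references))
```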
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:------:|
| No log | 1.0 | 296 | nan | 0.2154 | 0.1259 | 0.1843 | 0.1843 | 17.3735 | 0.0011 |
| 0.0 | 2.0 | 592 | nan | 0.2154 | 0.1259 | 0.1843 | 0.1843 | 17.3735 | 0.0011 |
| 0.0 | 3.0 | 888 | nan | 0.2154 | 0.1259 | 0.1843 | 0.1843 | 17.3735 | 0.0011 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,245 | [
[
-0.0323486328125,
-0.037628173828125,
0.0037364959716796875,
-0.0007576942443847656,
-0.0200347900390625,
-0.02545166015625,
-0.0012836456298828125,
-0.018341064453125,
0.015289306640625,
0.034576416015625,
-0.03692626953125,
-0.04864501953125,
-0.05169677734375... |
Kbrek/flan_rebel_nl | 2023-04-15T20:34:47.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"nl",
"dataset:rebel-short",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | Kbrek | null | null | Kbrek/flan_rebel_nl | 1 | 2 | transformers | 2023-04-15T19:45:28 | ---
datasets:
- rebel-short
metrics:
- rouge
model-index:
- name: flan-t5-base
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: rebel-short
type: rebel-short
config: default
split: test
args: default
metrics:
- name: Rouge1
type: rouge
value: 51.5716
license: cc-by-sa-4.0
language:
- nl
pipeline_tag: text2text-generation
library_name: transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-rebel-nl
This model is a fine-tuned version of flan-t5-base on the rebel-short dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1029
- Rouge1: 51.5716
- Rouge2: 40.2152
- Rougel: 49.9941
- Rougelsum: 49.9767
- Gen Len: 18.5898
## Model description
This is a flan-t5-base model fine-tuned on a Dutch dataset built in the style of REBEL: Relation Extraction By End-to-end Language generation. The model aims to extract triplets of the form {head, relation, tail} from unstructured text. The Dutch triplets and unstructured text were generated using the code of the original REBEL authors, available at https://github.com/Babelscape/crocodile.
## Pipeline usage
The code below is adapted from the original REBEL model card: https://huggingface.co/Babelscape/rebel-large.
```python
from transformers import pipeline
triplet_extractor = pipeline('text2text-generation', model='Kbrek/flan_rebel_nl', tokenizer='Kbrek/flan_rebel_nl')
# Note: the original REBEL card decodes with the tokenizer manually so that the
# special tokens (<triplet>, <subj>, <obj>) are kept in the generated output.
extracted_text = triplet_extractor("Nederland is een van de landen binnen het Koninkrijk der Nederlanden. Nederland ligt voor het overgrote deel in het noordwesten van Europa, aan de Noordzee. ", max_length = 512, num_beams = 3, temperature = 1)
# Function to parse the generated text and extract the triplets
def extract_triplets(text):
triplets = []
    relation, subject, object_ = '', '', ''
text = text.strip()
current = 'x'
for token in text.replace("<s>", "").replace("<pad>", "").replace("</s>", "").split():
if token == "<triplet>":
current = 't'
if relation != '':
triplets.append({'head': subject.strip(), 'type': relation.strip(),'tail': object_.strip()})
relation = ''
subject = ''
elif token == "<subj>":
current = 's'
if relation != '':
triplets.append({'head': subject.strip(), 'type': relation.strip(),'tail': object_.strip()})
object_ = ''
elif token == "<obj>":
current = 'o'
relation = ''
else:
if current == 't':
subject += ' ' + token
elif current == 's':
object_ += ' ' + token
elif current == 'o':
relation += ' ' + token
if subject != '' and relation != '' and object_ != '':
triplets.append({'head': subject.strip(), 'type': relation.strip(),'tail': object_.strip()})
return triplets
extracted_triplets = extract_triplets(extracted_text[0]["generated_text"])
print(extracted_triplets)
```
A trick that might give better results is to constrain generation: first extract entities with a NER pipeline, then force those entity tokens to appear in the generated output.
```python
triplet_extractor = pipeline('text2text-generation', model='Kbrek/flan_rebel_nl', tokenizer='Kbrek/flan_rebel_nl')
ner_extractor = pipeline("ner", "Babelscape/wikineural-multilingual-ner", aggregation_strategy = "simple")
# Assumed example input; any Dutch text works here
input_text = "Nederland ligt in het noordwesten van Europa, aan de Noordzee."
# Extract entities
ner_output = ner_extractor(input_text)
ents = [i["word"] for i in ner_output]
if len(ents) > 0:
tokens = triplet_extractor.tokenizer(ents, add_special_tokens=False)["input_ids"]
extracted_text = triplet_extractor(input_text, max_length = 512, force_words_ids = tokens)
else:
extracted_text = triplet_extractor(input_text, max_length = 512, temperature = 1)
triplets = extract_triplets(extracted_text[0]["generated_text"])
```
## Training and evaluation data
The data used for developing and evaluating this model was generated using https://github.com/Babelscape/crocodile .
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.1256 | 1.0 | 22047 | 0.1206 | 50.3892 | 38.2761 | 48.7657 | 48.7444 | 18.6112 |
| 0.1091 | 2.0 | 44094 | 0.1112 | 50.9615 | 39.2843 | 49.3865 | 49.3674 | 18.5447 |
| 0.0875 | 3.0 | 66141 | 0.1047 | 51.2045 | 39.7598 | 49.6483 | 49.6317 | 18.5763 |
| 0.0841 | 4.0 | 88188 | 0.1036 | 51.3543 | 39.9776 | 49.8528 | 49.8223 | 18.6178 |
| 0.0806 | 5.0 | 110235 | 0.1029 | 51.5716 | 40.2152 | 49.9941 | 49.9767 | 18.5898 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.12.1 | 5,496 | [
[
-0.0205078125,
-0.047119140625,
0.01224517822265625,
0.012237548828125,
-0.00797271728515625,
-0.0096893310546875,
-0.0207061767578125,
-0.02447509765625,
0.0229949951171875,
0.03155517578125,
-0.02569580078125,
-0.046478271484375,
-0.034210205078125,
0.0137... |
jsh0551/distillbert-base-uncased-finetuned-clinc | 2023-04-21T07:50:10.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jsh0551 | null | null | jsh0551/distillbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-04-15T23:30:21 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distillbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9180645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
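For illustration, a minimal inference sketch (assuming the standard `text-classification` pipeline; the intent label names are read from the model config):
```python
from transformers import pipeline

# Intent classification over the clinc_oos label set
classifier = pipeline(
    "text-classification",
    model="jsh0551/distillbert-base-uncased-finetuned-clinc",
)
print(classifier("Transfer $100 from my checking account to savings."))
```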
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,934 | [
[
-0.0347900390625,
-0.0390625,
0.01338958740234375,
0.0073394775390625,
-0.0256500244140625,
-0.0236358642578125,
-0.01303863525390625,
-0.0070037841796875,
0.0021343231201171875,
0.021453857421875,
-0.04559326171875,
-0.04815673828125,
-0.059967041015625,
-0... |
jsh0551/distilbert-base-uncased-distilled-clinc | 2023-04-16T01:42:26.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jsh0551 | null | null | jsh0551/distilbert-base-uncased-distilled-clinc | 0 | 2 | transformers | 2023-04-16T01:33:46 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9396774193548387
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3656
- Accuracy: 0.9397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
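These settings map onto Hugging Face `TrainingArguments` roughly as follows (an illustrative sketch; `output_dir` and `evaluation_strategy` are assumptions not stated above):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; output_dir and evaluation_strategy are assumed
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-distilled-clinc",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=7,
    evaluation_strategy="epoch",  # assumed from the per-epoch results below
)
```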
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2049 | 0.7468 |
| 3.713 | 2.0 | 636 | 1.6842 | 0.8503 |
| 3.713 | 3.0 | 954 | 0.9102 | 0.9097 |
| 1.4684 | 4.0 | 1272 | 0.5818 | 0.9277 |
| 0.5851 | 5.0 | 1590 | 0.4425 | 0.9358 |
| 0.5851 | 6.0 | 1908 | 0.3823 | 0.9387 |
| 0.3209 | 7.0 | 2226 | 0.3656 | 0.9397 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,056 | [
[
-0.032012939453125,
-0.037567138671875,
0.0163116455078125,
0.005680084228515625,
-0.025665283203125,
-0.019622802734375,
-0.0102996826171875,
-0.0056610107421875,
0.0053558349609375,
0.02252197265625,
-0.042449951171875,
-0.047088623046875,
-0.061248779296875,
... |
suyuanliu/wav2vec2-base-finetuned-stop-classification-2 | 2023-04-16T04:00:14.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | suyuanliu | null | null | suyuanliu/wav2vec2-base-finetuned-stop-classification-2 | 0 | 2 | transformers | 2023-04-16T03:30:40 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-stop-classification-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-stop-classification-2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2352
- Accuracy: 0.9135
## Model description
More information needed
## Intended uses & limitations
More information needed
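For illustration, a minimal inference sketch (assuming the standard `audio-classification` pipeline; the file path is a placeholder and the label names come from the model config):
```python
from transformers import pipeline

# Classify an audio clip; "clip.wav" is a placeholder path
classifier = pipeline(
    "audio-classification",
    model="suyuanliu/wav2vec2-base-finetuned-stop-classification-2",
)
print(classifier("clip.wav"))  # returns [{'label': ..., 'score': ...}, ...]
```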
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6906 | 0.99 | 18 | 0.6898 | 0.5538 |
| 0.6108 | 1.97 | 36 | 0.5873 | 0.7146 |
| 0.5002 | 2.96 | 54 | 0.4149 | 0.8290 |
| 0.4179 | 4.0 | 73 | 0.3823 | 0.8508 |
| 0.3733 | 4.99 | 91 | 0.2859 | 0.9012 |
| 0.3442 | 5.97 | 109 | 0.2641 | 0.9101 |
| 0.2907 | 6.96 | 127 | 0.2401 | 0.9155 |
| 0.2742 | 8.0 | 146 | 0.2276 | 0.9196 |
| 0.2624 | 8.99 | 164 | 0.2341 | 0.9162 |
| 0.2533 | 9.86 | 180 | 0.2352 | 0.9135 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
| 2,072 | [
[
-0.0328369140625,
-0.03875732421875,
0.0028629302978515625,
0.0052337646484375,
-0.01244354248046875,
-0.023468017578125,
-0.01226043701171875,
-0.0238189697265625,
0.0038623809814453125,
0.02142333984375,
-0.06121826171875,
-0.048614501953125,
-0.05767822265625... |
Ganu3010/ppo-PyramidsRND | 2023-04-16T04:20:10.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | Ganu3010 | null | null | Ganu3010/ppo-PyramidsRND | 0 | 2 | ml-agents | 2023-04-16T04:20:05 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
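For example (the configuration path and run id below are placeholders, not taken from this run):
```
mlagents-learn ./config/ppo/PyramidsRND.yaml --run-id=PyramidsRND --resume
```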
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: Ganu3010/ppo-PyramidsRND
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 954 | [
[
-0.0272216796875,
-0.019744873046875,
-0.00103759765625,
0.025909423828125,
-0.00974273681640625,
0.005603790283203125,
0.0271148681640625,
-0.0033092498779296875,
0.034332275390625,
0.0347900390625,
-0.035491943359375,
-0.050933837890625,
-0.036224365234375,
... |
suyuanliu/wav2vec2-base-finetuned-stop-classification-4 | 2023-04-16T05:15:28.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | suyuanliu | null | null | suyuanliu/wav2vec2-base-finetuned-stop-classification-4 | 0 | 2 | transformers | 2023-04-16T04:45:20 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-stop-classification-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-stop-classification-4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1914
- Accuracy: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
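Note that the effective batch size works out as train_batch_size × gradient_accumulation_steps = 64 × 4 = 256, matching the total_train_batch_size entry above.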
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.691 | 0.99 | 18 | 0.6559 | 0.7091 |
| 0.6097 | 1.97 | 36 | 0.4592 | 0.8229 |
| 0.4469 | 2.96 | 54 | 0.4591 | 0.7861 |
| 0.361 | 4.0 | 73 | 0.2763 | 0.8999 |
| 0.303 | 4.99 | 91 | 0.2650 | 0.9012 |
| 0.2829 | 5.97 | 109 | 0.2189 | 0.9210 |
| 0.2557 | 6.96 | 127 | 0.2003 | 0.9292 |
| 0.2416 | 8.0 | 146 | 0.2252 | 0.9149 |
| 0.2316 | 8.99 | 164 | 0.1855 | 0.9346 |
| 0.2329 | 9.86 | 180 | 0.1914 | 0.9285 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
| 2,072 | [
[
-0.033782958984375,
-0.037994384765625,
0.005016326904296875,
0.00496673583984375,
-0.01319122314453125,
-0.025177001953125,
-0.0121612548828125,
-0.0239105224609375,
0.00189971923828125,
0.0201263427734375,
-0.06298828125,
-0.051055908203125,
-0.054107666015625... |
DunnBC22/bert-large-uncased-Hate_Offensive_or_Normal_Speech | 2023-05-11T21:28:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | DunnBC22 | null | null | DunnBC22/bert-large-uncased-Hate_Offensive_or_Normal_Speech | 1 | 2 | transformers | 2023-04-16T05:08:31 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-large-uncased-Hate_Offensive_or_Normal_Speech
results: []
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-Hate_Offensive_or_Normal_Speech
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0443
- Accuracy: 0.9869
- Weighted f1: 0.9869
- Micro f1: 0.9869
- Macro f1: 0.9863
- Weighted recall: 0.9869
- Micro recall: 0.9869
- Macro recall: 0.9857
- Weighted precision: 0.9869
- Micro precision: 0.9869
- Macro precision: 0.9870
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Multiclass%20Classification/Transformer%20Comparison/Hate%20%26%20Offensive%20Speech%20-%20BERT-Large.ipynb
### Associated Models
This project is part of a comparison that included the following models:
- https://huggingface.co/DunnBC22/bert-base-uncased-Hate_Offensive_or_Normal_Speech
- https://huggingface.co/DunnBC22/distilbert-base-uncased-Hate_Offensive_or_Normal_Speech
- https://huggingface.co/DunnBC22/fBERT-Hate_Offensive_or_Normal_Speech
- https://huggingface.co/DunnBC22/hateBERT-Hate_Offensive_or_Normal_Speech
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
The main limitation is the quality of the data source.
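For illustration, a minimal inference sketch (assuming the standard `text-classification` pipeline; `top_k=None` requests scores for all classes, and the label names come from the model config):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="DunnBC22/bert-large-uncased-Hate_Offensive_or_Normal_Speech",
)
# top_k=None returns a score for each of the three classes
print(clf("Have a great day, everyone!", top_k=None))
```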
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/subhajournal/normal-hate-and-offensive-speeches
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.7991 | 1.0 | 39 | 0.4235 | 0.7430 | 0.7100 | 0.7430 | 0.6902 | 0.7430 | 0.7430 | 0.7049 | 0.7782 | 0.7430 | 0.7886 |
| 0.2156 | 2.0 | 78 | 0.1072 | 0.9607 | 0.9605 | 0.9607 | 0.9585 | 0.9607 | 0.9607 | 0.9569 | 0.9607 | 0.9607 | 0.9605 |
| 0.0518 | 3.0 | 117 | 0.0518 | 0.9869 | 0.9869 | 0.9869 | 0.9863 | 0.9869 | 0.9869 | 0.9857 | 0.9869 | 0.9869 | 0.9870 |
| 0.0242 | 4.0 | 156 | 0.0500 | 0.9853 | 0.9852 | 0.9853 | 0.9845 | 0.9853 | 0.9853 | 0.9841 | 0.9853 | 0.9853 | 0.9850 |
| 0.0163 | 5.0 | 195 | 0.0443 | 0.9869 | 0.9869 | 0.9869 | 0.9863 | 0.9869 | 0.9869 | 0.9857 | 0.9869 | 0.9869 | 0.9870 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.12.1 | 3,676 | [
[
-0.050567626953125,
-0.05029296875,
0.0101470947265625,
0.0044097900390625,
-0.0112152099609375,
-0.006122589111328125,
-0.0168609619140625,
-0.01849365234375,
0.037017822265625,
0.020538330078125,
-0.043701171875,
-0.04913330078125,
-0.05474853515625,
-0.00... |
Fred99774/parailaranew2 | 2023-04-16T05:45:52.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Fred99774 | null | null | Fred99774/parailaranew2 | 1 | 2 | diffusers | 2023-04-16T05:17:01 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Parailaranew2 Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
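A minimal generation sketch (assuming the standard diffusers `StableDiffusionPipeline`; the trigger word is assumed to match the concept name and may need adjusting):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint from this repo
pipe = StableDiffusionPipeline.from_pretrained(
    "Fred99774/parailaranew2", torch_dtype=torch.float16
).to("cuda")

# "parailaranew2" is assumed to be the instance/trigger token
image = pipe("a photo of parailaranew2").images[0]
image.save("sample.png")
```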
Sample pictures of this concept:
| 504 | [
[
-0.025726318359375,
-0.045562744140625,
0.04681396484375,
0.039642333984375,
-0.0215301513671875,
0.0234527587890625,
0.02001953125,
-0.0152587890625,
0.051422119140625,
0.01259613037109375,
-0.01314544677734375,
-0.0196533203125,
-0.03533935546875,
-0.00942... |
Oct0/bert-fine-tuned-cola | 2023-04-16T06:47:39.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Oct0 | null | null | Oct0/bert-fine-tuned-cola | 0 | 2 | transformers | 2023-04-16T05:56:27 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-fine-tuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3011
- Validation Loss: 0.4294
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
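For illustration, a minimal TF inference sketch (the TF classes follow from this being a Keras-trained checkpoint; the CoLA-style label mapping, 0 = unacceptable and 1 = acceptable, is an assumption):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Oct0/bert-fine-tuned-cola")
model = TFAutoModelForSequenceClassification.from_pretrained("Oct0/bert-fine-tuned-cola")

inputs = tokenizer("This sentence are wrong.", return_tensors="tf")
logits = model(**inputs).logits
# Assumed CoLA convention: 0 = unacceptable, 1 = acceptable
print(int(tf.argmax(logits, axis=-1)[0]))
```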
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4974 | 0.4142 | 0 |
| 0.3011 | 0.4294 | 1 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,333 | [
[
-0.037567138671875,
-0.05926513671875,
0.0142822265625,
0.0129241943359375,
-0.03265380859375,
-0.0213623046875,
-0.0172271728515625,
-0.0204620361328125,
0.01374053955078125,
0.01009368896484375,
-0.05584716796875,
-0.034332275390625,
-0.051849365234375,
-0... |
huggingtweets/badgalriri | 2023-04-16T07:27:52.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | huggingtweets | null | null | huggingtweets/badgalriri | 0 | 2 | transformers | 2023-04-16T07:27:44 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1647002474849484803/8WZETU0r_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ⱼₐ𝓌ₙᵧ</div>
<div style="text-align: center; font-size: 14px;">@badgalriri</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ⱼₐ𝓌ₙᵧ.
| Data | ⱼₐ𝓌ₙᵧ |
| --- | --- |
| Tweets downloaded | 2974 |
| Retweets | 950 |
| Short tweets | 249 |
| Tweets kept | 1775 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4norsrod/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @badgalriri's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p45ektxj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p45ektxj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/badgalriri')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| 3,479 | [
[
-0.0253448486328125,
-0.063232421875,
0.0253143310546875,
0.01739501953125,
-0.0196380615234375,
0.00939178466796875,
-0.005847930908203125,
-0.036712646484375,
0.026336669921875,
0.0069427490234375,
-0.073486328125,
-0.03204345703125,
-0.0494384765625,
-0.0... |
ppiiesle3y/fined-tuned-bart | 2023-04-25T03:34:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:multi_news",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | ppiiesle3y | null | null | ppiiesle3y/fined-tuned-bart | 0 | 2 | transformers | 2023-04-16T09:29:59 | ---
language:
- en
tags:
- summarization
license: mit
datasets:
- multi_news
model-index:
- name: ppiiesle3y/fined-tuned-bart
results:
- task:
type: summarization
name: Summarization
dataset:
name: multi_news
type: multi_news
split: train
metrics:
- name: ROUGE-1
type: rouge
value: 43.7065
verified: true
- name: ROUGE-2
type: rouge
value: 16.5533
verified: true
- name: ROUGE-L
type: rouge
value: 24.7588
verified: true
- name: ROUGE-LSUM
type: rouge
value: 37.7586
verified: true
- name: loss
type: loss
value: 2.00663
verified: true
- name: gen_len
type: gen_len
value: 129.1379
verified: true
---
# TL;DR AT2 Applied Natural Language Processing Assignment
## PROJECT OBJECTIVES
This project aims to use NLP technology to summarise longer passages of text into succinct and accurate summaries.
## PROJECT OUTCOMES AND INSIGHTS
The expected outcome of the project is a model that can take in a larger body of text and provide a shortened summary that is both succinct and accurate. This benefits readers by making it more efficient to gain understanding from written text. Applications for this technology include use as a study aide, and roles that require quickly assessing documents, such as book publishers reading through manuscripts to judge whether they are fit for publication, or script readers.
The most significant impact of this project is to speed up information assimilation, saving readers time.
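As a usage illustration, a minimal sketch with the standard `summarization` pipeline (the generation arguments are illustrative, not tuned values from this project):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ppiiesle3y/fined-tuned-bart")

article = "..."  # any longer passage of text to condense
summary = summarizer(article, max_length=130, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```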
| 1,654 | [
[
0.00693511962890625,
-0.0472412109375,
0.032318115234375,
0.0283660888671875,
-0.0207366943359375,
0.0236053466796875,
-0.01202392578125,
-0.062469482421875,
0.007030487060546875,
0.0413818359375,
-0.01287078857421875,
-0.0246429443359375,
-0.04388427734375,
... |
qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication-Commutative-Pos-Neg-1-3 | 2023-05-28T04:58:12.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"logical-reasoning",
"logical-equivalence",
"constrastive-learning",
"en",
"arxiv:2305.12599",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | qbao775 | null | null | qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication-Commutative-Pos-Neg-1-3 | 1 | 2 | transformers | 2023-04-16T09:38:22 | ---
license: mit
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- logical-reasoning
- logical-equivalence
- constrastive-learning
---
# AMR-LE
This is a branch that includes the model weights for AMR-LE. AMR-LE is a model that has been fine-tuned on AMR-based, logic-driven augmented data. The data is formed as `(original sentence, logical equivalence sentence, logical inequivalence sentence)`. We use Abstract Meaning Representation (AMR) to automatically construct logically equivalent and logically inequivalent sentences. We use contrastive learning to train the model to identify whether two sentences are logically equivalent or inequivalent. You are welcome to fine-tune the model weights on downstream tasks such as logical reasoning reading comprehension (ReClor and LogiQA) and natural language inference (MNLI, MRPC, QNLI, RTE and QQP). We achieved #2 on the ReClor Leaderboard.
Here are the original links for AMR-LE, including the paper, project and leaderboard.
Paper: https://arxiv.org/abs/2305.12599
Project: https://github.com/Strong-AI-Lab/Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning
Leaderboard: https://eval.ai/web/challenges/challenge-page/503/leaderboard/1347
In this repository, we upload the model weights trained on the dataset with a positive-to-negative sample ratio of 1:3. We use AMR with four logical equivalence laws `(Contraposition law, Commutative law, Implication law, Double negation law)` to construct four kinds of logically equivalent/inequivalent sentences.
## How to interact with the model on this web page?
Here are some test examples that you can copy and paste into the user input area on the right side.
The expected answer for the following example is that they are logically inequivalent, which is 0. Use the contraposition law `(If A then B <=> If not B then not A)` to show that the following example is false.
```
If Alice is happy, then Bob is smart.
If Alice is not happy, then Bob is smart.
```
The expected answer for the following example is that they are logically equivalent, which is 1. Use the contraposition law `(If A then B <=> If not B then not A)` to show that the following example is true.
```
If Alice is happy, then Bob is smart.
If Bob is not smart, then Alice is not happy.
```
The expected answer for the following example is that they are logically inequivalent, which is 0. Use the double negation law `(A <=> not not A)` to show that the following example is false.
```
Alice is happy.
Alice is not happy.
```
The expected answer for the following example is that they are logically equivalent, which is 1. Use the double negation law `(A <=> not not A)` to show that the following example is true.
```
Alice is happy.
Alice is not sad.
```
The expected answer for the following example is that they are logically inequivalent, which is 0. Use the implication law `(If A then B <=> not A or B)` to show that the following example is false. The `or` in `not A or B` refers to the meaning of `otherwise` in natural language.
```
If Alan is kind, then Bob is clever.
Alan is kind or Bob is clever.
```
The expected answer for the following example is that they are logically equivalent, which is 1. Use the implication law `(If A then B <=> not A or B)` to show that the following example is true. The `or` in `not A or B` refers to the meaning of `otherwise` in natural language.
```
If Alan is kind, then Bob is clever.
Alan is not kind or Bob is clever.
```
The expected answer for the following example is that they are logically inequivalent, which is 0. Use the commutative law `(A and B <=> B and A)` to show that the following example is false.
```
The bald eagle is clever and the wolf is fierce.
The wolf is not fierce and the bald eagle is not clever.
```
The expected answer for the following example is that they are logically equivalent, which is 1. Use the commutative law `(A and B <=> B and A)` to show that the following example is true.
```
The bald eagle is clever and the wolf is fierce.
The wolf is fierce and the bald eagle is clever.
```
## How to load the model weights?
```
from transformers import AutoModel
model = AutoModel.from_pretrained("qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication-Commutative-Pos-Neg-1-3")
```
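A fuller inference sketch (this assumes the checkpoint carries a sequence-classification head and accepts standard sentence pairs; the 1 = equivalent / 0 = inequivalent mapping follows the examples above):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication-Commutative-Pos-Neg-1-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

s1 = "If Alice is happy, then Bob is smart."
s2 = "If Bob is not smart, then Alice is not happy."
inputs = tokenizer(s1, s2, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # expected 1 (logically equivalent), per the contraposition example above
```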
## Citation
```
@article{bao2023contrastive,
title={Contrastive Learning with Logic-driven Data Augmentation for Logical Reasoning over Text},
author={Bao, Qiming and Peng, Alex Yuxuan and Deng, Zhenyun and Zhong, Wanjun and Tan, Neset and Young, Nathan and Chen, Yang and Zhu, Yonghua and Witbrock, Michael and Liu, Jiamou},
journal={arXiv preprint arXiv:2305.12599},
year={2023}
}
``` | 4,647 | [
[
-0.012237548828125,
-0.0762939453125,
0.0321044921875,
-0.008697509765625,
-0.01337432861328125,
-0.0140228271484375,
-0.0022430419921875,
-0.0389404296875,
0.00241851806640625,
0.040191650390625,
-0.049163818359375,
-0.01180267333984375,
-0.031463623046875,
... |
vietthangif/membot_command | 2023-04-16T12:01:49.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | vietthangif | null | null | vietthangif/membot_command | 0 | 2 | transformers | 2023-04-16T11:16:44 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: membot_command
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# membot_command
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8322
- Accuracy: 0.7692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.3282 | 0.7692 |
| No log | 2.0 | 12 | 1.1410 | 0.7692 |
| No log | 3.0 | 18 | 1.0181 | 0.7692 |
| No log | 4.0 | 24 | 0.9338 | 0.7692 |
| No log | 5.0 | 30 | 0.8807 | 0.7692 |
| No log | 6.0 | 36 | 0.8560 | 0.7692 |
| No log | 7.0 | 42 | 0.8379 | 0.7692 |
| No log | 8.0 | 48 | 0.8322 | 0.7692 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,766 | [
[
-0.0310516357421875,
-0.0517578125,
0.005840301513671875,
0.0146484375,
-0.026763916015625,
-0.00835418701171875,
-0.0020542144775390625,
0.0018768310546875,
0.017303466796875,
0.0193328857421875,
-0.05352783203125,
-0.048736572265625,
-0.06280517578125,
-0.... |
vincentmin/bloomz-1b1-eli5-reward | 2023-06-10T11:42:10.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bloom",
"text-classification",
"generated_from_trainer",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | vincentmin | null | null | vincentmin/bloomz-1b1-eli5-reward | 1 | 2 | transformers | 2023-04-16T13:45:34 | ---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: bloomz-1b1-eli5-reward
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloomz-1b1-eli5-reward
This model is a fine-tuned version of [bigscience/bloomz-1b1](https://huggingface.co/bigscience/bloomz-1b1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
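For illustration, a minimal reward-scoring sketch (this assumes the checkpoint carries a sequence-classification head that outputs a scalar reward, as is typical for RLHF reward models; the prompt format is a guess):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "vincentmin/bloomz-1b1-eli5-reward"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# The "Question: ... Answer: ..." format is an assumption, not documented above
text = "Question: Why is the sky blue? Answer: Sunlight scatters off air molecules..."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits.squeeze().item()
print(reward)  # higher = preferred, under the usual reward-model convention
```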
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,060 | [
[
-0.0209808349609375,
-0.032135009765625,
0.01947021484375,
0.0253143310546875,
-0.01617431640625,
-0.0256500244140625,
-0.0043182373046875,
-0.0249786376953125,
0.0256500244140625,
0.011993408203125,
-0.07476806640625,
-0.034271240234375,
-0.049163818359375,
... |
Alexisbal/distilbert-base-uncased-finetuned-emotion | 2023-06-10T19:37:36.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Alexisbal | null | null | Alexisbal/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-16T15:33:50 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1233
- Accuracy: 0.9505
- F1: 0.9503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2411 | 1.0 | 250 | 0.1199 | 0.953 | 0.9528 |
| 0.1012 | 2.0 | 500 | 0.1233 | 0.9505 | 0.9503 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,502 | [
[
-0.0360107421875,
-0.04205322265625,
0.016815185546875,
0.0241241455078125,
-0.0264129638671875,
-0.0221710205078125,
-0.0126953125,
-0.007678985595703125,
0.012237548828125,
0.0079803466796875,
-0.05523681640625,
-0.052703857421875,
-0.0594482421875,
-0.006... |
cosminc98/sexism-identification-coroseof | 2023-04-18T13:08:09.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | cosminc98 | null | null | cosminc98/sexism-identification-coroseof | 0 | 2 | transformers | 2023-04-16T20:37:52 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sexism-identification-coroseof
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sexism-identification-coroseof
This model is a fine-tuned version of [dumitrescustefan/bert-base-romanian-uncased-v1](https://huggingface.co/dumitrescustefan/bert-base-romanian-uncased-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6960
- Accuracy: 0.8499
- F1: 0.8537
- Balanced Accuracy: 0.6139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Balanced Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:-----------------:|
| No log | 1.0 | 488 | 0.9896 | 0.8125 | 0.8263 | 0.6059 |
| 0.9572 | 2.0 | 976 | 0.8694 | 0.7992 | 0.8202 | 0.7183 |
| 0.5835 | 3.0 | 1464 | 1.1954 | 0.8388 | 0.8477 | 0.6485 |
| 0.2833 | 4.0 | 1952 | 1.6960 | 0.8499 | 0.8537 | 0.6139 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,812 | [
[
-0.0283050537109375,
-0.0390625,
0.005645751953125,
0.0204315185546875,
-0.022125244140625,
-0.0265350341796875,
-0.00372314453125,
-0.0205841064453125,
0.0159149169921875,
0.0226287841796875,
-0.059967041015625,
-0.0496826171875,
-0.048583984375,
-0.0056533... |
ValenHumano/bert-base-uncased-clasificator-emotions | 2023-04-16T21:09:59.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | ValenHumano | null | null | ValenHumano/bert-base-uncased-clasificator-emotions | 0 | 2 | transformers | 2023-04-16T20:42:06 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: bert-base-uncased-clasificator-emotions
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-clasificator-emotions
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1825
- Accuracy: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1692 | 1.0 | 250 | 0.1858 | 0.931 |
| 0.1201 | 2.0 | 500 | 0.1818 | 0.9315 |
| 0.0829 | 3.0 | 750 | 0.1800 | 0.933 |
| 0.0568 | 4.0 | 1000 | 0.1825 | 0.9335 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,840 | [
[
-0.041839599609375,
-0.037322998046875,
0.010986328125,
0.018157958984375,
-0.033294677734375,
-0.0297088623046875,
-0.022003173828125,
-0.01560211181640625,
0.01250457763671875,
0.0170440673828125,
-0.057373046875,
-0.04986572265625,
-0.0498046875,
-0.02113... |
natanmb/bart-base-finetuned-multi-news | 2023-04-17T00:38:11.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | natanmb | null | null | natanmb/bart-base-finetuned-multi-news | 0 | 2 | transformers | 2023-04-16T23:27:47 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-multi-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-multi-news
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6353
- Rouge1: 15.1146
- Rouge2: 5.3873
- Rougel: 11.4132
- Rougelsum: 13.2739
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 2.9189 | 1.0 | 625 | 2.4645 | 15.2063 | 5.2852 | 11.5864 | 13.4208 |
| 2.4697 | 2.0 | 1250 | 2.4706 | 15.3737 | 5.4725 | 11.7465 | 13.5681 |
| 2.1831 | 3.0 | 1875 | 2.4789 | 14.8306 | 5.0857 | 11.2416 | 13.1072 |
| 1.9598 | 4.0 | 2500 | 2.5299 | 15.1744 | 5.5465 | 11.6445 | 13.4053 |
| 1.7777 | 5.0 | 3125 | 2.5799 | 14.9417 | 5.2124 | 11.3553 | 13.1401 |
| 1.6454 | 6.0 | 3750 | 2.6028 | 14.9804 | 5.333 | 11.294 | 13.2385 |
| 1.554 | 7.0 | 4375 | 2.6353 | 15.1146 | 5.3873 | 11.4132 | 13.2739 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,064 | [
[
-0.04669189453125,
-0.05303955078125,
0.0170135498046875,
0.01183319091796875,
-0.01288604736328125,
-0.01763916015625,
-0.00794219970703125,
-0.01190185546875,
0.03387451171875,
0.0362548828125,
-0.057037353515625,
-0.050079345703125,
-0.0406494140625,
-0.0... |
mfidabel/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-16T23:45:23.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | mfidabel | null | null | mfidabel/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-16T23:30:47 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 553.00 +/- 144.16
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mfidabel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mfidabel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mfidabel
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,691 | [
[
-0.04156494140625,
-0.035858154296875,
0.0218353271484375,
0.0248565673828125,
-0.009918212890625,
-0.017852783203125,
0.012939453125,
-0.01349639892578125,
0.0132293701171875,
0.0239715576171875,
-0.07135009765625,
-0.035247802734375,
-0.0274200439453125,
-... |
AdamG012/chat-opt-1.3b-rlhf-critic-deepspeed | 2023-04-25T04:43:00.000Z | [
"transformers",
"pytorch",
"opt",
"text-generation",
"deepspeed",
"chatgpt",
"sft",
"rlhf",
"en",
"dataset:Dahoas/full-hh-rlhf",
"dataset:Dahoas/synthetic-instruct-gptj-pairwise",
"dataset:yitingxie/rlhf-reward-datasets",
"dataset:openai/webgpt_comparisons",
"dataset:stanfordnlp/SHP",
"l... | text-generation | AdamG012 | null | null | AdamG012/chat-opt-1.3b-rlhf-critic-deepspeed | 3 | 2 | transformers | 2023-04-17T02:02:56 | ---
language:
- en
tags:
- deepspeed
- chatgpt
- opt
- sft
- rlhf
license: apache-2.0
datasets:
- Dahoas/full-hh-rlhf
- Dahoas/synthetic-instruct-gptj-pairwise
- yitingxie/rlhf-reward-datasets
- openai/webgpt_comparisons
- stanfordnlp/SHP
---
# ChatGPT OPT 1.3B DeepSpeed Reinforcement Learning from Human Feedback Critic Model
*chat-opt-1.3b-rlhf-critic-deepspeed*
This model is the critic produced by the final step of a modified version of the traditional training pipeline for ChatGPT-style models, which comprises a three-step procedure: [supervised fine tuning](https://huggingface.co/AdamG012/chat-opt-1.3b-sft-deepspeed), [reward model](https://huggingface.co/AdamG012/chat-opt-350m-reward-deepspeed) fine-tuning, and **reinforcement learning from human feedback**, which yields the [actor](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-deepspeed), [actor EMA](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed) and [critic](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-critic-deepspeed) models.
This project's main goal was to make proper use of existing frameworks that minimise training costs, and thereby improve both the feasibility and usability of ChatGPT-like models. The framework selected here is DeepSpeed, which has been instrumental in the development of this model; through it, the ChatGPT-like model could be trained on much larger datasets with a reasonable number of GPUs, achieving significantly better performance.
This model follows the ChatGPT blog post and the InstructGPT paper, and especially the [Microsoft DeepSpeed Chat Blog](https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat).
## Our Training Methodology and Speedup Recipes
The training process simply involves a single python run of DeepSpeed-Chat which initiates the whole 3-step pipeline, saving all models in the process:
``` bash
python train.py --actor-model facebook/opt-1.3b --reward-model facebook/opt-350m --deployment-type single_node
```
This pipeline can be broken up into three key steps:
1. **Supervised fine-tuning (SFT):** See [here](https://huggingface.co/AdamG012/chat-opt-1.3b-sft-deepspeed/).
2. **Reward Model (RM) fine-tuning:** See [here](https://huggingface.co/AdamG012/chat-opt-350m-reward-deepspeed).
3. **Reinforcement-learning from Human feedback (RLHF) fine-tuning:** At the completion of the prior two steps, the final RLHF fine-tuning can be initiated. This involves collecting both the *fine-tuned model* from step 1 and the *reward model* from step 2 and training them on the dataset with comparisons. This generates both an [actor](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-deepspeed) model and this **critic** model. An [actor model with an exponential moving average (EMA)](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed) is also generated, which is known to improve conversational response quality.
To view the details behind each step head into their respective links and view the model card there.
### Reinforcement learning from human feedback
**Model Configurations:**
| Parameter | Value |
|:-----------------------|:------|
| Parameters | 1.3B |
| Model type | OPT |
| FFN Dimensions | 8192 |
| Hidden Size | 2048 |
| Max Position Embedding | 2048 |
| Attention Heads | 16 |
| Hidden layers | 24 |
**Training Configurations:**
| Parameter | Value |
|:-----------------------|:------|
| Train Batch size | 32 |
| Train micro batch size | 4 |
| ZeRO stage | 2 |
| FP16 | True |
| Gradient clipping | 1.0 |
| Dropout | 0.1 |
| Attention Dropout | 0.0 |
| Prescale gradients | False |
## Installation
If using through the HuggingFace transformers library:
``` python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("AdamG012/chat-opt-1.3b-rlhf-critic-deepspeed")
model = AutoModelForCausalLM.from_pretrained("AdamG012/chat-opt-1.3b-rlhf-critic-deepspeed")
```
If you would like to clone from source:
```bash
# Make sure you have git-lfs installed (https://git-lfs.github.com)
git lfs install
git clone https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-critic-deepspeed
# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1
```
## **Acknowledgements**
We thank the following papers and open-source repositories. We especially thank DeepSpeed for their frameworks as well.
* [1] Schulman, John, et al. "Introducing ChatGPT", https://openai.com/blog/chatgpt (2022).
* [2] Transformers [Hugging Face (github.com)](https://github.com/huggingface)
* [3] DeepSpeed Chat [DeepSpeed Chat](https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat)
| 5,065 | [
[
-0.05224609375,
-0.06866455078125,
0.01419830322265625,
0.033203125,
-0.0028324127197265625,
0.0122833251953125,
-0.0288238525390625,
-0.027923583984375,
0.0183868408203125,
0.0144805908203125,
-0.08209228515625,
-0.00382232666015625,
-0.041107177734375,
-0.... |
JacobQuintero/unli | 2023-04-18T22:41:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | JacobQuintero | null | null | JacobQuintero/unli | 0 | 2 | transformers | 2023-04-17T02:35:19 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: unli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0752
- Accuracy: 0.9681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0808 | 1.0 | 1735 | 0.0737 | 0.9681 |
| 0.0626 | 2.0 | 3470 | 0.0765 | 0.9681 |
| 0.0453 | 3.0 | 5205 | 0.0752 | 0.9681 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1
- Datasets 2.10.1
- Tokenizers 0.12.1
| 1,421 | [
[
-0.034027099609375,
-0.0386962890625,
0.0109710693359375,
0.01541900634765625,
-0.03131103515625,
-0.0302276611328125,
-0.0181427001953125,
-0.01800537109375,
0.005870819091796875,
0.0261688232421875,
-0.055938720703125,
-0.044769287109375,
-0.046875,
-0.022... |
nizar-sayad/twitter-roberta-base-sentiment-latest | 2023-04-20T13:04:22.000Z | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"arxiv:2202.03829",
"endpoints_compatible",
"region:us"
] | text-classification | nizar-sayad | null | null | nizar-sayad/twitter-roberta-base-sentiment-latest | 0 | 2 | transformers | 2023-04-17T03:32:19 | ---
language: en
widget:
- text: Covid cases are increasing fast!
datasets:
- tweet_eval
duplicated_from: cardiffnlp/twitter-roberta-base-sentiment-latest
---
# Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2022)
This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and finetuned for sentiment analysis with the TweetEval benchmark.
The original Twitter-based RoBERTa model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
<b>Labels</b>:
0 -> Negative;
1 -> Neutral;
2 -> Positive
This sentiment analysis model has been integrated into [TweetNLP](https://github.com/cardiffnlp/tweetnlp). You can access the demo [here](https://tweetnlp.org).
## Example Pipeline
```python
from transformers import pipeline
model_path = "cardiffnlp/twitter-roberta-base-sentiment-latest"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("Covid cases are increasing fast!")
```
```
[{'label': 'Negative', 'score': 0.7236}]
```
## Full classification example
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
MODEL = "cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
#model.save_pretrained(MODEL)
text = "Covid cases are increasing fast!"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Covid cases are increasing fast!"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
# Print labels and scores
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) Negative 0.7236
2) Neutral 0.2287
3) Positive 0.0477
``` | 2,897 | [
[
-0.006885528564453125,
-0.043212890625,
0.018402099609375,
0.032623291015625,
-0.0211334228515625,
0.01399993896484375,
-0.015960693359375,
-0.0110626220703125,
0.01641845703125,
-0.00122833251953125,
-0.044158935546875,
-0.060699462890625,
-0.0582275390625,
... |
pvsukharev/bert-uncased-fake-news-4500 | 2023-04-17T21:15:21.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | pvsukharev | null | null | pvsukharev/bert-uncased-fake-news-4500 | 0 | 2 | transformers | 2023-04-17T07:11:30 | ---
license: mit
---
A `bert-base-uncased` model fine-tuned on a fake-news dataset.
Input: the article title and body text, joined with the separator `////////////` (see the usage sketch below).
Output: 1 - fake, 0 - real. | 143 | [
[
-0.02349853515625,
-0.069580078125,
0.0149993896484375,
0.0188446044921875,
-0.044036865234375,
0.0240020751953125,
0.009185791015625,
0.00015294551849365234,
0.045745849609375,
0.035614013671875,
-0.05987548828125,
-0.0281829833984375,
-0.0421142578125,
-0.... |
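A minimal inference sketch for `pvsukharev/bert-uncased-fake-news-4500`, assuming the separator convention the card describes; the title and body strings are illustrative, not from the card.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pvsukharev/bert-uncased-fake-news-4500")
model = AutoModelForSequenceClassification.from_pretrained("pvsukharev/bert-uncased-fake-news-4500")

title = "Breaking news headline"             # hypothetical example inputs
body = "Full article text goes here."
inputs = tokenizer(f"{title} //////////// {body}", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())          # 1 -> fake, 0 -> real, per the card
```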
Karthikeya55/layoutlm-funsd-sequence-tf | 2023-04-17T10:34:58.000Z | [
"transformers",
"tf",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | Karthikeya55 | null | null | Karthikeya55/layoutlm-funsd-sequence-tf | 0 | 2 | transformers | 2023-04-17T10:16:01 | ---
tags:
- generated_from_keras_callback
model-index:
- name: layoutlm-funsd-sequence-tf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd-sequence-tf
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2348
- Validation Loss: 0.6737
- Train Overall Precision: 0.7356
- Train Overall Recall: 0.7998
- Train Overall F1: 0.7663
- Train Overall Accuracy: 0.8220
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
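The optimizer dictionary above corresponds roughly to the TF sketch below; with the `mixed_float16` global policy, Keras wraps the optimizer in a dynamic `LossScaleOptimizer` automatically at compile time.
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Mixed precision, as recorded in training_precision above.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

optimizer = AdamWeightDecay(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    weight_decay_rate=0.01,
)
```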
### Training results
| Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch |
|:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:|
| 1.7150 | 1.4139 | 0.2373 | 0.2860 | 0.2594 | 0.4954 | 0 |
| 1.1803 | 0.9205 | 0.5676 | 0.6322 | 0.5981 | 0.7008 | 1 |
| 0.7884 | 0.7100 | 0.6202 | 0.7250 | 0.6685 | 0.7735 | 2 |
| 0.5877 | 0.6476 | 0.6689 | 0.7662 | 0.7142 | 0.7942 | 3 |
| 0.4490 | 0.6179 | 0.7133 | 0.8078 | 0.7576 | 0.8066 | 4 |
| 0.3746 | 0.6305 | 0.7176 | 0.7878 | 0.7510 | 0.8129 | 5 |
| 0.3082 | 0.6924 | 0.7163 | 0.8018 | 0.7566 | 0.7937 | 6 |
| 0.2348 | 0.6737 | 0.7356 | 0.7998 | 0.7663 | 0.8220 | 7 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,803 | [
[
-0.041015625,
-0.0340576171875,
0.0218048095703125,
0.00328826904296875,
-0.0222015380859375,
-0.01306915283203125,
0.00489044189453125,
-0.00635528564453125,
0.026947021484375,
0.019775390625,
-0.0504150390625,
-0.05181884765625,
-0.045745849609375,
-0.0213... |
terzimert/anercorpDataset_v2.0 | 2023-04-17T11:26:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | terzimert | null | null | terzimert/anercorpDataset_v2.0 | 0 | 2 | transformers | 2023-04-17T10:44:49 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: anercorpDataset_v2.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# anercorpDataset_v2.0
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3549
- Precision: 0.6878
- Recall: 0.6011
- F1: 0.6415
- Accuracy: 0.9317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2867 | 1.0 | 7057 | 0.4187 | 0.5231 | 0.4992 | 0.5109 | 0.9111 |
| 0.2945 | 2.0 | 14114 | 0.3420 | 0.6300 | 0.5616 | 0.5938 | 0.9246 |
| 0.2098 | 3.0 | 21171 | 0.3549 | 0.6878 | 0.6011 | 0.6415 | 0.9317 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
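A minimal inference sketch, assuming the standard token-classification pipeline API; ANERcorp is an Arabic NER corpus, so the example sentence is Arabic, and the card does not list the exact label set.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="terzimert/anercorpDataset_v2.0",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("يعمل محمد في جامعة القاهرة"))  # "Mohamed works at Cairo University"
```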
| 1,707 | [
[
-0.0292816162109375,
-0.037811279296875,
0.0080108642578125,
0.01983642578125,
-0.0198974609375,
-0.0255584716796875,
-0.0157012939453125,
-0.015655517578125,
0.018951416015625,
0.0216827392578125,
-0.054901123046875,
-0.049468994140625,
-0.047607421875,
-0.... |
RagnaChris/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-17T11:43:37.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | RagnaChris | null | null | RagnaChris/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-17T11:43:02 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 274.50 +/- 31.50
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RagnaChris -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RagnaChris -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RagnaChris
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 50000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
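Outside the RL Zoo CLI, the checkpoint can also be evaluated directly with SB3; the sketch below assumes the model was downloaded to the hypothetical `logs/` path used above, and mirrors the `AtariWrapper` + 4-frame-stack configuration listed in the hyperparameters.
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Recreate the wrapped Atari env (AtariWrapper + 4-frame stack, as configured above).
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

# Hypothetical checkpoint path produced by rl_zoo3.load_from_hub.
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```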
| 2,694 | [
[
-0.041717529296875,
-0.036468505859375,
0.0212249755859375,
0.024322509765625,
-0.01065826416015625,
-0.0173797607421875,
0.0119476318359375,
-0.0143280029296875,
0.013275146484375,
0.0236663818359375,
-0.0697021484375,
-0.03460693359375,
-0.0265350341796875,
... |
mirfan899/da_spacy_sentiment | 2023-05-23T04:26:11.000Z | [
"spacy",
"text-classification",
"da",
"region:us"
] | text-classification | mirfan899 | null | null | mirfan899/da_spacy_sentiment | 0 | 2 | spacy | 2023-04-17T12:06:13 | ---
tags:
- spacy
- text-classification
language:
- da
model-index:
- name: da_spacy_sentiment
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `da_spacy_sentiment` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.1,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `textcat` |
| **Components** | `tok2vec`, `textcat` |
| **Vectors** | 500000 keys, 20000 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (3 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat`** | `neutral`, `negative`, `positive` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 82.58 |
| `CATS_MICRO_P` | 82.40 |
| `CATS_MICRO_R` | 82.40 |
| `CATS_MICRO_F` | 82.40 |
| `CATS_MACRO_P` | 81.24 |
| `CATS_MACRO_R` | 84.43 |
| `CATS_MACRO_F` | 82.58 |
| `CATS_MACRO_AUC` | 92.45 |
| `TOK2VEC_LOSS` | 39608.07 |
| `TEXTCAT_LOSS` | 913.24 | | 994 | [
[
-0.05596923828125,
-0.038330078125,
0.0161285400390625,
0.038421630859375,
-0.04345703125,
0.01067352294921875,
-0.006481170654296875,
0.00600433349609375,
0.05157470703125,
0.0445556640625,
-0.053253173828125,
-0.08050537109375,
-0.06256103515625,
0.0129699... |
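A minimal loading sketch for the `da_spacy_sentiment` pipeline described in the card above, assuming the package has been installed locally; the Danish example sentence is illustrative.
```python
import spacy

nlp = spacy.load("da_spacy_sentiment")
doc = nlp("Jeg elsker denne film!")  # "I love this movie!"
print(doc.cats)  # scores for the three textcat labels: neutral, negative, positive
```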
mann-e/mann-e_4-2-merged | 2023-04-19T18:33:24.000Z | [
"diffusers",
"license:mit",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | mann-e | null | null | mann-e/mann-e_4-2-merged | 0 | 2 | diffusers | 2023-04-17T12:26:02 | ---
license: mit
library_name: diffusers
---
# Mann-E 4.2 Merged
## Technical Information about the model
* Base Model : [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
* Merge : [mann-e/mann-e_4_rev-1-3](https://huggingface.co/mann-e/mann-e_4_rev-1-3)
* Merge amount : 70% fine-tuned SD 1.5 (or _Mann-E version 4.2 base_) and 30% Mann-E 4.1.3, in order to recover the old styles such as _Model Shoot_, _Elden Ring_, _Arcane_, _Analog Style_ and _GTA V Style_. This merge can also be helpful for _Midjourney version 4_ style artwork.
### Training process
The code for pre-processing data and fine-tuning the model is available in [this repository](https://github.com/prp-e/mann-e_training) and you can run it on your own as well.
* Text encoder iterations : 1440 (twice the number of training images, so the model learns the `mstyle` token, which can give the user a _Midjourney version 5_ vibe).
* Stable Diffusion iterations : 16000 (one epoch).
* Time: around 4 hours on a single T4 GPU.
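A minimal inference sketch, assuming the standard diffusers `StableDiffusionPipeline` API; the prompt (including the `mstyle` token) is illustrative, not taken from the card.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "mann-e/mann-e_4-2-merged", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a portrait of a knight, mstyle").images[0]  # hypothetical prompt
image.save("knight.png")
```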
| 1,036 | [
[
-0.050018310546875,
-0.039703369140625,
0.04254150390625,
0.01309967041015625,
-0.031494140625,
-0.00760650634765625,
0.01068115234375,
-0.041717529296875,
0.0178680419921875,
0.0266876220703125,
-0.06494140625,
-0.0335693359375,
-0.0411376953125,
-0.0085144... |