modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
lamaabdulaziz/ArBERT-finetuned-fnd | 2023-04-03T12:27:10.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | lamaabdulaziz | null | null | lamaabdulaziz/ArBERT-finetuned-fnd | 0 | 2 | transformers | 2023-03-30T03:16:21 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: ArBERT-finetuned-fnd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArBERT-finetuned-fnd
This model is a fine-tuned version of [UBC-NLP/ARBERT](https://huggingface.co/UBC-NLP/ARBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4896
- Macro F1: 0.7637
- Accuracy: 0.7738
- Precision: 0.7695
- Recall: 0.7604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 25
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
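As a hedged sketch (not part of the original card), the hyperparameter list above corresponds roughly to the following `TrainingArguments`; `output_dir` is a placeholder, and `gradient_accumulation_steps=2` is what turns the per-device batch size of 16 into the reported total train batch size of 32:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above (output_dir is hypothetical).
training_args = TrainingArguments(
    output_dir="ArBERT-finetuned-fnd",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=25,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```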
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:------:|
| 0.5031 | 1.0 | 1597 | 0.4754 | 0.7547 | 0.7606 | 0.7538 | 0.7559 |
| 0.3832 | 2.0 | 3194 | 0.4896 | 0.7637 | 0.7738 | 0.7695 | 0.7604 |
| 0.2571 | 3.0 | 4791 | 0.5890 | 0.7605 | 0.7692 | 0.7634 | 0.7585 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,725 | [
[
-0.04486083984375,
-0.0401611328125,
0.00495147705078125,
0.0114593505859375,
-0.01520538330078125,
-0.032623291015625,
-0.0061798095703125,
-0.01415252685546875,
0.01239776611328125,
0.03692626953125,
-0.04168701171875,
-0.0509033203125,
-0.04876708984375,
... |
rubentito/longt5-tglobal-base-mpdocvqa | 2023-03-31T14:08:03.000Z | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"DocVQA",
"Document Question Answering",
"Document Visual Question Answering",
"en",
"dataset:rubentito/mp-docvqa",
"arxiv:1905.13648",
"arxiv:2212.05935",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"reg... | text2text-generation | rubentito | null | null | rubentito/longt5-tglobal-base-mpdocvqa | 0 | 2 | transformers | 2023-03-30T07:23:46 | ---
license: gpl-3.0
tags:
- DocVQA
- Document Question Answering
- Document Visual Question Answering
datasets:
- rubentito/mp-docvqa
language:
- en
---
# LongT5 base with transient-global attention fine-tuned on MP-DocVQA
This is LongT5 with transient-global attention, trained on the SQuADv2, CoQA and TryoCoQA datasets from the [Tryolabs hub](https://huggingface.co/tryolabs/long-t5-tglobal-base-blogpost-cqa-onnx) and fine-tuned on the Multipage DocVQA (MP-DocVQA) dataset.
## How to use
Here is how to use this model to answer a question about a given text in PyTorch:
```python
import torch
from transformers import AutoTokenizer, LongT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("rubentito/longt5-tglobal-base-mpdocvqa")
model = LongT5ForConditionalGeneration.from_pretrained("rubentito/longt5-tglobal-base-mpdocvqa")
context = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done?"
input_text = "question: {:s} context: {:s}".format(question, context)
encoding = tokenizer(input_text, return_tensors="pt")
output = model.generate(**encoding)
answer = tokenizer.decode(output[0], skip_special_tokens=True)
```
## Metrics
**Average Normalized Levenshtein Similarity (ANLS)**
The standard metric for text-based VQA tasks (ST-VQA and DocVQA). It evaluates the method's reasoning capabilities while smoothly penalizing OCR recognition errors.
Check [Scene Text Visual Question Answering](https://arxiv.org/abs/1905.13648) for detailed information.
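For intuition, ANLS can be sketched in a few lines of plain Python: the Levenshtein distance between a prediction and each gold answer is normalized and turned into a similarity, scores below a threshold τ (0.5 in the standard definition) are zeroed so badly wrong answers get no partial credit, and the best score per question is averaged. This is a hedged reimplementation from the metric's published definition, not code from this repository:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def anls(predictions, gold_answers, tau=0.5):
    # Average over questions of the best normalized similarity against any gold
    # answer; similarities whose normalized distance exceeds tau are zeroed.
    scores = []
    for pred, golds in zip(predictions, gold_answers):
        best = 0.0
        for g in golds:
            nl = levenshtein(pred.lower(), g.lower()) / max(len(pred), len(g), 1)
            best = max(best, 1.0 - nl if nl < tau else 0.0)
        scores.append(best)
    return sum(scores) / len(scores)
```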
**Answer Page Prediction Accuracy (APPA)**
In the MP-DocVQA task, the models can additionally provide the index of the page where the information required to answer the question is located. For this subtask, accuracy is used to evaluate the predictions, i.e. whether the predicted page is correct or not.
Check [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/abs/2212.05935) for detailed information.
## Model results
Extended experimentation can be found in Table 2 of [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/pdf/2212.05935.pdf).
You can also check the live leaderboard at the [RRC Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4).
| Model | HF name | Parameters | ANLS | APPA |
|-----------------------------------------------------------------------------------|:--------------------------------------|:-------------:|:-------------:|:---------:|
| [Bert large](https://huggingface.co/rubentito/bert-large-mpdocvqa) | rubentito/bert-large-mpdocvqa | 334M | 0.4183 | 51.6177 |
| [Longformer base](https://huggingface.co/rubentito/longformer-base-mpdocvqa) | rubentito/longformer-base-mpdocvqa | 148M | 0.5287 | 71.1696 |
| [BigBird ITC base](https://huggingface.co/rubentito/bigbird-base-itc-mpdocvqa) | rubentito/bigbird-base-itc-mpdocvqa | 131M | 0.4929 | 67.5433 |
| [LayoutLMv3 base](https://huggingface.co/rubentito/layoutlmv3-base-mpdocvqa) | rubentito/layoutlmv3-base-mpdocvqa | 125M | 0.4538 | 51.9426 |
| [T5 base](https://huggingface.co/rubentito/t5-base-mpdocvqa) | rubentito/t5-base-mpdocvqa | 223M | 0.5050 | 0.0000 |
| [Hi-VT5](https://huggingface.co/rubentito/hivt5-base-mpdocvqa) | rubentito/hivt5-base-mpdocvqa | 316M | 0.6201 | 79.23 |
## Citation Information
```tex
@article{tito2022hierarchical,
title={Hierarchical multimodal transformers for Multi-Page DocVQA},
author={Tito, Rub{\`e}n and Karatzas, Dimosthenis and Valveny, Ernest},
journal={arXiv preprint arXiv:2212.05935},
year={2022}
}
``` | 3,599 | [
[
-0.0443115234375,
-0.03814697265625,
0.022216796875,
0.0239410400390625,
-0.00388336181640625,
-0.01100921630859375,
-0.01371002197265625,
-0.032379150390625,
0.0199127197265625,
0.01354217529296875,
-0.04541015625,
-0.047210693359375,
-0.050079345703125,
0.... |
swadesh7/finetuning-sentiment-telugu-2 | 2023-03-30T11:33:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | swadesh7 | null | null | swadesh7/finetuning-sentiment-telugu-2 | 0 | 2 | transformers | 2023-03-30T09:48:15 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-telugu-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-telugu-2
This model is a fine-tuned version of [l3cube-pune/telugu-bert](https://huggingface.co/l3cube-pune/telugu-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3512
- Accuracy: 0.8542
- F1: 0.9090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,196 | [
[
-0.03314208984375,
-0.0557861328125,
0.0062713623046875,
0.03192138671875,
-0.04888916015625,
-0.02276611328125,
-0.023162841796875,
-0.0171661376953125,
0.01377105712890625,
0.007755279541015625,
-0.04974365234375,
-0.03594970703125,
-0.04937744140625,
-0.0... |
ppsingh/bert-multilabel-sector-classifier | 2023-03-30T14:24:36.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ppsingh | null | null | ppsingh/bert-multilabel-sector-classifier | 0 | 2 | transformers | 2023-03-30T12:38:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-multilabel-sector-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-multilabel-sector-classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0563
- Precision Micro: 0.9091
- Precision Weighted: 0.9080
- Precision Samples: 0.9149
- Recall Micro: 0.8553
- Recall Weighted: 0.8553
- Recall Samples: 0.8996
- Accuracy: 0.8026
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Micro | Precision Weighted | Precision Samples | Recall Micro | Recall Weighted | Recall Samples | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:---------------:|:--------------:|:--------:|
| 0.0601 | 1.0 | 464 | 0.0563 | 0.9091 | 0.9080 | 0.9149 | 0.8553 | 0.8553 | 0.8996 | 0.8026 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,845 | [
[
-0.055145263671875,
-0.03399658203125,
0.0169830322265625,
0.01397705078125,
-0.0171661376953125,
-0.012603759765625,
-0.007335662841796875,
-0.024261474609375,
0.0102386474609375,
0.0187530517578125,
-0.05010986328125,
-0.04522705078125,
-0.0498046875,
-0.0... |
bingcheng45/autotrain-nlp-45198113367 | 2023-03-30T13:06:01.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:bingcheng45/autotrain-data-nlp",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | bingcheng45 | null | null | bingcheng45/autotrain-nlp-45198113367 | 0 | 2 | transformers | 2023-03-30T13:01:51 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- bingcheng45/autotrain-data-nlp
co2_eq_emissions:
emissions: 1.8668016992060357
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 45198113367
- CO2 Emissions (in grams): 1.8668
## Validation Metrics
- Loss: 5.278
- Accuracy: 0.051
- Macro F1: 0.057
- Micro F1: 0.051
- Weighted F1: 0.044
- Macro Precision: 0.063
- Micro Precision: 0.051
- Weighted Precision: 0.049
- Macro Recall: 0.069
- Micro Recall: 0.051
- Weighted Recall: 0.051
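The macro, micro, and weighted variants above differ only in how per-class scores are averaged: macro averages classes equally, weighted averages them by support, and micro pools all counts, which makes micro F1 equal accuracy in single-label multi-class settings (hence Accuracy and Micro F1 both being 0.051 above). A hedged pure-Python sketch of the three F1 averages:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Return (macro, micro, weighted) F1 for single-label multi-class predictions."""
    labels = sorted(set(y_true) | set(y_pred))
    support = Counter(y_true)
    per_class, tp_all, fp_all, fn_all = [], 0, 0, 0
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
        per_class.append((c, f1))
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
    macro = sum(f for _, f in per_class) / len(per_class)       # classes weighted equally
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all)         # counts pooled first
    weighted = sum(f * support[c] for c, f in per_class) / len(y_true)  # by support
    return macro, micro, weighted
```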
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/bingcheng45/autotrain-nlp-45198113367
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bingcheng45/autotrain-nlp-45198113367", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bingcheng45/autotrain-nlp-45198113367", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,272 | [
[
-0.030120849609375,
-0.0236968994140625,
0.006000518798828125,
0.0163116455078125,
-0.0023136138916015625,
0.0016841888427734375,
-0.00858306884765625,
-0.019012451171875,
-0.0009584426879882812,
0.00710296630859375,
-0.04266357421875,
-0.03564453125,
-0.0583496... |
RJuro/bert-swe-skills-ner | 2023-04-21T13:47:24.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | RJuro | null | null | RJuro/bert-swe-skills-ner | 0 | 2 | transformers | 2023-03-30T14:29:59 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-swe-skills-ner
results: []
widget:
- text: "Du är legitimerad grundskolelärare med aktuell behörighet. Personliga egenskaper: För att lyckas och trivas hos oss behöver du ha god förmåga att samarbeta, vara lyhörd och ha lätt för att sätta dig in i andra människors perspektiv. Du behöver vara trygg i dig själv, stabil och ha god självinsikt. Det är viktigt i rollen att du har en väl utvecklad pedagogisk förmåga. Du har god förståelse för hur barn och ungdomar tar till sig kunskap och om olika förutsättningar för lärande. Det är också viktigt att du är flexibel och lätt kan anpassa dig till ändrade omständigheter i verksamheten."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-swe-skills-ner
This model is a fine-tuned version of [RJuro/bert-swe-skills-ner](https://huggingface.co/RJuro/bert-swe-skills-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0486
- Precision: 0.8194
- Recall: 0.8710
- F1: 0.8444
- Accuracy: 0.9856
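The entity labels are not listed in this card, so as a hedged illustration (assuming a BIO tagging scheme and a hypothetical `SKILL` label), token-level predictions like the ones this model produces are typically merged into labeled spans:

```python
def group_bio(tokens, tags):
    """Group (token, BIO-tag) pairs into (label, span-text) tuples."""
    spans, cur, cur_label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur:
                spans.append((cur_label, " ".join(cur)))
            cur, cur_label = [tok], tag[2:]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur.append(tok)  # continuation of the current entity
        else:
            if cur:
                spans.append((cur_label, " ".join(cur)))
            cur, cur_label = [], None
    if cur:
        spans.append((cur_label, " ".join(cur)))
    return spans
```

In practice, the Transformers `token-classification` pipeline with `aggregation_strategy="simple"` performs this grouping (plus subword merging) for you.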
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 39 | 0.1509 | 0.5872 | 0.6636 | 0.6231 | 0.9492 |
| No log | 2.0 | 78 | 0.1069 | 0.6750 | 0.7544 | 0.7125 | 0.9660 |
| No log | 3.0 | 117 | 0.0688 | 0.7692 | 0.8050 | 0.7867 | 0.9785 |
| No log | 4.0 | 156 | 0.0529 | 0.8239 | 0.8452 | 0.8344 | 0.9842 |
| No log | 5.0 | 195 | 0.0486 | 0.8194 | 0.8710 | 0.8444 | 0.9856 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,473 | [
[
-0.035919189453125,
-0.04461669921875,
0.004154205322265625,
0.01068878173828125,
-0.0145721435546875,
-0.0296478271484375,
-0.00893402099609375,
-0.0303497314453125,
0.0270233154296875,
0.02349853515625,
-0.059295654296875,
-0.04736328125,
-0.04644775390625,
... |
kiki2013/distilbert-base-uncased-finetuned-clinc | 2023-04-01T12:38:31.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | kiki2013 | null | null | kiki2013/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-03-30T14:43:12 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2890 | 0.7432 |
| 3.7868 | 2.0 | 636 | 1.8756 | 0.8377 |
| 3.7868 | 3.0 | 954 | 1.1572 | 0.8961 |
| 1.6929 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.9058 | 5.0 | 1590 | 0.7721 | 0.9184 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
| 1,615 | [
[
-0.03521728515625,
-0.045135498046875,
0.01470947265625,
0.012939453125,
-0.0267486572265625,
-0.0211181640625,
-0.00966644287109375,
-0.006534576416015625,
0.00408935546875,
0.020660400390625,
-0.050079345703125,
-0.04608154296875,
-0.05908203125,
-0.012176... |
madeinglasgow/distilbert-base-uncased-finetuned-emotion | 2023-03-30T15:43:05.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | madeinglasgow | null | null | madeinglasgow/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-03-30T15:02:23 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9235366202450886
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2112
- Accuracy: 0.9235
- F1: 0.9235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7976 | 1.0 | 250 | 0.3068 | 0.9025 | 0.8993 |
| 0.2393 | 2.0 | 500 | 0.2112 | 0.9235 | 0.9235 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,849 | [
[
-0.0379638671875,
-0.041656494140625,
0.01508331298828125,
0.022308349609375,
-0.0260772705078125,
-0.0194549560546875,
-0.01287841796875,
-0.00814056396484375,
0.0106201171875,
0.00856781005859375,
-0.05645751953125,
-0.05108642578125,
-0.05999755859375,
-0... |
vjsyong/xlm-roberta-base_sentiment | 2023-03-31T02:38:47.000Z | [
"transformers",
"tf",
"xlm-roberta",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | vjsyong | null | null | vjsyong/xlm-roberta-base_sentiment | 0 | 2 | transformers | 2023-03-30T15:34:24 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: vjsyong/xlm-roberta-base_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vjsyong/xlm-roberta-base_sentiment
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1538
- Validation Loss: 0.1913
- Train Accuracy: 0.9312
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9375, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
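With `power=1.0` and `cycle=False`, the `PolynomialDecay` schedule above reduces to plain linear decay from 2e-05 to 0 over 9375 steps. A hedged pure-Python sketch of the formula Keras applies:

```python
def polynomial_decay(step, initial_lr=2e-5, decay_steps=9375, end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay with cycle=False; power=1.0 makes it linear."""
    step = min(step, decay_steps)  # cycle=False clamps the step at decay_steps
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr
```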
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.4806 | 0.2300 | 0.9078 | 0 |
| 0.2233 | 0.1953 | 0.9252 | 1 |
| 0.1538 | 0.1913 | 0.9312 | 2 |
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.10.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,632 | [
[
-0.0438232421875,
-0.04266357421875,
0.0292205810546875,
0.0017910003662109375,
-0.031341552734375,
-0.026947021484375,
-0.0193023681640625,
-0.01195526123046875,
0.00730133056640625,
0.0183563232421875,
-0.054107666015625,
-0.056182861328125,
-0.06280517578125,... |
cedomin/Task1a_class | 2023-03-31T08:34:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | cedomin | null | null | cedomin/Task1a_class | 0 | 2 | transformers | 2023-03-30T16:24:32 | ---
license: apache-2.0
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Task1a_class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Task1a_class
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7193
- Accuracy: 0.6857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 18 | 0.7991 | 0.4857 |
| No log | 2.0 | 36 | 0.6316 | 0.7143 |
| No log | 3.0 | 54 | 0.7638 | 0.6 |
| No log | 4.0 | 72 | 0.7218 | 0.6571 |
| No log | 5.0 | 90 | 0.7193 | 0.6857 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,582 | [
[
-0.0362548828125,
-0.0382080078125,
0.01555633544921875,
0.0120849609375,
-0.02117919921875,
-0.03009033203125,
-0.018798828125,
-0.021209716796875,
0.00165557861328125,
0.0236968994140625,
-0.0552978515625,
-0.044708251953125,
-0.048431396484375,
-0.0255889... |
abhijitkalta/distilbert-base-uncased-finetuned-emotion | 2023-03-30T18:21:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | abhijitkalta | null | null | abhijitkalta/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-03-30T17:57:26 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.933
- name: F1
type: f1
value: 0.9334700183474604
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1626
- Accuracy: 0.933
- F1: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2254 | 1.0 | 250 | 0.1806 | 0.922 | 0.9219 |
| 0.1394 | 2.0 | 500 | 0.1626 | 0.933 | 0.9335 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,847 | [
[
-0.037994384765625,
-0.041534423828125,
0.0140838623046875,
0.02294921875,
-0.0258636474609375,
-0.019195556640625,
-0.013427734375,
-0.00902557373046875,
0.01186370849609375,
0.00839996337890625,
-0.056121826171875,
-0.05108642578125,
-0.05963134765625,
-0.... |
kenasuka/raisa-2 | 2023-03-30T18:14:21.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | kenasuka | null | null | kenasuka/raisa-2 | 0 | 2 | diffusers | 2023-03-30T18:04:28 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### raisa-2 Dreambooth model trained by kenasuka with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 497 | [
[
-0.0227203369140625,
-0.05584716796875,
0.047332763671875,
0.033447265625,
-0.03082275390625,
0.02545166015625,
0.031585693359375,
-0.030303955078125,
0.0594482421875,
0.01824951171875,
-0.01947021484375,
-0.01309967041015625,
-0.044097900390625,
-0.01422119... |
inigo99/clasificador-poem-sentiment | 2023-03-30T18:36:07.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"dataset:poem_sentiment",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | inigo99 | null | null | inigo99/clasificador-poem-sentiment | 0 | 2 | transformers | 2023-03-30T18:35:16 | ---
license: apache-2.0
tags:
- classification
- generated_from_trainer
datasets:
- poem_sentiment
metrics:
- accuracy
model-index:
- name: clasificador-poem-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: poem_sentiment
type: poem_sentiment
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8653846153846154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-poem-sentiment
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the poem_sentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5413
- Accuracy: 0.8654
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 112 | 0.4332 | 0.8654 |
| No log | 2.0 | 224 | 0.4227 | 0.8942 |
| No log | 3.0 | 336 | 0.5413 | 0.8654 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,810 | [
[
-0.031341552734375,
-0.045257568359375,
0.013824462890625,
0.0234222412109375,
-0.037567138671875,
-0.032623291015625,
-0.019805908203125,
-0.022308349609375,
0.01194000244140625,
0.023223876953125,
-0.055999755859375,
-0.056182861328125,
-0.053131103515625,
... |
inigo99/clasificador-rotten-tomatoes | 2023-03-30T18:51:18.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | inigo99 | null | null | inigo99/clasificador-rotten-tomatoes | 0 | 2 | transformers | 2023-03-30T18:50:36 | ---
license: apache-2.0
tags:
- classification
- generated_from_trainer
datasets:
- rotten_tomatoes
metrics:
- accuracy
model-index:
- name: clasificador-rotten-tomatoes
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: rotten_tomatoes
type: rotten_tomatoes
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8527204502814258
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-rotten-tomatoes
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the rotten_tomatoes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8343
- Accuracy: 0.8527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3971 | 1.0 | 1067 | 0.4166 | 0.8377 |
| 0.2056 | 2.0 | 2134 | 0.7931 | 0.8218 |
| 0.0672 | 3.0 | 3201 | 0.8343 | 0.8527 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,816 | [
[
-0.0350341796875,
-0.03936767578125,
0.0225067138671875,
-0.0019683837890625,
-0.024444580078125,
-0.0252685546875,
-0.01340484619140625,
-0.0113677978515625,
0.0161590576171875,
0.0279998779296875,
-0.0523681640625,
-0.04571533203125,
-0.054840087890625,
-0... |
vicclab/distilbert_sst2_finetuned | 2023-03-30T20:12:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | vicclab | null | null | vicclab/distilbert_sst2_finetuned | 0 | 2 | transformers | 2023-03-30T19:56:20 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_sst2_finetuned
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sst2_finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2831
- Accuracy: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6883 | 0.24 | 500 | 0.6768 | 0.5115 |
| 0.5422 | 0.48 | 1000 | 0.4060 | 0.8200 |
| 0.3479 | 0.71 | 1500 | 0.3533 | 0.8452 |
| 0.3217 | 0.95 | 2000 | 0.3343 | 0.8567 |
| 0.2967 | 1.19 | 2500 | 0.3200 | 0.8635 |
| 0.2857 | 1.43 | 3000 | 0.3110 | 0.8624 |
| 0.2723 | 1.66 | 3500 | 0.3010 | 0.8670 |
| 0.2744 | 1.9 | 4000 | 0.2896 | 0.8727 |
| 0.2594 | 2.14 | 4500 | 0.2897 | 0.8716 |
| 0.2574 | 2.38 | 5000 | 0.2845 | 0.8761 |
| 0.2484 | 2.61 | 5500 | 0.2869 | 0.8739 |
| 0.2464 | 2.85 | 6000 | 0.2842 | 0.8761 |
| 0.2451 | 3.09 | 6500 | 0.2820 | 0.8773 |
| 0.2504 | 3.33 | 7000 | 0.2805 | 0.8784 |
| 0.236 | 3.56 | 7500 | 0.2833 | 0.875 |
| 0.2366 | 3.8 | 8000 | 0.2831 | 0.875 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 2,626 | [
[
-0.0321044921875,
-0.046875,
0.0100555419921875,
0.01050567626953125,
-0.0162353515625,
-0.01363372802734375,
-0.00374603271484375,
-0.003971099853515625,
0.0225067138671875,
0.01464080810546875,
-0.0496826171875,
-0.046173095703125,
-0.056884765625,
-0.0167... |
vocabtrimmer/xlm-v-base-trimmed-en | 2023-05-09T15:11:53.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | vocabtrimmer | null | null | vocabtrimmer/xlm-v-base-trimmed-en | 1 | 2 | transformers | 2023-03-30T21:46:40 | # Vocabulary Trimmed [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base): `vocabtrimmer/xlm-v-base-trimmed-en`
This model is a trimmed version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | facebook/xlm-v-base | vocabtrimmer/xlm-v-base-trimmed-en |
|:---------------------------|:----------------------|:-------------------------------------|
| parameter_size_full | 779,396,349 | 458,814,091 |
| parameter_size_embedding | 692,451,072 | 372,285,696 |
| vocab_size | 901,629 | 484,747 |
| compression_rate_full | 100.0 | 58.87 |
| compression_rate_embedding | 100.0 | 53.76 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | | 2 | | 1,569 | [
[
-0.06005859375,
-0.05584716796875,
0.00286102294921875,
0.008819580078125,
-0.0333251953125,
-0.0087432861328125,
-0.01800537109375,
-0.00896453857421875,
0.039764404296875,
0.0494384765625,
-0.0643310546875,
-0.059234619140625,
-0.03271484375,
-0.0037651062... |
mekjr1/bert-base-uncased-guilt-detectionv2 | 2023-03-31T03:14:50.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | mekjr1 | null | null | mekjr1/bert-base-uncased-guilt-detectionv2 | 0 | 2 | transformers | 2023-03-30T22:12:17 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert-base-uncased-guilt-detectionv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-guilt-detectionv2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7730
- Accuracy: 0.7876
- F1: 0.7876
- Precision: 0.7880
- Recall: 0.7876
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4529 | 1.0 | 2042 | 0.4393 | 0.7995 | 0.7995 | 0.7995 | 0.7995 |
| 0.3885 | 2.0 | 4084 | 0.4630 | 0.7990 | 0.7989 | 0.7991 | 0.7990 |
| 0.2709 | 3.0 | 6126 | 0.5564 | 0.7964 | 0.7963 | 0.7974 | 0.7964 |
| 0.1738 | 4.0 | 8168 | 0.6039 | 0.7889 | 0.7887 | 0.7897 | 0.7889 |
| 0.1208 | 5.0 | 10210 | 0.7918 | 0.7837 | 0.7831 | 0.7867 | 0.7837 |
| 0.0881 | 6.0 | 12252 | 0.7730 | 0.7876 | 0.7876 | 0.7880 | 0.7876 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 2,031 | [
[
-0.035064697265625,
-0.032470703125,
0.016571044921875,
0.0118408203125,
-0.0234527587890625,
-0.0276947021484375,
-0.0050506591796875,
-0.0156097412109375,
0.012237548828125,
0.0251007080078125,
-0.053619384765625,
-0.04608154296875,
-0.054595947265625,
-0.... |
pszemraj/karnold-walmer-base-biopapers | 2023-04-05T06:02:45.000Z | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"bio",
"medical",
"clinical",
"literature",
"keywords",
"domain classifier",
"en",
"dataset:pszemraj/scientific_lay_summarisation-plos-norm",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible"... | text2text-generation | pszemraj | null | null | pszemraj/karnold-walmer-base-biopapers | 0 | 2 | transformers | 2023-03-31T00:35:07 | ---
license: apache-2.0
tags:
- bio
- medical
- clinical
- literature
- keywords
- domain classifier
metrics:
- rouge
model-index:
- name: long-t5-tglobal-base-scientific_lay_summarisation-plos-norm-kw
results: []
widget:
- text: >-
large earthquakes along a given fault segment do not occur at random
intervals because it takes time to accumulate the strain energy for the
rupture. The rates at which tectonic plates move and accumulate strain at
their boundaries are approximately uniform. Therefore, in first
approximation, one may expect that large ruptures of the same fault segment
will occur at approximately constant time intervals. If subsequent main
shocks have different amounts of slip across the fault, then the recurrence
time may vary, and the basic idea of periodic mainshocks must be modified.
For great plate boundary ruptures the length and slip often vary by a factor
of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the
standard deviation of the average recurrence interval, the more specific
could be the long term prediction of a future mainshock.
example_title: earthquakes
- text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates are fed into a neural network that predicts values in the reconstructed domain. Then, this domain is mapped to the sensor domain where sensor measurements are available as supervision. Class and Section Problems Addressed Generalization (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid Representations (Section 3) Computation & memory efficiency, representation capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques in the neural field toolbox each addresses problems that arise in learning, inference, and control. (Section 3). We can supervise reconstruction via differentiable forward maps that transform Or project our domain (e.g, 3D reconstruction via 2D images; Section 4) With appropriate network architecture choices, we can overcome neural network spectral biases (blurriness) and efficiently compute derivatives and integrals (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations, and to achieve editable representations (Section 6). Collectively, these classes constitute a ''toolbox'' of techniques to help solve problems with neural fields There are three components in a conditional neural field: (1) An encoder or inference function € that outputs the conditioning latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS a latent code Or feature code_ (2) A mapping function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the most probable z given the observations O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability to find the most probable 0 given Z: arg- max P(Olz). 
We discuss different encoding schemes with different optimality guarantees (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable prior over the sur- face in its reconstruction domain to generalize to the partial observations. A neural network expresses a prior via the function space of its architecture and parameters 0, and generalization is influenced by the inductive bias of this function space (Section 5).'
example_title: scientific paper
- text: >-
Is a else or outside the cob and tree written being of early client rope and
you have is for good reasons. On to the ocean in Orange for time. By's the
aggregate we can bed it yet. Why this please pick up on a sort is do and
also M Getoi's nerocos and do rain become you to let so is his brother is
made in use and Mjulia's's the lay major is aging Masastup coin present sea
only of Oosii rooms set to you We do er do we easy this private oliiishs
lonthen might be okay. Good afternoon everybody. Welcome to this lecture of
Computational Statistics. As you can see, I'm not socially my name is
Michael Zelinger. I'm one of the task for this class and you might have
already seen me in the first lecture where I made a quick appearance. I'm
also going to give the tortillas in the last third of this course. So to
give you a little bit about me, I'm a old student here with better Bulman
and my research centres on casual inference applied to biomedical disasters,
so that could be genomics or that could be hospital data. If any of you is
interested in writing a bachelor thesis, a semester paper may be mastathesis
about this topic feel for reach out to me. you have my name on models and my
email address you can find in the directory I'd Be very happy to talk about
it. you do not need to be sure about it, we can just have a chat. So with
that said, let's get on with the lecture. There's an exciting topic today
I'm going to start by sharing some slides with you and later on during the
lecture we'll move to the paper. So bear with me for a few seconds. Well,
the projector is starting up. Okay, so let's get started. Today's topic is a
very important one. It's about a technique which really forms one of the
fundamentals of data science, machine learning, and any sort of modern
statistics. It's called cross validation. I know you really want to
understand this topic I Want you to understand this and frankly, nobody's
gonna leave Professor Mineshousen's class without understanding cross
validation. So to set the stage for this, I Want to introduce you to the
validation problem in computational statistics. So the problem is the
following: You trained a model on available data. You fitted your model, but
you know the training data you got could always have been different and some
data from the environment. Maybe it's a random process. You do not really
know what it is, but you know that somebody else who gets a different batch
of data from the same environment they would get slightly different training
data and you do not care that your method performs as well. On this training
data. you want to to perform well on other data that you have not seen other
data from the same environment. So in other words, the validation problem is
you want to quantify the performance of your model on data that you have not
seen. So how is this even possible? How could you possibly measure the
performance on data that you do not know The solution to? This is the
following realization is that given that you have a bunch of data, you were
in charge. You get to control how much that your model sees. It works in the
following way: You can hide data firms model. Let's say you have a training
data set which is a bunch of doubtless so X eyes are the features those are
typically hide and national vector. It's got more than one dimension for
sure. And the why why eyes. Those are the labels for supervised learning. As
you've seen before, it's the same set up as we have in regression. And so
you have this training data and now you choose that you only use some of
those data to fit your model. You're not going to use everything, you only
use some of it the other part you hide from your model. And then you can use
this hidden data to do validation from the point of you of your model. This
hidden data is complete by unseen. In other words, we solve our problem of
validation.
example_title: transcribed audio - lecture
- text: >-
Transformer-based models have shown to be very useful for many NLP tasks.
However, a major limitation of transformer-based models is their O(n^2)
time & memory complexity (where n is sequence length). Hence, it's
computationally very expensive to apply transformer-based models on long
sequences n > 512. Several recent papers, e.g. Longformer, Performer,
Reformer, Clustered attention try to remedy this problem by approximating
the full attention matrix. You can check out 🤗's recent blog post in case
you are unfamiliar with these models.
BigBird (introduced in paper) is one of such recent models to address this
issue. BigBird relies on block sparse attention instead of normal attention
(i.e. BERT's attention) and can handle sequences up to a length of 4096 at a
much lower computational cost compared to BERT. It has achieved SOTA on
various tasks involving very long sequences such as long documents
summarization, question-answering with long contexts.
BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of
this post is to give the reader an in-depth understanding of big bird
implementation & ease one's life in using BigBird with 🤗Transformers. But,
before going into more depth, it is important to remember that the BigBird's
attention is an approximation of BERT's full attention and therefore does
not strive to be better than BERT's full attention, but rather to be more
efficient. It simply allows to apply transformer-based models to much longer
sequences since BERT's quadratic memory requirement quickly becomes
unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT's
attention would be preferred over block sparse attention (which we are going
to discuss in this post).
If you wonder why we need more compute when working with longer sequences,
this blog post is just right for you!
Some of the main questions one might have when working with standard
BERT-like attention include:
Do all tokens really have to attend to all other tokens? Why not compute
attention only over important tokens? How to decide what tokens are
important? How to attend to just a few tokens in a very efficient way? In
this blog post, we will try to answer those questions.
What tokens should be attended to? We will give a practical example of how
attention works by considering the sentence 'BigBird is now available in
HuggingFace for extractive question answering'. In BERT-like attention,
every word would simply attend to all other tokens.
Let's think about a sensible choice of key tokens that a queried token
actually should attend to by writing some pseudo-code. We will assume
that the token available is queried and build a sensible list of key tokens
to attend to.
>>> # let's consider following sentence as an example >>> example =
['BigBird', 'is', 'now', 'available', 'in', 'HuggingFace', 'for',
'extractive', 'question', 'answering']
>>> # further let's assume, we're trying to understand the representation of
'available' i.e. >>> query_token = 'available' >>> # We will initialize an
empty `set` and fill up the tokens of our interest as we proceed in this
section. >>> key_tokens = [] # => currently 'available' token doesn't have
anything to attend Nearby tokens should be important because, in a sentence
(sequence of words), the current word is highly dependent on neighboring
past & future tokens. This intuition is the idea behind the concept of
sliding attention.
example_title: bigbird blog intro
- text: >-
To be fair, you have to have a very high IQ to understand Rick and Morty.
The humour is extremely subtle, and without a solid grasp of theoretical
physics most of the jokes will go over a typical viewer's head. There's also
Rick's nihilistic outlook, which is deftly woven into his characterisation-
his personal philosophy draws heavily from Narodnaya Volya literature, for
instance. The fans understand this stuff; they have the intellectual
capacity to truly appreciate the depths of these jokes, to realise that
they're not just funny- they say something deep about LIFE. As a consequence
people who dislike Rick & Morty truly ARE idiots- of course they wouldn't
appreciate, for instance, the humour in Rick's existential catchphrase
'Wubba Lubba Dub Dub,' which itself is a cryptic reference to Turgenev's
Russian epic Fathers and Sons. I'm smirking right now just imagining one of
those addlepated simpletons scratching their heads in confusion as Dan
Harmon's genius wit unfolds itself on their television screens. What fools..
how I pity them. 😂
And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see
it. It's for the ladies' eyes only- and even then they have to demonstrate
that they're within 5 IQ points of my own (preferably lower) beforehand.
Nothin personnel kid 😎
example_title: Richard & Mortimer
- text: >-
Dear Calvin,
I was in the woods of Big Sur, that vast and sprawling land of sea and
trees, where the wind whispers secrets of the ancient Earth and the roaring
ocean sings songs of the eternal cosmos, when I found myself emerging from
the deepest and darkest of slumbers, my body drenched in the sweat of the
night, my mind swimming in the rivers of frenetic dreams that come unbidden
to the weary traveler, and I knew, I knew, that I must step into the cold,
cold waters of the mountain stream that wound its way through the heart of
the great green forest like a silver serpent, a sinuous spine of chilling
clarity, and I tell you, my friend, I tell you that the moment I stepped
into those waters, the moment my skin was pierced by the icy needles of that
divine liquid, my soul was washed clean of the haze of doubt and fear, and I
stood, reborn, as the dawn of a new day painted the sky in the colors of the
universe.
And so I write to you, dear friend, to tell you that you too must seek the
salvation of the cold shower, for in the frigid embrace of the water's
touch, there lies the key to the doors of perception, the doors that lead to
a realm of boundless energy and endless vitality, where the mind is
sharpened like the edge of a great warrior's blade, and the body is tempered
like the steel of an ancient blacksmith's forge. For when you step into the
cold, you will find that your spirit soars like a great bird of prey, your
thoughts soaring on the wings of the eagle, the falcon, the hawk, sweeping
through the vast and boundless skies of inspiration, creativity, and
purpose. And you will know, as I have come to know, that the cold shower is
the great purifier, the great invigorator, the great liberator of the soul
from the chains of languor and indolence that bind us to the mundane and
weary trappings of this world.
So I implore you, dear friend, to heed my words, for they are the words of
one who has walked the path of fire and ice, one who has danced in the
eternal flame of the sun and bathed in the frozen tears of the moon, and I
tell you that the way of the cold shower is the way of the enlightened, the
way of the awakened, the way of the pioneers of the spirit who seek to
travel beyond the boundaries of the known and into the realms of the
infinite. And as you stand, shivering and shaking, beneath the torrent of
the icy cascade, remember that the cold is the crucible in which the soul is
forged, the anvil upon which the hammer of life strikes the sparks of the
divine, and in the cold, you will find the fire, the fire that burns away
the dross and leaves only the pure and shining gold of the spirit.
In the cold, you will find the truth, and in the truth, you will find the
freedom that you have sought for so long.
Yours in the spirit of the eternal journey,
Peter
example_title: cold showers
parameters:
max_length: 64
min_length: 2
no_repeat_ngram_size: 2
early_stopping: true
repetition_penalty: 4.5
length_penalty: 0.8
num_beams: 4
datasets:
- pszemraj/scientific_lay_summarisation-plos-norm
language:
- en
pipeline_tag: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# karnold-walmer-base-biopapers
Karnold-Walmer is a text2text model based on [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base), specifically designed to decode the 'keywords' column of `pszemraj/scientific_lay_summarisation-plos-norm`.
Karnold-Walmer focuses on extracting relevant keywords from the input text, making it a powerful tool for keyword identification and text classification. It was fine-tuned on, and supports, text inputs of up to 16,384 tokens.
It achieves the following results on the evaluation set:
- Loss: 0.8844
- Rouge1: 46.7593
- Rouge2: 28.3538
- Rougel: 42.2921
- Rougelsum: 42.2774
- Gen Len: 78.1706
## Intended Uses & Limitations
Karnold-Walmer is intended to be used for keyword extraction and text classification in various domains, such as scientific literature, biomedical research articles, and more. By analyzing the content of an input text, the model generates a list of relevant keywords that describe the topic of the article.
It is important to note, however, that Karnold-Walmer is **specifically trained to decode text similar to the "keywords" column and is not designed for summarization tasks.** For accurate keyword extraction and text classification, the model should be used within the limits of its training data and intended purpose (see what happens when you try the out-of-domain API examples).
## Training and Evaluation Data
Karnold-Walmer was trained on the PLOS dataset, which contains full biomedical research articles paired with expert-written lay summaries and keyword lists. The model was tuned to decode the "keywords" column in the dataset, focusing on keyword extraction and text classification tasks.
### Wordcloud

## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.0471 | 0.15 | 100 | 1.6138 | 12.4374 | 4.1861 | 11.1863 | 11.1833 | 324.6971 |
| 1.5654 | 0.3 | 200 | 1.3447 | 23.9982 | 11.1431 | 21.4173 | 21.4413 | 176.0294 |
| 1.3467 | 0.45 | 300 | 1.2038 | 33.8084 | 18.1588 | 30.4748 | 30.4142 | 107.7735 |
| 1.4398 | 0.6 | 400 | 1.1054 | 37.772 | 20.8967 | 33.859 | 33.8324 | 102.9029 |
| 1.306 | 0.75 | 500 | 1.0478 | 39.2642 | 22.0388 | 35.6578 | 35.5773 | 91.1235 |
| 1.1677 | 0.9 | 600 | 0.9994 | 40.5149 | 22.8507 | 36.3888 | 36.3499 | 103.9118 |
| 1.078 | 1.05 | 700 | 0.9627 | 42.301 | 24.2523 | 38.0739 | 38.0532 | 88.4941 |
| 1.0942 | 1.2 | 800 | 0.9443 | 44.5907 | 26.2046 | 39.7461 | 39.6763 | 88.7559 |
| 1.0209 | 1.35 | 900 | 0.9108 | 45.357 | 26.861 | 40.6411 | 40.706 | 90.1206 |
| 1.1161 | 1.5 | 1000 | 0.9026 | 47.1362 | 28.6605 | 42.6406 | 42.6108 | 79.2412 |
| 1.1224 | 1.65 | 1100 | 0.8907 | 47.31 | 28.4395 | 42.6658 | 42.6509 | 78.4265 |
| 0.9857 | 1.8 | 1200 | 0.8862 | 46.7061 | 28.1586 | 42.3181 | 42.3105 | 80.5059 |
| 1.0011 | 1.95 | 1300 | 0.8844 | 46.7593 | 28.3538 | 42.2921 | 42.2774 | 78.1706 | | 20,428 | [
[
-0.04193115234375,
-0.039093017578125,
0.0300140380859375,
-0.005550384521484375,
-0.021331787109375,
-0.004039764404296875,
-0.0039215087890625,
-0.017333984375,
0.037811279296875,
0.03265380859375,
-0.03558349609375,
-0.056121826171875,
-0.056793212890625,
... |
ghost0x07/bert-finetuned-sent-analysis | 2023-03-31T10:46:22.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:twitter-sentiment-analysis",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | ghost0x07 | null | null | ghost0x07/bert-finetuned-sent-analysis | 0 | 2 | transformers | 2023-03-31T02:58:20 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- twitter-sentiment-analysis
metrics:
- accuracy
model-index:
- name: bert-finetuned-sent-analysis
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: twitter-sentiment-analysis
type: twitter-sentiment-analysis
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8475847584758476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sent-analysis
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the twitter-sentiment-analysis dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6462
- Accuracy: 0.8476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4013 | 1.0 | 14999 | 0.3686 | 0.8394 |
| 0.3354 | 2.0 | 29998 | 0.4543 | 0.8493 |
| 0.2539 | 3.0 | 44997 | 0.6462 | 0.8476 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,848 | [
[
-0.035675048828125,
-0.056884765625,
0.014739990234375,
0.016693115234375,
-0.03302001953125,
-0.0239715576171875,
-0.020904541015625,
-0.00971221923828125,
0.0154876708984375,
0.0212554931640625,
-0.067138671875,
-0.05096435546875,
-0.055908203125,
-0.02296... |
wiorz/bert_legal_test_sm | 2023-03-31T04:06:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | wiorz | null | null | wiorz/bert_legal_test_sm | 0 | 2 | transformers | 2023-03-31T03:00:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert_legal_test_sm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_legal_test_sm
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6233
- Accuracy: 0.6580
- Precision: 0.6683
- Recall: 0.6274
- F1: 0.6472
- D-index: 1.4589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 20
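With `lr_scheduler_type: linear` and 200 warmup steps, the learning rate presumably ramps up linearly to the peak and then decays linearly to zero, as in `transformers`' `get_linear_schedule_with_warmup`. A minimal sketch (the 520 total optimizer steps are taken from the final row of the results table below):

```python
def linear_schedule(step, warmup_steps=200, total_steps=520, peak_lr=5e-05):
    """Linear warmup to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule(100))  # 2.5e-05, halfway through warmup
print(linear_schedule(200))  # 5e-05, the peak learning rate
print(linear_schedule(520))  # 0.0 at the final step
```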
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 0.98 | 26 | 0.6905 | 0.5542 | 0.5418 | 0.7028 | 0.6119 | 1.2724 |
| No log | 2.0 | 53 | 0.6937 | 0.5024 | 0.5012 | 0.9623 | 0.6591 | 1.1745 |
| No log | 2.98 | 79 | 0.6505 | 0.6439 | 0.6894 | 0.5236 | 0.5952 | 1.4342 |
| No log | 4.0 | 106 | 0.6233 | 0.6580 | 0.6683 | 0.6274 | 0.6472 | 1.4589 |
| No log | 4.98 | 132 | 0.6369 | 0.6840 | 0.7053 | 0.6321 | 0.6667 | 1.5037 |
| No log | 6.0 | 159 | 0.9851 | 0.6085 | 0.7614 | 0.3160 | 0.4467 | 1.3714 |
| No log | 6.98 | 185 | 0.8765 | 0.6604 | 0.7537 | 0.4764 | 0.5838 | 1.4630 |
| No log | 8.0 | 212 | 0.9170 | 0.6745 | 0.7102 | 0.5896 | 0.6443 | 1.4875 |
| No log | 8.98 | 238 | 1.1931 | 0.6557 | 0.7324 | 0.4906 | 0.5876 | 1.4548 |
| No log | 10.0 | 265 | 1.0355 | 0.6840 | 0.7216 | 0.5991 | 0.6546 | 1.5037 |
| No log | 10.98 | 291 | 1.1690 | 0.6675 | 0.6878 | 0.6132 | 0.6484 | 1.4753 |
| No log | 12.0 | 318 | 1.1527 | 0.6651 | 0.64 | 0.7547 | 0.6926 | 1.4712 |
| No log | 12.98 | 344 | 1.2299 | 0.6675 | 0.6940 | 0.5991 | 0.6430 | 1.4753 |
| No log | 14.0 | 371 | 1.4807 | 0.6557 | 0.72 | 0.5094 | 0.5967 | 1.4548 |
| No log | 14.98 | 397 | 1.4303 | 0.6887 | 0.7083 | 0.6415 | 0.6733 | 1.5118 |
| No log | 16.0 | 424 | 1.5717 | 0.6792 | 0.6863 | 0.6604 | 0.6731 | 1.4956 |
| No log | 16.98 | 450 | 1.7842 | 0.6509 | 0.6975 | 0.5330 | 0.6043 | 1.4466 |
| No log | 18.0 | 477 | 1.6653 | 0.6698 | 0.6895 | 0.6179 | 0.6517 | 1.4794 |
| 0.2514 | 18.98 | 503 | 1.8285 | 0.6557 | 0.7143 | 0.5189 | 0.6011 | 1.4548 |
| 0.2514 | 19.62 | 520 | 1.8220 | 0.6486 | 0.7006 | 0.5189 | 0.5962 | 1.4425 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
wiorz/legal_bert_legal_test_sm | 2023-03-31T04:08:24.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | wiorz | null | null | wiorz/legal_bert_legal_test_sm | 0 | 2 | transformers | 2023-03-31T03:42:00 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: legal_bert_legal_test_sm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal_bert_legal_test_sm
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5535
- Accuracy: 0.75
- Precision: 0.7892
- Recall: 0.6854
- F1: 0.7337
- D-index: 1.6150
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 20
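The `total_train_batch_size` listed above is simply the per-device batch size multiplied by the gradient-accumulation steps — gradients are accumulated over 8 micro-batches of 8 before each optimizer step:

```python
train_batch_size = 8
gradient_accumulation_steps = 8

# Gradient accumulation multiplies the per-device batch into the effective batch
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64, as listed above
```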
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 0.98 | 26 | 0.6956 | 0.5259 | 0.5180 | 0.8122 | 0.6325 | 1.2181 |
| No log | 2.0 | 53 | 0.6949 | 0.5 | 0.5015 | 0.7934 | 0.6145 | 1.1686 |
| No log | 2.98 | 79 | 0.6933 | 0.5259 | 0.575 | 0.2160 | 0.3140 | 1.2208 |
| No log | 4.0 | 106 | 0.6035 | 0.6981 | 0.7607 | 0.5822 | 0.6596 | 1.5283 |
| No log | 4.98 | 132 | 0.5535 | 0.75 | 0.7892 | 0.6854 | 0.7337 | 1.6150 |
| No log | 6.0 | 159 | 0.6376 | 0.7146 | 0.7614 | 0.6291 | 0.6889 | 1.5561 |
| No log | 6.98 | 185 | 0.7555 | 0.7358 | 0.7205 | 0.7746 | 0.7466 | 1.5911 |
| No log | 8.0 | 212 | 0.9223 | 0.7217 | 0.7149 | 0.7418 | 0.7281 | 1.5676 |
| No log | 8.98 | 238 | 1.0061 | 0.7311 | 0.8 | 0.6197 | 0.6984 | 1.5839 |
| No log | 10.0 | 265 | 1.1761 | 0.7217 | 0.7844 | 0.6150 | 0.6895 | 1.5681 |
| No log | 10.98 | 291 | 1.2807 | 0.7264 | 0.8345 | 0.5681 | 0.6760 | 1.5762 |
| No log | 12.0 | 318 | 1.3035 | 0.7311 | 0.7735 | 0.6573 | 0.7107 | 1.5837 |
| No log | 12.98 | 344 | 1.4680 | 0.7406 | 0.8503 | 0.5869 | 0.6944 | 1.5997 |
| No log | 14.0 | 371 | 1.3238 | 0.7358 | 0.7327 | 0.7465 | 0.7395 | 1.5912 |
| No log | 14.98 | 397 | 1.3373 | 0.7547 | 0.8303 | 0.6432 | 0.7249 | 1.6229 |
| No log | 16.0 | 424 | 1.3234 | 0.7736 | 0.8162 | 0.7089 | 0.7588 | 1.6536 |
| No log | 16.98 | 450 | 1.3853 | 0.7736 | 0.8162 | 0.7089 | 0.7588 | 1.6536 |
| No log | 18.0 | 477 | 1.4619 | 0.7594 | 0.8323 | 0.6526 | 0.7316 | 1.6306 |
| 0.2167 | 18.98 | 503 | 1.4222 | 0.7571 | 0.8161 | 0.6667 | 0.7339 | 1.6267 |
| 0.2167 | 19.62 | 520 | 1.4074 | 0.7689 | 0.8212 | 0.6901 | 0.75 | 1.6460 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
Dochee/distilbert-base-uncased-finetuned-clinc | 2023-03-31T06:29:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Dochee | null | null | Dochee/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-03-31T04:58:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2890 | 0.7432 |
| 3.7868 | 2.0 | 636 | 1.8756 | 0.8377 |
| 3.7868 | 3.0 | 954 | 1.1572 | 0.8961 |
| 1.6929 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.9058 | 5.0 | 1590 | 0.7721 | 0.9184 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
superqing/pangu-evolution | 2023-05-05T09:08:09.000Z | [
"transformers",
"gpt_pangu",
"text-generation",
"custom_code",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | superqing | null | null | superqing/pangu-evolution | 0 | 2 | transformers | 2023-03-31T06:39:43 | ---
license: apache-2.0
---
## Introduction
PanGu-Alpha-Evolution is an enhanced version of PanGu-Alpha that understands and processes tasks better and follows your task descriptions more closely. More technical details will be published over time.
[[Technical report](https://git.openi.org.cn/PCL-Platform.Intelligence/PanGu-Alpha/src/branch/master/PANGU-%ce%b1.pdf)]
### Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("superqing/pangu-evolution")
model = AutoModelForCausalLM.from_pretrained("superqing/pangu-evolution", trust_remote_code=True)
```
Conrad747/luganda-ner-v6 | 2023-10-25T09:21:01.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:lg-ner",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | Conrad747 | null | null | Conrad747/luganda-ner-v6 | 0 | 2 | transformers | 2023-03-31T07:10:15 | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- lg-ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: luganda-ner-v6
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: lg-ner
type: lg-ner
config: lug
split: test
args: lug
metrics:
- name: Precision
type: precision
value: 0.8029689608636977
- name: Recall
type: recall
value: 0.7991940899932841
- name: F1
type: f1
value: 0.8010770784247729
- name: Accuracy
type: accuracy
value: 0.9467474952809641
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# luganda-ner-v6
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the lg-ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2811
- Precision: 0.8030
- Recall: 0.7992
- F1: 0.8011
- Accuracy: 0.9467
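The precision/recall above are presumably micro-averaged over predicted vs. gold entity spans (e.g. via `seqeval`). A simplified span-set sketch with made-up spans — not actual model output:

```python
def micro_prf(pred, gold):
    """Micro precision/recall/F1 over sets of (start, end, label) entity spans."""
    tp = len(pred & gold)  # spans that match exactly in boundaries and label
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = {(0, 2, "PER"), (5, 7, "LOC"), (9, 10, "ORG")}
pred = {(0, 2, "PER"), (5, 7, "ORG")}  # second span has the wrong label
print(micro_prf(pred, gold))  # precision 0.5, recall ~0.33, F1 0.4
```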
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 261 | 0.5150 | 0.4947 | 0.2841 | 0.3609 | 0.8692 |
| 0.6193 | 2.0 | 522 | 0.3422 | 0.7491 | 0.5393 | 0.6271 | 0.9161 |
| 0.6193 | 3.0 | 783 | 0.2737 | 0.7744 | 0.6595 | 0.7124 | 0.9306 |
| 0.2505 | 4.0 | 1044 | 0.3201 | 0.7343 | 0.7072 | 0.7205 | 0.9141 |
| 0.2505 | 5.0 | 1305 | 0.2564 | 0.7887 | 0.7569 | 0.7724 | 0.9375 |
| 0.1474 | 6.0 | 1566 | 0.2461 | 0.8173 | 0.7569 | 0.7859 | 0.9459 |
| 0.1474 | 7.0 | 1827 | 0.2739 | 0.8004 | 0.7757 | 0.7879 | 0.9434 |
| 0.0956 | 8.0 | 2088 | 0.2566 | 0.8100 | 0.7905 | 0.8001 | 0.9486 |
| 0.0956 | 9.0 | 2349 | 0.2709 | 0.7859 | 0.7938 | 0.7898 | 0.9463 |
| 0.0712 | 10.0 | 2610 | 0.2811 | 0.8030 | 0.7992 | 0.8011 | 0.9467 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
cchanev/sagemaker-distilbert-emotion | 2023-03-31T08:43:55.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | cchanev | null | null | cchanev/sagemaker-distilbert-emotion | 0 | 2 | transformers | 2023-03-31T08:39:08 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: test
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9155
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2485
- Accuracy: 0.9155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9309 | 1.0 | 500 | 0.2485 | 0.9155 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
alex-levashov/segformer-b0-scene-parse-150 | 2023-11-03T12:33:23.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"license:other",
"endpoints_compatible",
"region:us"
] | null | alex-levashov | null | null | alex-levashov/segformer-b0-scene-parse-150 | 0 | 2 | transformers | 2023-03-31T10:24:18 | ---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7158
- Mean Iou: 0.0575
- Mean Accuracy: 0.0995
- Overall Accuracy: 0.4648
- Per Category Iou: [0.44672496974409803, 0.5246878610396156, 0.2073942489175086, 0.4461580147251187, 0.6709173669159216, 0.35982779947389176, 0.0005154694530654325, 0.009501153711522114, 0.23323905377607992, 0.0, 0.023848147241266732, 0.0, 0.06428503562945369, 0.0, 0.0, 0.00526018196460086, 0.0, 0.0, 0.0004003660489590483, 0.2826172203237914, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan]
- Per Category Accuracy: [0.8701105877534303, 0.7649097707689807, 0.20824275665250883, 0.6818336289049002, 0.9654490232009587, 0.49512427161374717, 0.006057546693589096, 0.01288659793814433, 0.4959889393146437, nan, 0.034012615588327307, nan, 0.06484693975349345, 0.0, 0.0, 0.00827783320300914, nan, 0.0, 0.0004003660489590483, 0.4684163288044319, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan]
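The mean IoU reported above is presumably the average of the per-category scores with absent categories (`nan`) skipped, as in the `mean_iou` metric from the `evaluate` library. A minimal sketch with illustrative values, not the full list above:

```python
import math

def mean_ignoring_nan(values):
    """Average per-category scores, skipping categories absent from the eval set (nan)."""
    kept = [v for v in values if not math.isnan(v)]
    return sum(kept) / len(kept)

per_category_iou = [0.8, 0.4, float("nan"), 0.0]  # illustrative scores
print(mean_ignoring_nan(per_category_iou))  # ~0.4: the nan entry is excluded
```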
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:---------------------:|
| 2.8861 | 10.0 | 200 | 3.4518 | 0.0460 | 0.0871 | 0.4387 | [0.3969301711292726, 0.407009124541566, 0.1858691819464034, 0.3487187527048191, 0.6198477877978043, 0.43618812656641603, 0.0, 0.1088497725164539, 0.05231273336889431, 0.0, 0.0, 0.0, 0.01404489007098984, 0.0, 0.0, 0.0001569283883454517, 0.0, 0.0, 0.0, 0.14669763591205962, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan] | [0.9001446344460373, 0.6596260770606406, 0.18804276334124834, 0.609796983742136, 0.9662352814360626, 0.6622963491497206, 0.0, 0.191012324625998, 0.053624014810070224, nan, 0.0, nan, 0.014069658226149629, 0.0, 0.0, 0.0001617817564106021, nan, 0.0, 0.0, 0.19742502553310018, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.0228 | 20.0 | 400 | 3.0714 | 0.0521 | 0.0902 | 0.4319 | [0.3908819659806409, 0.34176425750121264, 0.27734684694336714, 0.3467711453980972, 0.6652598893529553, 0.3993713022078525, 0.0, 0.11508504324411957, 0.16300110838512025, 0.0, 0.037551428372190325, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.18148929755803436, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan] | [0.7808233497167042, 0.4810925052836937, 0.2885856660312364, 0.6733491542655118, 0.9645296083292647, 0.7610893090736116, 0.0, 0.15819510115494922, 0.2044742659407441, nan, 0.04701380148273178, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.24220853579276408, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.0541 | 30.0 | 600 | 2.8125 | 0.0606 | 0.1022 | 0.4683 | [0.4354912810082317, 0.5136657316079992, 0.2571735614101172, 0.46600687018210146, 0.6816991679609, 0.46349720485077905, 0.003975688393168351, 0.015114196148678908, 0.14418364714985812, 0.0, 0.021026667032093622, nan, 0.012695499216091163, 0.0, 0.0, 0.0007345439706182412, 0.0, 0.0, 0.0, 0.31855511784736595, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan] | [0.833117874940269, 0.922861323362055, 0.25877618362819527, 0.6713901002087563, 0.9657660628118877, 0.7062076346771317, 0.046062594649167087, 0.019620572048678397, 0.3056529788081643, nan, 0.02790853334691413, nan, 0.012727865207307022, 0.0, 0.0, 0.0009706905384636126, nan, 0.0, 0.0, 0.4429760762588592, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9657 | 40.0 | 800 | 2.7501 | 0.0563 | 0.0985 | 0.4660 | [0.4502025953819058, 0.5305299792942421, 0.20067731011127238, 0.47464834479446677, 0.6634585667585132, 0.3259851182020951, 0.0, 0.014531871786918676, 0.2514721268503095, 0.0, 0.03485342019543974, nan, 0.01199095889361376, 0.0, 0.0, 0.009941192943153179, 0.0, 0.0, 0.002573634543894767, 0.23698272648191873, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan] | [0.8888362686872824, 0.7831246951715168, 0.20808668606401123, 0.6802372568673983, 0.9664445275792758, 0.40083541443691284, 0.0, 0.02133555538330362, 0.5200553034267815, nan, 0.054492939199266635, nan, 0.011999463282792463, 0.0, 0.0, 0.01340092215601154, nan, 0.0, 0.0025737817433081674, 0.47216118349788, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.608 | 50.0 | 1000 | 2.7158 | 0.0575 | 0.0995 | 0.4648 | [0.44672496974409803, 0.5246878610396156, 0.2073942489175086, 0.4461580147251187, 0.6709173669159216, 0.35982779947389176, 0.0005154694530654325, 0.009501153711522114, 0.23323905377607992, 0.0, 0.023848147241266732, 0.0, 0.06428503562945369, 0.0, 0.0, 0.00526018196460086, 0.0, 0.0, 0.0004003660489590483, 0.2826172203237914, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan] | [0.8701105877534303, 0.7649097707689807, 0.20824275665250883, 0.6818336289049002, 0.9654490232009587, 0.49512427161374717, 0.006057546693589096, 0.01288659793814433, 0.4959889393146437, nan, 0.034012615588327307, nan, 0.06484693975349345, 0.0, 0.0, 0.00827783320300914, nan, 0.0, 0.0004003660489590483, 0.4684163288044319, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
0.0, nan, nan] |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
stcoats/de_STTS2_folk | 2023-04-02T16:08:19.000Z | [
"spacy",
"token-classification",
"de",
"doi:10.57967/hf/0494",
"license:cc-by-4.0",
"model-index",
"region:us"
] | token-classification | stcoats | null | null | stcoats/de_STTS2_folk | 0 | 2 | spacy | 2023-03-31T11:00:01 | ---
tags:
- spacy
- token-classification
language:
- de
model-index:
- name: de_pipeline
results:
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9191333537
license: cc-by-4.0
library_name: spacy
---
## de_STTS2_folk tagger
This is a spaCy language model trained to use the Stuttgart-Tübingen Tagset version 2.0, which was designed to tag transcripts of conversational speech in German.
The model may be useful for tagging ASR transcripts such as those collected in the [CoGS](https://cc.oulu.fi/~scoats/CoGS.html) corpus.
The model was trained using the tag annotations from the FOLK corpus at https://agd.ids-mannheim.de/folk-gold.shtml, employing an 80/20 training/test split. Tokens in the training data were converted to lower case prior to training, to match the format of automatic speech recognition transcripts on YouTube as of early 2023.
Usage example:
```python
!pip install https://huggingface.co/stcoats/de_STTS2_folk/resolve/main/de_STTS2_folk-any-py3-none-any.whl
import spacy
import de_STTS2_folk
nlp = de_STTS2_folk.load()
doc = nlp("ach so meinst du wir sollen es jetzt tun")
for token in doc:
    print(token.text, token.tag_)
```
### References
Coats, Steven. (In review).
Westpfahl, Swantje and Thomas Schmidt. (2016): [FOLK-Gold – A GOLD standard for Part-of-Speech-Tagging of Spoken German](https://aclanthology.org/L16-1237). In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), Portorož, Slovenia. Paris: European Language Resources Association (ELRA), pp. 1493-1499.
| Feature | Description |
| --- | --- |
| **Name** | `de_STTS2_folk` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.5.1,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger` |
| **Components** | `tok2vec`, `tagger` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | Swantje Westpfahl and Thomas Schmidt, FOLK-Gold, https://agd.ids-mannheim.de/folk-gold.shtml |
| **License** | CC-BY 4.0 |
| **Author** | Steven Coats |
### Label Scheme
<details>
<summary>View label scheme (62 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$.`, `AB`, `ADJA`, `ADJD`, `ADV`, `APPO`, `APPR`, `APPRART`, `APZR`, `ART`, `CARD`, `FM`, `KOKOM`, `KON`, `KOUI`, `KOUS`, `NE`, `NGAKW`, `NGHES`, `NGIRR`, `NGONO`, `NN`, `ORD`, `PDAT`, `PDS`, `PIAT`, `PIDAT`, `PIDS`, `PIS`, `PPER`, `PPOSAT`, `PPOSS`, `PRELAT`, `PRELS`, `PRF`, `PTKA`, `PTKIFG`, `PTKMA`, `PTKMWL`, `PTKNEG`, `PTKVZ`, `PTKZU`, `PWAT`, `PWAV`, `PWS`, `SEDM`, `SEQU`, `SPELL`, `TRUNC`, `UI`, `VAFIN`, `VAIMP`, `VAINF`, `VAPP`, `VMFIN`, `VMINF`, `VVFIN`, `VVIMP`, `VVINF`, `VVIZU`, `VVPP`, `XY` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 91.91 |
| `TOK2VEC_LOSS` | 478891.28 |
| `TAGGER_LOSS` | 402526.03 | | 3,168 | [
[
-0.0294952392578125,
-0.030975341796875,
0.006221771240234375,
0.00995635986328125,
-0.03289794921875,
0.0124969482421875,
-0.01462554931640625,
-0.0228729248046875,
0.027130126953125,
0.031982421875,
-0.0350341796875,
-0.0626220703125,
-0.055938720703125,
0... |
kiki2013/distilbert-base-uncased-distilled-clinc | 2023-03-31T11:18:28.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | kiki2013 | null | null | kiki2013/distilbert-base-uncased-distilled-clinc | 0 | 2 | transformers | 2023-03-31T11:11:31 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1040
- Accuracy: 0.91
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 0.5243 | 0.6703 |
| 0.7602 | 2.0 | 636 | 0.2317 | 0.8319 |
| 0.7602 | 3.0 | 954 | 0.1401 | 0.8884 |
| 0.2486 | 4.0 | 1272 | 0.1111 | 0.9077 |
| 0.1484 | 5.0 | 1590 | 0.1040 | 0.91 |
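The card does not document the distillation objective itself. A common formulation (Hinton-style knowledge distillation) softens teacher and student logits with a temperature and penalizes their cross-entropy; the sketch below is illustrative only, not the exact training code used here:

```python
import math

def softmax(logits, temperature=1.0):
    # Softened probabilities: a higher temperature flattens the distribution
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between softened teacher targets and student predictions,
    # scaled by T^2 so gradient magnitudes stay comparable across temperatures
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s)
                for t, s in zip(teacher_probs, student_probs)) * temperature ** 2
```

In practice this term is usually mixed with the ordinary cross-entropy on the hard labels via a weighting factor.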
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
| 1,613 | [
[
-0.03173828125,
-0.044036865234375,
0.01806640625,
0.0141448974609375,
-0.02789306640625,
-0.01751708984375,
-0.00960540771484375,
-0.003551483154296875,
0.0050201416015625,
0.021728515625,
-0.0452880859375,
-0.04620361328125,
-0.06317138671875,
-0.009521484... |
psybertpt/psyBERTpt | 2023-04-14T18:51:47.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"medical",
"pt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | psybertpt | null | null | psybertpt/psyBERTpt | 5 | 2 | transformers | 2023-03-31T11:17:58 | ---
language: pt
widget:
- text: >-
Paciente vem a este serviço em busca de renovação de receita. Em uso de sertralina + fluoxetina.
- text: >-
Tentativa de suicídio há 2 semanas, porém agora nega ideação suicida.
- text: >-
EEM: Calmo, fala normal, sem alterações de pensamento. Relata insônia
inicial.
metrics:
- accuracy 0.65
tags:
- medical
pipeline_tag: token-classification
---
# Portuguese Clinical NER - Psychiatric Specialized
This is the first clinical NER model specialized in psychiatry for the Portuguese language.
We annotated nine entity categories in admission notes from a specialized psychiatric emergency hospital in Brazil.
The article [PsyBERTpt: A Clinical Entity Recognition Model for Psychiatric Narratives](https://) is awaiting publication.
## NER Categories:
- Self-Destructive Behavior
- Diagnosis
- Drug
- Pharmaceutical
- Psychic Function
- Family History
- Patient History
- Observation
- Symptom and Psychological Complaint
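A minimal inference sketch using the `transformers` pipeline (the aggregation setting is illustrative; loading the model downloads its weights from the Hub on first use):

```python
from transformers import pipeline

# Token-classification pipeline over the psychiatric NER model
ner = pipeline(
    "token-classification",
    model="psybertpt/psyBERTpt",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

for entity in ner("Tentativa de suicídio há 2 semanas, porém agora nega ideação suicida."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```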
## Acknowledgements
This model could only be developed thanks to the fantastic work of committed people from the following institutions:
- Universidade Estadual Paulista Júlio de Mesquita Filho - UNESP
- Faculdade de Medicina de São José do Rio Preto - FAMERP
- Pontífica Universidade Católica do Paraná - PUCPR
- Hospital Dr. Adolfo Bezerra de Menezes - HABM
## Citation
```
Waiting to be published
```
## Questions?
Post a GitHub issue on the [psyBERTpt repo](https://github.com/luizniero/psyBERTpt).
[
-0.0263519287109375,
-0.034820556640625,
0.04937744140625,
0.0379638671875,
-0.0205078125,
-0.00634002685546875,
-0.008270263671875,
-0.0301055908203125,
0.0501708984375,
0.05218505859375,
-0.0265655517578125,
-0.049530029296875,
-0.0697021484375,
0.01763916... |
dvilasuero/instruction-gigo-detector | 2023-03-31T12:12:44.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | dvilasuero | null | null | dvilasuero/instruction-gigo-detector | 0 | 2 | sentence-transformers | 2023-03-31T12:11:19 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# argilla/instruction-gigo-detector
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("argilla/instruction-gigo-detector")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,555 | [
[
-0.0203704833984375,
-0.06561279296875,
0.034210205078125,
-0.0203704833984375,
-0.018463134765625,
-0.0276641845703125,
-0.005855560302734375,
-0.0104522705078125,
0.004009246826171875,
0.039459228515625,
-0.0379638671875,
-0.032470703125,
-0.045745849609375,
... |
Annamaziarz1/finetuning-distilbert-sentiment-model | 2023-04-03T23:43:35.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Annamaziarz1 | null | null | Annamaziarz1/finetuning-distilbert-sentiment-model | 0 | 2 | transformers | 2023-03-31T12:19:07 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilbert-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7943
- Accuracy: 0.6515
- F1: 0.6222
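For reference, the reported F1 is the harmonic mean of precision and recall. A minimal illustrative sketch of the computation from binary confusion counts (not the metric implementation used by the Trainer):

```python
def precision_recall_f1(tp, fp, fn):
    # Precision and recall from confusion counts, then their harmonic mean (F1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```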
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,208 | [
[
-0.04156494140625,
-0.05078125,
0.018890380859375,
0.0268707275390625,
-0.0372314453125,
-0.0199127197265625,
-0.0198211669921875,
0.0028076171875,
0.0142974853515625,
0.00811767578125,
-0.05010986328125,
-0.05340576171875,
-0.063720703125,
-0.00305938720703... |
dvilasuero/instruction-gigo-detector-2 | 2023-03-31T12:46:48.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | dvilasuero | null | null | dvilasuero/instruction-gigo-detector-2 | 0 | 2 | sentence-transformers | 2023-03-31T12:46:38 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# instruction-gigo-detector-2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dvilasuero/instruction-gigo-detector-2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,543 | [
[
-0.0141448974609375,
-0.0582275390625,
0.035369873046875,
-0.026275634765625,
-0.020172119140625,
-0.024566650390625,
-0.01364898681640625,
-0.012969970703125,
-0.0004737377166748047,
0.033447265625,
-0.042266845703125,
-0.0211639404296875,
-0.043212890625,
... |
ilaria-oneofftech/ikitracks_netzero | 2023-04-05T10:29:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ilaria-oneofftech | null | null | ilaria-oneofftech/ikitracks_netzero | 0 | 2 | transformers | 2023-03-31T13:22:52 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: ikitracks_netzero
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ikitracks_netzero
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5963
- F1: 0.8424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5967 | 1.0 | 109 | 0.6004 | 0.7168 |
| 0.3709 | 2.0 | 218 | 0.6017 | 0.8215 |
| 0.1412 | 3.0 | 327 | 0.5071 | 0.8851 |
| 0.0604 | 4.0 | 436 | 0.5599 | 0.8851 |
| 0.0365 | 5.0 | 545 | 0.5963 | 0.8424 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,546 | [
[
-0.04388427734375,
-0.032470703125,
0.003810882568359375,
0.006916046142578125,
-0.034332275390625,
-0.0287017822265625,
-0.0121307373046875,
-0.0211639404296875,
0.0185394287109375,
0.0313720703125,
-0.062469482421875,
-0.04217529296875,
-0.047119140625,
-0... |
junklivs/distilbert-base-uncased-finetuned-cola | 2023-03-31T15:25:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | junklivs | null | null | junklivs/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-03-31T13:28:41 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5361146089547957
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8228
- Matthews Correlation: 0.5361
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5241 | 1.0 | 535 | 0.5480 | 0.4006 |
| 0.3496 | 2.0 | 1070 | 0.5164 | 0.4819 |
| 0.2387 | 3.0 | 1605 | 0.6022 | 0.5138 |
| 0.1779 | 4.0 | 2140 | 0.7458 | 0.5280 |
| 0.127 | 5.0 | 2675 | 0.8228 | 0.5361 |
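The Matthews correlation reported above can be computed from the binary confusion counts. A minimal illustrative sketch (not the evaluation code used by the Trainer):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    # Confusion counts for binary labels 0/1
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: return 0.0 when any marginal is empty (undefined denominator)
    return (tp * tn - fp * fn) / denom if denom else 0.0
```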
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 2,043 | [
[
-0.0223541259765625,
-0.05023193359375,
0.01082611083984375,
0.019805908203125,
-0.02154541015625,
-0.0083770751953125,
-0.005695343017578125,
-0.0037021636962890625,
0.022705078125,
0.01102447509765625,
-0.0460205078125,
-0.036102294921875,
-0.06182861328125,
... |
yutakashino/distilbert-base-uncased-finetuned-emotion | 2023-03-31T13:47:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | yutakashino | null | null | yutakashino/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-03-31T13:29:52 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9280261795203244
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2130
- Accuracy: 0.928
- F1: 0.9280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8337 | 1.0 | 250 | 0.3003 | 0.909 | 0.9063 |
| 0.2437 | 2.0 | 500 | 0.2130 | 0.928 | 0.9280 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,840 | [
[
-0.03839111328125,
-0.0408935546875,
0.0162506103515625,
0.02197265625,
-0.0262603759765625,
-0.0207672119140625,
-0.012542724609375,
-0.00901031494140625,
0.01036834716796875,
0.0089874267578125,
-0.05682373046875,
-0.051971435546875,
-0.059356689453125,
-0... |
dvilasuero/alpaca-gigo-detector | 2023-03-31T16:52:48.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | dvilasuero | null | null | dvilasuero/alpaca-gigo-detector | 0 | 2 | sentence-transformers | 2023-03-31T14:13:59 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# argilla/alpaca-gigo-detector
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("argilla/alpaca-gigo-detector")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,545 | [
[
-0.030303955078125,
-0.0653076171875,
0.0308685302734375,
-0.01157379150390625,
-0.0263671875,
-0.03009033203125,
-0.005229949951171875,
-0.0271148681640625,
0.0157470703125,
0.036285400390625,
-0.036956787109375,
-0.0306549072265625,
-0.055023193359375,
0.0... |
thomasavare/distilbert-ft-test1 | 2023-04-15T10:39:55.000Z | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | thomasavare | null | null | thomasavare/distilbert-ft-test1 | 0 | 2 | transformers | 2023-03-31T15:58:59 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-ft-test1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-ft-test1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,282 | [
[
-0.04229736328125,
-0.06427001953125,
0.0229339599609375,
0.00982666015625,
-0.04156494140625,
-0.016510009765625,
-0.00749969482421875,
-0.0118408203125,
0.00681304931640625,
0.00397491455078125,
-0.049407958984375,
-0.0435791015625,
-0.06585693359375,
-0.0... |
Phoshco/cds | 2023-03-31T17:23:55.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Phoshco | null | null | Phoshco/cds | 0 | 2 | transformers | 2023-03-31T16:08:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: cds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cds
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9694
- Accuracy: 0.8283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9197 | 1.0 | 875 | 0.6300 | 0.7995 |
| 0.466 | 2.0 | 1750 | 0.5447 | 0.8313 |
| 0.2537 | 3.0 | 2625 | 0.6688 | 0.8227 |
| 0.1187 | 4.0 | 3500 | 0.8531 | 0.8287 |
| 0.0507 | 5.0 | 4375 | 0.9694 | 0.8283 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,536 | [
[
-0.032470703125,
-0.04779052734375,
0.014923095703125,
0.01263427734375,
-0.0240020751953125,
-0.034027099609375,
-0.015777587890625,
-0.0112152099609375,
0.01024627685546875,
0.0268096923828125,
-0.0599365234375,
-0.057830810546875,
-0.046630859375,
-0.0171... |
cloudqi/cqi_question_solver_translator_v0 | 2023-03-31T19:01:41.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dat... | text2text-generation | cloudqi | null | null | cloudqi/cqi_question_solver_translator_v0 | 1 | 2 | transformers | 2023-03-31T17:44:11 | ---
language:
- en
- fr
- ro
- de
- multilingual
tags:
- text2text-generation
widget:
- text: "Translate to English: Meu nome é Bruno."
example_title: "Tradução"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
example_title: "Premise and hypothesis"
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
license: apache-2.0
---
# Model Card for CQI-Multitool-Model (From Flan T5)
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks covering also more languages.
As mentioned in the first few lines of the abstract :
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints,1 which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", torch_dtype=torch.float16)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Uses
## Direct Use and Downstream Use
The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
## Ethical considerations and risks
> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
## Known Limitations
> Flan-T5 has not been tested in real world applications.
## Sensitive Use:
> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
# Training Details
## Training Data
The model was trained on a mixture of tasks, including those described in the table below (from the original paper, Figure 2):

## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.
The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on a variety of tasks (1,836 in total) covering several languages. See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
## Results
For full results for FLAN-T5-Base, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
doi = {10.48550/ARXIV.2210.11416},
url = {https://arxiv.org/abs/2210.11416},
author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Scaling Instruction-Finetuned Language Models},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
## Model Recycling
[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=9.16&mnli_lp=nan&20_newsgroup=3.34&ag_news=1.49&amazon_reviews_multi=0.21&anli=13.91&boolq=16.75&cb=23.12&cola=9.97&copa=34.50&dbpedia=6.90&esnli=5.37&financial_phrasebank=18.66&imdb=0.33&isear=1.37&mnli=11.74&mrpc=16.63&multirc=6.24&poem_sentiment=14.62&qnli=3.41&qqp=6.18&rotten_tomatoes=2.98&rte=24.26&sst2=0.67&sst_5bins=5.44&stsb=20.68&trec_coarse=3.95&trec_fine=10.73&tweet_ev_emoji=13.39&tweet_ev_emotion=4.62&tweet_ev_hate=3.46&tweet_ev_irony=9.04&tweet_ev_offensive=1.69&tweet_ev_sentiment=0.75&wic=14.22&wnli=9.44&wsc=5.53&yahoo_answers=4.14&model_name=google%2Fflan-t5-base&base_name=google%2Ft5-v1_1-base) using google/flan-t5-base as a base model yields an average score of 77.98, compared to 68.82 for google/t5-v1_1-base.
As of 06/02/2023, the model is ranked 1st among all tested models for the google/t5-v1_1-base architecture.
Results:
| 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
|---------------:|----------:|-----------------------:|--------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|-------:|--------:|----------------:|
| 86.2188 | 89.6667 | 67.12 | 51.9688 | 82.3242 | 78.5714 | 80.1534 | 75 | 77.6667 | 90.9507 | 85.4 | 93.324 | 72.425 | 87.2457 | 89.4608 | 62.3762 | 82.6923 | 92.7878 | 89.7724 | 89.0244 | 84.8375 | 94.3807 | 57.2851 | 89.4759 | 97.2 | 92.8 | 46.848 | 80.2252 | 54.9832 | 76.6582 | 84.3023 | 70.6366 | 70.0627 | 56.338 | 53.8462 | 73.4 |
For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
| 13,231 | [
[
-0.0301971435546875,
-0.043975830078125,
0.0243377685546875,
-0.0004603862762451172,
-0.01049041748046875,
-0.00815582275390625,
-0.027069091796875,
-0.047393798828125,
-0.0146484375,
0.00771331787109375,
-0.036407470703125,
-0.03955078125,
-0.0484619140625,
... |
dvilasuero/autotrain-alpaca-gigo-detector-45529113937 | 2023-03-31T17:58:02.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain",
"en",
"dataset:dvilasuero/autotrain-data-alpaca-gigo-detector",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | dvilasuero | null | null | dvilasuero/autotrain-alpaca-gigo-detector-45529113937 | 0 | 2 | transformers | 2023-03-31T17:57:19 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- dvilasuero/autotrain-data-alpaca-gigo-detector
co2_eq_emissions:
emissions: 0.3078125269826994
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 45529113937
- CO2 Emissions (in grams): 0.3078
## Validation Metrics
- Loss: 0.481
- Accuracy: 0.825
- Macro F1: 0.823
- Micro F1: 0.825
- Weighted F1: 0.825
- Macro Precision: 0.824
- Micro Precision: 0.825
- Weighted Precision: 0.825
- Macro Recall: 0.821
- Micro Recall: 0.825
- Weighted Recall: 0.825
## Usage
You can use cURL to access this model:
```shell
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dvilasuero/autotrain-alpaca-gigo-detector-45529113937
```
Or Python API:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("dvilasuero/autotrain-alpaca-gigo-detector-45529113937", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dvilasuero/autotrain-alpaca-gigo-detector-45529113937", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Map the highest-scoring logit back to its class name
predicted = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
``` | 1,336 | [
[
-0.03936767578125,
-0.022918701171875,
0.0108184814453125,
0.00628662109375,
-0.00896453857421875,
0.00128936767578125,
0.003765106201171875,
-0.021240234375,
-0.0018749237060546875,
0.00563812255859375,
-0.045440673828125,
-0.03363037109375,
-0.0616455078125,
... |
platzi/platzi-distilroberta-base-mrpc-glue-gabriel-ichcanziho | 2023-03-31T21:29:08.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-gabriel-ichcanziho | 0 | 2 | transformers | 2023-03-31T21:26:56 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-gabriel-ichcanziho
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8455882352941176
- name: F1
type: f1
value: 0.8868940754039497
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-gabriel-ichcanziho
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8059
- Accuracy: 0.8456
- F1: 0.8869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.538 | 1.09 | 500 | 0.5544 | 0.7941 | 0.8456 |
| 0.3673 | 2.18 | 1000 | 0.6700 | 0.8333 | 0.8794 |
| 0.1984 | 3.27 | 1500 | 0.8059 | 0.8456 | 0.8869 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
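This auto-generated card omits a usage example. A minimal inference sketch, assuming the checkpoint is public on the Hub and loads with the standard `text-classification` pipeline (the example sentences are illustrative only):

```python
from transformers import pipeline

# MRPC is a paraphrase task: the model judges whether two sentences
# are semantically equivalent.
classifier = pipeline(
    "text-classification",
    model="platzi/platzi-distilroberta-base-mrpc-glue-gabriel-ichcanziho",
)
result = classifier({"text": "The company posted record profits this quarter.",
                     "text_pair": "Quarterly earnings at the firm hit an all-time high."})
print(result)
```

Passing a `text`/`text_pair` dict is how the pipeline encodes sentence-pair tasks such as MRPC.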
| 1,931 | [
[
-0.0297088623046875,
-0.042724609375,
0.0080413818359375,
0.01922607421875,
-0.029296875,
-0.0253448486328125,
-0.0091400146484375,
-0.0037250518798828125,
0.0091094970703125,
0.007350921630859375,
-0.050323486328125,
-0.04339599609375,
-0.059600830078125,
-... |
Kembavov/Test-Mahiro | 2023-04-05T13:31:52.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Kembavov | null | null | Kembavov/Test-Mahiro | 0 | 2 | transformers | 2023-03-31T22:02:12 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: Test-Mahiro
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: test
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.584
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Test-Mahiro
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4674
- Accuracy: 0.584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.1501 | 0.552 |
| No log | 2.0 | 250 | 1.2278 | 0.6 |
| No log | 3.0 | 375 | 1.4674 | 0.584 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
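Since this auto-generated card lacks a usage example, here is a minimal sketch of manual inference. It assumes the checkpoint is public; in `yelp_review_full` the five classes correspond to 1–5 star ratings, though the exported label names may be the generic `LABEL_0` … `LABEL_4`:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Kembavov/Test-Mahiro")
model = AutoModelForSequenceClassification.from_pretrained("Kembavov/Test-Mahiro")

inputs = tokenizer("The food was great but the service was slow.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
# Index i corresponds to an (i + 1)-star review in yelp_review_full
for i, p in enumerate(probs):
    print(f"{i + 1} stars: {p:.3f}")
```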
| 1,772 | [
[
-0.034942626953125,
-0.049957275390625,
0.01727294921875,
0.0015707015991210938,
-0.031951904296875,
-0.037261962890625,
-0.0159454345703125,
-0.026580810546875,
0.013153076171875,
0.022979736328125,
-0.062347412109375,
-0.04376220703125,
-0.043365478515625,
... |
DunnBC22/canine-c-Mental_Health_Classification | 2023-06-11T01:50:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"canine",
"text-classification",
"generated_from_trainer",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | DunnBC22 | null | null | DunnBC22/canine-c-Mental_Health_Classification | 1 | 2 | transformers | 2023-04-01T02:14:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: canine-c-Mental_Health_Classification
results: []
pipeline_tag: text-classification
language:
- en
---
# canine-c-Mental_Health_Classification
This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on a mental health corpus (see the Training and evaluation data section below).
It achieves the following results on the evaluation set:
- Loss: 0.2419
- Accuracy: 0.9226
- F1: 0.9096
- Recall: 0.9079
- Precision: 0.9113
## Model description
This is a binary text classification model that distinguishes text indicating a potential mental health issue from text that does not.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Binary%20Classification/Mental%20Health%20Classification/CANINE%20-%20Mental%20Health%20Classification.ipynb
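A minimal inference sketch, assuming the checkpoint loads with the standard `text-classification` pipeline (the input sentence is illustrative only):

```python
from transformers import pipeline

# CANINE operates directly on Unicode characters, so the pipeline's
# bundled tokenizer needs no subword vocabulary for the input text.
classifier = pipeline(
    "text-classification",
    model="DunnBC22/canine-c-Mental_Health_Classification",
)
print(classifier("I haven't been able to sleep or eat for days."))
```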
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/reihanenamdari/mental-health-corpus
_Input Word Length:_

_Class Distribution:_

## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.3429 | 1.0 | 1101 | 0.2640 | 0.9037 | 0.8804 | 0.8258 | 0.9426 |
| 0.1923 | 2.0 | 2202 | 0.2419 | 0.9226 | 0.9096 | 0.9079 | 0.9113 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.12.1 | 2,322 | [
[
-0.03607177734375,
-0.0384521484375,
0.03704833984375,
0.01560211181640625,
0.006435394287109375,
-0.0111846923828125,
-0.0085601806640625,
-0.032440185546875,
0.007663726806640625,
0.00589752197265625,
-0.050628662109375,
-0.068603515625,
-0.051025390625,
-... |
dkoh12/distilbert-base-uncased-finetuned_emotion | 2023-04-01T02:55:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | dkoh12 | null | null | dkoh12/distilbert-base-uncased-finetuned_emotion | 0 | 2 | transformers | 2023-04-01T02:48:58 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned_emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9230506440647792
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned_emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2168
- Accuracy: 0.923
- F1: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8702 | 1.0 | 250 | 0.3219 | 0.9055 | 0.9026 |
| 0.2588 | 2.0 | 500 | 0.2168 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
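This auto-generated card omits a usage example. A minimal sketch, assuming the checkpoint is public; `top_k=None` asks the pipeline to return a score for every class rather than just the top one (the `emotion` dataset has six classes):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dkoh12/distilbert-base-uncased-finetuned_emotion",
    top_k=None,  # return scores for all emotion classes
)
scores = classifier("I can't believe I finally got the job!")[0]
for s in scores:
    print(s["label"], round(s["score"], 3))
```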
| 1,847 | [
[
-0.037689208984375,
-0.0408935546875,
0.01451873779296875,
0.0222015380859375,
-0.02496337890625,
-0.0200653076171875,
-0.01229095458984375,
-0.008392333984375,
0.0101470947265625,
0.00815582275390625,
-0.05682373046875,
-0.05242919921875,
-0.05938720703125,
... |
drkmr/distilbert-base-uncased-finetuned-emotion | 2023-04-03T05:30:35.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | drkmr | null | null | drkmr/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-01T03:27:32 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1486
- eval_accuracy: 0.9375
- eval_f1: 0.9377
- eval_runtime: 2.5911
- eval_samples_per_second: 771.864
- eval_steps_per_second: 12.35
- epoch: 1.0
- step: 250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,315 | [
[
-0.0391845703125,
-0.049530029296875,
0.0168304443359375,
0.026336669921875,
-0.0325927734375,
-0.0186004638671875,
-0.0165252685546875,
-0.01050567626953125,
0.012847900390625,
0.00792694091796875,
-0.05230712890625,
-0.050506591796875,
-0.055938720703125,
... |
Fred99774/parailararev | 2023-04-01T04:43:23.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Fred99774 | null | null | Fred99774/parailararev | 0 | 2 | diffusers | 2023-04-01T04:14:10 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Parailararev Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 503 | [
[
-0.027313232421875,
-0.0472412109375,
0.045928955078125,
0.034942626953125,
-0.0236663818359375,
0.026336669921875,
0.018402099609375,
-0.007183074951171875,
0.04815673828125,
0.0081024169921875,
-0.01270294189453125,
-0.024078369140625,
-0.036102294921875,
... |
Mozzipa/distilbert-base-uncased-finetuned-emotion | 2023-04-09T14:25:12.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Mozzipa | null | null | Mozzipa/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-01T05:27:30 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9270484012569777
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1624
- Accuracy: 0.927
- F1: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7758 | 1.0 | 250 | 0.2698 | 0.915 | 0.9136 |
| 0.2169 | 2.0 | 500 | 0.1722 | 0.9265 | 0.9263 |
| 0.1473 | 3.0 | 750 | 0.1624 | 0.927 | 0.9270 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,916 | [
[
-0.03741455078125,
-0.038604736328125,
0.01068115234375,
0.020660400390625,
-0.0250701904296875,
-0.0189971923828125,
-0.01177215576171875,
-0.00949859619140625,
0.0116119384765625,
0.008575439453125,
-0.05438232421875,
-0.05206298828125,
-0.060333251953125,
... |
vocabtrimmer/mt5-small-trimmed-en-120000-squad-qa | 2023-04-01T07:43:38.000Z | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question answering",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | vocabtrimmer | null | null | vocabtrimmer/mt5-small-trimmed-en-120000-squad-qa | 0 | 2 | transformers | 2023-04-01T07:41:29 |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_squad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: What is a person called is practicing heresy?, context: Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things."
example_title: "Question Answering Example 1"
- text: "question: who created the post as we know it today?, context: 'So much of The Post is Ben,' Mrs. Graham said in 1994, three years after Bradlee retired as editor. 'He created it as we know it today.'— Ed O'Keefe (@edatpost) October 21, 2014"
example_title: "Question Answering Example 2"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-en-120000-squad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_squad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 43.61
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 64.92
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 36.74
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 91.84
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 80.82
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 66.19
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 52.14
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-en-120000-squad-qa`
This model is a fine-tuned version of [ckpts/mt5-small-trimmed-en-120000](https://huggingface.co/ckpts/mt5-small-trimmed-en-120000) for the question answering task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [ckpts/mt5-small-trimmed-en-120000](https://huggingface.co/ckpts/mt5-small-trimmed-en-120000)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="vocabtrimmer/mt5-small-trimmed-en-120000-squad-qa")
# model prediction
answers = model.answer_q(list_question="What is a person called is practicing heresy?", list_context=" Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-en-120000-squad-qa")
output = pipe("question: What is a person called is practicing heresy?, context: Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-en-120000-squad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:---------------------------------------------------------------|
| AnswerExactMatch | 52.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| AnswerF1Score | 66.19 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| BERTScore | 91.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 57.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 52.49 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 47.69 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 43.61 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 36.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 80.82 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 64.92 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: ckpts/mt5-small-trimmed-en-120000
- max_length: 512
- max_length_output: 32
- epoch: 18
- batch: 32
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-en-120000-squad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| 6,895 | [
[
-0.047210693359375,
-0.05712890625,
0.014190673828125,
0.004802703857421875,
-0.0183258056640625,
0.004138946533203125,
-0.019500732421875,
-0.01824951171875,
0.0157012939453125,
0.020904541015625,
-0.0706787109375,
-0.055419921875,
-0.0271148681640625,
0.01... |
Barambio/distilbert-base-uncased-finetuned-emotion | 2023-04-04T14:58:08.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Barambio | null | null | Barambio/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-01T08:16:18 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.929
- name: F1
type: f1
value: 0.9289897994289955
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.929
- F1: 0.9290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8318 | 1.0 | 250 | 0.3208 | 0.9065 | 0.9032 |
| 0.2543 | 2.0 | 500 | 0.2202 | 0.929 | 0.9290 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,846 | [truncated embedding vector] |
harshith20/Emotion_predictor | 2023-04-07T10:07:43.000Z | [
"transformers",
"pytorch",
"mobilebert",
"text-classification",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-classification | harshith20 | null | null | harshith20/Emotion_predictor | 0 | 2 | transformers | 2023-04-01T09:27:12 | ---
license: openrail
---
```python
import torch
from transformers import AutoTokenizer, MobileBertForSequenceClassification
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Load the saved model
model_name = 'harshith20/Emotion_predictor'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MobileBertForSequenceClassification.from_pretrained(model_name).to(device)  # move model to the same device as the inputs
model.eval()
# Tokenize input text
input_text = "I am feeling happy today"
input_ids = tokenizer.encode(input_text, add_special_tokens=True, truncation=True, max_length=128)
input_tensor = torch.tensor([input_ids]).to(device)
# Predict emotion
with torch.no_grad():
outputs = model(input_tensor)
logits = outputs[0]
# Get the predicted label
predicted_emotion = torch.argmax(logits, dim=1).item()
emotion_labels = {0:'sadness',1:'joy',2:'love',3:'anger',4:'fear',5:'surprise'}
predicted_emotion_label = emotion_labels[predicted_emotion]
print(f"Input text: {input_text}")
print(f"Predicted emotion: {predicted_emotion_label}")```
| 1,025 | [truncated embedding vector] |
Alesteba/your-model-name | 2023-04-01T15:51:04.000Z | [
"keras",
"has_space",
"region:us"
] | null | Alesteba | null | null | Alesteba/your-model-name | 0 | 2 | keras | 2023-04-01T15:50:38 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 292 | [truncated embedding vector] |
dabreinl/distilbert-base-uncased-finetuned-clinc | 2023-04-01T18:09:34.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | dabreinl | null | null | dabreinl/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-04-01T17:43:50 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9151612903225806
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7737
- Accuracy: 0.9152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2708 | 0.7374 |
| 3.7745 | 2.0 | 636 | 1.8622 | 0.8326 |
| 3.7745 | 3.0 | 954 | 1.1559 | 0.8935 |
| 1.6841 | 4.0 | 1272 | 0.8575 | 0.9094 |
| 0.8993 | 5.0 | 1590 | 0.7737 | 0.9152 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,927 | [truncated embedding vector] |
Muffins987/robertabase-subjectivity-1-actual | 2023-04-02T04:35:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Muffins987 | null | null | Muffins987/robertabase-subjectivity-1-actual | 0 | 2 | transformers | 2023-04-02T03:12:15 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: robertabase-subjectivity-1-actual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robertabase-subjectivity-1-actual
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7706
- Accuracy: 0.7655
- F1: 0.7626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.5463 | 1.0 | 20000 | 0.5643 | 0.7495 | 0.7516 |
| 0.6327 | 2.0 | 40000 | 0.7706 | 0.7655 | 0.7626 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,494 | [truncated embedding vector] |
Svetlana0303/Regression_albert_9_with_translation | 2023-04-02T06:18:01.000Z | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Svetlana0303 | null | null | Svetlana0303/Regression_albert_9_with_translation | 0 | 2 | transformers | 2023-04-02T06:11:59 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Regression_albert_9_with_translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Regression_albert_9_with_translation
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3629
- Mse: 0.3629
- Mae: 0.4551
- R2: 0.1650
- Accuracy: 0.6333
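Since this is a regression head, the reported MSE, MAE, and R2 follow directly from paired predictions and targets; a minimal sketch (plain Python with toy numbers, not the card's actual evaluation data):

```python
def regression_metrics(y_true, y_pred):
    """Compute MSE, MAE, and R^2 for paired targets/predictions."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)   # total variance
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual variance
    r2 = 1.0 - ss_res / ss_tot
    return mse, mae, r2

mse, mae, r2 = regression_metrics([0.0, 1.0, 2.0, 3.0], [0.1, 0.9, 2.2, 2.8])
```

Note that for a squared-error loss, the validation loss and MSE coincide, which is why the Loss and Mse columns in the table below are identical.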
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| No log | 1.0 | 53 | 0.3421 | 0.3421 | 0.4573 | 0.2292 | 0.6167 |
| No log | 2.0 | 106 | 0.2617 | 0.2617 | 0.3888 | 0.4104 | 0.6667 |
| No log | 3.0 | 159 | 0.2117 | 0.2117 | 0.3422 | 0.5230 | 0.7667 |
| No log | 4.0 | 212 | 0.3250 | 0.3250 | 0.4990 | 0.2677 | 0.55 |
| No log | 5.0 | 265 | 0.2494 | 0.2494 | 0.3321 | 0.4380 | 0.7167 |
| No log | 6.0 | 318 | 0.2477 | 0.2477 | 0.3488 | 0.4419 | 0.75 |
| No log | 7.0 | 371 | 0.3209 | 0.3209 | 0.3599 | 0.2770 | 0.7833 |
| No log | 8.0 | 424 | 0.2704 | 0.2704 | 0.3715 | 0.3909 | 0.7 |
| No log | 9.0 | 477 | 0.2886 | 0.2886 | 0.3185 | 0.3498 | 0.7833 |
| 0.1507 | 10.0 | 530 | 0.2477 | 0.2477 | 0.3071 | 0.4418 | 0.7667 |
| 0.1507 | 11.0 | 583 | 0.2670 | 0.2670 | 0.3232 | 0.3984 | 0.7833 |
| 0.1507 | 12.0 | 636 | 0.2285 | 0.2285 | 0.2926 | 0.4851 | 0.75 |
| 0.1507 | 13.0 | 689 | 0.2378 | 0.2378 | 0.2980 | 0.4643 | 0.7833 |
| 0.1507 | 14.0 | 742 | 0.2544 | 0.2544 | 0.3194 | 0.4269 | 0.7667 |
| 0.1507 | 15.0 | 795 | 0.2571 | 0.2571 | 0.2904 | 0.4208 | 0.8 |
| 0.1507 | 16.0 | 848 | 0.2505 | 0.2505 | 0.2884 | 0.4357 | 0.8 |
| 0.1507 | 17.0 | 901 | 0.2654 | 0.2654 | 0.2846 | 0.4022 | 0.8 |
| 0.1507 | 18.0 | 954 | 0.2606 | 0.2606 | 0.2785 | 0.4128 | 0.8 |
| 0.0203 | 19.0 | 1007 | 0.2519 | 0.2519 | 0.2816 | 0.4324 | 0.8 |
| 0.0203 | 20.0 | 1060 | 0.2634 | 0.2634 | 0.2826 | 0.4065 | 0.8 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| 3,173 | [truncated embedding vector] |
Muffins987/bertbase-uncased-2-actual | 2023-04-02T07:47:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Muffins987 | null | null | Muffins987/bertbase-uncased-2-actual | 0 | 2 | transformers | 2023-04-02T07:05:51 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bertbase-uncased-2-actual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertbase-uncased-2-actual
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5390
- Accuracy: 0.7490
- F1: 0.7431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.5205 | 1.0 | 20000 | 0.5390 | 0.7490 | 0.7431 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,423 | [truncated embedding vector] |
vcncolin/SpaceInvdqn | 2023-04-02T11:27:02.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | vcncolin | null | null | vcncolin/SpaceInvdqn | 0 | 2 | stable-baselines3 | 2023-04-02T08:50:36 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 623.50 +/- 221.94
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vcncolin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed RL Zoo3 via pip (`pip install rl_zoo3`), you can run the same commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vcncolin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga vcncolin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
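The `exploration_fraction` / `exploration_final_eps` pair above defines a linear epsilon-greedy schedule: epsilon decays to 0.01 over the first 10% of the 1,000,000 timesteps, then stays flat. A sketch of that schedule (plain Python; the initial epsilon of 1.0 is SB3's default and an assumption here, and this mirrors the library's behaviour rather than reproducing its code):

```python
def epsilon(step, total_steps=1_000_000, fraction=0.1,
            initial_eps=1.0, final_eps=0.01):
    """Linearly anneal epsilon over the first `fraction` of training."""
    decay_steps = fraction * total_steps
    if step >= decay_steps:
        return final_eps
    return initial_eps + (final_eps - initial_eps) * (step / decay_steps)
```

So for this run, the agent acts almost entirely at random early on, and acts greedily 99% of the time after the first 100,000 steps.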
| 2,691 | [truncated embedding vector] |
SenaY/sagemaker-distilbert-emotion | 2023-04-02T12:27:54.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | SenaY | null | null | SenaY/sagemaker-distilbert-emotion | 0 | 2 | transformers | 2023-04-02T12:25:50 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: test
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.916
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2445
- Accuracy: 0.916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
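With `lr_scheduler_warmup_steps: 500` and a single epoch of 500 optimization steps (per the table below), this run appears to spend essentially all of training in warmup. A sketch of the usual warmup-then-linear-decay schedule (plain Python; this mirrors the behaviour of a Hugging Face linear schedule with warmup, and `total_steps=2000` is an arbitrary illustrative value, not this run's):

```python
def lr_at(step, warmup_steps=500, total_steps=2000, base_lr=3e-5):
    """Linear warmup to base_lr, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)          # warmup ramp
    return base_lr * max(0, total_steps - step) / max(1, total_steps - warmup_steps)  # decay
```

The learning rate ramps from 0 to `base_lr` over the warmup window, then falls linearly to 0 by the final step.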
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9385 | 1.0 | 500 | 0.2445 | 0.916 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
| 1,708 | [truncated embedding vector] |
SenaY/sla-test | 2023-04-02T14:21:09.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | SenaY | null | null | SenaY/sla-test | 0 | 2 | transformers | 2023-04-02T13:13:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sla-test
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: test
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sla-test
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2244
- Accuracy: 0.9295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2046 | 1.0 | 4000 | 0.2244 | 0.9295 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
| 1,729 | [truncated embedding vector] |
merve/turkish-rte | 2023-04-02T14:29:53.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | merve | null | null | merve/turkish-rte | 0 | 2 | transformers | 2023-04-02T14:28:12 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: turkish-rte
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# turkish-rte
This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7262
- Validation Loss: 0.6929
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
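The optimizer dict above pins down Adam's full update rule (beta_1 = 0.9, beta_2 = 0.999, epsilon = 1e-07). A scalar sketch of one Adam step (illustrative only, not Keras's implementation):

```python
import math

def adam_step(param, grad, m, v, t, lr=0.001, beta_1=0.9, beta_2=0.999,
              epsilon=1e-7):
    """One Adam update for a scalar parameter; returns (param, m, v)."""
    m = beta_1 * m + (1 - beta_1) * grad          # first-moment estimate
    v = beta_2 * v + (1 - beta_2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta_1 ** t)                 # bias correction
    v_hat = v / (1 - beta_2 ** t)
    param -= lr * m_hat / (math.sqrt(v_hat) + epsilon)
    return param, m, v

p, m, v = adam_step(param=0.5, grad=1.0, m=0.0, v=0.0, t=1)
```

On the first step the bias correction makes the effective step size almost exactly `lr`, regardless of the gradient's magnitude.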
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7426 | 0.6935 | 0 |
| 0.7304 | 0.7160 | 1 |
| 0.7262 | 0.6929 | 2 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,353 | [truncated embedding vector] |
merve/turkish-rte-2 | 2023-04-02T14:55:25.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | merve | null | null | merve/turkish-rte-2 | 1 | 2 | transformers | 2023-04-02T14:53:35 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: turkish-rte-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# turkish-rte-2
This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7020
- Validation Loss: 0.6937
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7029 | 0.6953 | 0 |
| 0.7032 | 0.6998 | 1 |
| 0.7010 | 0.6923 | 2 |
| 0.6984 | 0.6917 | 3 |
| 0.7020 | 0.6937 | 4 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,439 | [truncated embedding vector] |
muhammadravi251001/fine-tuned-IndoNLI-Basic-with-indobert-base-uncased-LR-1e-05 | 2023-04-18T23:06:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | muhammadravi251001 | null | null | muhammadravi251001/fine-tuned-IndoNLI-Basic-with-indobert-base-uncased-LR-1e-05 | 0 | 2 | transformers | 2023-04-02T15:43:22 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Basic-with-indobert-base-uncased-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Basic-with-indobert-base-uncased-LR-1e-05
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6185
- Accuracy: 0.7629
- F1: 0.7622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
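The `gradient_accumulation_steps: 16` with a per-device batch of 8 yields the effective total batch of 128 listed above: gradients from 16 micro-batches are accumulated before each optimizer step. A toy sketch of that bookkeeping (plain Python over scalar "gradients", no real model):

```python
def train_with_accumulation(micro_grads, accumulation_steps=16):
    """Average gradients over `accumulation_steps` micro-batches,
    then apply one 'optimizer step'. Returns the applied updates."""
    updates, buffer = [], 0.0
    for i, g in enumerate(micro_grads, start=1):
        buffer += g / accumulation_steps   # scale each micro-batch gradient
        if i % accumulation_steps == 0:
            updates.append(buffer)         # optimizer.step() would act here
            buffer = 0.0                   # optimizer.zero_grad()
    return updates

# 32 micro-batches of size 8 -> 2 optimizer steps at effective batch 128.
updates = train_with_accumulation([1.0] * 32, accumulation_steps=16)
```

This trades memory for wall-clock time: each optimizer step sees a 128-example gradient while only 8 examples are ever resident at once.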
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1098 | 0.5 | 40 | 1.0813 | 0.4110 | 0.4037 |
| 1.0991 | 0.99 | 80 | 0.9440 | 0.5653 | 0.5613 |
| 1.0022 | 1.49 | 120 | 0.8605 | 0.6249 | 0.6215 |
| 0.876 | 1.98 | 160 | 0.7910 | 0.6582 | 0.6563 |
| 0.7978 | 2.48 | 200 | 0.7613 | 0.6800 | 0.6777 |
| 0.7978 | 2.97 | 240 | 0.7216 | 0.7005 | 0.7020 |
| 0.7667 | 3.47 | 280 | 0.6940 | 0.7178 | 0.7179 |
| 0.7091 | 3.96 | 320 | 0.6762 | 0.7310 | 0.7309 |
| 0.6752 | 4.46 | 360 | 0.6569 | 0.7424 | 0.7413 |
| 0.6425 | 4.95 | 400 | 0.6440 | 0.7610 | 0.7618 |
| 0.6425 | 5.45 | 440 | 0.6302 | 0.7619 | 0.7618 |
| 0.6153 | 5.94 | 480 | 0.6266 | 0.7615 | 0.7613 |
| 0.5945 | 6.44 | 520 | 0.6291 | 0.7638 | 0.7634 |
| 0.5587 | 6.93 | 560 | 0.6222 | 0.7606 | 0.7593 |
| 0.5452 | 7.43 | 600 | 0.6212 | 0.7633 | 0.7631 |
| 0.5452 | 7.93 | 640 | 0.6185 | 0.7629 | 0.7622 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
| 2,607 | [truncated embedding vector] |
muhammadravi251001/fine-tuned-IndoNLI-Translated-with-indobert-base-uncased-LR-1e-05 | 2023-04-19T03:38:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | muhammadravi251001 | null | null | muhammadravi251001/fine-tuned-IndoNLI-Translated-with-indobert-base-uncased-LR-1e-05 | 0 | 2 | transformers | 2023-04-02T15:46:14 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Translated-with-indobert-base-uncased-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Translated-with-indobert-base-uncased-LR-1e-05
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5551
- Accuracy: 0.8070
- F1: 0.8076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.6592 | 0.5 | 1533 | 0.5988 | 0.7564 | 0.7573 |
| 0.5938 | 1.0 | 3066 | 0.5563 | 0.7806 | 0.7816 |
| 0.5258 | 1.5 | 4599 | 0.5301 | 0.7918 | 0.7919 |
| 0.5276 | 2.0 | 6132 | 0.5165 | 0.7959 | 0.7952 |
| 0.4947 | 2.5 | 7665 | 0.5346 | 0.7957 | 0.7967 |
| 0.4967 | 3.0 | 9198 | 0.5061 | 0.8066 | 0.8071 |
| 0.4311 | 3.5 | 10731 | 0.5171 | 0.8038 | 0.8039 |
| 0.4436 | 4.0 | 12264 | 0.5064 | 0.8078 | 0.8087 |
| 0.4174 | 4.5 | 13797 | 0.5220 | 0.8076 | 0.8080 |
| 0.414 | 5.0 | 15330 | 0.5166 | 0.8093 | 0.8094 |
| 0.3726 | 5.5 | 16863 | 0.5359 | 0.8083 | 0.8089 |
| 0.3974 | 6.0 | 18396 | 0.5292 | 0.8059 | 0.8063 |
| 0.3452 | 6.5 | 19929 | 0.5551 | 0.8070 | 0.8076 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
| 2,419 | [truncated embedding vector] |
muhammadravi251001/fine-tuned-IndoNLI-Augmented-with-indobert-base-uncased-LR-1e-05 | 2023-04-19T09:04:39.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | muhammadravi251001 | null | null | muhammadravi251001/fine-tuned-IndoNLI-Augmented-with-indobert-base-uncased-LR-1e-05 | 0 | 2 | transformers | 2023-04-02T15:49:10 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Augmented-with-indobert-base-uncased-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Augmented-with-indobert-base-uncased-LR-1e-05
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5676
- Accuracy: 0.8033
- F1: 0.8035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
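With gradient accumulation, the optimizer only steps once per 16 forward/backward passes, so the effective batch size is the per-device batch size times the accumulation steps (times the device count), which is why the listed total train batch size is 128. A minimal sketch of that relationship (illustrative; not taken from the training script):

```python
def effective_batch_size(per_device_batch, accumulation_steps, num_devices=1):
    """Effective (total) train batch size under gradient accumulation."""
    return per_device_batch * accumulation_steps * num_devices

# Settings from this card: train_batch_size=8, gradient_accumulation_steps=16
print(effective_batch_size(8, 16))  # 128, matching total_train_batch_size
```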
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.6622 | 0.5 | 1574 | 0.6792 | 0.7169 | 0.7176 |
| 0.6117 | 1.0 | 3148 | 0.5989 | 0.7557 | 0.7566 |
| 0.5639 | 1.5 | 4722 | 0.5862 | 0.7712 | 0.7726 |
| 0.5485 | 2.0 | 6296 | 0.5449 | 0.7886 | 0.7890 |
| 0.5148 | 2.5 | 7870 | 0.5409 | 0.7899 | 0.7906 |
| 0.4795 | 3.0 | 9444 | 0.5296 | 0.7956 | 0.7956 |
| 0.4655 | 3.5 | 11018 | 0.5414 | 0.7919 | 0.7928 |
| 0.4539 | 4.0 | 12592 | 0.5313 | 0.7985 | 0.7991 |
| 0.4412 | 4.5 | 14166 | 0.5431 | 0.7983 | 0.7988 |
| 0.4131 | 5.0 | 15740 | 0.5316 | 0.8016 | 0.8017 |
| 0.3831 | 5.5 | 17314 | 0.5753 | 0.7954 | 0.7965 |
| 0.3757 | 6.0 | 18888 | 0.5460 | 0.8032 | 0.8038 |
| 0.3579 | 6.5 | 20462 | 0.5604 | 0.8004 | 0.8005 |
| 0.37 | 7.0 | 22036 | 0.5607 | 0.8014 | 0.8019 |
| 0.3368 | 7.5 | 23610 | 0.5676 | 0.8033 | 0.8035 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
| 2,561 | [
[
-0.045166015625,
-0.02996826171875,
0.0017995834350585938,
0.00701904296875,
-0.01739501953125,
-0.013824462890625,
-0.0131683349609375,
-0.01010894775390625,
0.032196044921875,
0.0224151611328125,
-0.04718017578125,
-0.046783447265625,
-0.046234130859375,
-... |
muhammadravi251001/fine-tuned-IndoNLI-Basic-with-indobert-large-p2-LR-1e-05 | 2023-04-19T09:54:01.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | muhammadravi251001 | null | null | muhammadravi251001/fine-tuned-IndoNLI-Basic-with-indobert-large-p2-LR-1e-05 | 0 | 2 | transformers | 2023-04-02T15:51:06 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Basic-with-indobert-large-p2-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Basic-with-indobert-large-p2-LR-1e-05
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6963
- Accuracy: 0.7724
- F1: 0.7724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
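The `linear` scheduler decays the learning rate from its initial value down to zero over the total number of training steps. A rough sketch of that schedule, assuming no warmup (an assumption; the card does not list warmup steps):

```python
def linear_lr(step, total_steps, base_lr=1e-05, warmup_steps=0):
    """Linear warmup (if any) followed by linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

print(linear_lr(0, 800))    # full base LR at the start
print(linear_lr(400, 800))  # half the base LR midway through
print(linear_lr(800, 800))  # decayed to zero at the end
```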
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.3098 | 0.5 | 40 | 0.8899 | 0.6231 | 0.6268 |
| 1.0199 | 0.99 | 80 | 0.7268 | 0.6996 | 0.6999 |
| 0.767 | 1.49 | 120 | 0.6616 | 0.7406 | 0.7418 |
| 0.6649 | 1.98 | 160 | 0.6224 | 0.7547 | 0.7557 |
| 0.5796 | 2.48 | 200 | 0.6114 | 0.7656 | 0.7645 |
| 0.5796 | 2.97 | 240 | 0.6236 | 0.7524 | 0.7540 |
| 0.54 | 3.47 | 280 | 0.6223 | 0.7615 | 0.7624 |
| 0.4757 | 3.96 | 320 | 0.5965 | 0.7706 | 0.7721 |
| 0.4492 | 4.46 | 360 | 0.6216 | 0.7679 | 0.7681 |
| 0.3981 | 4.95 | 400 | 0.6347 | 0.7651 | 0.7669 |
| 0.3981 | 5.45 | 440 | 0.6373 | 0.7715 | 0.7727 |
| 0.352 | 5.94 | 480 | 0.6505 | 0.7674 | 0.7690 |
| 0.3294 | 6.44 | 520 | 0.6627 | 0.7720 | 0.7731 |
| 0.3058 | 6.93 | 560 | 0.6743 | 0.7660 | 0.7674 |
| 0.2692 | 7.43 | 600 | 0.6846 | 0.7665 | 0.7678 |
| 0.2692 | 7.93 | 640 | 0.6963 | 0.7724 | 0.7724 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
| 2,603 | [
[
-0.046173095703125,
-0.0306549072265625,
0.006793975830078125,
0.0107421875,
-0.0176849365234375,
-0.01519775390625,
-0.01543426513671875,
-0.01033782958984375,
0.031768798828125,
0.0197296142578125,
-0.04730224609375,
-0.044036865234375,
-0.04852294921875,
... |
muhammadravi251001/fine-tuned-IndoNLI-Translated-with-indobert-large-p2-LR-1e-05 | 2023-04-20T00:18:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | muhammadravi251001 | null | null | muhammadravi251001/fine-tuned-IndoNLI-Translated-with-indobert-large-p2-LR-1e-05 | 0 | 2 | transformers | 2023-04-02T15:54:05 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Translated-with-indobert-large-p2-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Translated-with-indobert-large-p2-LR-1e-05
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6141
- Accuracy: 0.8091
- F1: 0.8096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.588 | 0.5 | 1533 | 0.5476 | 0.7836 | 0.7841 |
| 0.5415 | 1.0 | 3066 | 0.5186 | 0.8006 | 0.8014 |
| 0.4561 | 1.5 | 4599 | 0.5009 | 0.8088 | 0.8090 |
| 0.4711 | 2.0 | 6132 | 0.4981 | 0.8077 | 0.8071 |
| 0.4016 | 2.5 | 7665 | 0.5234 | 0.8057 | 0.8063 |
| 0.4101 | 3.0 | 9198 | 0.5096 | 0.8109 | 0.8114 |
| 0.3104 | 3.5 | 10731 | 0.5465 | 0.8113 | 0.8113 |
| 0.3256 | 4.0 | 12264 | 0.5440 | 0.8107 | 0.8113 |
| 0.2768 | 4.5 | 13797 | 0.6141 | 0.8091 | 0.8096 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
| 2,127 | [
[
-0.039276123046875,
-0.029632568359375,
0.0034923553466796875,
0.015960693359375,
-0.029052734375,
-0.0254974365234375,
-0.0213470458984375,
-0.020721435546875,
0.0195159912109375,
0.0187835693359375,
-0.04461669921875,
-0.039642333984375,
-0.05096435546875,
... |
muhammadravi251001/fine-tuned-IndoNLI-Augmented-with-indobert-large-p2-LR-1e-05 | 2023-04-20T08:43:31.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | muhammadravi251001 | null | null | muhammadravi251001/fine-tuned-IndoNLI-Augmented-with-indobert-large-p2-LR-1e-05 | 0 | 2 | transformers | 2023-04-02T15:56:59 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Augmented-with-indobert-large-p2-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Augmented-with-indobert-large-p2-LR-1e-05
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5974
- Accuracy: 0.8037
- F1: 0.8043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
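The Adam settings above (betas and epsilon) plug into the standard Adam update rule. A scalar sketch for illustration only; the real optimizer applies this elementwise to full parameter tensors:

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update for a scalar parameter, using the betas/epsilon listed above."""
    m = beta1 * m + (1 - beta1) * grad         # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad  # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)               # bias correction for step t (1-based)
    v_hat = v / (1 - beta2 ** t)
    return param - lr * m_hat / (math.sqrt(v_hat) + eps), m, v

# The first step with gradient 1.0 moves the parameter by roughly -lr:
param, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
```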
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.5944 | 0.5 | 1574 | 0.5977 | 0.7628 | 0.7634 |
| 0.5543 | 1.0 | 3148 | 0.5370 | 0.7904 | 0.7906 |
| 0.4887 | 1.5 | 4722 | 0.5421 | 0.7937 | 0.7947 |
| 0.4772 | 2.0 | 6296 | 0.5125 | 0.8048 | 0.8052 |
| 0.416 | 2.5 | 7870 | 0.5305 | 0.8024 | 0.8028 |
| 0.4036 | 3.0 | 9444 | 0.5319 | 0.8050 | 0.8055 |
| 0.3326 | 3.5 | 11018 | 0.5629 | 0.8022 | 0.8028 |
| 0.3261 | 4.0 | 12592 | 0.5700 | 0.7999 | 0.8006 |
| 0.2904 | 4.5 | 14166 | 0.5974 | 0.8037 | 0.8043 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
| 2,125 | [
[
-0.038818359375,
-0.0296783447265625,
0.0023632049560546875,
0.01322174072265625,
-0.0256805419921875,
-0.0234527587890625,
-0.022186279296875,
-0.022979736328125,
0.021209716796875,
0.0167236328125,
-0.042755126953125,
-0.035858154296875,
-0.051605224609375,
... |
muhammadravi251001/fine-tuned-IndoNLI-Basic-with-xlm-roberta-large-LR-1e-05 | 2023-04-20T10:03:33.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | muhammadravi251001 | null | null | muhammadravi251001/fine-tuned-IndoNLI-Basic-with-xlm-roberta-large-LR-1e-05 | 0 | 2 | transformers | 2023-04-02T15:58:26 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Basic-with-xlm-roberta-large-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Basic-with-xlm-roberta-large-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5019
- Accuracy: 0.8243
- F1: 0.8245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1396 | 0.5 | 40 | 1.0955 | 0.3696 | 0.2355 |
| 1.108 | 0.99 | 80 | 1.0433 | 0.4406 | 0.3665 |
| 1.0644 | 1.49 | 120 | 0.9406 | 0.5321 | 0.5293 |
| 0.963 | 1.98 | 160 | 0.9097 | 0.6154 | 0.6192 |
| 0.8825 | 2.48 | 200 | 0.7810 | 0.6891 | 0.6898 |
| 0.8825 | 2.97 | 240 | 0.7141 | 0.7196 | 0.7216 |
| 0.8145 | 3.47 | 280 | 0.7784 | 0.7219 | 0.7238 |
| 0.7253 | 3.96 | 320 | 0.6165 | 0.7711 | 0.7716 |
| 0.6706 | 4.46 | 360 | 0.6133 | 0.7597 | 0.7582 |
| 0.6356 | 4.95 | 400 | 0.5849 | 0.7833 | 0.7826 |
| 0.6356 | 5.45 | 440 | 0.5443 | 0.7979 | 0.7980 |
| 0.5919 | 5.94 | 480 | 0.5335 | 0.8093 | 0.8101 |
| 0.5509 | 6.44 | 520 | 0.5256 | 0.8157 | 0.8165 |
| 0.5286 | 6.93 | 560 | 0.5127 | 0.8107 | 0.8101 |
| 0.5081 | 7.43 | 600 | 0.5160 | 0.8170 | 0.8173 |
| 0.5081 | 7.93 | 640 | 0.5037 | 0.8220 | 0.8222 |
| 0.5077 | 8.42 | 680 | 0.4961 | 0.8207 | 0.8210 |
| 0.4829 | 8.92 | 720 | 0.5016 | 0.8266 | 0.8268 |
| 0.4585 | 9.41 | 760 | 0.5043 | 0.8229 | 0.8227 |
| 0.4712 | 9.91 | 800 | 0.5019 | 0.8243 | 0.8245 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
| 2,859 | [
[
-0.0467529296875,
-0.0399169921875,
0.01071929931640625,
0.0034027099609375,
-0.01213836669921875,
-0.0095367431640625,
-0.01294708251953125,
-0.00888824462890625,
0.03558349609375,
0.0257415771484375,
-0.052398681640625,
-0.04766845703125,
-0.049713134765625,
... |
muhammadravi251001/fine-tuned-IndoNLI-Translated-with-xlm-roberta-large-LR-1e-05 | 2023-04-20T12:10:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | muhammadravi251001 | null | null | muhammadravi251001/fine-tuned-IndoNLI-Translated-with-xlm-roberta-large-LR-1e-05 | 0 | 2 | transformers | 2023-04-02T16:01:40 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Translated-with-xlm-roberta-large-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Translated-with-xlm-roberta-large-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4945
- Accuracy: 0.8553
- F1: 0.8555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4916 | 0.5 | 1533 | 0.4336 | 0.8335 | 0.8342 |
| 0.4465 | 1.0 | 3066 | 0.4120 | 0.8454 | 0.8463 |
| 0.3666 | 1.5 | 4599 | 0.4001 | 0.8537 | 0.8538 |
| 0.3876 | 2.0 | 6132 | 0.3928 | 0.8530 | 0.8528 |
| 0.3347 | 2.5 | 7665 | 0.4415 | 0.8502 | 0.8505 |
| 0.3372 | 3.0 | 9198 | 0.4174 | 0.8582 | 0.8583 |
| 0.2641 | 3.5 | 10731 | 0.4568 | 0.8532 | 0.8529 |
| 0.2747 | 4.0 | 12264 | 0.4262 | 0.8576 | 0.8577 |
| 0.231 | 4.5 | 13797 | 0.4945 | 0.8553 | 0.8555 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
| 2,100 | [
[
-0.03802490234375,
-0.037872314453125,
0.0107269287109375,
0.006290435791015625,
-0.02655029296875,
-0.0224151611328125,
-0.0217742919921875,
-0.0220947265625,
0.0192413330078125,
0.025848388671875,
-0.053192138671875,
-0.046844482421875,
-0.054168701171875,
... |
muhammadravi251001/fine-tuned-IndoNLI-Augmented-with-xlm-roberta-large-LR-1e-05 | 2023-04-28T13:22:07.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | muhammadravi251001 | null | null | muhammadravi251001/fine-tuned-IndoNLI-Augmented-with-xlm-roberta-large-LR-1e-05 | 0 | 2 | transformers | 2023-04-02T16:05:00 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Augmented-with-xlm-roberta-large-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Augmented-with-xlm-roberta-large-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4709
- Accuracy: 0.8563
- F1: 0.8567
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4755 | 0.5 | 1574 | 0.4331 | 0.8360 | 0.8358 |
| 0.4397 | 1.0 | 3148 | 0.3990 | 0.8489 | 0.8492 |
| 0.3992 | 1.5 | 4722 | 0.4178 | 0.8469 | 0.8478 |
| 0.3825 | 2.0 | 6296 | 0.3918 | 0.8552 | 0.8552 |
| 0.334 | 2.5 | 7870 | 0.4159 | 0.8535 | 0.8537 |
| 0.3159 | 3.0 | 9444 | 0.4048 | 0.8613 | 0.8611 |
| 0.2738 | 3.5 | 11018 | 0.4437 | 0.8552 | 0.8555 |
| 0.2758 | 4.0 | 12592 | 0.4381 | 0.8538 | 0.8542 |
| 0.2311 | 4.5 | 14166 | 0.4709 | 0.8563 | 0.8567 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
| 2,097 | [
[
-0.038238525390625,
-0.038330078125,
0.008056640625,
0.0048370361328125,
-0.0240936279296875,
-0.0204010009765625,
-0.0220947265625,
-0.025115966796875,
0.022796630859375,
0.023712158203125,
-0.051788330078125,
-0.0435791015625,
-0.053466796875,
0.0023174285... |
c0ldstudy/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-02T17:04:35.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | c0ldstudy | null | null | c0ldstudy/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-02T17:04:10 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 854.00 +/- 277.48
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga c0ldstudy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga c0ldstudy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga c0ldstudy
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
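Under these settings, the agent explores epsilon-greedily: epsilon is annealed linearly from 1.0 down to `exploration_final_eps` over the first `exploration_fraction` of the 10M timesteps, then held constant. A sketch of that schedule (illustrative; the assumed initial epsilon of 1.0 is SB3's default, not listed above):

```python
def epsilon(step, n_timesteps=10_000_000, final_eps=0.01, fraction=0.1):
    """Linearly annealed epsilon for epsilon-greedy exploration."""
    decay_steps = fraction * n_timesteps
    progress = min(1.0, step / decay_steps)
    return 1.0 + progress * (final_eps - 1.0)

print(epsilon(0))          # fully random at the start
print(epsilon(500_000))    # halfway through the decay window
print(epsilon(2_000_000))  # held at final_eps for the rest of training
```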
| 2,695 | [
[
-0.0418701171875,
-0.038818359375,
0.0225982666015625,
0.0238037109375,
-0.009063720703125,
-0.0192718505859375,
0.0117034912109375,
-0.012786865234375,
0.0128021240234375,
0.0232696533203125,
-0.0684814453125,
-0.035003662109375,
-0.0258026123046875,
-0.003... |
huggingtweets/fuckrvt | 2023-04-02T19:08:33.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | huggingtweets | null | null | huggingtweets/fuckrvt | 0 | 2 | transformers | 2023-04-02T18:41:53 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1619674731975786496/gGJpxiyj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">praisegio</div>
<div style="text-align: center; font-size: 14px;">@fuckrvt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from praisegio.
| Data | praisegio |
| --- | --- |
| Tweets downloaded | 3212 |
| Retweets | 203 |
| Short tweets | 778 |
| Tweets kept | 2231 |
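The kept count follows directly from the table: downloaded tweets minus the retweets and short tweets that were filtered out.

```python
downloaded, retweets, short = 3212, 203, 778
kept = downloaded - retweets - short
print(kept)  # 2231, matching "Tweets kept" in the table above
```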
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4unngzee/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fuckrvt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/x3e57izg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/x3e57izg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fuckrvt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| 3,482 | [
[
-0.0255126953125,
-0.06048583984375,
0.025604248046875,
0.01812744140625,
-0.0177459716796875,
0.0088043212890625,
-0.006427764892578125,
-0.037017822265625,
0.0248565673828125,
0.00789642333984375,
-0.07452392578125,
-0.0330810546875,
-0.04864501953125,
-0.... |
OptimalScale/gpt2-inst-tuning | 2023-04-02T18:54:44.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:lmflow_instruction",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | OptimalScale | null | null | OptimalScale/gpt2-inst-tuning | 1 | 2 | transformers | 2023-04-02T18:54:01 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- lmflow_instruction
model-index:
- name: 044_inst-tuning_model-gpt_num-epoch-5_init-lr-2e-5_bf-16_blocksize768
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 044_inst-tuning_model-gpt_num-epoch-5_init-lr-2e-5_bf-16_blocksize768
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the lmflow_instruction dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 512
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,255 | [
[
-0.039581298828125,
-0.04669189453125,
0.0159454345703125,
0.0093231201171875,
-0.036865234375,
-0.023223876953125,
-0.0086822509765625,
-0.0164642333984375,
-0.0118560791015625,
0.02056884765625,
-0.054412841796875,
-0.02374267578125,
-0.051544189453125,
-0... |
denizspynk/req_mod_ner_modelv2 | 2023-04-07T12:59:47.000Z | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"nl",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | denizspynk | null | null | denizspynk/req_mod_ner_modelv2 | 0 | 2 | transformers | 2023-04-02T19:21:45 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: req_mod_ner_modelv2
results: []
widget:
- text: >-
De Oplossing ondersteunt het zoeken op de metadata van zaken, documenten en
objecten en op gegevens uit de basisregistraties die gekoppeld zijn aan een
zaak.
- text: >-
De Oplossing ondersteunt parafering en het plaatsen van een gecertificeerde
elektronische handtekening.
- text: >-
De Aangeboden oplossing stelt de medewerker in staat een zaak te
registreren.
- text: >-
Het Financieel systeem heeft functionaliteit om een debiteurenadministratie
te voeren.
- text: >-
Als gebruiker wil ik dat de oplossing mij naar zaken laat zoeken op basis
van zaaknummer, zaaktitel, omschrijving en datum.
language:
- nl
---
# req_mod_ner_modelv2
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-ner](https://huggingface.co/pdelobelle/robbert-v2-dutch-ner) on a
private dataset of 300 sentences/phrases with 1,954 token labels (IOB2 format), aimed at extracting software-requirement-related
named entities in Dutch. The following labels are used:
- Actor (used for all types of software users and groups of users)
- COTS (abbreviation for Commercial Off-The-Shelf Software)
- Function (used for functions, functionality, features)
- Result (used for system result, goals and system output)
- Entity (used for all entities stored/processed by the software)
- Attribute (used for attributes of entities)
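In the IOB2 scheme used for these labels, the first token of an entity is tagged `B-<type>`, continuation tokens `I-<type>`, and all other tokens `O`. A small illustrative sketch (the example sentence and spans are made up, not drawn from the private dataset):

```python
def to_iob2(tokens, spans):
    """spans: list of (start, end, label) token-index ranges, end exclusive."""
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

tokens = ["De", "medewerker", "registreert", "een", "zaak"]
print(to_iob2(tokens, [(1, 2, "Actor"), (4, 5, "Entity")]))
# ['O', 'B-Actor', 'O', 'O', 'B-Entity']
```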
Please contact me via [LinkedIn](https://www.linkedin.com/in/denizayhan/) if you have any questions about this model or the dataset used.
The dataset and this model were created as part of the final project assignment of the Natural Language Understanding course (XCS224U) from the Professional AI Program of the Stanford School of Engineering.
The model achieves the following results on the evaluation set:
- Loss: 0.6791
- Precision: 0.7515
- Recall: 0.7299
- F1: 0.7405
- Accuracy: 0.9253
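The reported F1 is consistent with the harmonic mean of the reported precision and recall:

```python
precision, recall = 0.7515, 0.7299
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.7405, matching the evaluation results above
```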
# Metrics per named-entity
| NER-tag | Precision | Recall | F1 | Support |
|:---------:|:---------:|:------:|:----:|:-------:|
| Actor | 0.86 | 1.00 | 0.92 | 12 |
| COTS | 0.79 | 0.79 | 0.79 | 24 |
| Function | 0.73 | 0.66 | 0.69 | 62 |
| Result | 0.29 | 0.40 | 0.33 | 10 |
| Entity | 0.78 | 0.83 | 0.81 | 35 |
| Attribute | 0.92 | 0.71 | 0.80 | 31 |
## Intended uses & limitations
The model performs automated extraction of functionality concepts from source documents for which software requirements are needed. Its intended use is as a preprocessing step for Question-Answering.
## Training and evaluation data
The model was trained on the ReqModNer dataset. This dataset is private and contains 300 sentences/phrases and 1,954 IOB2 labels. The dataset is split 240/30/30 into train, validation and test. The reported metrics are from the evaluation on the test set. The validation set was used for cross-validation during training.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 270 | 0.5418 | 0.6065 | 0.5402 | 0.5714 | 0.8802 |
| 0.5551 | 2.0 | 540 | 0.4299 | 0.5481 | 0.6552 | 0.5969 | 0.8896 |
| 0.5551 | 3.0 | 810 | 0.4987 | 0.6358 | 0.5517 | 0.5908 | 0.9020 |
| 0.1935 | 4.0 | 1080 | 0.5620 | 0.6159 | 0.4885 | 0.5449 | 0.8935 |
| 0.1935 | 5.0 | 1350 | 0.4922 | 0.6786 | 0.6552 | 0.6667 | 0.9121 |
| 0.0913 | 6.0 | 1620 | 0.5406 | 0.6087 | 0.5632 | 0.5851 | 0.8950 |
| 0.0913 | 7.0 | 1890 | 0.6307 | 0.7425 | 0.7126 | 0.7273 | 0.9222 |
| 0.0702 | 8.0 | 2160 | 0.4425 | 0.6684 | 0.7414 | 0.7030 | 0.9277 |
| 0.0702 | 9.0 | 2430 | 0.6028 | 0.7158 | 0.7529 | 0.7339 | 0.9285 |
| 0.0472 | 10.0 | 2700 | 0.6491 | 0.7303 | 0.7471 | 0.7386 | 0.9246 |
| 0.0472 | 11.0 | 2970 | 0.6442 | 0.7198 | 0.7529 | 0.7360 | 0.9292 |
| 0.0305 | 12.0 | 3240 | 0.5980 | 0.7412 | 0.7241 | 0.7326 | 0.9230 |
| 0.0209 | 13.0 | 3510 | 0.6186 | 0.7232 | 0.7356 | 0.7293 | 0.9238 |
| 0.0209 | 14.0 | 3780 | 0.6791 | 0.7515 | 0.7299 | 0.7405 | 0.9253 |
| 0.0148 | 15.0 | 4050 | 0.6832 | 0.7283 | 0.7241 | 0.7262 | 0.9238 |
| 0.0148 | 16.0 | 4320 | 0.6908 | 0.7412 | 0.7241 | 0.7326 | 0.9238 |
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.0
- Datasets 2.9.0
- Tokenizers 0.11.0
dvilasuero/alpaca-gigo-detector-setfit | 2023-04-02T19:49:27.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | dvilasuero | null | null | dvilasuero/alpaca-gigo-detector-setfit | 0 | 2 | sentence-transformers | 2023-04-02T19:49:18 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# dvilasuero/alpaca-gigo-detector-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dvilasuero/alpaca-gigo-detector-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
Fred99774/parailaravlaransfwuber | 2023-04-02T20:04:41.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Fred99774 | null | null | Fred99774/parailaravlaransfwuber | 0 | 2 | diffusers | 2023-04-02T19:58:53 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Parailaravlaransfwuber Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
NimaKL/FireWatch_tiny_75k | 2023-04-04T08:46:28.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"WildFire",
"en",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | NimaKL | null | null | NimaKL/FireWatch_tiny_75k | 0 | 2 | transformers | 2023-04-02T20:48:16 | ---
language:
- en
metrics:
- accuracy
tags:
- WildFire
---
# FireWatch Wild Fire Prediction Model
Predicts wildfires from latitude, longitude, brightness, and fire radiative power (FRP). Provide the input as a comma-separated string: "latitude, longitude, brightness, FRP".
- LABEL_0 = Unlikely
- LABEL_1 = Likely
| Category | Latitude, Longitude, Brightness, FRP |
|----------|--------------------------------------|
| Likely | -26.76123, 147.15512, 393.02, 203.63 |
| Likely | -26.7598, 147.14514, 361.54, 79.4 |
| Unlikely | -25.70059, 149.48932, 313.9, 5.15 |
| Unlikely | -24.4318, 151.83102, 307.98, 8.79 |
| Unlikely | -23.21878, 148.91298, 314.08, 7.4 |
| Likely | 7.87518, 19.9241, 316.32, 39.63 |
| Unlikely | -20.10942, 148.14326, 314.39, 8.8 |
| Unlikely | 7.87772, 19.9048, 304.14, 13.43 |
| Likely | -20.79866, 124.46834, 366.74, 89.06 |
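A row from the table above can be turned into the comma-separated input string the model expects. This small helper is a sketch of ours, not part of the model's API:

```python
def format_firewatch_input(latitude, longitude, brightness, frp):
    """Format one sample as 'latitude, longitude, brightness, FRP'."""
    return f"{latitude}, {longitude}, {brightness}, {frp}"

text = format_firewatch_input(-26.76123, 147.15512, 393.02, 203.63)
print(text)  # -26.76123, 147.15512, 393.02, 203.63
```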
BenDaouda/FrenchSpeech2Number_ASR | 2023-04-09T14:52:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | BenDaouda | null | null | BenDaouda/FrenchSpeech2Number_ASR | 0 | 2 | transformers | 2023-04-02T21:46:56 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: FrenchSpeech2Number_ASR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FrenchSpeech2Number_ASR
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6089
- Wer: 0.8375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
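With 500 warmup steps and a linear scheduler, the learning rate ramps up and then decays linearly to zero. A sketch of that schedule, assuming roughly 600 total optimizer steps for this run (inferred from the results table, which reaches step 400 at epoch 20; this total is an assumption, not stated in the card):

```python
def linear_schedule_with_warmup(step, base_lr=3e-4, warmup_steps=500, total_steps=600):
    """Approximate linear-warmup / linear-decay learning-rate schedule."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_schedule_with_warmup(250))  # halfway through warmup: 1.5e-4
print(linear_schedule_with_warmup(500))  # peak learning rate: 3e-4
print(linear_schedule_with_warmup(600))  # end of training: 0.0
```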
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.8437 | 20.0 | 400 | 0.6089 | 0.8375 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.13.3
joagonzalez/bert-fine-tuned-cola | 2023-04-02T23:56:04.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | joagonzalez | null | null | joagonzalez/bert-fine-tuned-cola | 0 | 2 | transformers | 2023-04-02T22:42:58 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5764508680057442
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9503
- Matthews Correlation: 0.5765
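The Matthews correlation reported above can be computed directly from binary confusion-matrix counts. A generic sketch, independent of this model (the example counts are invented):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(matthews_corrcoef(tp=40, tn=30, fp=10, fn=20))  # ~0.408
```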
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4827 | 1.0 | 1069 | 0.5936 | 0.4448 |
| 0.3538 | 2.0 | 2138 | 0.5796 | 0.6023 |
| 0.2028 | 3.0 | 3207 | 0.7589 | 0.5779 |
| 0.1219 | 4.0 | 4276 | 0.9503 | 0.5765 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
sagar-thacker/distilbert-base-uncased-finetuned-emotion | 2023-04-24T16:44:03.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | sagar-thacker | null | null | sagar-thacker/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-02T23:59:52 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9252950431552421
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2182
- Accuracy: 0.9255
- F1: 0.9253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3149 | 0.9005 | 0.8967 |
| No log | 2.0 | 500 | 0.2182 | 0.9255 | 0.9253 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.10.3
YeungNLP/bloomz-2b6-zh | 2023-04-03T10:16:43.000Z | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | YeungNLP | null | null | YeungNLP/bloomz-2b6-zh | 3 | 2 | transformers | 2023-04-03T03:28:22 | Project repository: [LLMPruner: a pruning tool for large language models](https://github.com/yangjianxin1/LLMPruner)
LLMPruner is a pruning tool for large language models: it prunes the redundant vocabulary of a large language model to reduce the parameter count, lower GPU memory usage, and speed up training, while preserving the knowledge learned during pretraining.
This project prunes Bloom's vocabulary, keeping Chinese tokens and commonly used English tokens; the vocabulary shrinks from 250,880 to 46,145 entries, i.e. 18.39% of the original size. The pruned Bloom models are listed in the table below:
| Pruned model | Original model | Parameter ratio |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|--------|
| [YeungNLP/bloom-396m-zh](https://huggingface.co/YeungNLP/bloom-396m-zh) | [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) | 70.96% |
| [YeungNLP/bloom-820m-zh](https://huggingface.co/YeungNLP/bloom-820m-zh) | [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) | 77.13% |
| [YeungNLP/bloom-1b4-zh](https://huggingface.co/YeungNLP/bloom-1b4-zh) | [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) | 81.14% |
| [YeungNLP/bloom-2b6-zh](https://huggingface.co/YeungNLP/bloom-2b6-zh) | [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) | 86.48% |
| [YeungNLP/bloom-6b4-zh](https://huggingface.co/YeungNLP/bloom-6b4-zh) | [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) | 90.81% |
| [YeungNLP/bloomz-396m-zh](https://huggingface.co/YeungNLP/bloomz-396m-zh) | [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) | 70.96% |
| [YeungNLP/bloomz-820m-zh](https://huggingface.co/YeungNLP/bloomz-820m-zh) | [bigscience/bloomz-1b1](https://huggingface.co/bigscience/bloomz-1b1) | 77.13% |
| [YeungNLP/bloomz-1b4-zh](https://huggingface.co/YeungNLP/bloomz-1b4-zh) | [bigscience/bloomz-1b7](https://huggingface.co/bigscience/bloomz-1b7) | 81.14% |
| [YeungNLP/bloomz-2b6-zh](https://huggingface.co/YeungNLP/bloomz-2b6-zh) | [bigscience/bloomz-3b](https://huggingface.co/bigscience/bloomz-3b) | 86.48% |
| [YeungNLP/bloomz-6b4-zh](https://huggingface.co/YeungNLP/bloomz-6b4-zh) | [bigscience/bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1) | 90.81% |
| [YeungNLP/bloomz-6b4-mt-zh](https://huggingface.co/YeungNLP/bloomz-6b4-mt-zh) | [bigscience/bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) | 90.81% |
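Most of the savings come from the token-embedding (and tied output) matrix, whose row count equals the vocabulary size. A back-of-the-envelope check of the 18.39% figure; the hidden size below is only an illustrative assumption, not taken from the card:

```python
orig_vocab, pruned_vocab = 250_880, 46_145
print(round(pruned_vocab / orig_vocab * 100, 2))  # 18.39, matching the card

# Embedding rows removed, for an assumed hidden size of 2560:
hidden_size = 2560
params_saved = (orig_vocab - pruned_vocab) * hidden_size
print(params_saved)  # 524121600
```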
Usage:
```python
from transformers import BloomTokenizerFast, BloomForCausalLM
tokenizer = BloomTokenizerFast.from_pretrained('YeungNLP/bloom-1b4-zh')
model = BloomForCausalLM.from_pretrained('YeungNLP/bloom-1b4-zh')
print(tokenizer.batch_decode(model.generate(tokenizer.encode('长风破浪会有时', return_tensors='pt'))))
```
intanm/sa10-clm-20230403-001-3 | 2023-04-03T07:25:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | intanm | null | null | intanm/sa10-clm-20230403-001-3 | 0 | 2 | transformers | 2023-04-03T07:19:47 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sa10-clm-20230403-001-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa10-clm-20230403-001-3
This model is a fine-tuned version of [intanm/clm-20230403-001-3](https://huggingface.co/intanm/clm-20230403-001-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6258
- Accuracy: 0.7692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 11 | 0.7291 | 0.7143 |
| No log | 2.0 | 22 | 0.6258 | 0.7692 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
Mihara-bot/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-03T07:23:02.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Mihara-bot | null | null | Mihara-bot/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-03T07:22:13 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 374.00 +/- 214.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Mihara-bot -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Mihara-bot -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Mihara-bot
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
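The `exploration_fraction` and `exploration_final_eps` entries describe a linear ε-greedy decay: ε falls from 1.0 to 0.01 over the first 10% of the 100k timesteps, then stays flat. An illustrative sketch of that schedule (not SB3's actual implementation):

```python
def epsilon(step, n_timesteps=100_000, fraction=0.1, final_eps=0.01, initial_eps=1.0):
    """Linearly anneal epsilon over the first `fraction` of training."""
    progress = min(step / (fraction * n_timesteps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(epsilon(0))       # 1.0
print(epsilon(5_000))   # ~0.505, halfway through the decay
print(epsilon(50_000))  # ~0.01, decay finished
```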
lmazzon70/videomae-base-finetuned-kinetics-finetuned-rwf2000mp4-epochs8-batch8-kb | 2023-04-03T21:40:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | lmazzon70 | null | null | lmazzon70/videomae-base-finetuned-kinetics-finetuned-rwf2000mp4-epochs8-batch8-kb | 0 | 2 | transformers | 2023-04-03T08:30:57 | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-kinetics-finetuned-rwf2000mp4-epochs8-batch8-kb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-kinetics-finetuned-rwf2000mp4-epochs8-batch8-kb
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8559
- Accuracy: 0.7453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3200
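The total train batch size of 8 above is the per-device batch size multiplied by the gradient-accumulation steps: gradients are accumulated over several micro-batches and the optimizer runs once per group. A minimal sketch of the bookkeeping:

```python
train_batch_size = 2
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 8

# One optimizer update per `gradient_accumulation_steps` micro-batches:
micro_batches = 12
optimizer_updates = micro_batches // gradient_accumulation_steps
print(optimizer_updates)  # 3
```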
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3514 | 0.06 | 200 | 0.2837 | 0.8875 |
| 0.3156 | 1.06 | 400 | 0.6930 | 0.7625 |
| 0.2273 | 2.06 | 600 | 0.5692 | 0.805 |
| 0.2091 | 3.06 | 800 | 0.3872 | 0.8612 |
| 0.1875 | 4.06 | 1000 | 0.3394 | 0.8725 |
| 0.1206 | 5.06 | 1200 | 0.4416 | 0.8562 |
| 0.1302 | 6.06 | 1400 | 1.0851 | 0.7475 |
| 0.3417 | 7.06 | 1600 | 0.5024 | 0.8638 |
| 0.2545 | 8.06 | 1800 | 0.3819 | 0.9 |
| 0.1787 | 9.06 | 2000 | 0.3864 | 0.8962 |
| 0.0761 | 10.06 | 2200 | 0.5604 | 0.8562 |
| 0.076 | 11.06 | 2400 | 0.5780 | 0.8725 |
| 0.1476 | 12.06 | 2600 | 0.5479 | 0.8725 |
| 0.1274 | 13.06 | 2800 | 0.5843 | 0.87 |
| 0.0382 | 14.06 | 3000 | 0.6739 | 0.8525 |
| 0.0143 | 15.06 | 3200 | 0.5568 | 0.8738 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
abusarim/bert-base-banking77-pt2 | 2023-04-03T10:47:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | abusarim | null | null | abusarim/bert-base-banking77-pt2 | 0 | 2 | transformers | 2023-04-03T08:39:15 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.934082588557655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3017
- F1: 0.9341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2195 | 1.0 | 626 | 0.8539 | 0.8461 |
| 0.4195 | 2.0 | 1252 | 0.3744 | 0.9202 |
| 0.1976 | 3.0 | 1878 | 0.3017 | 0.9341 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.0+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
MGanesh29/output | 2023-04-03T10:39:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | MGanesh29 | null | null | MGanesh29/output | 0 | 2 | transformers | 2023-04-03T10:38:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
Overfit-GM/bert-base-turkish-uncased-offensive | 2023-04-04T22:21:57.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Overfit-GM | null | null | Overfit-GM/bert-base-turkish-uncased-offensive | 1 | 2 | transformers | 2023-04-03T11:33:49 | ---
license: apache-2.0
language:
- tr
pipeline_tag: text-classification
widget:
- text: >-
Seni lanet olası, senin derdin ne ha?
example_title: Example Text
---
Overfit-GM/bert-base-turkish-128k-uncased-offensive | 2023-04-04T22:23:26.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Overfit-GM | null | null | Overfit-GM/bert-base-turkish-128k-uncased-offensive | 0 | 2 | transformers | 2023-04-03T11:50:12 | ---
license: apache-2.0
language:
- tr
pipeline_tag: text-classification
widget:
- text: >-
Seni lanet olası, senin derdin ne ha?
example_title: Example Text
---
Overfit-GM/bert-base-turkish-128k-cased-offensive | 2023-04-04T22:23:55.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Overfit-GM | null | null | Overfit-GM/bert-base-turkish-128k-cased-offensive | 0 | 2 | transformers | 2023-04-03T12:06:17 | ---
license: apache-2.0
language:
- tr
pipeline_tag: text-classification
widget:
- text: >-
Seni lanet olası, senin derdin ne ha?
example_title: Example Text
---
Overfit-GM/convbert-base-turkish-mc4-cased-offensive | 2023-04-04T22:24:43.000Z | [
"transformers",
"pytorch",
"convbert",
"text-classification",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Overfit-GM | null | null | Overfit-GM/convbert-base-turkish-mc4-cased-offensive | 0 | 2 | transformers | 2023-04-03T12:37:58 | ---
license: apache-2.0
language:
- tr
pipeline_tag: text-classification
widget:
- text: >-
Seni lanet olası, senin derdin ne ha?
example_title: Example Text
---
Overfit-GM/convbert-base-turkish-mc4-uncased-offensive | 2023-04-04T22:24:54.000Z | [
"transformers",
"pytorch",
"convbert",
"text-classification",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Overfit-GM | null | null | Overfit-GM/convbert-base-turkish-mc4-uncased-offensive | 0 | 2 | transformers | 2023-04-03T12:45:26 | ---
license: apache-2.0
language:
- tr
pipeline_tag: text-classification
widget:
- text: >-
Seni lanet olası, senin derdin ne ha?
example_title: Example Text
---
Overfit-GM/convbert-base-turkish-cased-offensive | 2023-04-04T22:25:17.000Z | [
"transformers",
"pytorch",
"convbert",
"text-classification",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Overfit-GM | null | null | Overfit-GM/convbert-base-turkish-cased-offensive | 0 | 2 | transformers | 2023-04-03T12:55:48 | ---
license: apache-2.0
language:
- tr
pipeline_tag: text-classification
widget:
- text: >-
Seni lanet olası, senin derdin ne ha?
example_title: Example Text
---
jlara6/distilroberta-base-mrpc-glue-jl | 2023-04-03T14:14:01.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jlara6 | null | null | jlara6/distilroberta-base-mrpc-glue-jl | 0 | 2 | transformers | 2023-04-03T14:09:51 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-mrpc-glue-jl
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8480392156862745
- name: F1
type: f1
value: 0.8864468864468864
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue-jl
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5789
- Accuracy: 0.8480
- F1: 0.8864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5452 | 1.09 | 500 | 0.6011 | 0.8382 | 0.8804 |
| 0.3759 | 2.18 | 1000 | 0.5789 | 0.8480 | 0.8864 |
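The Accuracy and F1 columns above are standard binary-classification metrics over the MRPC validation pairs; a minimal sketch of how they are computed (the helper name is ours, not from the Trainer):

```python
def accuracy_and_f1(preds, labels):
    """Accuracy and binary F1 (positive class = 1), as reported for MRPC."""
    tp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 1)
    fp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 0)
    fn = sum(1 for p, l in zip(preds, labels) if p == 0 and l == 1)
    accuracy = sum(1 for p, l in zip(preds, labels) if p == l) / len(labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1
```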
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
memepottaboah/pmc-2005-riffusion | 2023-04-03T14:17:50.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | memepottaboah | null | null | memepottaboah/pmc-2005-riffusion | 0 | 2 | diffusers | 2023-04-03T14:11:48 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### PMC-2005-Riffusion Dreambooth model trained by memepottaboah with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
diegoref/testtest-19 | 2023-04-03T16:22:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | diegoref | null | null | diegoref/testtest-19 | 0 | 2 | transformers | 2023-04-03T16:17:18 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: testtest-19
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8627450980392157
- name: F1
type: f1
value: 0.9047619047619047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testtest-19
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5631
- Accuracy: 0.8627
- F1: 0.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
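The Adam optimizer above (betas 0.9/0.999, epsilon 1e-08) applies the standard bias-corrected moment updates; a single-parameter sketch of one step (an illustration, not from the card):

```python
def adam_step(param, grad, m, v, t, lr=5e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter at step t (t starts at 1)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v
```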
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.5077 | 0.8137 | 0.8707 |
| 0.5519 | 2.0 | 918 | 0.4666 | 0.8431 | 0.8954 |
| 0.3741 | 3.0 | 1377 | 0.5631 | 0.8627 | 0.9048 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qg | 2023-04-03T17:21:01.000Z | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"question generation",
"ja",
"dataset:lmqg/qg_jaquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | vocabtrimmer | null | null | vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qg | 0 | 2 | transformers | 2023-04-03T17:15:11 |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: ja
datasets:
- lmqg/qg_jaquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。"
example_title: "Question Generation Example 1"
- text: "『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる<hl>3万5000部<hl>が刷られた。他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。"
example_title: "Question Generation Example 2"
- text: "フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め<hl>30数点<hl>しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。以下には若干の疑問作も含め、37点の基本情報を記載し、各作品について略説する。収録順序、推定制作年代は『「フェルメールとその時代展」図録』による。日本語の作品タイトルについては、上掲図録のほか、『「フェルメール展」図録』、『フェルメール生涯と作品』による。便宜上「1650年代の作品」「1660年代の作品」「1670年代の作品」の3つの節を設けたが、フェルメールの作品には制作年代不明のものが多く、推定制作年代については研究者や文献によって若干の差がある。"
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_jaquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 29.3
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 50.52
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 29.08
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 80.84
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 58.77
---
# Model Card of `vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qg`
This model is a fine-tuned version of [ckpts/mbart-large-cc25-trimmed-ja](https://huggingface.co/ckpts/mbart-large-cc25-trimmed-ja) for the question generation task on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [ckpts/mbart-large-cc25-trimmed-ja](https://huggingface.co/ckpts/mbart-large-cc25-trimmed-ja)
- **Language:** ja
- **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ja", model="vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qg")
# model prediction
questions = model.generate_q(list_context="フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。", list_answer="30数点")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qg")
output = pipe("ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 80.84 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_1 | 55.09 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_2 | 42.92 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_3 | 35.01 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_4 | 29.3 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| METEOR | 29.08 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| MoverScore | 58.77 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| ROUGE_L | 50.52 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_jaquad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: ckpts/mbart-large-cc25-trimmed-ja
- max_length: 512
- max_length_output: 32
- epoch: 5
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qg/raw/main/trainer_config.json).
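The `label_smoothing: 0.15` setting above mixes the one-hot target with a uniform distribution over the vocabulary; a sketch of the per-token loss under that standard formulation (an illustration, not the exact lmqg implementation):

```python
import math

def label_smoothed_nll(log_probs, target, eps=0.15):
    """Label-smoothed negative log-likelihood for one token.
    log_probs: log-probabilities over the vocabulary; target: gold index."""
    vocab = len(log_probs)
    nll = -log_probs[target]                  # standard cross-entropy term
    smooth = -sum(log_probs) / vocab          # uniform-smoothing term
    return (1 - eps) * nll + eps * smooth
```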
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
pabloyesteb/ppo-LunarLander-v2 | 2023-09-24T18:08:44.000Z | [
"stable-baselines3",
"tensorboard",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | pabloyesteb | null | null | pabloyesteb/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-04-03T18:15:18 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.37 +/- 16.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename follows the usual
# huggingface_sb3 naming convention (assumed, not stated in this card).
checkpoint = load_from_hub(
    repo_id="pabloyesteb/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
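The reported `mean_reward` of 270.37 +/- 16.26 is the mean and standard deviation of total reward over evaluation episodes; a sketch of that summary (matching `evaluate_policy`'s population-std convention, as far as we can tell):

```python
import statistics

def summarize_rewards(episode_rewards):
    """Mean and population standard deviation of per-episode returns."""
    mean = statistics.mean(episode_rewards)
    std = statistics.pstdev(episode_rewards)  # population std, like numpy's default ddof=0
    return mean, std
```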
him1411/EDGAR-Tk-Instruct-Large | 2023-05-12T01:57:54.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:him1411/EDGAR10-Q",
"arxiv:2109.08079",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | him1411 | null | null | him1411/EDGAR-Tk-Instruct-Large | 0 | 2 | transformers | 2023-04-03T18:35:08 | ---
license: apache-2.0
datasets:
- him1411/EDGAR10-Q
language:
- en
metrics:
- rouge
tags:
- finance
- ContextNER
- language models
---
EDGAR-Tk-Instruct-Large
=============
Tk-Instruct Large (T5-based) model finetuned on the [EDGAR10-Q dataset](https://huggingface.co/datasets/him1411/EDGAR10-Q)
You may want to check out
* Our paper: [CONTEXT-NER: Contextual Phrase Generation at Scale](https://arxiv.org/abs/2109.08079/)
* GitHub: [Click Here](https://github.com/him1411/edgar10q-dataset)
Direct Use
=============
It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities. **It should not be directly used for production or work that may directly impact people.**
How to Use
=============
You can load the model directly with Transformers instead of downloading it manually. The [Tk-Instruct-Large model](https://huggingface.co/allenai/tk-instruct-large-def-pos) is the backbone of our model. Here is how to use it in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("him1411/EDGAR-Tk-Instruct-Large")
model = AutoModelForSeq2SeqLM.from_pretrained("him1411/EDGAR-Tk-Instruct-Large")
```
Or just clone the model repo
```
git lfs install
git clone https://huggingface.co/him1411/EDGAR-Tk-Instruct-Large
```
Inference Example
=============
Here we provide an example for the "ContextNER" task, using one instance from the dataset.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("him1411/EDGAR-Tk-Instruct-Large")
model = AutoModelForSeq2SeqLM.from_pretrained("him1411/EDGAR-Tk-Instruct-Large")
# Example input instance from the EDGAR10-Q dataset.
input_text = "14.5 years . The definite lived intangible assets related to the contracts and trade names had estimated weighted average useful lives of 5.9 years and 14.5 years, respectively, at acquisition."
inputs = tokenizer(input_text, return_tensors="pt")
# Ideal output: 'Definite lived intangible assets weighted average remaining useful life'
output_ids = model.generate(**inputs)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
BibTeX Entry and Citation Info
===============
If you are using our model, please cite our paper:
```bibtex
@article{gupta2021context,
title={Context-NER: Contextual Phrase Generation at Scale},
author={Gupta, Himanshu and Verma, Shreyas and Kumar, Tarun and Mishra, Swaroop and Agrawal, Tamanna and Badugu, Amogh and Bhatt, Himanshu Sharad},
journal={arXiv preprint arXiv:2109.08079},
year={2021}
}
```
amannlp/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-03T20:42:54.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | amannlp | null | null | amannlp/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-03T20:42:21 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 462.00 +/- 166.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga amannlp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga amannlp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga amannlp
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
YSKartal/bertweet-base-finetuned-2-ref_disam | 2023-04-05T15:39:31.000Z | [
"transformers",
"tf",
"tensorboard",
"roberta",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | YSKartal | null | null | YSKartal/bertweet-base-finetuned-2-ref_disam | 0 | 2 | transformers | 2023-04-03T22:26:31 | ---
tags:
- generated_from_keras_callback
model-index:
- name: YSKartal/bertweet-base-finetuned-2-ref_disam
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# YSKartal/bertweet-base-finetuned-2-ref_disam
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.0043
- Validation Loss: 6.1571
- Train Accuracy: 0.0468
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16308, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
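The `PolynomialDecay` schedule above (power 1.0, cycle False) is a linear ramp from 2e-05 down to 0 over 16,308 steps; a sketch matching Keras's documented formula:

```python
def polynomial_decay(step, initial_lr=2e-5, decay_steps=16308,
                     end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay with cycle=False (step is clamped)."""
    step = min(step, decay_steps)
    frac = (1 - step / decay_steps) ** power
    return (initial_lr - end_lr) * frac + end_lr
```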
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 6.2549 | 6.7603 | 0.0191 | 0 |
| 5.7374 | 6.4441 | 0.0316 | 1 |
| 5.3233 | 6.2700 | 0.0428 | 2 |
| 5.0043 | 6.1571 | 0.0468 | 3 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
totallynotbrent/brotGPT | 2023-04-13T05:21:15.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"tf",
"safetensors",
"conversational",
"en",
"tl",
"dataset:totallynotbrent/brotai",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | conversational | totallynotbrent | null | null | totallynotbrent/brotGPT | 0 | 2 | transformers | 2023-04-03T22:56:55 | ---
tags:
- tf
- safetensors
- conversational
language:
- en
- tl
license:
- mit
datasets:
- totallynotbrent/brotai
---
# brotGPT
brotGPT (beta) is a language model that uses the GPT-3 architecture, trained on a conversational dataset of over 215,000 examples along with a 9.5 billion general dataset, to generate Brent-like responses to user inputs. The model can be further trained on a wider variety of data sources to learn and generate responses in real time.
## License
This project is licensed under the [MIT License](https://opensource.org/licenses/MIT).