modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
davis901/roberta-frame-CP | 2023-04-04T04:40:41.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:davis901/autotrain-data-imdb-textclassification",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | davis901 | null | null | davis901/roberta-frame-CP | 0 | 2 | transformers | 2023-04-04T03:16:27 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- davis901/autotrain-data-imdb-textclassification
co2_eq_emissions:
emissions: 3.313265712444502
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 46471115134
- CO2 Emissions (in grams): 3.3133
## Validation Metrics
- Loss: 0.006
- Accuracy: 0.999
- Precision: 0.999
- Recall: 1.000
- AUC: 1.000
- F1: 0.999
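As a quick sanity check, the reported F1 agrees with the reported precision and recall (a stdlib-only sketch using the rounded values from the list above):

```python
# F1 is the harmonic mean of precision and recall.
precision = 0.999  # from the validation metrics above
recall = 1.000

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.999, matching the reported F1
```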
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davis901/autotrain-imdb-textclassification-46471115134
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("davis901/autotrain-imdb-textclassification-46471115134", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("davis901/autotrain-imdb-textclassification-46471115134", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,190 | [
[
-0.028411865234375,
-0.0236053466796875,
0.0145721435546875,
0.0016269683837890625,
-0.004741668701171875,
0.003940582275390625,
0.0122833251953125,
-0.003787994384765625,
0.008087158203125,
0.01751708984375,
-0.06390380859375,
-0.03448486328125,
-0.06787109375,... |
Kuun/bert-base-vi | 2023-05-16T10:19:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:vietnamese_students_feedback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Kuun | null | null | Kuun/bert-base-vi | 0 | 2 | transformers | 2023-04-04T07:50:16 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- vietnamese_students_feedback
model-index:
- name: bert-base-vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-vi
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the vietnamese_students_feedback dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,082 | [
[
-0.032257080078125,
-0.057708740234375,
0.0147857666015625,
0.0107574462890625,
-0.03399658203125,
-0.0294189453125,
-0.0227203369140625,
-0.01174163818359375,
0.00843048095703125,
0.034637451171875,
-0.0455322265625,
-0.045013427734375,
-0.04058837890625,
-... |
GhifSmile/distilbert-base-uncased-PINA | 2023-04-04T09:30:03.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | GhifSmile | null | null | GhifSmile/distilbert-base-uncased-PINA | 0 | 2 | transformers | 2023-04-04T08:52:38 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: distilbert-base-uncased-PINA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-PINA
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0745
- Accuracy: 0.7628
- Precision: 0.5795
- Recall: 0.5194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|
| 2.591 | 1.0 | 234 | 2.2068 | 0.4444 | 0.0523 | 0.0477 |
| 1.9869 | 2.0 | 468 | 1.7959 | 0.5876 | 0.2023 | 0.1887 |
| 1.5443 | 3.0 | 702 | 1.5389 | 0.6378 | 0.2921 | 0.2857 |
| 1.2084 | 4.0 | 936 | 1.3623 | 0.6848 | 0.3983 | 0.3562 |
| 0.9397 | 5.0 | 1170 | 1.2348 | 0.7244 | 0.4999 | 0.4112 |
| 0.7445 | 6.0 | 1404 | 1.1657 | 0.7286 | 0.5053 | 0.4481 |
| 0.6204 | 7.0 | 1638 | 1.1167 | 0.7564 | 0.5773 | 0.4918 |
| 0.5183 | 8.0 | 1872 | 1.0872 | 0.7607 | 0.5841 | 0.5078 |
| 0.4468 | 9.0 | 2106 | 1.0782 | 0.7628 | 0.5785 | 0.5172 |
| 0.4188 | 10.0 | 2340 | 1.0745 | 0.7628 | 0.5795 | 0.5194 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
| 2,229 | [
[
-0.035369873046875,
-0.040435791015625,
0.0175933837890625,
0.0120849609375,
-0.01812744140625,
-0.01432037353515625,
-0.0006003379821777344,
-0.0057220458984375,
0.0276641845703125,
0.018798828125,
-0.045074462890625,
-0.05078125,
-0.05364990234375,
-0.0138... |
DataIntelligenceTeam/Bol-4.0-invoicefromclients_LOC_CAD | 2023-04-04T09:56:26.000Z | [
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:sroie",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | DataIntelligenceTeam | null | null | DataIntelligenceTeam/Bol-4.0-invoicefromclients_LOC_CAD | 0 | 2 | transformers | 2023-04-04T09:23:27 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- sroie
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Bol-4.0-invoicefromclients_LOC_CAD
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: sroie
type: sroie
config: discharge
split: test
args: discharge
metrics:
- name: Precision
type: precision
value: 0.524526678141136
- name: Recall
type: recall
value: 0.4697495183044316
- name: F1
type: f1
value: 0.49562919292539137
- name: Accuracy
type: accuracy
value: 0.8690496168260894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bol-4.0-invoicefromclients_LOC_CAD
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7295
- Precision: 0.5245
- Recall: 0.4697
- F1: 0.4956
- Accuracy: 0.8690
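The reported F1 is exactly the harmonic mean of the full-precision precision and recall values in the model-index metadata above, which makes for a useful consistency check (stdlib-only sketch):

```python
# Values copied from the model-index metadata of this card.
precision = 0.524526678141136
recall = 0.4697495183044316

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.4956, matching the reported F1
```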
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.32 | 100 | 1.3578 | 0.3480 | 0.0304 | 0.0560 | 0.7636 |
| No log | 0.63 | 200 | 1.0269 | 0.1777 | 0.0748 | 0.1052 | 0.7962 |
| No log | 0.95 | 300 | 0.8968 | 0.3288 | 0.1869 | 0.2383 | 0.8180 |
| No log | 1.27 | 400 | 0.8574 | 0.3945 | 0.2212 | 0.2835 | 0.8227 |
| 0.8908 | 1.58 | 500 | 0.7533 | 0.3144 | 0.2709 | 0.2910 | 0.8181 |
| 0.8908 | 1.9 | 600 | 0.7001 | 0.3913 | 0.3106 | 0.3463 | 0.8414 |
| 0.8908 | 2.22 | 700 | 0.6915 | 0.4998 | 0.3869 | 0.4361 | 0.8572 |
| 0.8908 | 2.53 | 800 | 0.7375 | 0.4331 | 0.3703 | 0.3993 | 0.8475 |
| 0.8908 | 2.85 | 900 | 0.6590 | 0.4682 | 0.3973 | 0.4299 | 0.8633 |
| 0.353 | 3.16 | 1000 | 0.7389 | 0.5479 | 0.4274 | 0.4802 | 0.8650 |
| 0.353 | 3.48 | 1100 | 0.7387 | 0.5568 | 0.4474 | 0.4962 | 0.8635 |
| 0.353 | 3.8 | 1200 | 0.6881 | 0.5011 | 0.4539 | 0.4763 | 0.8707 |
| 0.353 | 4.11 | 1300 | 0.6881 | 0.5159 | 0.4624 | 0.4877 | 0.8684 |
| 0.353 | 4.43 | 1400 | 0.7308 | 0.5532 | 0.4751 | 0.5112 | 0.8713 |
| 0.1947 | 4.75 | 1500 | 0.7295 | 0.5245 | 0.4697 | 0.4956 | 0.8690 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.2.2
- Tokenizers 0.13.2
| 3,344 | [
[
-0.034912109375,
-0.029815673828125,
0.00933837890625,
0.01267242431640625,
-0.01375579833984375,
-0.010009765625,
0.00962066650390625,
-0.01537322998046875,
0.022125244140625,
0.026458740234375,
-0.0439453125,
-0.055999755859375,
-0.042999267578125,
-0.0183... |
sefaozalpadl/postnashville_antitrans_telegram-46622115298 | 2023-04-04T10:50:33.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain",
"en",
"dataset:sefaozalpadl/autotrain-data-postnashville_antitrans_telegram",
"co2_eq_emissions",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | sefaozalpadl | null | null | sefaozalpadl/postnashville_antitrans_telegram-46622115298 | 0 | 2 | transformers | 2023-04-04T10:49:24 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- sefaozalpadl/autotrain-data-postnashville_antitrans_telegram
co2_eq_emissions:
emissions: 0.4434488215878769
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 46622115298
- CO2 Emissions (in grams): 0.4434
## Validation Metrics
- Loss: 0.569
- Accuracy: 0.818
- Macro F1: 0.707
- Micro F1: 0.818
- Weighted F1: 0.807
- Macro Precision: 0.777
- Micro Precision: 0.818
- Weighted Precision: 0.814
- Macro Recall: 0.674
- Micro Recall: 0.818
- Weighted Recall: 0.818
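Note that Micro F1, Micro Precision, Micro Recall, and Accuracy are all 0.818: for single-label multi-class classification, micro-averaging pools every individual decision, so these four metrics always coincide. A stdlib-only sketch on made-up labels illustrates the identity:

```python
# Tiny hypothetical 3-class example; labels and predictions are made up.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
classes = {0, 1, 2}

# Pool true positives, false positives, and false negatives over all classes.
tp = sum(t == p for t, p in zip(y_true, y_pred))
fp = sum(p == c and t != c for c in classes for t, p in zip(y_true, y_pred))
fn = sum(t == c and p != c for c in classes for t, p in zip(y_true, y_pred))

micro_p = tp / (tp + fp)
micro_r = tp / (tp + fn)
micro_f1 = 2 * micro_p * micro_r / (micro_p + micro_r)
accuracy = tp / len(y_true)
# In the single-label case every false positive for one class is a false
# negative for another, so all four values coincide.
print(micro_f1, accuracy)
```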
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/sefaozalpadl/autotrain-postnashville_antitrans_telegram-46622115298
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("sefaozalpadl/autotrain-postnashville_antitrans_telegram-46622115298", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("sefaozalpadl/autotrain-postnashville_antitrans_telegram-46622115298", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,392 | [
[
-0.03271484375,
-0.03338623046875,
0.0059814453125,
0.013427734375,
-0.00980377197265625,
0.00600433349609375,
0.000492095947265625,
-0.0176544189453125,
0.00792694091796875,
0.00754547119140625,
-0.0499267578125,
-0.034027099609375,
-0.06158447265625,
-0.01... |
gha03703/distilbert-base-uncased-finetuned-emotion | 2023-04-06T01:43:44.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gha03703 | null | null | gha03703/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-04T11:35:55 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.10.1
- Tokenizers 0.11.0
| 1,083 | [
[
-0.043304443359375,
-0.04949951171875,
0.01548004150390625,
0.0250091552734375,
-0.032989501953125,
-0.016998291015625,
-0.01409912109375,
-0.0089111328125,
0.016387939453125,
0.0101165771484375,
-0.058074951171875,
-0.042999267578125,
-0.05780029296875,
-0.... |
ljones/ppo-LunarLander-v2-unit1 | 2023-04-04T16:02:15.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | ljones | null | null | ljones/ppo-LunarLander-v2-unit1 | 0 | 2 | stable-baselines3 | 2023-04-04T13:28:55 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.84 +/- 17.12
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename -- verify it against the files in this repo.
checkpoint = load_from_hub(
    repo_id="ljones/ppo-LunarLander-v2-unit1",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
| 784 | [
[
-0.0002065896987915039,
-0.0271453857421875,
0.01708984375,
0.0233612060546875,
-0.00606536865234375,
0.0027408599853515625,
0.034454345703125,
-0.012115478515625,
0.01983642578125,
0.06500244140625,
-0.043182373046875,
-0.035247802734375,
-0.0343017578125,
... |
harouzie/bart-base-qqp-paws | 2023-04-11T12:04:13.000Z | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:glue",
"dataset:merve/qqp",
"dataset:paws",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | harouzie | null | null | harouzie/bart-base-qqp-paws | 0 | 2 | transformers | 2023-04-04T14:10:12 | ---
license: mit
datasets:
- glue
- merve/qqp
- paws
language:
- en
metrics:
- rouge
library_name: transformers
pipeline_tag: text2text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | 5,311 | [
[
-0.04803466796875,
-0.0455322265625,
0.032012939453125,
0.00844573974609375,
-0.024383544921875,
-0.0248565673828125,
0.00884246826171875,
-0.047119140625,
0.018524169921875,
0.0498046875,
-0.0556640625,
-0.050628662109375,
-0.04437255859375,
-0.007740020751... |
Overfit-GM/bert-base-turkish-128k-uncased-offensive-mlm | 2023-04-04T22:34:36.000Z | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | Overfit-GM | null | null | Overfit-GM/bert-base-turkish-128k-uncased-offensive-mlm | 0 | 2 | transformers | 2023-04-04T14:15:30 | ---
license: apache-2.0
language:
- tr
pipeline_tag: fill-mask
widget:
- text: Sen ne [MASK] çocuğu birisin.
example_title: Example Text
--- | 146 | [
[
-0.03265380859375,
-0.03485107421875,
0.0616455078125,
0.0416259765625,
-0.01143646240234375,
-0.01044464111328125,
0.041961669921875,
-0.038177490234375,
0.06103515625,
0.062469482421875,
-0.03515625,
-0.040435791015625,
-0.032562255859375,
0.00119209289550... |
Overfit-GM/convbert-base-turkish-cased-offensive-mlm | 2023-04-04T22:36:09.000Z | [
"transformers",
"pytorch",
"convbert",
"fill-mask",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | Overfit-GM | null | null | Overfit-GM/convbert-base-turkish-cased-offensive-mlm | 0 | 2 | transformers | 2023-04-04T14:39:40 | ---
license: apache-2.0
language:
- tr
pipeline_tag: fill-mask
widget:
- text: Sen ne [MASK] çocuğu birisin.
example_title: Example Text
--- | 146 | [
[
-0.032684326171875,
-0.034881591796875,
0.061614990234375,
0.041595458984375,
-0.01141357421875,
-0.01045989990234375,
0.04193115234375,
-0.038116455078125,
0.061004638671875,
0.06243896484375,
-0.03509521484375,
-0.0404052734375,
-0.032562255859375,
0.00118... |
alkiskoudounas/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-04T15:05:54.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | alkiskoudounas | null | null | alkiskoudounas/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-04T15:05:11 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 647.50 +/- 295.43
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alkiskoudounas -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alkiskoudounas -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alkiskoudounas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,709 | [
[
-0.04193115234375,
-0.0361328125,
0.021392822265625,
0.0243988037109375,
-0.01012420654296875,
-0.0172119140625,
0.01287078857421875,
-0.01465606689453125,
0.01326751708984375,
0.024322509765625,
-0.07098388671875,
-0.034759521484375,
-0.027069091796875,
-0.... |
Overfit-GM/electra-base-turkish-mc4-uncased-discriminator-offensive-mlm | 2023-04-04T22:38:15.000Z | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | Overfit-GM | null | null | Overfit-GM/electra-base-turkish-mc4-uncased-discriminator-offensive-mlm | 0 | 2 | transformers | 2023-04-04T15:19:15 | ---
license: apache-2.0
language:
- tr
pipeline_tag: fill-mask
widget:
- text: Sen ne [MASK] çocuğu birisin.
example_title: Example Text
--- | 146 | [
[
-0.03265380859375,
-0.03485107421875,
0.0616455078125,
0.0416259765625,
-0.01143646240234375,
-0.01044464111328125,
0.041961669921875,
-0.038177490234375,
0.06103515625,
0.062469482421875,
-0.03515625,
-0.040435791015625,
-0.032562255859375,
0.00119209289550... |
Payoto/bert-base-uncased-sst2 | 2023-04-12T16:39:45.000Z | [
"transformers",
"pytorch",
"optimum_graphcore",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Payoto | null | null | Payoto/bert-base-uncased-sst2 | 0 | 2 | transformers | 2023-04-04T16:42:27 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2139
- Accuracy: 0.9282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 2048
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cpu
- Datasets 2.11.0
- Tokenizers 0.12.1
| 1,337 | [
[
-0.021942138671875,
-0.04241943359375,
0.010833740234375,
0.0156402587890625,
-0.045135498046875,
-0.0195465087890625,
-0.02581787109375,
-0.014739990234375,
0.0076141357421875,
0.0226898193359375,
-0.048553466796875,
-0.0307464599609375,
-0.05316162109375,
... |
HasinMDG/XLM_Roberta_Large_IPTC_baseline | 2023-04-04T19:45:59.000Z | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | HasinMDG | null | null | HasinMDG/XLM_Roberta_Large_IPTC_baseline | 0 | 2 | sentence-transformers | 2023-04-04T19:45:13 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/XLM_Roberta_Large_IPTC_baseline
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/XLM_Roberta_Large_IPTC_baseline")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,569 | [
[
-0.0099029541015625,
-0.06292724609375,
0.03216552734375,
-0.0126953125,
-0.01000213623046875,
-0.0148773193359375,
-0.0223388671875,
-0.00399017333984375,
-0.0074615478515625,
0.04486083984375,
-0.03814697265625,
-0.0307464599609375,
-0.05035400390625,
0.01... |
AshtonIsNotHere/albert-large-v2-spoken-squad | 2023-04-06T12:44:25.000Z | [
"transformers",
"pytorch",
"albert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | AshtonIsNotHere | null | null | AshtonIsNotHere/albert-large-v2-spoken-squad | 0 | 2 | transformers | 2023-04-04T19:54:38 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: albert-large-v2-spoken-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2-spoken-squad
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the [Spoken Squad](https://github.com/chiahsuan156/Spoken-SQuAD) dataset.
It achieves the following results on the evaluation set:
- Exact Match: 66.7026
- F1: 79.3491
- Loss: 1.0481
## Model description
Results on Spoken SQuAD Test Sets
| Test Set | Test Loss | Samples | Exact Match | F1 |
|:-------------:|:---------:|:-------:|:-----------:|:-------:|
| Test | 1.183 | 5351 | 71.2951 | 80.4348 |
| Test WER44 | 6.2158 | 5351 | 45.9727 | 60.8491 |
| Test WER54 | 6.2158 | 5351 | 45.9727 | 60.8491 |
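The Exact Match and F1 numbers above follow the standard SQuAD-style scoring. A minimal sketch of those two metrics (assuming lowercasing and whitespace tokenization only; the official evaluation script additionally strips punctuation and articles):

```python
# Sketch of SQuAD-style Exact Match and token-level F1, simplified:
# lowercasing + whitespace tokenization, no punctuation/article stripping.
def exact_match(prediction: str, reference: str) -> float:
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Count overlapping tokens as a multiset intersection.
    common = 0
    ref_remaining = list(ref_tokens)
    for tok in pred_tokens:
        if tok in ref_remaining:
            ref_remaining.remove(tok)
            common += 1
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the eiffel tower", "The Eiffel Tower"))          # 1.0
print(round(token_f1("in the eiffel tower", "the eiffel tower"), 3))  # 0.857
```

Corpus-level scores are then the average of these per-example values over the test set.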
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Exact Match | F1 | Validation Loss |
|:-------------:|:-----:|:----:|:-----------:|:-------:|:---------------:|
| 1.0444 | 1.0 | 2088 | 63.6584 | 77.0975 | 1.0645 |
| 0.8017 | 2.0 | 4176 | 66.3524 | 79.3253 | 0.9756 |
| 0.5426 | 3.0 | 6264 | 66.7026 | 79.3491 | 1.0481 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.11.0
| 1,965 | [
[
-0.036865234375,
-0.038299560546875,
0.014068603515625,
0.016265869140625,
-0.0029449462890625,
-0.019287109375,
-0.015625,
-0.0304718017578125,
0.011016845703125,
0.0287322998046875,
-0.05242919921875,
-0.0506591796875,
-0.04571533203125,
-0.002693176269531... |
carolinainmymind/SpaceInvaders | 2023-04-04T20:05:22.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | carolinainmymind | null | null | carolinainmymind/SpaceInvaders | 0 | 2 | stable-baselines3 | 2023-04-04T20:04:42 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 693.50 +/- 232.49
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga carolinainmymind -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga carolinainmymind -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga carolinainmymind
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
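The `exploration_fraction` and `exploration_final_eps` entries above define a linear ε-greedy schedule: ε anneals from 1.0 down to 0.01 over the first 10% of the 1M timesteps, then stays constant. A sketch of that schedule as I understand SB3's behaviour (an illustration, not SB3's actual code):

```python
def epsilon(step: int,
            n_timesteps: int = 1_000_000,
            exploration_fraction: float = 0.1,
            exploration_final_eps: float = 0.01,
            exploration_initial_eps: float = 1.0) -> float:
    """Linearly anneal epsilon over the first `exploration_fraction`
    of training, then hold it at the final value."""
    end_step = exploration_fraction * n_timesteps
    if step >= end_step:
        return exploration_final_eps
    frac = step / end_step
    return exploration_initial_eps + frac * (exploration_final_eps
                                             - exploration_initial_eps)

print(epsilon(0))        # 1.0
print(epsilon(50_000))   # halfway through annealing -> 0.505
print(epsilon(200_000))  # after annealing -> 0.01
```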
| 2,715 | [
[
-0.041656494140625,
-0.036590576171875,
0.0215911865234375,
0.0244140625,
-0.00897979736328125,
-0.017303466796875,
0.0124664306640625,
-0.014495849609375,
0.013214111328125,
0.0245513916015625,
-0.071044921875,
-0.03546142578125,
-0.027252197265625,
-0.0045... |
EugenioRoma/distilroberta-base-mrpc-glue | 2023-04-04T23:21:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | EugenioRoma | null | null | EugenioRoma/distilroberta-base-mrpc-glue | 0 | 2 | transformers | 2023-04-04T20:54:01 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["SpaceX, the private space exploration company founded by Elon Musk, successfully launched the Crew-2 mission to the International Space Station (ISS) on Friday, April 23rd.",
"On Friday, April 23rd, the Crew-2 mission to the International Space Station (ISS) was successfully launched by SpaceX, the private space exploration company co-founded by Elon Musk."]
example_title: Equivalent
- text: ["India reported a record high of 103,558 new COVID-19 cases in a single day on Monday, April 5th. The surge in cases has been attributed to large gatherings and relaxed attitudes towards social distancing and masks.",
"SpaceX, the private space exploration company founded by Elon Musk, successfully launched the Crew-2 mission to the International Space Station (ISS) on Friday, April 23rd."]
example_title: Not Equivalent
model-index:
- name: distilroberta-base-mrpc-glue
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8308823529411765
- name: F1
type: f1
value: 0.8743169398907102
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4531
- Accuracy: 0.8309
- F1: 0.8743
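The accuracy and F1 reported above are the standard binary-classification metrics; a minimal sketch on hypothetical toy predictions (not MRPC data):

```python
def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def binary_f1(preds, labels, positive=1):
    # Harmonic mean of precision and recall for the positive class.
    tp = sum(p == positive and l == positive for p, l in zip(preds, labels))
    fp = sum(p == positive and l != positive for p, l in zip(preds, labels))
    fn = sum(p != positive and l == positive for p, l in zip(preds, labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

preds  = [1, 1, 0, 1, 0, 1]   # toy predictions
labels = [1, 0, 0, 1, 1, 1]   # toy gold labels
print(round(accuracy(preds, labels), 3))   # 0.667
print(binary_f1(preds, labels))            # 0.75
```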
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5148 | 1.09 | 500 | 0.4531 | 0.8309 | 0.8743 |
| 0.361 | 2.18 | 1000 | 0.6381 | 0.8162 | 0.8634 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
| 2,696 | [
[
-0.0298919677734375,
-0.045806884765625,
0.00716400146484375,
0.01806640625,
-0.0278778076171875,
-0.02105712890625,
-0.0058135986328125,
-0.00740814208984375,
0.010711669921875,
0.010040283203125,
-0.047882080078125,
-0.036285400390625,
-0.058074951171875,
... |
pmfsl/bertimbau-base-finetuned-stsb | 2023-04-04T21:45:25.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | pmfsl | null | null | pmfsl/bertimbau-base-finetuned-stsb | 0 | 2 | transformers | 2023-04-04T21:37:29 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: pmfsl/bertimbau-base-finetuned-stsb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pmfsl/bertimbau-base-finetuned-stsb
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0553
- Validation Loss: 0.1474
- Train Pearsonr: 0.9486
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 4e-05, 'decay_steps': 2030, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Pearsonr | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5258 | 0.2748 | 0.8880 | 0 |
| 0.1468 | 0.1877 | 0.9214 | 1 |
| 0.0985 | 0.1370 | 0.9419 | 2 |
| 0.0704 | 0.1465 | 0.9456 | 3 |
| 0.0553 | 0.1474 | 0.9486 | 4 |
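The Train Pearsonr column above is the Pearson correlation between predicted and gold similarity scores; a minimal sketch on toy numbers (not the actual STS data):

```python
import math

def pearsonr(x, y):
    # Pearson r: covariance divided by the product of standard deviations.
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

gold  = [0.0, 1.0, 2.0, 3.0, 4.0]   # toy gold similarity scores
preds = [0.1, 0.9, 2.2, 2.8, 4.1]   # toy model predictions
print(round(pearsonr(gold, preds), 4))  # close to 1.0
```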
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,986 | [
[
-0.046966552734375,
-0.03955078125,
0.01462554931640625,
0.01538848876953125,
-0.033660888671875,
-0.025665283203125,
-0.01922607421875,
-0.013336181640625,
0.01490020751953125,
0.01214599609375,
-0.057037353515625,
-0.051971435546875,
-0.051971435546875,
-0... |
pmfsl/mbert-base-finetuned-pt_br-stsb | 2023-04-04T22:06:18.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | pmfsl | null | null | pmfsl/mbert-base-finetuned-pt_br-stsb | 0 | 2 | transformers | 2023-04-04T21:56:40 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: pmfsl/mbert-base-finetuned-pt_br-stsb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pmfsl/mbert-base-finetuned-pt_br-stsb
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1085
- Validation Loss: 0.2331
- Train Pearsonr: 0.8853
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 4e-05, 'decay_steps': 2030, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
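The `PolynomialDecay` schedule above (with `power: 1.0`, i.e. a linear decay) fixes the learning rate at each step; a minimal sketch of the decay rule as the Keras schedule is commonly described (an illustration, not the actual implementation):

```python
def polynomial_decay(step, initial_lr=4e-5, decay_steps=2030,
                     end_lr=0.0, power=1.0):
    # With cycle=False the step is clipped to decay_steps, so the
    # learning rate stays at end_lr once decay is complete.
    step = min(step, decay_steps)
    frac = 1 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))       # 4e-05 (initial learning rate)
print(polynomial_decay(1015))    # halfway -> 2e-05
print(polynomial_decay(5000))    # past decay_steps -> 0.0
```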
### Training results
| Train Loss | Validation Loss | Train Pearsonr | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.8113 | 0.4476 | 0.7836 | 0 |
| 0.2637 | 0.2973 | 0.8437 | 1 |
| 0.1819 | 0.2807 | 0.8646 | 2 |
| 0.1334 | 0.2370 | 0.8835 | 3 |
| 0.1085 | 0.2331 | 0.8853 | 4 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,979 | [
[
-0.0478515625,
-0.042236328125,
0.0168304443359375,
0.01113128662109375,
-0.03216552734375,
-0.01568603515625,
-0.01812744140625,
-0.01207733154296875,
0.00787353515625,
0.0081939697265625,
-0.04791259765625,
-0.046661376953125,
-0.05145263671875,
-0.0209655... |
rashmikamath01/distillbert-fine-tuned-claimbuster3C | 2023-04-04T23:41:36.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | rashmikamath01 | null | null | rashmikamath01/distillbert-fine-tuned-claimbuster3C | 0 | 2 | transformers | 2023-04-04T23:12:32 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distillbert-fine-tuned-claimbuster3C
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-fine-tuned-claimbuster3C
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4152
- Accuracy: 0.8749
- F1: 0.8748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
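With `lr_scheduler_type: linear` and 500 warmup steps, the learning rate ramps up linearly for 500 steps and then decays linearly to zero by the final step; a minimal sketch mirroring the usual `transformers` linear-with-warmup behaviour (using the 3,531 total steps from the training results as the horizon; an illustration, not the library's code):

```python
def linear_warmup_lr(step, base_lr=2e-5, warmup_steps=500, total_steps=3531):
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr.
        return base_lr * step / warmup_steps
    # Linear decay from base_lr at the end of warmup down to 0.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(0))      # 0.0
print(linear_warmup_lr(250))    # halfway through warmup -> 1e-05
print(linear_warmup_lr(500))    # warmup complete -> 2e-05
print(linear_warmup_lr(3531))   # end of training -> 0.0
```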
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3364 | 1.0 | 1177 | 0.3138 | 0.8659 | 0.8634 |
| 0.2366 | 2.0 | 2354 | 0.3200 | 0.8766 | 0.8764 |
| 0.1561 | 3.0 | 3531 | 0.4152 | 0.8749 | 0.8748 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,598 | [
[
-0.034149169921875,
-0.039947509765625,
0.0164031982421875,
0.01800537109375,
-0.021087646484375,
-0.0165252685546875,
-0.01087188720703125,
-0.01058197021484375,
-0.00009101629257202148,
0.0169525146484375,
-0.044830322265625,
-0.04290771484375,
-0.060821533203... |
rpanchad/tacl-bert-base-uncased-finetuned-cola | 2023-04-05T21:36:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | rpanchad | null | null | rpanchad/tacl-bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-05T04:00:35 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: tacl-bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.4911698847621163
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tacl-bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [cambridgeltl/tacl-bert-base-uncased](https://huggingface.co/cambridgeltl/tacl-bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6133
- Matthews Correlation: 0.4912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5361 | 1.0 | 713 | 0.5363 | 0.4515 |
| 0.3601 | 2.0 | 1426 | 0.6133 | 0.4912 |
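Matthews correlation, the metric reported above, uses all four confusion-matrix cells and is robust to class imbalance; a minimal sketch on toy predictions (not CoLA data):

```python
import math

def matthews_corrcoef(preds, labels):
    tp = sum(p == 1 and l == 1 for p, l in zip(preds, labels))
    tn = sum(p == 0 and l == 0 for p, l in zip(preds, labels))
    fp = sum(p == 1 and l == 0 for p, l in zip(preds, labels))
    fn = sum(p == 0 and l == 1 for p, l in zip(preds, labels))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Degenerate case (a whole row/column of the matrix empty) -> 0.
    return (tp * tn - fp * fn) / denom if denom else 0.0

preds  = [1, 0, 1, 1, 0, 0]   # toy predictions
labels = [1, 0, 0, 1, 0, 1]   # toy gold labels
print(round(matthews_corrcoef(preds, labels), 3))  # 0.333
```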
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,822 | [
[
-0.025146484375,
-0.05072021484375,
0.0134429931640625,
0.01806640625,
-0.025665283203125,
-0.0153961181640625,
-0.0180206298828125,
-0.0188140869140625,
0.02154541015625,
0.0136566162109375,
-0.045196533203125,
-0.03656005859375,
-0.047607421875,
-0.0181274... |
rymaju/gomoku-bert | 2023-04-05T07:53:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:generator",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | rymaju | null | null | rymaju/gomoku-bert | 0 | 2 | transformers | 2023-04-05T05:10:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gomoku-bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gomoku-bert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,056 | [
[
-0.027679443359375,
-0.05511474609375,
0.00713348388671875,
0.00917816162109375,
-0.045867919921875,
-0.02313232421875,
-0.0142669677734375,
-0.0194244384765625,
0.01340484619140625,
0.01556396484375,
-0.055694580078125,
-0.0235748291015625,
-0.05072021484375,
... |
helling100/Regression_bert_10 | 2023-04-05T06:58:44.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | helling100 | null | null | helling100/Regression_bert_10 | 0 | 2 | transformers | 2023-04-05T06:58:30 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Regression_bert_10
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Regression_bert_10
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0535
- Train Mae: 0.2673
- Train Mse: 0.1031
- Train R2-score: 0.6896
- Validation Loss: 0.1142
- Validation Mae: 0.3549
- Validation Mse: 0.1957
- Validation R2-score: 0.9230
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Mae | Train Mse | Train R2-score | Validation Loss | Validation Mae | Validation Mse | Validation R2-score | Epoch |
|:----------:|:---------:|:---------:|:--------------:|:---------------:|:--------------:|:--------------:|:-------------------:|:-----:|
| 0.2988 | 0.4759 | 0.3361 | 0.6079 | 0.1967 | 0.3939 | 0.2542 | 0.9026 | 0 |
| 0.1715 | 0.4010 | 0.2357 | 0.6812 | 0.1680 | 0.4014 | 0.2478 | 0.9049 | 1 |
| 0.0903 | 0.3374 | 0.1532 | 0.8384 | 0.1354 | 0.3432 | 0.1971 | 0.9210 | 2 |
| 0.0636 | 0.3139 | 0.1272 | 0.4117 | 0.1538 | 0.4066 | 0.2304 | 0.9034 | 3 |
| 0.0746 | 0.3142 | 0.1294 | 0.9220 | 0.1184 | 0.3589 | 0.2015 | 0.9224 | 4 |
| 0.0604 | 0.2837 | 0.1119 | 0.9439 | 0.1268 | 0.3450 | 0.1994 | 0.9209 | 5 |
| 0.0556 | 0.2660 | 0.1049 | 0.6002 | 0.1193 | 0.3037 | 0.1704 | 0.9265 | 6 |
| 0.0541 | 0.2581 | 0.1007 | 0.8081 | 0.1125 | 0.3350 | 0.1743 | 0.9229 | 7 |
| 0.0532 | 0.2679 | 0.1044 | 0.8917 | 0.1109 | 0.3131 | 0.1757 | 0.9311 | 8 |
| 0.0535 | 0.2673 | 0.1031 | 0.6896 | 0.1142 | 0.3549 | 0.1957 | 0.9230 | 9 |
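The MAE, MSE and R2-score columns above are the usual regression metrics; a minimal sketch on toy values (not the actual evaluation data):

```python
def mae(preds, targets):
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(targets)

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def r2_score(preds, targets):
    # 1 - (residual sum of squares / total sum of squares).
    mean_t = sum(targets) / len(targets)
    ss_res = sum((t - p) ** 2 for p, t in zip(preds, targets))
    ss_tot = sum((t - mean_t) ** 2 for t in targets)
    return 1 - ss_res / ss_tot

targets = [1.0, 2.0, 3.0, 4.0]   # toy gold values
preds   = [1.1, 1.9, 3.2, 3.8]   # toy predictions
print(round(mae(preds, targets), 4))       # 0.15
print(round(mse(preds, targets), 4))       # 0.025
print(round(r2_score(preds, targets), 4))  # 0.98
```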
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
| 3,138 | [
[
-0.048797607421875,
-0.04498291015625,
0.0241546630859375,
0.0080718994140625,
-0.0198822021484375,
-0.0181121826171875,
-0.003108978271484375,
-0.0168304443359375,
0.02825927734375,
0.0157623291015625,
-0.050750732421875,
-0.050079345703125,
-0.058013916015625,... |
Phoshco/cds-f1 | 2023-04-05T08:29:59.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Phoshco | null | null | Phoshco/cds-f1 | 0 | 2 | transformers | 2023-04-05T07:12:10 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: cds-f1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cds-f1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9905
- F1: 0.8323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8946 | 1.0 | 875 | 0.6121 | 0.809 |
| 0.4589 | 2.0 | 1750 | 0.5888 | 0.8245 |
| 0.2454 | 3.0 | 2625 | 0.6790 | 0.8267 |
| 0.1152 | 4.0 | 3500 | 0.8725 | 0.826 |
| 0.0484 | 5.0 | 4375 | 0.9905 | 0.8323 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,516 | [
[
-0.034912109375,
-0.052215576171875,
0.0183258056640625,
0.0167236328125,
-0.0269317626953125,
-0.0298614501953125,
-0.0170745849609375,
-0.00720977783203125,
0.00957489013671875,
0.026275634765625,
-0.06707763671875,
-0.054168701171875,
-0.04583740234375,
-... |
hackathon-somos-nlp-2023/DiagTrast-xlm-roberta-base | 2023-04-08T10:10:57.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"es",
"dataset:hackathon-somos-nlp-2023/DiagTrast",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-classification | hackathon-somos-nlp-2023 | null | null | hackathon-somos-nlp-2023/DiagTrast-xlm-roberta-base | 2 | 2 | transformers | 2023-04-05T08:03:45 | ---
datasets:
- hackathon-somos-nlp-2023/DiagTrast
language:
- es
metrics:
- accuracy
---
# Model Card for "DiagTrast-xlm-roberta-base"
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base), a multilingual version of RoBERTa pre-trained on 2.5TB of filtered CommonCrawl data covering 100 languages.
DiagTrast-xlm-roberta-base was trained on the [hackathon-somos-nlp-2023/DiagTrast](https://huggingface.co/datasets/hackathon-somos-nlp-2023/DiagTrast) dataset to classify statements into each of the 5 selected mental disorders of the DSM-5. While this task is classically approached with neural network-based models, the goal of using a transformer is that, rather than basing the classification criteria on keyword search, the model can draw on the bidirectional language understanding of xlm-roberta-base.
## Uses
The model can be used to classify statements written by professionals who have detected unusual behaviors or characteristics in their patients that may indicate a mental disorder; at the moment it only supports five of the disorders described in the DSM-5. Note that the model aims to identify the predominant disorder, so when multiple disorders are suspected at the same time it is up to the professional to group the symptoms before entering them into the model.
### Direct Use
DiagTrast-xlm-roberta-base is already fine-tuned, so it can be used directly to classify statements.
### Out-of-Scope Use
This model should not be used as a replacement for a mental health professional: each situation must always be evaluated responsibly, with full human judgment. The model is designed as an auxiliary tool to help health professionals use the DSM-5.
## Bias, Risks, and Limitations
The main limitation of the model is that it is restricted to the identification of only 5 of the DSM-5 disorders.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
[More Information Needed]
## How to Get Started with the Model
Use the code below to get started with the model.
```python
>>> from transformers import pipeline
>>> classifier = pipeline("text-classification", model='hackathon-somos-nlp-2023/DiagTrast-xlm-roberta-base')
>>> text = ["Gasta más dinero de lo que tiene, a menudo, su falta de control hace que esté en deudas",
"Le gusta estar solo y le molesta la gente a su alrededor, solo piensa en él",
"Tiene pocas habilidades sociales, ignora normas de convivencia",
"Siempre que está en falta, culpa a los demás de sus problemas" ]
>>> classifier.predict(text)
[{'label': 'Trastornos de la personalidad antisocial',
'score': 0.7664140462875366},
{'label': 'Trastornos de la personalidad esquizotípica',
'score': 0.9502732157707214},
{'label': 'Trastornos de la personalidad antisocial',
'score': 0.9722056984901428},
{'label': 'Trastornos de la personalidad antisocial',
'score': 0.49087557196617126}]
```
## Training Details
### Training Data
We use the [hackathon-somos-nlp-2023/DiagTrast](https://huggingface.co/datasets/hackathon-somos-nlp-2023/DiagTrast) dataset, split into 90% of records for the training set and 10% for the test set using the Hugging Face `datasets` library.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
[More Information Needed]
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Team members
- [Alberto Martín Garrido](https://huggingface.co/Stremie)
- [Edgar Mencia]()
- [Miguel Ángel Solís Orozco](https://huggingface.co/homosapienssapiens)
- [Jose Carlos Vílchez Villegas](https://huggingface.co/JCarlos) | 6,200 | [
[
-0.04388427734375,
-0.049560546875,
0.031036376953125,
0.01486968994140625,
-0.004322052001953125,
-0.01416015625,
-0.0113525390625,
-0.039825439453125,
0.020355224609375,
0.036651611328125,
-0.06231689453125,
-0.04833984375,
-0.060882568359375,
0.0153808593... |
headlesstech/semantic_xlmr | 2023-06-15T11:56:26.000Z | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"dpr",
"endpoints_compatible",
"region:us"
] | sentence-similarity | headlesstech | null | null | headlesstech/semantic_xlmr | 0 | 2 | sentence-transformers | 2023-04-05T08:17:46 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- dpr
widget:
- source_sentence: "আমি বাংলায় গান গাই"
sentences:
- "I sing in Bangla"
- "I sing in Bengali"
- "I sing in English"
- "আমি গান গাই না "
example_title: "Singing"
---
# `semantic_xlmr`
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like **clustering** or **semantic search**.
## Model Details
- Model name: semantic_xlmr
- Model version: 1.0
- Architecture: Sentence Transformer
- Language: Multilingual (fine-tuned for Bengali)
## Training
The model was fine-tuned using **Multilingual Knowledge Distillation** method. We took `paraphrase-distilroberta-base-v2` as the teacher model and `xlm-roberta-large` as the student model.

## Intended Use
- **Primary Use Case:** Semantic similarity, clustering, and semantic search
- **Potential Use Cases:** Document retrieval, information retrieval, recommendation systems, chatbot systems, FAQ systems
## Usage
### Using Sentence-Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["I sing in bengali", "আমি বাংলায় গান গাই"]
model = SentenceTransformer('headlesstech/semantic_xlmr')
embeddings = model.encode(sentences)
print(embeddings)
```
### Using HuggingFace Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["I sing in bengali", "আমি বাংলায় গান গাই"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('headlesstech/semantic_xlmr')
model = AutoModel.from_pretrained('headlesstech/semantic_xlmr')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
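Once you have sentence embeddings, semantic similarity is typically scored with cosine similarity (sentence-transformers ships `util.cos_sim` for this); as a dependency-free illustration of the math, here is a minimal pure-Python sketch — the toy vectors stand in for real 768-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors:
    # dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real 768-dimensional sentence embeddings.
emb_bn = [0.1, 0.3, 0.5]
emb_en = [0.1, 0.29, 0.52]
print(cosine_similarity(emb_bn, emb_en))
```

Scores close to 1.0 indicate near-paraphrases, which is what this model is trained to produce for Bengali/English sentence pairs with the same meaning.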
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
| 3,544 | [
[
-0.01537322998046875,
-0.043975830078125,
0.01018524169921875,
0.02001953125,
-0.02716064453125,
-0.01434326171875,
-0.009124755859375,
0.006923675537109375,
0.00881195068359375,
0.04119873046875,
-0.036590576171875,
-0.03948974609375,
-0.06103515625,
0.0089... |
harvinder676/distilbert-base-uncased-finetuned-emotion | 2023-04-05T09:48:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | harvinder676 | null | null | harvinder676/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-05T09:15:42 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2141
- Accuracy: 0.9265
- F1: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8235 | 1.0 | 250 | 0.3075 | 0.9155 | 0.9138 |
| 0.2391 | 2.0 | 500 | 0.2141 | 0.9265 | 0.9264 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,498 | [
[
-0.037750244140625,
-0.04278564453125,
0.0185394287109375,
0.02593994140625,
-0.0282440185546875,
-0.0203857421875,
-0.0135650634765625,
-0.006946563720703125,
0.0091094970703125,
0.00789642333984375,
-0.05633544921875,
-0.050048828125,
-0.061553955078125,
-... |
Phoshco/cdsb-f1 | 2023-04-05T12:40:50.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Phoshco | null | null | Phoshco/cdsb-f1 | 0 | 2 | transformers | 2023-04-05T09:20:26 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: cdsb-f1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cdsb-f1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0640
- F1: 0.811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0072 | 1.0 | 875 | 0.6216 | 0.7935 |
| 0.4733 | 2.0 | 1750 | 0.6216 | 0.7995 |
| 0.2533 | 3.0 | 2625 | 0.7790 | 0.8108 |
| 0.1096 | 4.0 | 3500 | 0.9936 | 0.8123 |
| 0.0395 | 5.0 | 4375 | 1.0640 | 0.811 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,517 | [
[
-0.03692626953125,
-0.049713134765625,
0.01556396484375,
0.0180206298828125,
-0.0271759033203125,
-0.031463623046875,
-0.0128326416015625,
-0.0091552734375,
0.01039886474609375,
0.0284423828125,
-0.0655517578125,
-0.05340576171875,
-0.043701171875,
-0.021530... |
Tengisbold/xlm-roberta-base-finetuned | 2023-04-05T09:56:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Tengisbold | null | null | Tengisbold/xlm-roberta-base-finetuned | 0 | 2 | transformers | 2023-04-05T09:48:40 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7914
- Accuracy: 0.1667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8324 | 1.0 | 10 | 1.7914 | 0.1667 |
| 1.7471 | 2.0 | 20 | 1.7462 | 0.1667 |
| 1.4988 | 3.0 | 30 | 1.5929 | 0.1667 |
| 1.5468 | 4.0 | 40 | 1.4534 | 0.1583 |
| 1.2911 | 5.0 | 50 | 1.4256 | 0.1583 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,583 | [
[
-0.033447265625,
-0.04736328125,
0.0211944580078125,
0.0003349781036376953,
-0.0200958251953125,
-0.0287017822265625,
-0.01424407958984375,
-0.01413726806640625,
0.006069183349609375,
0.03887939453125,
-0.0572509765625,
-0.049285888671875,
-0.056060791015625,
... |
carnival13/dist_ret_hpqa | 2023-04-09T12:01:08.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | carnival13 | null | null | carnival13/dist_ret_hpqa | 0 | 2 | transformers | 2023-04-05T09:49:09 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dist_ret_hpqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dist_ret_hpqa
This model is a fine-tuned version of [nlpproject2023/small-bert](https://huggingface.co/nlpproject2023/small-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0951
- Accuracy: 0.9760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1464 | 0.99 | 3500 | 0.0951 | 0.9760 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,353 | [
[
-0.032318115234375,
-0.042724609375,
0.0129852294921875,
0.009185791015625,
-0.025634765625,
-0.0465087890625,
-0.0189971923828125,
-0.026123046875,
0.016693115234375,
0.02294921875,
-0.051055908203125,
-0.0347900390625,
-0.03924560546875,
-0.011581420898437... |
eswat/a2c-AntBulletEnv-v0 | 2023-04-05T10:03:49.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | eswat | null | null | eswat/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-04-05T10:02:32 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1362.87 +/- 94.27
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 790 | [
[
-0.02679443359375,
-0.04443359375,
0.0106964111328125,
0.0208892822265625,
-0.0034961700439453125,
0.0018033981323242188,
0.0187530517578125,
-0.0176544189453125,
0.0193939208984375,
0.0265655517578125,
-0.052642822265625,
-0.037506103515625,
-0.04425048828125,
... |
VijaiKM/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-05T10:45:01.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | VijaiKM | null | null | VijaiKM/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-05T10:36:44 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 14.50 +/- 12.34
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VijaiKM -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VijaiKM -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga VijaiKM
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
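The `exploration_fraction` / `exploration_final_eps` pair above define a linear epsilon-greedy schedule: epsilon decays from 1.0 to 0.01 over the first 10% of the 10M training steps, then stays flat. A sketch of the implied decay (this mirrors SB3's linear-schedule behaviour under these hyperparameters, not its actual code):

```python
def epsilon_at(step, n_timesteps=10_000_000, exploration_fraction=0.1,
               final_eps=0.01, initial_eps=1.0):
    # Epsilon decays linearly from initial_eps to final_eps over the first
    # exploration_fraction of training, then is clamped at final_eps.
    end_step = exploration_fraction * n_timesteps
    if step >= end_step:
        return final_eps
    return initial_eps + (final_eps - initial_eps) * (step / end_step)

print(epsilon_at(0))          # 1.0
print(epsilon_at(500_000))    # halfway through the decay window
print(epsilon_at(2_000_000))  # clamped at final_eps
```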
| 2,687 | [
[
-0.04150390625,
-0.0374755859375,
0.0219879150390625,
0.0255584716796875,
-0.0107269287109375,
-0.0184478759765625,
0.01146697998046875,
-0.01398468017578125,
0.01349639892578125,
0.0252685546875,
-0.06927490234375,
-0.036041259765625,
-0.026947021484375,
-0... |
eswat/a2c-PandaReachDense-v2 | 2023-04-05T11:44:23.000Z | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | eswat | null | null | eswat/a2c-PandaReachDense-v2 | 0 | 2 | stable-baselines3 | 2023-04-05T10:59:09 | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.18 +/- 0.59
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 802 | [
[
-0.019744873046875,
-0.04742431640625,
-0.004787445068359375,
0.0469970703125,
-0.00018846988677978516,
-0.006023406982421875,
0.033172607421875,
-0.0249481201171875,
0.028045654296875,
0.042694091796875,
-0.06256103515625,
-0.0289764404296875,
-0.03277587890625... |
anna-t/a2c-AntBulletEnv-v0 | 2023-04-05T11:13:42.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | anna-t | null | null | anna-t/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-04-05T11:12:29 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 886.19 +/- 147.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 790 | [
[
-0.02679443359375,
-0.04443359375,
0.0106964111328125,
0.0208892822265625,
-0.0034961700439453125,
0.0018033981323242188,
0.0187530517578125,
-0.0176544189453125,
0.0193939208984375,
0.0265655517578125,
-0.052642822265625,
-0.037506103515625,
-0.04425048828125,
... |
thomas2112/Thomas_huggingface | 2023-04-05T11:50:32.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | thomas2112 | null | null | thomas2112/Thomas_huggingface | 0 | 2 | stable-baselines3 | 2023-04-05T11:28:07 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.03 +/- 22.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.0002048015594482422,
-0.027130126953125,
0.017059326171875,
0.0233154296875,
-0.006061553955078125,
0.002758026123046875,
0.034423828125,
-0.01212310791015625,
0.0198516845703125,
0.06494140625,
-0.04315185546875,
-0.03521728515625,
-0.0343017578125,
-0.0... |
anna-t/a2c-PandaReachDense-v2 | 2023-04-05T12:25:35.000Z | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | anna-t | null | null | anna-t/a2c-PandaReachDense-v2 | 0 | 2 | stable-baselines3 | 2023-04-05T11:33:17 | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.75 +/- 0.20
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 802 | [
[
-0.019744873046875,
-0.04742431640625,
-0.004787445068359375,
0.0469970703125,
-0.00018846988677978516,
-0.006023406982421875,
0.033172607421875,
-0.0249481201171875,
0.028045654296875,
0.042694091796875,
-0.06256103515625,
-0.0289764404296875,
-0.03277587890625... |
Dabe/dqn-LunarLander-v2-2 | 2023-04-05T12:15:53.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Dabe | null | null | Dabe/dqn-LunarLander-v2-2 | 0 | 2 | stable-baselines3 | 2023-04-05T12:11:39 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 105.21 +/- 93.66
name: mean_reward
verified: false
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00821685791015625,
-0.0285491943359375,
0.01519012451171875,
0.02593994140625,
-0.005405426025390625,
-0.0007224082946777344,
0.03875732421875,
-0.010650634765625,
0.0254058837890625,
0.057373046875,
-0.055938720703125,
-0.03961181640625,
-0.0258636474609375,... |
Phoshco/cdsb | 2023-04-05T13:58:55.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Phoshco | null | null | Phoshco/cdsb | 0 | 2 | transformers | 2023-04-05T12:43:47 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: cdsb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cdsb
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0340
- Accuracy: 0.8117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0603 | 1.0 | 875 | 0.6192 | 0.7905 |
| 0.486 | 2.0 | 1750 | 0.5969 | 0.8013 |
| 0.2728 | 3.0 | 2625 | 0.7097 | 0.8047 |
| 0.1275 | 4.0 | 3500 | 0.9190 | 0.809 |
| 0.053 | 5.0 | 4375 | 1.0340 | 0.8117 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,538 | [
[
-0.034149169921875,
-0.051025390625,
0.0153045654296875,
0.01654052734375,
-0.0257720947265625,
-0.03167724609375,
-0.01358795166015625,
-0.01128387451171875,
0.01143646240234375,
0.02764892578125,
-0.05963134765625,
-0.0574951171875,
-0.047393798828125,
-0.... |
lmazzon70/videomae-large-finetuned-kinetics-finetuned-rwf2000-epochs8-batch8-kl-torch2 | 2023-04-07T01:03:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | lmazzon70 | null | null | lmazzon70/videomae-large-finetuned-kinetics-finetuned-rwf2000-epochs8-batch8-kl-torch2 | 0 | 2 | transformers | 2023-04-05T14:51:58 | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-large-finetuned-kinetics-finetuned-rwf2000-epochs8-batch8-kl-torch2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-large-finetuned-kinetics-finetuned-rwf2000-epochs8-batch8-kl-torch2
This model is a fine-tuned version of [MCG-NJU/videomae-large-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-large-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6146
- Accuracy: 0.7212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.361 | 0.06 | 200 | 0.2425 | 0.895 |
| 0.3449 | 1.06 | 400 | 0.6639 | 0.68 |
| 0.2435 | 2.06 | 600 | 0.9180 | 0.6663 |
| 0.2001 | 3.06 | 800 | 0.5656 | 0.7662 |
| 0.1405 | 4.06 | 1000 | 0.3859 | 0.86 |
| 0.1845 | 5.06 | 1200 | 0.3825 | 0.8675 |
| 0.1586 | 6.06 | 1400 | 1.4446 | 0.6687 |
| 0.2013 | 7.06 | 1600 | 0.4730 | 0.8562 |
| 0.2113 | 8.06 | 1800 | 0.3328 | 0.8862 |
| 0.245 | 9.06 | 2000 | 0.3519 | 0.8938 |
| 0.1767 | 10.06 | 2200 | 0.4004 | 0.895 |
| 0.1688 | 11.06 | 2400 | 0.6468 | 0.86 |
| 0.2823 | 12.06 | 2600 | 0.6006 | 0.8575 |
| 0.0928 | 13.06 | 2800 | 0.5516 | 0.875 |
| 0.0079 | 14.06 | 3000 | 0.5855 | 0.87 |
| 0.0325 | 15.06 | 3200 | 0.4921 | 0.8925 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
| 2,564 | [
[
-0.04296875,
-0.0372314453125,
0.0191497802734375,
-0.003475189208984375,
-0.0217132568359375,
-0.0244598388671875,
-0.01352691650390625,
-0.01081085205078125,
0.0218505859375,
0.0215606689453125,
-0.054107666015625,
-0.053131103515625,
-0.0526123046875,
-0.... |
alkiskoudounas/ppo-SnowballTarget1 | 2023-04-05T17:13:06.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | alkiskoudounas | null | null | alkiskoudounas/ppo-SnowballTarget1 | 0 | 2 | ml-agents | 2023-04-05T17:13:00 | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: alkiskoudounas/ppo-SnowballTarget1
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| 994 | [
[
-0.016204833984375,
-0.02801513671875,
0.007778167724609375,
0.015960693359375,
-0.0231781005859375,
0.0163726806640625,
0.0222320556640625,
-0.00588226318359375,
0.0269775390625,
0.037994384765625,
-0.053436279296875,
-0.056671142578125,
-0.04156494140625,
... |
jordyvl/bert_jordyvl_rvl_cdip_100_examples_per_class_2023-04-05 | 2023-04-05T17:33:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jordyvl | null | null | jordyvl/bert_jordyvl_rvl_cdip_100_examples_per_class_2023-04-05 | 0 | 2 | transformers | 2023-04-05T17:25:56 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_jordyvl_rvl_cdip_100_examples_per_class_2023-04-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_jordyvl_rvl_cdip_100_examples_per_class_2023-04-05
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7061
- Accuracy: 0.1725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.96 | 12 | 2.7061 | 0.1725 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.9.0
- Tokenizers 0.12.1
| 1,499 | [
[
-0.0338134765625,
-0.04730224609375,
0.00357818603515625,
0.0189056396484375,
-0.0297393798828125,
-0.0355224609375,
-0.01861572265625,
-0.016845703125,
0.00669097900390625,
0.0252838134765625,
-0.047149658203125,
-0.043182373046875,
-0.049041748046875,
-0.0... |
lst-nectec/HoogBERTa-POS-lst20 | 2023-04-05T20:03:14.000Z | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"th",
"dataset:lst20",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | lst-nectec | null | null | lst-nectec/HoogBERTa-POS-lst20 | 0 | 2 | transformers | 2023-04-05T18:00:45 | ---
datasets:
- lst20
language:
- th
widget:
- text: วัน ที่ _ 12 _ มีนาคม นี้ _ ฉัน จะ ไป เที่ยว วัดพระแก้ว _ ที่ กรุงเทพ
library_name: transformers
---
# HoogBERTa
This repository includes the Thai pretrained language representation (HoogBERTa_base) fine-tuned for the **part-of-speech (POS) tagging** task.
# Documentation
## Prerequisite
Since we use subword-nmt BPE encoding, input needs to be pre-tokenized to the [BEST](https://huggingface.co/datasets/best2009) standard before being passed to HoogBERTa:
```
pip install attacut
```
## Getting Started
To initialize the model from the Hub, use the following commands:
```python
from transformers import RobertaTokenizerFast, RobertaForTokenClassification
from attacut import tokenize
import torch
tokenizer = RobertaTokenizerFast.from_pretrained("new5558/HoogBERTa-POS-lst20")
model = RobertaForTokenClassification.from_pretrained("new5558/HoogBERTa-POS-lst20")
```
To do POS tagging, use the following commands:
```python
from transformers import pipeline
nlp = pipeline('token-classification', model=model, tokenizer=tokenizer, aggregation_strategy="none")
sentence = "วันที่ 12 มีนาคมนี้ ฉันจะไปเที่ยววัดพระแก้ว ที่กรุงเทพ"
all_sent = []
sentences = sentence.split(" ")
for sent in sentences:
    all_sent.append(" ".join(tokenize(sent)).replace("_","[!und:]"))
sentence = " _ ".join(all_sent)
print(nlp(sentence))
```
For batch processing,
```python
from transformers import pipeline
nlp = pipeline('token-classification', model=model, tokenizer=tokenizer, aggregation_strategy="none")
sentenceL = ["วันที่ 12 มีนาคมนี้","ฉันจะไปเที่ยววัดพระแก้ว ที่กรุงเทพ"]
inputList = []
for sentX in sentenceL:
    sentences = sentX.split(" ")
    all_sent = []
    for sent in sentences:
        all_sent.append(" ".join(tokenize(sent)).replace("_","[!und:]"))
    sentence = " _ ".join(all_sent)
    inputList.append(sentence)
print(nlp(inputList))
```
# Huggingface Models
1. `HoogBERTaEncoder`
- [HoogBERTa](https://huggingface.co/new5558/HoogBERTa): `Feature Extraction` and `Mask Language Modeling`
2. `HoogBERTaMuliTaskTagger`:
- [HoogBERTa-NER-lst20](https://huggingface.co/new5558/HoogBERTa-NER-lst20): `Named-entity recognition (NER)` based on LST20
- [HoogBERTa-POS-lst20](https://huggingface.co/new5558/HoogBERTa-POS-lst20): `Part-of-speech tagging (POS)` based on LST20
- [HoogBERTa-SENTENCE-lst20](https://huggingface.co/new5558/HoogBERTa-SENTENCE-lst20): `Clause Boundary Classification` based on LST20
# Citation
Please cite as:
``` bibtex
@inproceedings{porkaew2021hoogberta,
title = {HoogBERTa: Multi-task Sequence Labeling using Thai Pretrained Language Representation},
author = {Peerachet Porkaew and Prachya Boonkwan and Thepchai Supnithi},
booktitle = {The Joint International Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP 2021)},
year = {2021},
address={Online}
}
```
Download full-text [PDF](https://drive.google.com/file/d/1hwdyIssR5U_knhPE2HJigrc0rlkqWeLF/view?usp=sharing)
Check out the code on [Github](https://github.com/lstnlp/HoogBERTa) | 3,064 | [
[
-0.0284576416015625,
-0.054595947265625,
0.01216888427734375,
0.0309295654296875,
-0.02838134765625,
0.0034084320068359375,
-0.02313232421875,
-0.0308990478515625,
0.0233154296875,
0.045806884765625,
-0.0234832763671875,
-0.050506591796875,
-0.053375244140625,
... |
jimmyhezhang/distilbert-base-uncased-finetuned-emotion | 2023-04-05T20:41:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jimmyhezhang | null | null | jimmyhezhang/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-05T19:05:21 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240733671679012
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2123
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7932 | 1.0 | 250 | 0.2895 | 0.915 | 0.9138 |
| 0.238 | 2.0 | 500 | 0.2123 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
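
This card does not include a usage section; a minimal inference sketch for this checkpoint (assuming the standard `transformers` pipeline API; label names follow the emotion dataset) could look like:

```python
from transformers import pipeline

# Load this fine-tuned checkpoint as a text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="jimmyhezhang/distilbert-base-uncased-finetuned-emotion",
)

# Each result is a dict with a predicted label and a confidence score.
print(classifier("I am thrilled to see you!"))
```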
| 1,846 | [
[
-0.03790283203125,
-0.041656494140625,
0.01509857177734375,
0.0218963623046875,
-0.02642822265625,
-0.0185699462890625,
-0.0133819580078125,
-0.00872802734375,
0.01068115234375,
0.00855255126953125,
-0.056427001953125,
-0.05157470703125,
-0.059539794921875,
... |
Tingli/bert-base-banking77-pt2 | 2023-04-05T21:26:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Tingli | null | null | Tingli/bert-base-banking77-pt2 | 0 | 2 | transformers | 2023-04-05T20:28:58 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.9292103144277876
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2982
- F1: 0.9292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
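
The `linear` scheduler listed above decays the learning rate from its initial value to zero over training. A minimal sketch of that decay (the total of 1878 steps is taken from the training-results table below; no warmup is assumed):

```python
def linear_lr(step: int, total_steps: int = 1878, base_lr: float = 5e-05) -> float:
    """Linearly decay the learning rate to zero over total_steps (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))     # initial learning rate
print(linear_lr(939))   # halfway through training
print(linear_lr(1878))  # fully decayed
```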
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0831 | 1.0 | 626 | 0.8018 | 0.8336 |
| 0.381 | 2.0 | 1252 | 0.3600 | 0.9206 |
| 0.1832 | 3.0 | 1878 | 0.2982 | 0.9292 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.0+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
| 1,728 | [
[
-0.0296783447265625,
-0.039093017578125,
0.0111541748046875,
0.013824462890625,
-0.0426025390625,
-0.0266265869140625,
-0.00921630859375,
-0.01776123046875,
-0.004192352294921875,
0.0408935546875,
-0.043304443359375,
-0.043548583984375,
-0.05267333984375,
-0... |
paragon-analytics/ADRv1 | 2023-05-11T13:04:29.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | paragon-analytics | null | null | paragon-analytics/ADRv1 | 1 | 2 | transformers | 2023-04-05T21:14:49 | ---
license: "mit"
widget:
- text: "Took the pill, 12 hours later my muscles started to really hurt, then my ribs started to burn so bad I couldn't breath."
---
This model takes text (narratives of reactions to medications) as input and returns a predicted severity score for the reaction (LABEL_1 indicates a severe reaction). Please do NOT use for medical diagnosis.
Example usage:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("paragon-analytics/ADRv1")
model = AutoModelForSequenceClassification.from_pretrained("paragon-analytics/ADRv1")

def adr_predict(x):
    # Tokenize the narrative and run it through the classifier.
    encoded_input = tokenizer(x, return_tensors='pt')
    output = model(**encoded_input)
    # Softmax over the logits; index 1 is the probability of a severe reaction (LABEL_1).
    scores = torch.softmax(output.logits[0], dim=-1)
    return scores[1].item()
sentence = "I have severe pain."
adr_predict(sentence)
```
| 1,079 | [
[
0.016204833984375,
-0.055816650390625,
0.043121337890625,
0.0101470947265625,
-0.00872039794921875,
-0.0193634033203125,
-0.0025177001953125,
-0.00412750244140625,
0.0188446044921875,
0.038421630859375,
-0.025360107421875,
-0.05914306640625,
-0.06817626953125,
... |
gsvr30/distilbert-base-uncased-finetuned-cola | 2023-04-06T01:42:22.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gsvr30 | null | null | gsvr30/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-06T01:33:45 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5274949902750498
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8492
- Matthews Correlation: 0.5275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5255 | 1.0 | 535 | 0.5222 | 0.4356 |
| 0.3437 | 2.0 | 1070 | 0.5142 | 0.4906 |
| 0.2331 | 3.0 | 1605 | 0.5600 | 0.5052 |
| 0.174 | 4.0 | 2140 | 0.7818 | 0.5059 |
| 0.1332 | 5.0 | 2675 | 0.8492 | 0.5275 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,042 | [
[
-0.0229034423828125,
-0.04986572265625,
0.01374053955078125,
0.018341064453125,
-0.0208282470703125,
-0.0092315673828125,
-0.0049896240234375,
-0.0037136077880859375,
0.02386474609375,
0.011322021484375,
-0.04541015625,
-0.036041259765625,
-0.062408447265625,
... |
trendfollower/distilbert-base-uncased-finetuned-emotion | 2023-04-06T06:00:09.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | trendfollower | null | null | trendfollower/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-06T02:32:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.93
- name: F1
type: f1
value: 0.9300768549546928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1662
- Accuracy: 0.93
- F1: 0.9301
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.2997 | 0.91 | 0.9095 |
| No log | 2.0 | 126 | 0.2031 | 0.924 | 0.9242 |
| No log | 3.0 | 189 | 0.1826 | 0.9275 | 0.9278 |
| 0.264 | 4.0 | 252 | 0.1668 | 0.93 | 0.9301 |
| 0.264 | 5.0 | 315 | 0.1662 | 0.93 | 0.9301 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,054 | [
[
-0.03704833984375,
-0.038360595703125,
0.01100921630859375,
0.019073486328125,
-0.024017333984375,
-0.018798828125,
-0.0095977783203125,
-0.01021575927734375,
0.01323699951171875,
0.00989532470703125,
-0.05767822265625,
-0.05267333984375,
-0.05938720703125,
... |
ricardotalavera/platzi-distilroberta-base-mrpc-glue-ricardo-talavera | 2023-04-06T03:44:46.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | ricardotalavera | null | null | ricardotalavera/platzi-distilroberta-base-mrpc-glue-ricardo-talavera | 0 | 2 | transformers | 2023-04-06T03:15:59 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-ricardo-talavera
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8627450980392157
- name: F1
type: f1
value: 0.9
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-ricardo-talavera
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6639
- Accuracy: 0.8627
- F1: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.19 | 1.09 | 500 | 0.6639 | 0.8627 | 0.9 |
| 0.1962 | 2.18 | 1000 | 0.6639 | 0.8627 | 0.9 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,826 | [
[
-0.027984619140625,
-0.04345703125,
0.009429931640625,
0.02325439453125,
-0.029022216796875,
-0.0234832763671875,
-0.00849151611328125,
-0.005519866943359375,
0.0105743408203125,
0.006805419921875,
-0.049560546875,
-0.0443115234375,
-0.05816650390625,
-0.005... |
xb0129/ProsusAI | 2023-04-06T05:56:47.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | xb0129 | null | null | xb0129/ProsusAI | 0 | 2 | transformers | 2023-04-06T05:40:08 | ---
tags:
- generated_from_keras_callback
model-index:
- name: xb0129/ProsusAI
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xb0129/ProsusAI
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1466
- Validation Loss: 0.3007
- Train Accuracy: 0.9125
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1640, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
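
The `PolynomialDecay` schedule in the optimizer config above can be reproduced with a small helper; the default values below are taken from that config, and with `power=1.0` and `cycle=False` it reduces to a plain linear decay:

```python
def polynomial_decay(step: int,
                     initial_lr: float = 2e-05,
                     decay_steps: int = 1640,
                     end_lr: float = 0.0,
                     power: float = 1.0) -> float:
    """Keras-style PolynomialDecay with cycle=False: clamp the step, then interpolate."""
    step = min(step, decay_steps)
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr
```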
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.9203 | 0.3484 | 0.9033 | 0 |
| 0.2724 | 0.3182 | 0.9117 | 1 |
| 0.1466 | 0.3007 | 0.9125 | 2 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,776 | [
[
-0.046051025390625,
-0.0350341796875,
0.0200042724609375,
0.0012378692626953125,
-0.0270233154296875,
-0.0277252197265625,
-0.00812530517578125,
-0.0224151611328125,
0.01085662841796875,
0.010833740234375,
-0.057525634765625,
-0.039825439453125,
-0.0514221191406... |
kanak8278/electra-base-ner-food-recipe-v2 | 2023-04-06T18:32:04.000Z | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | kanak8278 | null | null | kanak8278/electra-base-ner-food-recipe-v2 | 0 | 2 | transformers | 2023-04-06T07:58:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electra-base-ner-food-recipe-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-ner-food-recipe-v2
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1500
- Precision: 0.7191
- Recall: 0.8739
- F1: 0.7890
- Accuracy: 0.9568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.5 | 400 | 0.4360 | 0.4354 | 0.7533 | 0.5519 | 0.8775 |
| 0.5627 | 1.01 | 800 | 0.2274 | 0.6971 | 0.8525 | 0.7670 | 0.9508 |
| 0.2799 | 1.51 | 1200 | 0.1791 | 0.6728 | 0.8762 | 0.7612 | 0.9492 |
| 0.1983 | 2.01 | 1600 | 0.1652 | 0.6958 | 0.8757 | 0.7755 | 0.9535 |
| 0.1821 | 2.51 | 2000 | 0.1610 | 0.7171 | 0.8766 | 0.7889 | 0.9568 |
| 0.1821 | 3.02 | 2400 | 0.1550 | 0.7001 | 0.8757 | 0.7782 | 0.9539 |
| 0.1726 | 3.52 | 2800 | 0.1537 | 0.7211 | 0.8744 | 0.7904 | 0.9573 |
| 0.1674 | 4.02 | 3200 | 0.1510 | 0.7170 | 0.8739 | 0.7877 | 0.9565 |
| 0.1682 | 4.52 | 3600 | 0.1501 | 0.7147 | 0.8744 | 0.7865 | 0.9564 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,286 | [
[
-0.033111572265625,
-0.0372314453125,
0.00887298583984375,
-0.01161956787109375,
-0.01087188720703125,
-0.02471923828125,
0.0014543533325195312,
-0.01532745361328125,
0.02362060546875,
0.027923583984375,
-0.0379638671875,
-0.04669189453125,
-0.0438232421875,
... |
romainf/distilbert-base-uncased-imdb-500 | 2023-04-06T08:39:31.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | romainf | null | null | romainf/distilbert-base-uncased-imdb-500 | 0 | 2 | transformers | 2023-04-06T08:32:06 | This model is the 500th step checkpoint of distilbert-base-uncased fine-tuned on the imdb dataset with the following training arguments:
```
training_args = TrainingArguments(
output_dir="bert_results_imdb",
learning_rate=1e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
weight_decay=0.01,
warmup_ratio = 0.06,
max_steps = 5000,
optim = 'adamw_torch',
save_strategy = 'steps',
evaluation_strategy='steps',
load_best_model_at_end=True
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_imdb["train"],
eval_dataset=tokenized_imdb["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
``` | 742 | [
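
The `Trainer` above references a `compute_metrics` callback that is not shown. One plausible definition (an assumption for illustration, not the author's original) takes the argmax over the logits and reports accuracy:

```python
import numpy as np

def compute_metrics(eval_pred):
    """Hypothetical metrics callback: argmax over logits, then plain accuracy."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}
```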
[
-0.05108642578125,
-0.03948974609375,
0.01617431640625,
0.016082763671875,
-0.036895751953125,
0.0027942657470703125,
0.0164337158203125,
0.01232147216796875,
-0.0021419525146484375,
0.0282440185546875,
-0.08258056640625,
-0.0276641845703125,
-0.0570068359375,
... |
romainf/distilbert-base-uncased-imdb-1000 | 2023-04-06T09:10:48.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | romainf | null | null | romainf/distilbert-base-uncased-imdb-1000 | 0 | 2 | transformers | 2023-04-06T08:33:05 | This model is the 1000th step checkpoint of distilbert-base-uncased fine-tuned on the imdb dataset with the following training arguments:
```
training_args = TrainingArguments(
output_dir="bert_results_imdb",
learning_rate=1e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
weight_decay=0.01,
warmup_ratio = 0.06,
max_steps = 5000,
optim = 'adamw_torch',
save_strategy = 'steps',
evaluation_strategy='steps',
load_best_model_at_end=True
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_imdb["train"],
eval_dataset=tokenized_imdb["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
``` | 743 | [
[
-0.050567626953125,
-0.043121337890625,
0.015228271484375,
0.0159912109375,
-0.03759765625,
0.0018205642700195312,
0.01715087890625,
0.01403045654296875,
-0.0013523101806640625,
0.0301666259765625,
-0.08050537109375,
-0.0257720947265625,
-0.058349609375,
-0.... |
romainf/distilbert-base-uncased-imdb-2000 | 2023-04-06T09:11:02.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | romainf | null | null | romainf/distilbert-base-uncased-imdb-2000 | 0 | 2 | transformers | 2023-04-06T08:35:59 | This model is the 2000th step checkpoint of distilbert-base-uncased fine-tuned on the imdb dataset with the following training arguments:
```
training_args = TrainingArguments(
output_dir="bert_results_imdb",
learning_rate=1e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
weight_decay=0.01,
warmup_ratio = 0.06,
max_steps = 5000,
optim = 'adamw_torch',
save_strategy = 'steps',
evaluation_strategy='steps',
load_best_model_at_end=True
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_imdb["train"],
eval_dataset=tokenized_imdb["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
``` | 743 | [
[
-0.05023193359375,
-0.039703369140625,
0.01416778564453125,
0.01557159423828125,
-0.036712646484375,
0.0004634857177734375,
0.0170440673828125,
0.01187896728515625,
-0.005435943603515625,
0.0309600830078125,
-0.08203125,
-0.0225830078125,
-0.059814453125,
-0... |
romainf/distilbert-base-uncased-imdb-3000 | 2023-04-06T09:11:17.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | romainf | null | null | romainf/distilbert-base-uncased-imdb-3000 | 0 | 2 | transformers | 2023-04-06T08:40:03 | This model is the 3000th step checkpoint of distilbert-base-uncased fine-tuned on the imdb dataset with the following training arguments:
```
training_args = TrainingArguments(
output_dir="bert_results_imdb",
learning_rate=1e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
weight_decay=0.01,
warmup_ratio = 0.06,
max_steps = 5000,
optim = 'adamw_torch',
save_strategy = 'steps',
evaluation_strategy='steps',
load_best_model_at_end=True
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_imdb["train"],
eval_dataset=tokenized_imdb["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
``` | 743 | [
[
-0.05255126953125,
-0.03765869140625,
0.015228271484375,
0.0189361572265625,
-0.035186767578125,
0.0034465789794921875,
0.01800537109375,
0.01316070556640625,
-0.00634765625,
0.029327392578125,
-0.083984375,
-0.022735595703125,
-0.056793212890625,
-0.0020999... |
romainf/distilbert-base-uncased-imdb-4000 | 2023-04-06T09:11:30.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | romainf | null | null | romainf/distilbert-base-uncased-imdb-4000 | 0 | 2 | transformers | 2023-04-06T08:41:40 | This model is the 4000th step checkpoint of distilbert-base-uncased fine-tuned on the imdb dataset with the following training arguments:
```
training_args = TrainingArguments(
output_dir="bert_results_imdb",
learning_rate=1e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
weight_decay=0.01,
warmup_ratio = 0.06,
max_steps = 5000,
optim = 'adamw_torch',
save_strategy = 'steps',
evaluation_strategy='steps',
load_best_model_at_end=True
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_imdb["train"],
eval_dataset=tokenized_imdb["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
``` | 743 | [
[
-0.053924560546875,
-0.035247802734375,
0.01397705078125,
0.016632080078125,
-0.0401611328125,
0.0052337646484375,
0.0180206298828125,
0.00917816162109375,
-0.0080108642578125,
0.0311431884765625,
-0.08087158203125,
-0.02667236328125,
-0.058837890625,
0.0015... |
romainf/distilbert-base-uncased-imdb-5000 | 2023-04-06T09:10:04.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | romainf | null | null | romainf/distilbert-base-uncased-imdb-5000 | 0 | 2 | transformers | 2023-04-06T08:42:23 | This model is the 5000th step checkpoint of distilbert-base-uncased fine-tuned on the imdb dataset with the following training arguments:
```
training_args = TrainingArguments(
output_dir="bert_results_imdb",
learning_rate=1e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
weight_decay=0.01,
warmup_ratio = 0.06,
max_steps = 5000,
optim = 'adamw_torch',
save_strategy = 'steps',
evaluation_strategy='steps',
load_best_model_at_end=True
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_imdb["train"],
eval_dataset=tokenized_imdb["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
``` | 743 | [
[
-0.052154541015625,
-0.032257080078125,
0.01232147216796875,
0.01708984375,
-0.041168212890625,
0.0037994384765625,
0.0213165283203125,
0.00897216796875,
-0.006816864013671875,
0.0305938720703125,
-0.08050537109375,
-0.028350830078125,
-0.058837890625,
0.002... |
Almondpeanuts/distilbert-base-uncased-finetuned-emotion | 2023-04-07T17:20:29.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Almondpeanuts | null | null | Almondpeanuts/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-06T08:47:10 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9246304960684365
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2178
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8094 | 1.0 | 250 | 0.3110 | 0.906 | 0.9031 |
| 0.2477 | 2.0 | 500 | 0.2178 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.038238525390625,
-0.041412353515625,
0.0158843994140625,
0.0216217041015625,
-0.0259246826171875,
-0.0193634033203125,
-0.0128326416015625,
-0.00923919677734375,
0.0105743408203125,
0.00858306884765625,
-0.05670166015625,
-0.0516357421875,
-0.059539794921875,... |
Dragonoverlord3000/distilbert-base-uncased-finetuned-emotion | 2023-04-06T10:05:34.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Dragonoverlord3000 | null | null | Dragonoverlord3000/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-06T09:00:03 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9268815480023925
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2182
- Accuracy: 0.927
- F1: 0.9269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8043 | 1.0 | 250 | 0.3076 | 0.9105 | 0.9087 |
| 0.2453 | 2.0 | 500 | 0.2182 | 0.927 | 0.9269 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cpu
- Datasets 2.11.0
- Tokenizers 0.13.3
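The reported accuracy and F1 come from comparing the classifier's highest-scoring class against the gold label. A minimal sketch of that post-processing step, from raw logits to a predicted emotion (the six-way label order follows the `emotion` dataset and should be checked against the model's `id2label` config; the example logit vector is made up):

```python
import math

# Assumed label order of the `emotion` dataset (verify via the model's id2label).
LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def softmax(logits):
    """Convert raw classifier logits into probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    """Return (label, probability) for the highest-scoring class."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Hypothetical logits for one input sentence; "joy" dominates here.
label, prob = predict_label([-1.2, 4.3, 0.1, -0.5, -2.0, -1.8])
print(label, round(prob, 3))
```

In practice the `transformers` `text-classification` pipeline performs exactly this argmax-over-softmax step internally.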
| 1,844 | [embeddings omitted] |
VijaiKM/ppo-Huggy | 2023-04-06T09:50:30.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | VijaiKM | null | null | VijaiKM/ppo-Huggy | 0 | 2 | ml-agents | 2023-04-06T09:49:26 | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: VijaiKM/ppo-Huggy_v1
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 935 | [embeddings omitted] |
tf-tpu/roberta-base-epochs-500-no-wd | 2023-04-20T01:17:38.000Z | [
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"dataset:wikitext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | tf-tpu | null | null | tf-tpu/roberta-base-epochs-500-no-wd | 0 | 2 | transformers | 2023-04-06T13:27:38 | ---
license: mit
mask_token: '[MASK]'
tags:
- generated_from_keras_callback
model-index:
- name: tf-tpu/roberta-base-epochs-500-no-wd
results: []
widget:
- text: Goal of my life is to [MASK].
datasets:
- wikitext
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf-tpu/roberta-base-epochs-500-no-wd
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7074
- Train Accuracy: 0.1221
- Validation Loss: 0.7739
- Validation Accuracy: 0.1213
- Epoch: 499
## Model description
The model was trained on the [WikiText dataset](https://huggingface.co/datasets/wikitext) (v1). Training details can be found [here](https://github.com/huggingface/transformers/tree/examples/main/examples/tensorflow/tpu/language-modeling).
## Intended uses & limitations
More information needed
## Training and evaluation data
[WikiText (v1)](https://huggingface.co/datasets/wikitext)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 0.0001, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 278825, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 14675, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: mixed_bfloat16
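The serialized optimizer entry above describes a linear warmup over 14,675 steps to a peak learning rate of 1e-4, followed by a linear (polynomial, power 1.0) decay to 0 by step 278,825. A pure-Python sketch of that schedule — the function name is ours, the run itself used Keras' `WarmUp`/`PolynomialDecay` classes, and we assume the decay endpoint is measured from the start of training:

```python
PEAK_LR = 1e-4
WARMUP_STEPS = 14_675
TOTAL_STEPS = 278_825  # decay_steps from the PolynomialDecay config

def learning_rate(step: int) -> float:
    """Linear warmup to PEAK_LR, then linear decay to 0 (power=1.0 polynomial)."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    # Straight-line interpolation from the peak down to 0 at TOTAL_STEPS.
    remaining = max(TOTAL_STEPS - step, 0)
    return PEAK_LR * remaining / (TOTAL_STEPS - WARMUP_STEPS)

for s in (0, WARMUP_STEPS, (WARMUP_STEPS + TOTAL_STEPS) // 2, TOTAL_STEPS):
    print(s, learning_rate(s))
```

The schedule peaks exactly at the end of warmup and reaches 0 at the final decay step; any steps beyond that would stay at 0.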
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 8.3284 | 0.0211 | 7.1523 | 0.0266 | 0 |
| 6.3670 | 0.0318 | 5.7812 | 0.0342 | 1 |
| 5.6051 | 0.0380 | 5.4414 | 0.0420 | 2 |
| 5.3602 | 0.0433 | 5.2734 | 0.0432 | 3 |
| 5.2285 | 0.0444 | 5.1562 | 0.0442 | 4 |
| 5.1371 | 0.0446 | 5.1133 | 0.0436 | 5 |
| 5.0673 | 0.0446 | 5.0703 | 0.0442 | 6 |
| 5.0132 | 0.0447 | 4.9883 | 0.0442 | 7 |
| 4.9642 | 0.0448 | 4.9219 | 0.0441 | 8 |
| 4.9217 | 0.0448 | 4.9258 | 0.0440 | 9 |
| 4.8871 | 0.0448 | 4.8867 | 0.0439 | 10 |
| 4.8548 | 0.0449 | 4.8672 | 0.0439 | 11 |
| 4.8277 | 0.0449 | 4.8047 | 0.0445 | 12 |
| 4.8033 | 0.0449 | 4.8477 | 0.0437 | 13 |
| 4.7807 | 0.0449 | 4.7617 | 0.0439 | 14 |
| 4.7592 | 0.0449 | 4.7773 | 0.0437 | 15 |
| 4.7388 | 0.0449 | 4.7539 | 0.0441 | 16 |
| 4.7225 | 0.0449 | 4.7266 | 0.0439 | 17 |
| 4.7052 | 0.0449 | 4.6914 | 0.0450 | 18 |
| 4.6917 | 0.0449 | 4.7188 | 0.0444 | 19 |
| 4.6789 | 0.0449 | 4.6914 | 0.0444 | 20 |
| 4.6689 | 0.0449 | 4.7031 | 0.0439 | 21 |
| 4.6570 | 0.0449 | 4.7031 | 0.0437 | 22 |
| 4.6486 | 0.0450 | 4.6758 | 0.0446 | 23 |
| 4.6393 | 0.0449 | 4.6914 | 0.0441 | 24 |
| 4.5898 | 0.0449 | 4.4688 | 0.0452 | 25 |
| 4.3024 | 0.0472 | 3.8730 | 0.0551 | 26 |
| 3.1689 | 0.0693 | 2.4375 | 0.0835 | 27 |
| 2.3780 | 0.0844 | 2.0498 | 0.0922 | 28 |
| 2.0789 | 0.0907 | 1.8604 | 0.0958 | 29 |
| 1.9204 | 0.0940 | 1.7549 | 0.0982 | 30 |
| 1.8162 | 0.0961 | 1.6836 | 0.0983 | 31 |
| 1.7370 | 0.0978 | 1.5869 | 0.1014 | 32 |
| 1.6723 | 0.0991 | 1.5381 | 0.1029 | 33 |
| 1.6215 | 0.1002 | 1.5283 | 0.1015 | 34 |
| 1.5753 | 0.1012 | 1.4736 | 0.1037 | 35 |
| 1.5295 | 0.1022 | 1.4238 | 0.1052 | 36 |
| 1.4944 | 0.1030 | 1.4141 | 0.1059 | 37 |
| 1.4631 | 0.1037 | 1.3721 | 0.1053 | 38 |
| 1.4363 | 0.1043 | 1.3467 | 0.1060 | 39 |
| 1.4098 | 0.1049 | 1.3213 | 0.1076 | 40 |
| 1.3867 | 0.1054 | 1.3018 | 0.1071 | 41 |
| 1.3658 | 0.1058 | 1.2832 | 0.1083 | 42 |
| 1.3469 | 0.1063 | 1.2637 | 0.1081 | 43 |
| 1.3288 | 0.1067 | 1.2598 | 0.1082 | 44 |
| 1.3111 | 0.1071 | 1.2334 | 0.1096 | 45 |
| 1.2962 | 0.1075 | 1.2490 | 0.1084 | 46 |
| 1.2816 | 0.1078 | 1.2168 | 0.1093 | 47 |
| 1.2672 | 0.1081 | 1.2070 | 0.1090 | 48 |
| 1.2537 | 0.1084 | 1.1680 | 0.1106 | 49 |
| 1.2411 | 0.1087 | 1.1904 | 0.1094 | 50 |
| 1.2285 | 0.1090 | 1.1709 | 0.1103 | 51 |
| 1.2180 | 0.1093 | 1.1602 | 0.1122 | 52 |
| 1.2075 | 0.1095 | 1.1396 | 0.1117 | 53 |
| 1.1973 | 0.1098 | 1.1191 | 0.1124 | 54 |
| 1.1876 | 0.1100 | 1.1260 | 0.1123 | 55 |
| 1.1782 | 0.1102 | 1.1289 | 0.1111 | 56 |
| 1.1698 | 0.1104 | 1.1211 | 0.1117 | 57 |
| 1.1596 | 0.1106 | 1.0977 | 0.1125 | 58 |
| 1.1530 | 0.1108 | 1.1172 | 0.1118 | 59 |
| 1.1462 | 0.1110 | 1.0703 | 0.1126 | 60 |
| 1.1370 | 0.1112 | 1.0830 | 0.1140 | 61 |
| 1.1309 | 0.1113 | 1.0762 | 0.1119 | 62 |
| 1.1234 | 0.1115 | 1.0625 | 0.1137 | 63 |
| 1.1162 | 0.1117 | 1.0781 | 0.1127 | 64 |
| 1.1114 | 0.1118 | 1.0474 | 0.1138 | 65 |
| 1.1036 | 0.1120 | 1.0703 | 0.1134 | 66 |
| 1.0984 | 0.1121 | 1.0366 | 0.1139 | 67 |
| 1.0931 | 0.1122 | 1.0513 | 0.1134 | 68 |
| 1.0860 | 0.1124 | 1.0264 | 0.1137 | 69 |
| 1.0807 | 0.1126 | 1.0215 | 0.1148 | 70 |
| 1.0758 | 0.1127 | 1.0269 | 0.1143 | 71 |
| 1.0704 | 0.1129 | 1.0356 | 0.1141 | 72 |
| 1.0656 | 0.1129 | 1.0195 | 0.1144 | 73 |
| 1.0607 | 0.1131 | 1.0093 | 0.1146 | 74 |
| 1.0559 | 0.1132 | 0.9956 | 0.1155 | 75 |
| 1.0517 | 0.1133 | 0.9995 | 0.1139 | 76 |
| 1.0462 | 0.1134 | 0.9839 | 0.1151 | 77 |
| 1.0422 | 0.1135 | 0.9868 | 0.1153 | 78 |
| 1.0372 | 0.1137 | 0.9995 | 0.1151 | 79 |
| 1.0340 | 0.1137 | 1.0059 | 0.1153 | 80 |
| 1.0296 | 0.1138 | 0.9961 | 0.1152 | 81 |
| 1.0272 | 0.1138 | 1.0132 | 0.1138 | 82 |
| 1.0211 | 0.1140 | 0.9575 | 0.1150 | 83 |
| 1.0182 | 0.1141 | 0.9868 | 0.1150 | 84 |
| 1.0146 | 0.1142 | 0.9678 | 0.1164 | 85 |
| 1.0111 | 0.1143 | 0.9839 | 0.1161 | 86 |
| 1.0083 | 0.1144 | 0.9722 | 0.1162 | 87 |
| 1.0039 | 0.1144 | 0.9619 | 0.1167 | 88 |
| 1.0017 | 0.1145 | 0.9575 | 0.1151 | 89 |
| 0.9973 | 0.1146 | 0.9624 | 0.1149 | 90 |
| 0.9947 | 0.1147 | 0.9570 | 0.1157 | 91 |
| 0.9921 | 0.1148 | 0.9360 | 0.1166 | 92 |
| 0.9884 | 0.1149 | 0.9546 | 0.1156 | 93 |
| 0.9851 | 0.1149 | 0.9536 | 0.1149 | 94 |
| 0.9829 | 0.1150 | 0.9575 | 0.1163 | 95 |
| 0.9795 | 0.1151 | 0.9561 | 0.1156 | 96 |
| 0.9773 | 0.1151 | 0.9438 | 0.1163 | 97 |
| 0.9740 | 0.1152 | 0.9512 | 0.1169 | 98 |
| 0.9712 | 0.1153 | 0.9375 | 0.1159 | 99 |
| 0.9678 | 0.1154 | 0.9453 | 0.1166 | 100 |
| 0.9660 | 0.1154 | 0.9507 | 0.1169 | 101 |
| 0.9636 | 0.1155 | 0.9507 | 0.1161 | 102 |
| 0.9609 | 0.1155 | 0.9727 | 0.1164 | 103 |
| 0.9589 | 0.1156 | 0.9395 | 0.1176 | 104 |
| 0.9561 | 0.1157 | 0.9346 | 0.1173 | 105 |
| 0.9537 | 0.1157 | 0.9331 | 0.1168 | 106 |
| 0.9515 | 0.1158 | 0.9434 | 0.1161 | 107 |
| 0.9488 | 0.1158 | 0.9131 | 0.1176 | 108 |
| 0.9471 | 0.1159 | 0.9360 | 0.1174 | 109 |
| 0.9449 | 0.1159 | 0.9175 | 0.1164 | 110 |
| 0.9422 | 0.1160 | 0.9121 | 0.1167 | 111 |
| 0.9412 | 0.1160 | 0.8970 | 0.1165 | 112 |
| 0.9379 | 0.1161 | 0.9111 | 0.1175 | 113 |
| 0.9362 | 0.1161 | 0.9048 | 0.1176 | 114 |
| 0.9345 | 0.1162 | 0.9082 | 0.1169 | 115 |
| 0.9317 | 0.1163 | 0.9277 | 0.1169 | 116 |
| 0.9295 | 0.1164 | 0.9292 | 0.1169 | 117 |
| 0.9287 | 0.1163 | 0.9243 | 0.1169 | 118 |
| 0.9266 | 0.1163 | 0.8892 | 0.1170 | 119 |
| 0.9233 | 0.1165 | 0.9058 | 0.1174 | 120 |
| 0.9221 | 0.1165 | 0.9106 | 0.1175 | 121 |
| 0.9205 | 0.1166 | 0.8979 | 0.1173 | 122 |
| 0.9181 | 0.1167 | 0.8989 | 0.1174 | 123 |
| 0.9180 | 0.1166 | 0.9053 | 0.1172 | 124 |
| 0.9158 | 0.1167 | 0.8877 | 0.1176 | 125 |
| 0.9135 | 0.1168 | 0.9160 | 0.1169 | 126 |
| 0.9116 | 0.1167 | 0.8940 | 0.1180 | 127 |
| 0.9095 | 0.1168 | 0.8945 | 0.1173 | 128 |
| 0.9081 | 0.1168 | 0.9126 | 0.1166 | 129 |
| 0.9064 | 0.1169 | 0.8872 | 0.1177 | 130 |
| 0.9053 | 0.1169 | 0.9175 | 0.1172 | 131 |
| 0.9035 | 0.1170 | 0.8989 | 0.1180 | 132 |
| 0.9023 | 0.1170 | 0.8965 | 0.1179 | 133 |
| 0.8999 | 0.1170 | 0.8979 | 0.1181 | 134 |
| 0.8981 | 0.1171 | 0.8799 | 0.1186 | 135 |
| 0.8976 | 0.1171 | 0.8984 | 0.1174 | 136 |
| 0.8957 | 0.1172 | 0.8857 | 0.1181 | 137 |
| 0.8948 | 0.1172 | 0.9019 | 0.1172 | 138 |
| 0.8929 | 0.1172 | 0.8804 | 0.1180 | 139 |
| 0.8915 | 0.1173 | 0.8848 | 0.1183 | 140 |
| 0.8898 | 0.1173 | 0.8911 | 0.1177 | 141 |
| 0.8894 | 0.1173 | 0.9033 | 0.1173 | 142 |
| 0.8869 | 0.1174 | 0.8853 | 0.1184 | 143 |
| 0.8863 | 0.1174 | 0.8921 | 0.1184 | 144 |
| 0.8848 | 0.1175 | 0.8848 | 0.1177 | 145 |
| 0.8838 | 0.1175 | 0.8896 | 0.1177 | 146 |
| 0.8822 | 0.1175 | 0.8945 | 0.1181 | 147 |
| 0.8804 | 0.1176 | 0.8843 | 0.1177 | 148 |
| 0.8794 | 0.1175 | 0.8774 | 0.1181 | 149 |
| 0.8780 | 0.1176 | 0.8750 | 0.1178 | 150 |
| 0.8756 | 0.1176 | 0.8862 | 0.1170 | 151 |
| 0.8747 | 0.1177 | 0.8730 | 0.1178 | 152 |
| 0.8737 | 0.1177 | 0.8696 | 0.1195 | 153 |
| 0.8736 | 0.1177 | 0.8726 | 0.1184 | 154 |
| 0.8716 | 0.1178 | 0.8647 | 0.1186 | 155 |
| 0.8705 | 0.1178 | 0.8804 | 0.1179 | 156 |
| 0.8695 | 0.1178 | 0.8652 | 0.1190 | 157 |
| 0.8675 | 0.1179 | 0.8804 | 0.1197 | 158 |
| 0.8670 | 0.1179 | 0.8462 | 0.1192 | 159 |
| 0.8656 | 0.1180 | 0.8594 | 0.1188 | 160 |
| 0.8649 | 0.1180 | 0.8535 | 0.1188 | 161 |
| 0.8633 | 0.1181 | 0.8555 | 0.1185 | 162 |
| 0.8622 | 0.1180 | 0.8633 | 0.1173 | 163 |
| 0.8603 | 0.1181 | 0.8667 | 0.1177 | 164 |
| 0.8598 | 0.1181 | 0.8813 | 0.1185 | 165 |
| 0.8591 | 0.1181 | 0.8862 | 0.1176 | 166 |
| 0.8580 | 0.1181 | 0.8853 | 0.1177 | 167 |
| 0.8573 | 0.1181 | 0.8691 | 0.1181 | 168 |
| 0.8558 | 0.1182 | 0.8481 | 0.1176 | 169 |
| 0.8541 | 0.1182 | 0.8652 | 0.1187 | 170 |
| 0.8541 | 0.1183 | 0.8477 | 0.1198 | 171 |
| 0.8522 | 0.1183 | 0.8721 | 0.1190 | 172 |
| 0.8516 | 0.1183 | 0.8965 | 0.1173 | 173 |
| 0.8506 | 0.1183 | 0.8574 | 0.1173 | 174 |
| 0.8496 | 0.1183 | 0.8452 | 0.1188 | 175 |
| 0.8487 | 0.1184 | 0.8545 | 0.1183 | 176 |
| 0.8478 | 0.1184 | 0.8594 | 0.1191 | 177 |
| 0.8466 | 0.1184 | 0.8608 | 0.1187 | 178 |
| 0.8456 | 0.1184 | 0.8472 | 0.1186 | 179 |
| 0.8451 | 0.1185 | 0.8672 | 0.1178 | 180 |
| 0.8429 | 0.1185 | 0.8364 | 0.1196 | 181 |
| 0.8420 | 0.1185 | 0.8525 | 0.1187 | 182 |
| 0.8419 | 0.1186 | 0.8525 | 0.1196 | 183 |
| 0.8406 | 0.1186 | 0.8521 | 0.1193 | 184 |
| 0.8391 | 0.1186 | 0.8560 | 0.1188 | 185 |
| 0.8396 | 0.1186 | 0.8413 | 0.1188 | 186 |
| 0.8378 | 0.1186 | 0.8628 | 0.1185 | 187 |
| 0.8374 | 0.1186 | 0.8374 | 0.1195 | 188 |
| 0.8364 | 0.1187 | 0.8691 | 0.1189 | 189 |
| 0.8348 | 0.1187 | 0.8457 | 0.1196 | 190 |
| 0.8354 | 0.1187 | 0.8286 | 0.1191 | 191 |
| 0.8334 | 0.1187 | 0.8486 | 0.1187 | 192 |
| 0.8325 | 0.1188 | 0.8535 | 0.1182 | 193 |
| 0.8322 | 0.1188 | 0.8574 | 0.1199 | 194 |
| 0.8314 | 0.1188 | 0.8472 | 0.1202 | 195 |
| 0.8307 | 0.1188 | 0.8584 | 0.1186 | 196 |
| 0.8294 | 0.1189 | 0.8345 | 0.1197 | 197 |
| 0.8285 | 0.1189 | 0.8491 | 0.1181 | 198 |
| 0.8275 | 0.1189 | 0.8472 | 0.1193 | 199 |
| 0.8265 | 0.1189 | 0.8521 | 0.1185 | 200 |
| 0.8262 | 0.1190 | 0.8501 | 0.1195 | 201 |
| 0.8247 | 0.1190 | 0.8491 | 0.1194 | 202 |
| 0.8245 | 0.1190 | 0.8389 | 0.1191 | 203 |
| 0.8237 | 0.1190 | 0.8491 | 0.1184 | 204 |
| 0.8229 | 0.1190 | 0.8525 | 0.1193 | 205 |
| 0.8215 | 0.1190 | 0.8345 | 0.1199 | 206 |
| 0.8213 | 0.1190 | 0.8511 | 0.1206 | 207 |
| 0.8204 | 0.1191 | 0.8296 | 0.1195 | 208 |
| 0.8193 | 0.1192 | 0.8516 | 0.1183 | 209 |
| 0.8195 | 0.1191 | 0.8672 | 0.1181 | 210 |
| 0.8188 | 0.1191 | 0.8267 | 0.1197 | 211 |
| 0.8177 | 0.1192 | 0.8408 | 0.1185 | 212 |
| 0.8167 | 0.1192 | 0.8447 | 0.1191 | 213 |
| 0.8153 | 0.1192 | 0.8374 | 0.1191 | 214 |
| 0.8158 | 0.1192 | 0.8438 | 0.1198 | 215 |
| 0.8149 | 0.1192 | 0.8286 | 0.1191 | 216 |
| 0.8141 | 0.1193 | 0.8389 | 0.1202 | 217 |
| 0.8133 | 0.1192 | 0.8491 | 0.1202 | 218 |
| 0.8127 | 0.1193 | 0.8730 | 0.1185 | 219 |
| 0.8118 | 0.1193 | 0.8198 | 0.1183 | 220 |
| 0.8115 | 0.1193 | 0.8164 | 0.1200 | 221 |
| 0.8095 | 0.1194 | 0.8340 | 0.1195 | 222 |
| 0.8090 | 0.1194 | 0.8071 | 0.1208 | 223 |
| 0.8089 | 0.1194 | 0.8101 | 0.1195 | 224 |
| 0.8081 | 0.1194 | 0.8311 | 0.1184 | 225 |
| 0.8081 | 0.1194 | 0.8413 | 0.1198 | 226 |
| 0.8065 | 0.1195 | 0.8379 | 0.1202 | 227 |
| 0.8064 | 0.1194 | 0.8398 | 0.1196 | 228 |
| 0.8045 | 0.1195 | 0.8159 | 0.1199 | 229 |
| 0.8045 | 0.1195 | 0.8350 | 0.1187 | 230 |
| 0.8049 | 0.1195 | 0.8369 | 0.1191 | 231 |
| 0.8037 | 0.1195 | 0.8159 | 0.1201 | 232 |
| 0.8024 | 0.1196 | 0.8213 | 0.1186 | 233 |
| 0.8023 | 0.1196 | 0.8384 | 0.1187 | 234 |
| 0.8011 | 0.1196 | 0.8262 | 0.1201 | 235 |
| 0.8006 | 0.1196 | 0.8252 | 0.1195 | 236 |
| 0.8005 | 0.1196 | 0.8267 | 0.1196 | 237 |
| 0.7989 | 0.1196 | 0.8389 | 0.1199 | 238 |
| 0.7989 | 0.1196 | 0.8394 | 0.1185 | 239 |
| 0.7983 | 0.1197 | 0.8110 | 0.1208 | 240 |
| 0.7978 | 0.1197 | 0.8066 | 0.1208 | 241 |
| 0.7969 | 0.1197 | 0.8257 | 0.1185 | 242 |
| 0.7954 | 0.1197 | 0.8242 | 0.1189 | 243 |
| 0.7962 | 0.1197 | 0.8291 | 0.1197 | 244 |
| 0.7951 | 0.1197 | 0.8320 | 0.1187 | 245 |
| 0.7944 | 0.1198 | 0.8389 | 0.1184 | 246 |
| 0.7927 | 0.1198 | 0.8184 | 0.1187 | 247 |
| 0.7933 | 0.1198 | 0.8242 | 0.1199 | 248 |
| 0.7935 | 0.1198 | 0.8369 | 0.1192 | 249 |
| 0.7916 | 0.1199 | 0.8242 | 0.1202 | 250 |
| 0.7913 | 0.1198 | 0.8223 | 0.1182 | 251 |
| 0.7902 | 0.1199 | 0.8232 | 0.1192 | 252 |
| 0.7915 | 0.1199 | 0.8159 | 0.1206 | 253 |
| 0.7897 | 0.1198 | 0.8281 | 0.1195 | 254 |
| 0.7894 | 0.1199 | 0.8140 | 0.1193 | 255 |
| 0.7884 | 0.1200 | 0.8379 | 0.1204 | 256 |
| 0.7882 | 0.1199 | 0.8271 | 0.1194 | 257 |
| 0.7872 | 0.1199 | 0.8188 | 0.1198 | 258 |
| 0.7866 | 0.1200 | 0.8174 | 0.1198 | 259 |
| 0.7857 | 0.1200 | 0.8379 | 0.1198 | 260 |
| 0.7859 | 0.1200 | 0.8174 | 0.1204 | 261 |
| 0.7859 | 0.1200 | 0.8228 | 0.1199 | 262 |
| 0.7844 | 0.1200 | 0.8237 | 0.1201 | 263 |
| 0.7844 | 0.1200 | 0.8311 | 0.1185 | 264 |
| 0.7834 | 0.1201 | 0.8193 | 0.1193 | 265 |
| 0.7834 | 0.1201 | 0.8276 | 0.1191 | 266 |
| 0.7833 | 0.1200 | 0.8291 | 0.1194 | 267 |
| 0.7821 | 0.1201 | 0.8335 | 0.1195 | 268 |
| 0.7818 | 0.1201 | 0.8350 | 0.1199 | 269 |
| 0.7812 | 0.1201 | 0.8223 | 0.1184 | 270 |
| 0.7809 | 0.1201 | 0.8330 | 0.1202 | 271 |
| 0.7794 | 0.1202 | 0.8193 | 0.1196 | 272 |
| 0.7793 | 0.1201 | 0.8237 | 0.1201 | 273 |
| 0.7787 | 0.1202 | 0.8389 | 0.1206 | 274 |
| 0.7786 | 0.1202 | 0.8286 | 0.1208 | 275 |
| 0.7788 | 0.1202 | 0.8325 | 0.1202 | 276 |
| 0.7777 | 0.1202 | 0.8301 | 0.1194 | 277 |
| 0.7771 | 0.1202 | 0.8164 | 0.1207 | 278 |
| 0.7762 | 0.1202 | 0.8154 | 0.1194 | 279 |
| 0.7757 | 0.1202 | 0.8242 | 0.1196 | 280 |
| 0.7751 | 0.1203 | 0.8140 | 0.1215 | 281 |
| 0.7751 | 0.1203 | 0.8193 | 0.1197 | 282 |
| 0.7746 | 0.1203 | 0.8008 | 0.1186 | 283 |
| 0.7746 | 0.1203 | 0.8105 | 0.1193 | 284 |
| 0.7733 | 0.1203 | 0.8223 | 0.1206 | 285 |
| 0.7733 | 0.1204 | 0.8125 | 0.1199 | 286 |
| 0.7720 | 0.1204 | 0.8228 | 0.1201 | 287 |
| 0.7721 | 0.1204 | 0.8164 | 0.1203 | 288 |
| 0.7719 | 0.1203 | 0.8359 | 0.1205 | 289 |
| 0.7713 | 0.1203 | 0.8145 | 0.1204 | 290 |
| 0.7703 | 0.1204 | 0.8057 | 0.1202 | 291 |
| 0.7698 | 0.1204 | 0.8174 | 0.1204 | 292 |
| 0.7697 | 0.1204 | 0.8091 | 0.1210 | 293 |
| 0.7686 | 0.1204 | 0.8154 | 0.1195 | 294 |
| 0.7690 | 0.1204 | 0.8242 | 0.1204 | 295 |
| 0.7679 | 0.1205 | 0.7979 | 0.1208 | 296 |
| 0.7680 | 0.1205 | 0.8105 | 0.1194 | 297 |
| 0.7673 | 0.1205 | 0.8003 | 0.1215 | 298 |
| 0.7672 | 0.1205 | 0.7925 | 0.1212 | 299 |
| 0.7661 | 0.1205 | 0.8115 | 0.1191 | 300 |
| 0.7654 | 0.1205 | 0.8188 | 0.1206 | 301 |
| 0.7657 | 0.1205 | 0.8140 | 0.1202 | 302 |
| 0.7644 | 0.1206 | 0.8228 | 0.1199 | 303 |
| 0.7651 | 0.1205 | 0.7954 | 0.1213 | 304 |
| 0.7640 | 0.1206 | 0.7861 | 0.1206 | 305 |
| 0.7633 | 0.1206 | 0.8223 | 0.1194 | 306 |
| 0.7632 | 0.1206 | 0.8037 | 0.1201 | 307 |
| 0.7628 | 0.1206 | 0.8120 | 0.1196 | 308 |
| 0.7633 | 0.1206 | 0.8101 | 0.1198 | 309 |
| 0.7612 | 0.1206 | 0.8296 | 0.1203 | 310 |
| 0.7613 | 0.1206 | 0.8105 | 0.1195 | 311 |
| 0.7614 | 0.1206 | 0.8203 | 0.1201 | 312 |
| 0.7606 | 0.1207 | 0.7900 | 0.1201 | 313 |
| 0.7597 | 0.1207 | 0.8057 | 0.1201 | 314 |
| 0.7600 | 0.1207 | 0.8237 | 0.1189 | 315 |
| 0.7584 | 0.1207 | 0.8315 | 0.1198 | 316 |
| 0.7592 | 0.1207 | 0.8228 | 0.1198 | 317 |
| 0.7678 | 0.1205 | 0.8008 | 0.1205 | 318 |
| 0.7598 | 0.1207 | 0.8091 | 0.1216 | 319 |
| 0.7579 | 0.1208 | 0.8174 | 0.1202 | 320 |
| 0.7572 | 0.1207 | 0.8232 | 0.1196 | 321 |
| 0.7565 | 0.1207 | 0.8018 | 0.1192 | 322 |
| 0.7556 | 0.1208 | 0.7949 | 0.1207 | 323 |
| 0.7555 | 0.1208 | 0.8105 | 0.1200 | 324 |
| 0.7555 | 0.1208 | 0.7925 | 0.1208 | 325 |
| 0.7553 | 0.1208 | 0.7847 | 0.1201 | 326 |
| 0.7544 | 0.1208 | 0.8022 | 0.1208 | 327 |
| 0.7542 | 0.1208 | 0.8096 | 0.1203 | 328 |
| 0.7540 | 0.1208 | 0.7949 | 0.1209 | 329 |
| 0.7536 | 0.1209 | 0.8184 | 0.1205 | 330 |
| 0.7536 | 0.1208 | 0.8013 | 0.1209 | 331 |
| 0.7531 | 0.1209 | 0.8149 | 0.1197 | 332 |
| 0.7523 | 0.1209 | 0.8110 | 0.1197 | 333 |
| 0.7521 | 0.1209 | 0.7998 | 0.1208 | 334 |
| 0.7519 | 0.1209 | 0.7798 | 0.1211 | 335 |
| 0.7505 | 0.1209 | 0.8076 | 0.1202 | 336 |
| 0.7504 | 0.1210 | 0.7974 | 0.1217 | 337 |
| 0.7506 | 0.1210 | 0.7910 | 0.1206 | 338 |
| 0.7493 | 0.1209 | 0.7969 | 0.1209 | 339 |
| 0.7498 | 0.1209 | 0.8105 | 0.1205 | 340 |
| 0.7493 | 0.1209 | 0.8145 | 0.1204 | 341 |
| 0.7491 | 0.1210 | 0.8062 | 0.1209 | 342 |
| 0.7485 | 0.1210 | 0.8091 | 0.1199 | 343 |
| 0.7480 | 0.1210 | 0.8101 | 0.1201 | 344 |
| 0.7482 | 0.1209 | 0.7993 | 0.1203 | 345 |
| 0.7468 | 0.1210 | 0.7939 | 0.1213 | 346 |
| 0.7473 | 0.1210 | 0.8140 | 0.1201 | 347 |
| 0.7468 | 0.1210 | 0.8066 | 0.1201 | 348 |
| 0.7460 | 0.1211 | 0.7964 | 0.1208 | 349 |
| 0.7460 | 0.1210 | 0.8184 | 0.1206 | 350 |
| 0.7446 | 0.1211 | 0.8047 | 0.1199 | 351 |
| 0.7453 | 0.1211 | 0.8091 | 0.1197 | 352 |
| 0.7449 | 0.1211 | 0.7969 | 0.1201 | 353 |
| 0.7441 | 0.1211 | 0.7905 | 0.1210 | 354 |
| 0.7437 | 0.1211 | 0.8018 | 0.1207 | 355 |
| 0.7439 | 0.1211 | 0.8013 | 0.1203 | 356 |
| 0.7437 | 0.1211 | 0.8130 | 0.1204 | 357 |
| 0.7426 | 0.1211 | 0.8013 | 0.1205 | 358 |
| 0.7419 | 0.1211 | 0.8003 | 0.1199 | 359 |
| 0.7421 | 0.1212 | 0.8081 | 0.1200 | 360 |
| 0.7417 | 0.1212 | 0.7964 | 0.1199 | 361 |
| 0.7408 | 0.1212 | 0.8027 | 0.1203 | 362 |
| 0.7404 | 0.1212 | 0.8052 | 0.1207 | 363 |
| 0.7402 | 0.1212 | 0.7993 | 0.1204 | 364 |
| 0.7412 | 0.1212 | 0.7896 | 0.1207 | 365 |
| 0.7404 | 0.1212 | 0.8071 | 0.1208 | 366 |
| 0.7398 | 0.1212 | 0.8037 | 0.1196 | 367 |
| 0.7389 | 0.1212 | 0.7949 | 0.1194 | 368 |
| 0.7399 | 0.1212 | 0.8125 | 0.1211 | 369 |
| 0.7389 | 0.1212 | 0.8101 | 0.1201 | 370 |
| 0.7380 | 0.1212 | 0.7983 | 0.1207 | 371 |
| 0.7380 | 0.1213 | 0.7969 | 0.1210 | 372 |
| 0.7373 | 0.1212 | 0.7822 | 0.1204 | 373 |
| 0.7367 | 0.1213 | 0.8164 | 0.1204 | 374 |
| 0.7370 | 0.1213 | 0.7920 | 0.1205 | 375 |
| 0.7366 | 0.1213 | 0.7842 | 0.1205 | 376 |
| 0.7362 | 0.1213 | 0.7905 | 0.1205 | 377 |
| 0.7359 | 0.1213 | 0.8105 | 0.1200 | 378 |
| 0.7360 | 0.1213 | 0.8037 | 0.1203 | 379 |
| 0.7352 | 0.1213 | 0.7974 | 0.1203 | 380 |
| 0.7350 | 0.1213 | 0.8140 | 0.1203 | 381 |
| 0.7341 | 0.1213 | 0.7891 | 0.1217 | 382 |
| 0.7349 | 0.1214 | 0.7891 | 0.1208 | 383 |
| 0.7340 | 0.1214 | 0.7739 | 0.1208 | 384 |
| 0.7339 | 0.1214 | 0.7871 | 0.1210 | 385 |
| 0.7334 | 0.1214 | 0.7856 | 0.1205 | 386 |
| 0.7337 | 0.1214 | 0.7856 | 0.1201 | 387 |
| 0.7330 | 0.1214 | 0.7817 | 0.1203 | 388 |
| 0.7334 | 0.1214 | 0.8193 | 0.1215 | 389 |
| 0.7319 | 0.1214 | 0.7788 | 0.1208 | 390 |
| 0.7319 | 0.1214 | 0.8042 | 0.1203 | 391 |
| 0.7315 | 0.1214 | 0.7935 | 0.1211 | 392 |
| 0.7312 | 0.1214 | 0.7959 | 0.1198 | 393 |
| 0.7310 | 0.1215 | 0.7993 | 0.1207 | 394 |
| 0.7300 | 0.1214 | 0.8057 | 0.1208 | 395 |
| 0.7302 | 0.1215 | 0.8008 | 0.1202 | 396 |
| 0.7306 | 0.1214 | 0.7817 | 0.1212 | 397 |
| 0.7293 | 0.1215 | 0.7827 | 0.1207 | 398 |
| 0.7288 | 0.1215 | 0.8115 | 0.1202 | 399 |
| 0.7296 | 0.1215 | 0.7998 | 0.1206 | 400 |
| 0.7290 | 0.1215 | 0.7983 | 0.1208 | 401 |
| 0.7284 | 0.1215 | 0.7842 | 0.1219 | 402 |
| 0.7280 | 0.1215 | 0.7896 | 0.1221 | 403 |
| 0.7282 | 0.1215 | 0.7935 | 0.1199 | 404 |
| 0.7266 | 0.1215 | 0.7891 | 0.1208 | 405 |
| 0.7276 | 0.1216 | 0.7808 | 0.1209 | 406 |
| 0.7275 | 0.1215 | 0.7842 | 0.1204 | 407 |
| 0.7266 | 0.1216 | 0.7930 | 0.1210 | 408 |
| 0.7262 | 0.1215 | 0.8042 | 0.1204 | 409 |
| 0.7258 | 0.1216 | 0.8071 | 0.1217 | 410 |
| 0.7253 | 0.1216 | 0.7920 | 0.1198 | 411 |
| 0.7258 | 0.1216 | 0.7979 | 0.1211 | 412 |
| 0.7256 | 0.1215 | 0.8066 | 0.1200 | 413 |
| 0.7246 | 0.1216 | 0.7749 | 0.1213 | 414 |
| 0.7246 | 0.1216 | 0.7861 | 0.1214 | 415 |
| 0.7238 | 0.1216 | 0.8101 | 0.1204 | 416 |
| 0.7244 | 0.1216 | 0.7939 | 0.1213 | 417 |
| 0.7243 | 0.1216 | 0.7896 | 0.1219 | 418 |
| 0.7233 | 0.1216 | 0.7891 | 0.1216 | 419 |
| 0.7238 | 0.1217 | 0.7930 | 0.1216 | 420 |
| 0.7231 | 0.1216 | 0.7935 | 0.1210 | 421 |
| 0.7235 | 0.1216 | 0.7949 | 0.1191 | 422 |
| 0.7226 | 0.1216 | 0.7925 | 0.1203 | 423 |
| 0.7222 | 0.1217 | 0.7910 | 0.1204 | 424 |
| 0.7220 | 0.1217 | 0.7720 | 0.1211 | 425 |
| 0.7218 | 0.1216 | 0.7979 | 0.1207 | 426 |
| 0.7205 | 0.1217 | 0.7798 | 0.1205 | 427 |
| 0.7215 | 0.1217 | 0.7954 | 0.1218 | 428 |
| 0.7210 | 0.1217 | 0.7817 | 0.1208 | 429 |
| 0.7195 | 0.1217 | 0.7871 | 0.1215 | 430 |
| 0.7206 | 0.1217 | 0.7778 | 0.1211 | 431 |
| 0.7209 | 0.1217 | 0.7715 | 0.1212 | 432 |
| 0.7195 | 0.1218 | 0.7974 | 0.1214 | 433 |
| 0.7191 | 0.1218 | 0.7954 | 0.1202 | 434 |
| 0.7185 | 0.1218 | 0.7866 | 0.1211 | 435 |
| 0.7185 | 0.1218 | 0.7881 | 0.1220 | 436 |
| 0.7187 | 0.1218 | 0.7910 | 0.1214 | 437 |
| 0.7180 | 0.1218 | 0.7949 | 0.1201 | 438 |
| 0.7183 | 0.1218 | 0.7847 | 0.1210 | 439 |
| 0.7177 | 0.1218 | 0.7744 | 0.1214 | 440 |
| 0.7176 | 0.1218 | 0.7754 | 0.1209 | 441 |
| 0.7176 | 0.1218 | 0.7764 | 0.1213 | 442 |
| 0.7170 | 0.1218 | 0.7812 | 0.1203 | 443 |
| 0.7170 | 0.1218 | 0.7935 | 0.1206 | 444 |
| 0.7171 | 0.1218 | 0.7959 | 0.1204 | 445 |
| 0.7165 | 0.1218 | 0.7979 | 0.1208 | 446 |
| 0.7164 | 0.1218 | 0.7930 | 0.1215 | 447 |
| 0.7164 | 0.1219 | 0.8003 | 0.1210 | 448 |
| 0.7157 | 0.1219 | 0.7764 | 0.1203 | 449 |
| 0.7154 | 0.1219 | 0.7935 | 0.1208 | 450 |
| 0.7150 | 0.1219 | 0.8047 | 0.1212 | 451 |
| 0.7147 | 0.1219 | 0.7847 | 0.1208 | 452 |
| 0.7153 | 0.1218 | 0.7817 | 0.1199 | 453 |
| 0.7146 | 0.1219 | 0.7886 | 0.1210 | 454 |
| 0.7150 | 0.1219 | 0.7920 | 0.1218 | 455 |
| 0.7144 | 0.1219 | 0.7793 | 0.1211 | 456 |
| 0.7143 | 0.1219 | 0.7676 | 0.1209 | 457 |
| 0.7140 | 0.1219 | 0.7920 | 0.1210 | 458 |
| 0.7143 | 0.1219 | 0.7925 | 0.1203 | 459 |
| 0.7137 | 0.1219 | 0.7886 | 0.1227 | 460 |
| 0.7135 | 0.1219 | 0.7964 | 0.1206 | 461 |
| 0.7128 | 0.1219 | 0.7969 | 0.1207 | 462 |
| 0.7125 | 0.1219 | 0.7837 | 0.1208 | 463 |
| 0.7134 | 0.1219 | 0.7788 | 0.1219 | 464 |
| 0.7125 | 0.1219 | 0.7759 | 0.1210 | 465 |
| 0.7127 | 0.1219 | 0.8013 | 0.1207 | 466 |
| 0.7129 | 0.1219 | 0.7812 | 0.1214 | 467 |
| 0.7118 | 0.1219 | 0.8052 | 0.1217 | 468 |
| 0.7114 | 0.1220 | 0.7847 | 0.1208 | 469 |
| 0.7107 | 0.1220 | 0.7646 | 0.1219 | 470 |
| 0.7111 | 0.1220 | 0.7939 | 0.1204 | 471 |
| 0.7115 | 0.1219 | 0.7861 | 0.1214 | 472 |
| 0.7111 | 0.1220 | 0.7744 | 0.1215 | 473 |
| 0.7106 | 0.1220 | 0.7695 | 0.1209 | 474 |
| 0.7109 | 0.1220 | 0.7573 | 0.1208 | 475 |
| 0.7099 | 0.1220 | 0.8003 | 0.1201 | 476 |
| 0.7107 | 0.1220 | 0.7725 | 0.1222 | 477 |
| 0.7101 | 0.1220 | 0.7881 | 0.1206 | 478 |
| 0.7096 | 0.1220 | 0.8027 | 0.1201 | 479 |
| 0.7094 | 0.1221 | 0.7861 | 0.1204 | 480 |
| 0.7094 | 0.1221 | 0.7798 | 0.1214 | 481 |
| 0.7097 | 0.1221 | 0.7837 | 0.1205 | 482 |
| 0.7096 | 0.1220 | 0.7793 | 0.1210 | 483 |
| 0.7082 | 0.1220 | 0.7627 | 0.1217 | 484 |
| 0.7092 | 0.1220 | 0.7954 | 0.1219 | 485 |
| 0.7086 | 0.1221 | 0.7837 | 0.1206 | 486 |
| 0.7087 | 0.1221 | 0.7856 | 0.1213 | 487 |
| 0.7079 | 0.1221 | 0.7876 | 0.1206 | 488 |
| 0.7082 | 0.1221 | 0.7778 | 0.1210 | 489 |
| 0.7083 | 0.1221 | 0.7905 | 0.1205 | 490 |
| 0.7084 | 0.1221 | 0.7842 | 0.1212 | 491 |
| 0.7075 | 0.1221 | 0.7793 | 0.1210 | 492 |
| 0.7074 | 0.1221 | 0.7749 | 0.1215 | 493 |
| 0.7075 | 0.1221 | 0.7764 | 0.1201 | 494 |
| 0.7078 | 0.1220 | 0.7842 | 0.1216 | 495 |
| 0.7079 | 0.1221 | 0.7900 | 0.1211 | 496 |
| 0.7085 | 0.1221 | 0.7744 | 0.1212 | 497 |
| 0.7075 | 0.1221 | 0.7725 | 0.1213 | 498 |
| 0.7074 | 0.1221 | 0.7739 | 0.1213 | 499 |
### Framework versions
- Transformers 4.27.0.dev0
- TensorFlow 2.9.1
- Tokenizers 0.13.2 | 42,080 | [embeddings omitted] |
Shruthi-S/nlp-sexism-detection | 2023-04-07T15:12:56.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Shruthi-S | null | null | Shruthi-S/nlp-sexism-detection | 0 | 2 | transformers | 2023-04-06T14:36:03 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nlp-sexism-detection
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlp-sexism-detection
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Tokenizers 0.13.3
| 1,066 | [embeddings omitted] |
helenai/declare-lab-flan-alpaca-large-ov | 2023-04-06T14:45:57.000Z | [
"transformers",
"openvino",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | helenai | null | null | helenai/declare-lab-flan-alpaca-large-ov | 0 | 2 | transformers | 2023-04-06T14:42:09 | ---
language:
- en
tags:
- openvino
---
# declare-lab/flan-alpaca-large
This is the [declare-lab/flan-alpaca-large](https://huggingface.co/declare-lab/flan-alpaca-large) model converted to [OpenVINO](https://openvino.ai), for accelerated inference.
An example of how to do inference on this model:
```python
from optimum.intel.openvino import OVModelForSeq2SeqLM
from transformers import AutoTokenizer, pipeline
# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/declare-lab-flan-alpaca-large-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForSeq2SeqLM.from_pretrained(model_id)
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
result = pipe("hello world")
print(result)
```
| 797 | [embeddings omitted] |
stablediffusion9527/distilgpt2-finetuned-wikitext2 | 2023-04-06T15:22:35.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | stablediffusion9527 | null | null | stablediffusion9527/distilgpt2-finetuned-wikitext2 | 0 | 2 | transformers | 2023-04-06T14:50:35 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,368 | [embedding vector truncated in source] |
stablediffusion9527/distilroberta-base-finetuned-wikitext2 | 2023-04-06T15:53:13.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | stablediffusion9527 | null | null | stablediffusion9527/distilroberta-base-finetuned-wikitext2 | 0 | 2 | transformers | 2023-04-06T15:23:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0852 | 1.0 | 2406 | 1.9234 |
| 1.992 | 2.0 | 4812 | 1.8828 |
| 1.9603 | 3.0 | 7218 | 1.8223 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,400 | [embedding vector truncated in source] |
alikanakar/whisper-synthesized-turkish-8-hour | 2023-04-07T08:31:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | alikanakar | null | null | alikanakar/whisper-synthesized-turkish-8-hour | 0 | 2 | transformers | 2023-04-06T17:32:51 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-synthesized-turkish-8-hour
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-synthesized-turkish-8-hour
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2300
- Wer: 23.0527
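WER compares the hypothesis transcript to the reference at the word level via edit distance, normalized by reference length. A minimal pure-Python sketch (the example strings are illustrative, not taken from the evaluation set):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length, as a percentage."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, sub)
    return 100.0 * d[len(r)][len(h)] / len(r)

# One substitution out of three reference words -> 33.33% WER.
print(round(wer("merhaba nasılsın bugün", "merhaba iyiyim bugün"), 2))  # 33.33
```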
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
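With `lr_scheduler_type: linear` and 500 warmup steps over 4,000 training steps, the learning rate ramps up linearly from zero and then decays linearly back to zero. A small sketch of that schedule, matching the hyperparameters above:

```python
def linear_warmup_lr(step: int, base_lr: float = 1e-5,
                     warmup_steps: int = 500, total_steps: int = 4000) -> float:
    # Linear warmup from 0 to base_lr, then linear decay back to 0.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(250))   # 5e-06 (halfway through warmup)
print(linear_warmup_lr(2250))  # 5e-06 (halfway through decay)
print(linear_warmup_lr(4000))  # 0.0
```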
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.2682 | 0.52 | 100 | 0.5845 | 99.7901 |
| 0.4591 | 1.04 | 200 | 0.3895 | 21.4541 |
| 0.2482 | 1.56 | 300 | 0.2241 | 12.2145 |
| 0.1554 | 2.08 | 400 | 0.2092 | 11.7825 |
| 0.096 | 2.6 | 500 | 0.2035 | 13.9057 |
| 0.0765 | 3.12 | 600 | 0.2052 | 11.2517 |
| 0.0424 | 3.65 | 700 | 0.2024 | 13.4490 |
| 0.0403 | 4.17 | 800 | 0.2094 | 12.0849 |
| 0.0216 | 4.69 | 900 | 0.2049 | 13.1959 |
| 0.0201 | 5.21 | 1000 | 0.2079 | 12.1034 |
| 0.0101 | 5.73 | 1100 | 0.2073 | 12.5663 |
| 0.0131 | 6.25 | 1200 | 0.2093 | 16.7757 |
| 0.0088 | 6.77 | 1300 | 0.2121 | 16.5165 |
| 0.0073 | 7.29 | 1400 | 0.2142 | 15.3314 |
| 0.0036 | 7.81 | 1500 | 0.2183 | 13.7020 |
| 0.0047 | 8.33 | 1600 | 0.2159 | 16.1647 |
| 0.0038 | 8.85 | 1700 | 0.2166 | 13.7514 |
| 0.0027 | 9.38 | 1800 | 0.2172 | 19.9975 |
| 0.0028 | 9.9 | 1900 | 0.2183 | 18.2385 |
| 0.0015 | 10.42 | 2000 | 0.2196 | 17.4238 |
| 0.0023 | 10.94 | 2100 | 0.2192 | 14.7019 |
| 0.0012 | 11.46 | 2200 | 0.2216 | 15.9919 |
| 0.0017 | 11.98 | 2300 | 0.2215 | 19.6334 |
| 0.001 | 12.5 | 2400 | 0.2219 | 20.5160 |
| 0.0014 | 13.02 | 2500 | 0.2236 | 21.7813 |
| 0.0011 | 13.54 | 2600 | 0.2242 | 23.0897 |
| 0.0009 | 14.06 | 2700 | 0.2276 | 25.0401 |
| 0.001 | 14.58 | 2800 | 0.2269 | 18.7014 |
| 0.001 | 15.1 | 2900 | 0.2265 | 20.8554 |
| 0.0008 | 15.62 | 3000 | 0.2272 | 19.7013 |
| 0.0009 | 16.15 | 3100 | 0.2277 | 26.5831 |
| 0.0007 | 16.67 | 3200 | 0.2290 | 24.3427 |
| 0.0008 | 17.19 | 3300 | 0.2285 | 20.7011 |
| 0.0007 | 17.71 | 3400 | 0.2288 | 21.8738 |
| 0.0007 | 18.23 | 3500 | 0.2290 | 20.7258 |
| 0.0006 | 18.75 | 3600 | 0.2295 | 21.1641 |
| 0.0006 | 19.27 | 3700 | 0.2297 | 23.7625 |
| 0.0007 | 19.79 | 3800 | 0.2301 | 24.4044 |
| 0.0006 | 20.31 | 3900 | 0.2299 | 22.9786 |
| 0.0006 | 20.83 | 4000 | 0.2300 | 23.0527 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 3,816 | [embedding vector truncated in source] |
Addwater/Pyramids | 2023-04-06T18:07:38.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | Addwater | null | null | Addwater/Pyramids | 0 | 2 | ml-agents | 2023-04-06T18:07:32 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: Addwater/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 947 | [embedding vector truncated in source] |
andli28/ppo-SnowballTarget | 2023-04-06T20:34:54.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | andli28 | null | null | andli28/ppo-SnowballTarget | 0 | 2 | ml-agents | 2023-04-06T20:31:44 | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to ~~https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget~~ https://singularite.itch.io/snowballtarget
2. Step 1: Find your model_id: andli28/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,033 | [embedding vector truncated in source] |
ricardotalavera/aak-distilroberta-base-mrpc-glue-ricardo-talavera | 2023-04-06T22:41:00.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ricardotalavera | null | null | ricardotalavera/aak-distilroberta-base-mrpc-glue-ricardo-talavera | 0 | 2 | transformers | 2023-04-06T20:33:32 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: aak-distilroberta-base-mrpc-glue-ricardo-talavera
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aak-distilroberta-base-mrpc-glue-ricardo-talavera
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 15.1968
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0253 | 0.28 | 500 | 12.0296 | 0.0 |
| 0.0 | 0.56 | 1000 | 12.9787 | 0.0 |
| 0.0 | 0.84 | 1500 | 13.5657 | 0.0 |
| 0.0 | 1.11 | 2000 | 13.9849 | 0.0 |
| 0.0 | 1.39 | 2500 | 14.3131 | 0.0 |
| 0.0 | 1.67 | 3000 | 14.5808 | 0.0 |
| 0.0 | 1.95 | 3500 | 14.8001 | 0.0 |
| 0.0 | 2.23 | 4000 | 14.9771 | 0.0 |
| 0.0 | 2.51 | 4500 | 15.1107 | 0.0 |
| 0.0 | 2.79 | 5000 | 15.1968 | 0.0 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,946 | [embedding vector truncated in source] |
DunnBC22/codet5-small-Generate_Docstrings_for_Python-Condensed | 2023-05-12T00:50:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"en",
"dataset:calum/the-stack-smol-python-docstrings",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | DunnBC22 | null | null | DunnBC22/codet5-small-Generate_Docstrings_for_Python-Condensed | 2 | 2 | transformers | 2023-04-06T23:31:10 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: codet5-small-Generate_Docstrings_for_Python-Condensed
results: []
datasets:
- calum/the-stack-smol-python-docstrings
language:
- en
pipeline_tag: text2text-generation
---
# codet5-small-Generate_Docstrings_for_Python-Condensed
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on the [calum/the-stack-smol-python-docstrings](https://huggingface.co/datasets/calum/the-stack-smol-python-docstrings) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1444
- Rouge1: 0.3828
- Rouge2: 0.2214
- Rougel: 0.3583
- Rougelsum: 0.3661
- Gen Len: 12.6656
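ROUGE-1 measures unigram overlap between the generated docstring and the reference. A simplified sketch of the F-measure variant (the example strings are illustrative only, and real ROUGE implementations add stemming and other normalization):

```python
from collections import Counter

def rouge1_f(reference: str, hypothesis: str) -> float:
    """Unigram-overlap F1, a simplified version of the ROUGE-1 F-measure."""
    ref, hyp = Counter(reference.split()), Counter(hypothesis.split())
    overlap = sum((ref & hyp).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("return the sum of two numbers", "returns sum of numbers"))  # 0.6
```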
## Model description
This model is trained to predict the docstring (the output) for a function (the input).
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Generate%20Docstrings/Smol%20Dataset/Code_T5_Project-Small%20Checkpoint.ipynb
For this model, I trimmed some of the longer samples to quicken the pace of training on consumer hardware.
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: calum/the-stack-smol-python-docstrings (from HuggingFace Datasets; https://huggingface.co/datasets/calum/the-stack-smol-python-docstrings)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.9064 | 1.0 | 965 | 2.3096 | 0.3695 | 0.2098 | 0.3464 | 0.3529 | 11.7285 |
| 2.4836 | 2.0 | 1930 | 2.2051 | 0.38 | 0.2176 | 0.3554 | 0.3635 | 12.9401 |
| 2.3669 | 3.0 | 2895 | 2.1548 | 0.3842 | 0.2219 | 0.3595 | 0.3674 | 13.0029 |
| 2.3254 | 4.0 | 3860 | 2.1444 | 0.3828 | 0.2214 | 0.3583 | 0.3661 | 12.6656 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.12.1 | 2,375 | [
[
-0.0305023193359375,
-0.04486083984375,
0.021484375,
-0.0006132125854492188,
-0.00400543212890625,
-0.018157958984375,
-0.0189361572265625,
-0.019927978515625,
0.005199432373046875,
0.0228424072265625,
-0.05255126953125,
-0.04669189453125,
-0.042236328125,
0... |
rithwik-db/embedded-e5-large-500-correct | 2023-04-07T01:58:41.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | sentence-similarity | rithwik-db | null | null | rithwik-db/embedded-e5-large-500-correct | 0 | 2 | sentence-transformers | 2023-04-07T01:58:29 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# rithwik-db/embedded-e5-large-500-correct
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
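For semantic search, candidates are typically ranked by cosine similarity between the embedding vectors this model produces. A dependency-free sketch (the toy 2-d vectors stand in for real 1024-dimensional embeddings):

```python
import math

def cos_sim(a, b):
    # Cosine similarity: dot product of the vectors over the product of their norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

print(round(cos_sim([1.0, 0.0], [1.0, 1.0]), 4))  # 0.7071
```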
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('rithwik-db/embedded-e5-large-500-correct')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rithwik-db/embedded-e5-large-500-correct')
model = AutoModel.from_pretrained('rithwik-db/embedded-e5-large-500-correct')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/embedded-e5-large-500-correct)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 12211 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 3,934 | [embedding vector truncated in source] |
ricardotalavera/aak-distilroberta-base-cpc-ricardo-talavera | 2023-04-07T03:35:20.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ricardotalavera | null | null | ricardotalavera/aak-distilroberta-base-cpc-ricardo-talavera | 0 | 2 | transformers | 2023-04-07T03:31:49 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: aak-distilroberta-base-cpc-ricardo-talavera
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aak-distilroberta-base-cpc-ricardo-talavera
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 13.1594
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0232 | 0.82 | 500 | 11.9723 | 0.0 |
| 0.0 | 1.63 | 1000 | 12.7880 | 0.0 |
| 0.0 | 2.45 | 1500 | 13.1594 | 0.0 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,500 | [embedding vector truncated in source] |
ricardotalavera/aak-bert-base-cased-cpc-ricardo-talavera | 2023-04-07T15:47:50.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ricardotalavera | null | null | ricardotalavera/aak-bert-base-cased-cpc-ricardo-talavera | 0 | 2 | transformers | 2023-04-07T03:38:29 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: aak-bert-base-cased-cpc-ricardo-talavera
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aak-bert-base-cased-cpc-ricardo-talavera
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 13.5686
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.026 | 0.82 | 500 | 11.6700 | 0.0 |
| 0.0001 | 1.63 | 1000 | 12.4978 | 0.0 |
| 0.0 | 2.45 | 1500 | 12.9780 | 0.0 |
| 0.0 | 3.26 | 2000 | 13.2911 | 0.0 |
| 0.0 | 4.08 | 2500 | 13.4842 | 0.0 |
| 0.0 | 4.89 | 3000 | 13.5686 | 0.0 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,674 | [embedding vector truncated in source] |
Neko-Institute-of-Science/LLaMA-13B-4bit-32g | 2023-04-15T19:28:50.000Z | [
"transformers",
"llama",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Neko-Institute-of-Science | null | null | Neko-Institute-of-Science/LLaMA-13B-4bit-32g | 0 | 2 | transformers | 2023-04-07T04:38:38 | ```
13B (act-order true-sequential groupsize)
wikitext2 5.0906524658203125 (stock 16bit)
wikitext2 5.153766632080078 (32g)
wikitext2 5.198880672454834 (128)
wikitext2 5.266944408416748 (128 no-act)
wikitext2 5.271687984466553 (128 no-t no-act)
ptb-new 9.080504417419434 (stock 16bit)
ptb-new 9.149489402770996 (32g)
ptb-new 9.268823623657227 (128)
ptb-new 9.45678424835205 (128 no-act)
ptb-new 9.497363090515137 (128 no-t no-act)
c4-new 6.798543930053711 (stock 16bit)
c4-new 6.866276264190674 (32g)
c4-new 6.910022735595703 (128)
c4-new 6.955390930175781 (128 no-act)
c4-new 6.956299781799316 (128 no-t no-act)
``` | 616 | [
[
-0.0308837890625,
-0.03216552734375,
0.01325225830078125,
0.055389404296875,
-0.0249176025390625,
0.01250457763671875,
0.028472900390625,
-0.0255889892578125,
0.05029296875,
0.025787353515625,
-0.039947509765625,
-0.035430908203125,
-0.0623779296875,
0.00181... |
Svetlana0303/Regression_albert_11_aug_MSEloss | 2023-04-07T05:35:18.000Z | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Svetlana0303 | null | null | Svetlana0303/Regression_albert_11_aug_MSEloss | 0 | 2 | transformers | 2023-04-07T05:12:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Regression_albert_11_aug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Regression_albert_11_aug
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2285
- Mse: 0.2285
- Mae: 0.3670
- R2: 0.4927
- Accuracy: 0.7067
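MSE, MAE, and R² above are standard regression metrics. The sketch below shows how each is computed (the data points are toy values, not the evaluation set):

```python
def regression_metrics(y_true, y_pred):
    n = len(y_true)
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(r * r for r in residuals) / n   # mean squared error
    mae = sum(abs(r) for r in residuals) / n  # mean absolute error
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - (mse * n) / ss_tot               # coefficient of determination
    return mse, mae, r2

print(regression_metrics([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 4.0]))  # (0.25, 0.25, 0.8)
```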
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| No log | 1.0 | 263 | 0.2010 | 0.2010 | 0.3575 | 0.5311 | 0.7367 |
| 0.2435 | 2.0 | 526 | 0.1490 | 0.1490 | 0.2495 | 0.6523 | 0.8733 |
| 0.2435 | 3.0 | 789 | 0.0972 | 0.0972 | 0.2068 | 0.7732 | 0.9067 |
| 0.0906 | 4.0 | 1052 | 0.1115 | 0.1115 | 0.2082 | 0.7399 | 0.9067 |
| 0.0906 | 5.0 | 1315 | 0.0904 | 0.0904 | 0.1684 | 0.7890 | 0.9 |
| 0.0421 | 6.0 | 1578 | 0.0791 | 0.0791 | 0.1542 | 0.8153 | 0.93 |
| 0.0421 | 7.0 | 1841 | 0.0843 | 0.0843 | 0.1415 | 0.8034 | 0.9133 |
| 0.0274 | 8.0 | 2104 | 0.0694 | 0.0694 | 0.1152 | 0.8380 | 0.9333 |
| 0.0274 | 9.0 | 2367 | 0.0742 | 0.0742 | 0.1435 | 0.8269 | 0.93 |
| 0.0213 | 10.0 | 2630 | 0.0659 | 0.0659 | 0.1022 | 0.8463 | 0.9367 |
| 0.0213 | 11.0 | 2893 | 0.0600 | 0.0600 | 0.1054 | 0.8599 | 0.9433 |
| 0.0127 | 12.0 | 3156 | 0.0540 | 0.0540 | 0.0988 | 0.8739 | 0.9433 |
| 0.0127 | 13.0 | 3419 | 0.0479 | 0.0479 | 0.0854 | 0.8883 | 0.9567 |
| 0.0077 | 14.0 | 3682 | 0.0517 | 0.0517 | 0.0848 | 0.8793 | 0.95 |
| 0.0077 | 15.0 | 3945 | 0.0405 | 0.0405 | 0.0851 | 0.9054 | 0.9633 |
| 0.0051 | 16.0 | 4208 | 0.0430 | 0.0430 | 0.0742 | 0.8996 | 0.9533 |
| 0.0051 | 17.0 | 4471 | 0.0368 | 0.0368 | 0.0721 | 0.9142 | 0.96 |
| 0.0036 | 18.0 | 4734 | 0.0352 | 0.0352 | 0.0709 | 0.9180 | 0.96 |
| 0.0036 | 19.0 | 4997 | 0.0345 | 0.0345 | 0.0654 | 0.9195 | 0.9567 |
| 0.0029 | 20.0 | 5260 | 0.0366 | 0.0366 | 0.0671 | 0.9146 | 0.96 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 3,148 | [embedding vector truncated in source] |
zhuqi/dqn-SpaceInvadersNoFrameskip-v4-10M | 2023-04-07T06:08:10.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | zhuqi | null | null | zhuqi/dqn-SpaceInvadersNoFrameskip-v4-10M | 0 | 2 | stable-baselines3 | 2023-04-07T06:06:43 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 797.00 +/- 333.66
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zhuqi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zhuqi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga zhuqi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
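With `exploration_fraction` 0.1 and `exploration_final_eps` 0.01 over 10M timesteps, the ε-greedy exploration rate decays linearly during the first 1M steps and then stays constant. A sketch of that schedule (SB3's default initial ε of 1.0 is assumed, as it is not listed above):

```python
def exploration_eps(step: int, total_steps: int = 10_000_000,
                    fraction: float = 0.1, final_eps: float = 0.01,
                    initial_eps: float = 1.0) -> float:
    # Linear decay over the first `fraction` of training, then constant at final_eps.
    progress = min(step / (fraction * total_steps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(exploration_eps(0))          # 1.0
print(exploration_eps(500_000))    # ≈ 0.505
print(exploration_eps(5_000_000))  # ≈ 0.01
```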
| 2,683 | [embedding vector truncated in source] |
Xiao888/distilbert-base-uncased-finetuned-emotion | 2023-04-07T20:25:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Xiao888 | null | null | Xiao888/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-07T07:03:36 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.94
- name: F1
type: f1
value: 0.9401807321145588
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1509
- Accuracy: 0.94
- F1: 0.9402
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4834 | 1.0 | 1000 | 0.1853 | 0.927 | 0.9270 |
| 0.1454 | 2.0 | 2000 | 0.1509 | 0.94 | 0.9402 |
### Framework versions
- Transformers 4.11.3
- Pytorch 2.0.0+cu118
- Datasets 2.9.0
- Tokenizers 0.10.3
| 1,799 | [
[…] |
AIventurer/distilbert-base-uncased-finetuned-emotion | 2023-04-07T09:33:43.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | AIventurer | null | null | AIventurer/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-07T09:24:27 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.920911250148335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2305
- Accuracy: 0.921
- F1: 0.9209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8591 | 1.0 | 250 | 0.3430 | 0.897 | 0.8930 |
| 0.264 | 2.0 | 500 | 0.2305 | 0.921 | 0.9209 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.11.0
| 1,845 | [
[…] |
Alegzandra/xlm-roberta-base-cased-finetuned-on-REDv2_EN | 2023-04-07T10:26:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Alegzandra | null | null | Alegzandra/xlm-roberta-base-cased-finetuned-on-REDv2_EN | 0 | 2 | transformers | 2023-04-07T09:51:05 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlm-roberta-base-cased-finetuned-on-REDv2_EN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-cased-finetuned-on-REDv2_EN
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3020
- F1: 0.6551
- Roc Auc: 0.7921
- Accuracy: 0.5414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
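Here `gradient_accumulation_steps: 2` means gradients from two micro-batches of 8 are averaged before each optimizer step, which is how the reported total train batch size of 16 arises. With equal-sized micro-batches and mean reduction, the accumulated gradient matches the full-batch one exactly; a toy numeric check:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Per-example "gradients" for one effective batch of 16 (toy numbers).
grads = [float(i) for i in range(16)]

# Full-batch gradient vs. average of two micro-batch (size 8) gradients.
full = mean(grads)
accumulated = mean([mean(grads[:8]), mean(grads[8:])])
```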
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 255 | 0.3231 | 0.4506 | 0.6522 | 0.3112 |
| 0.3531 | 2.0 | 511 | 0.2683 | 0.6117 | 0.7446 | 0.4899 |
| 0.3531 | 3.0 | 766 | 0.2630 | 0.6603 | 0.7842 | 0.5617 |
| 0.2223 | 4.0 | 1022 | 0.2579 | 0.6567 | 0.7812 | 0.5709 |
| 0.2223 | 5.0 | 1277 | 0.2603 | 0.6707 | 0.7930 | 0.5764 |
| 0.1589 | 6.0 | 1533 | 0.2799 | 0.6475 | 0.7826 | 0.5488 |
| 0.1589 | 7.0 | 1788 | 0.2833 | 0.6538 | 0.7883 | 0.5562 |
| 0.1163 | 8.0 | 2044 | 0.2936 | 0.6655 | 0.7951 | 0.5580 |
| 0.1163 | 9.0 | 2299 | 0.2949 | 0.6678 | 0.7978 | 0.5727 |
| 0.0943 | 9.98 | 2550 | 0.3020 | 0.6551 | 0.7921 | 0.5414 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,256 | [
[…] |
OnurSahh/teknofest_nlp_finetuned_tddi | 2023-04-08T09:53:37.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | OnurSahh | null | null | OnurSahh/teknofest_nlp_finetuned_tddi | 0 | 2 | transformers | 2023-04-07T10:43:12 | TEKNOFEST_train.ipynb was used for fine-tuning a Turkish BERT model. The goal is sentiment analysis for Turkish text.
https://github.com/OnurSahh/Teknofest_NLP_Acikhack2023
OUTPUT
Label / Offensive or not / Target
Label_0 = OFFENSIVE and INSULT
Label_1 = OFFENSIVE and RACIST
Label_2 = OFFENSIVE and SEXIST
Label_3 = OFFENSIVE and PROFANITY
Label_4 = NOT OFFENSIVE and OTHER
Label_5 = OFFENSIVE and OTHER
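For downstream use, the label ids above can be mapped back to an (offensiveness, target) pair in plain Python. A minimal sketch; the `LABEL_n` string format assumed here is the default Transformers pipeline output, not something this card guarantees:

```python
# Map the model's class ids to (offensiveness, target) pairs,
# following the label table above.
ID2LABEL = {
    0: ("OFFENSIVE", "INSULT"),
    1: ("OFFENSIVE", "RACIST"),
    2: ("OFFENSIVE", "SEXIST"),
    3: ("OFFENSIVE", "PROFANITY"),
    4: ("NOT OFFENSIVE", "OTHER"),
    5: ("OFFENSIVE", "OTHER"),
}

def decode(label: str) -> tuple:
    """Turn a pipeline label like 'LABEL_2' into (offensiveness, target)."""
    return ID2LABEL[int(label.rsplit("_", 1)[1])]
```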
| 435 | [
[…] |
KeruiZhao/distilbert-base-uncased-finetuned-cola | 2023-04-07T12:30:19.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | KeruiZhao | null | null | KeruiZhao/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-07T11:40:25 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5363967157085073
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8120
- Matthews Correlation: 0.5364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5227 | 1.0 | 535 | 0.5222 | 0.4210 |
| 0.3466 | 2.0 | 1070 | 0.5042 | 0.4832 |
| 0.2335 | 3.0 | 1605 | 0.5640 | 0.5173 |
| 0.1812 | 4.0 | 2140 | 0.7634 | 0.5200 |
| 0.1334 | 5.0 | 2675 | 0.8120 | 0.5364 |
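Matthews correlation, the metric reported above, can be computed directly from confusion-matrix counts with only the standard library. A minimal sketch for illustration, not the implementation the Trainer uses:

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Unlike plain accuracy, MCC stays informative on the class-imbalanced CoLA validation split, ranging from -1 (total disagreement) through 0 (chance) to +1 (perfect prediction).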
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,042 | [
[…] |
justinsiow/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-07T12:35:43.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | justinsiow | null | null | justinsiow/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-07T12:34:57 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 608.00 +/- 131.50
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga justinsiow -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga justinsiow -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga justinsiow
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
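The exploration settings above define DQN's linear epsilon schedule: epsilon anneals from 1.0 down to `exploration_final_eps` over the first `exploration_fraction` of the run, then stays flat. A plain-Python sketch of that schedule (a simplified stand-in, not SB3's own code; the initial epsilon of 1.0 is SB3's default):

```python
def epsilon(step: int, total_steps: int = 1_000_000,
            fraction: float = 0.1, final_eps: float = 0.01,
            initial_eps: float = 1.0) -> float:
    """Linearly anneal epsilon over the first `fraction` of training."""
    decay_steps = fraction * total_steps
    if step >= decay_steps:
        return final_eps
    return initial_eps + (final_eps - initial_eps) * step / decay_steps
```

With these hyperparameters, the agent acts almost entirely at random for the first steps and is down to 1% random actions after 100k of the 1M timesteps.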
| 2,697 | [
[…] |
InfiniteDarkness/bert-fine-tuned-cola | 2023-04-07T17:36:29.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | InfiniteDarkness | null | null | InfiniteDarkness/bert-fine-tuned-cola | 0 | 2 | transformers | 2023-04-07T15:25:23 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5678267214677118
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8424
- Matthews Correlation: 0.5678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4779 | 1.0 | 1069 | 0.6219 | 0.4808 |
| 0.3375 | 2.0 | 2138 | 0.6739 | 0.5705 |
| 0.1886 | 3.0 | 3207 | 0.8424 | 0.5678 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,840 | [
[…] |
fionaxzf/gpt_model | 2023-04-07T16:40:01.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | fionaxzf | null | null | fionaxzf/gpt_model | 0 | 2 | transformers | 2023-04-07T16:08:19 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gpt_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4923
- Accuracy: 0.77
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 32 | 0.5366 | 0.766 |
| No log | 2.0 | 64 | 0.4923 | 0.77 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
| 1,370 | [
[…] |
kanak8278/xlnet-large-cased-ner-food-combined-v2 | 2023-04-11T12:38:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | kanak8278 | null | null | kanak8278/xlnet-large-cased-ner-food-combined-v2 | 0 | 2 | transformers | 2023-04-07T19:00:01 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlnet-large-cased-ner-food-combined-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-large-cased-ner-food-combined-v2
This model is a fine-tuned version of [xlnet-large-cased](https://huggingface.co/xlnet-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0681
- Precision: 0.8554
- Recall: 0.8743
- F1: 0.8647
- Accuracy: 0.9769
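The F1 reported above is the harmonic mean of precision and recall, so the three numbers can be cross-checked by hand:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The reported eval metrics are mutually consistent:
# f1_score(0.8554, 0.8743) ≈ 0.8647
```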
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2606 | 1.12 | 500 | 0.0822 | 0.7976 | 0.8664 | 0.8306 | 0.9712 |
| 0.0837 | 2.25 | 1000 | 0.0955 | 0.7657 | 0.8764 | 0.8173 | 0.9683 |
| 0.0706 | 3.37 | 1500 | 0.0732 | 0.8322 | 0.8714 | 0.8513 | 0.9750 |
| 0.0631 | 4.49 | 2000 | 0.0681 | 0.8554 | 0.8743 | 0.8647 | 0.9769 |
| 0.0549 | 5.62 | 2500 | 0.0713 | 0.8356 | 0.8868 | 0.8604 | 0.9754 |
| 0.0521 | 6.74 | 3000 | 0.0700 | 0.8425 | 0.8863 | 0.8639 | 0.9759 |
| 0.0493 | 7.87 | 3500 | 0.0721 | 0.8444 | 0.8859 | 0.8647 | 0.9763 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,080 | [
[…] |
Synho/sagemaker-distilbert-emotion | 2023-04-07T19:32:55.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Synho | null | null | Synho/sagemaker-distilbert-emotion | 0 | 2 | transformers | 2023-04-07T19:30:48 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: test
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2548
- Accuracy: 0.917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
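With `lr_scheduler_warmup_steps: 500` and a linear scheduler, the learning rate ramps from 0 to the peak over the warmup phase and then decays linearly to 0. A plain-Python sketch (a simplified stand-in for the Transformers scheduler; `total_steps=2000` is an illustrative assumption, since this particular run stopped at step 500, the end of warmup):

```python
def linear_warmup_lr(step: int, peak_lr: float = 3e-5,
                     warmup_steps: int = 500, total_steps: int = 2000) -> float:
    """Linear warmup to peak_lr, then linear decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```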
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9253 | 1.0 | 500 | 0.2548 | 0.917 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
| 1,708 | [
[…] |
Viswes/ppo-SnowballTargetTESTCOLAB1 | 2023-04-07T19:33:01.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | Viswes | null | null | Viswes/ppo-SnowballTargetTESTCOLAB1 | 0 | 2 | ml-agents | 2023-04-07T19:32:56 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: Viswes/ppo-SnowballTargetTESTCOLAB1
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 965 | [
[…] |
platzi/platzi-distilroberta-base-mrpc-glue-nelson-silvera | 2023-04-08T16:22:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-nelson-silvera | 0 | 2 | transformers | 2023-04-07T23:59:07 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text:
- >-
Yucaipa owned Dominick 's before selling the chain to Safeway in 1998
for $ 2.5 billion.
- >-
Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to
Safeway for $ 1.8 billion in 1998.
example_title: Not Equivalent
- text:
- >-
Revenue in the first quarter of the year dropped 15 percent from the
same period a year earlier.
- >-
With the scandal hanging over Stewart's company revenue the first
quarter of the year dropped 15 percent from the same period a year
earlier.
example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-nelson-silvera
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8088235294117647
- name: F1
type: f1
value: 0.8733766233766234
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-nelson-silvera
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5589
- Accuracy: 0.8088
- F1: 0.8734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5119 | 1.09 | 500 | 0.5589 | 0.8088 | 0.8734 |
| 0.3448 | 2.18 | 1000 | 0.6190 | 0.8407 | 0.8794 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,520 | [
[…] |
Telstema/distilbert-base-uncased-finetuned-sst2 | 2023-04-15T20:32:13.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Telstema | null | null | Telstema/distilbert-base-uncased-finetuned-sst2 | 0 | 2 | transformers | 2023-04-08T02:06:02 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [Telstema/distilbert-base-uncased-finetuned-sst2](https://huggingface.co/Telstema/distilbert-base-uncased-finetuned-sst2) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.7080
- eval_accuracy: 0.7218
- eval_runtime: 13.1083
- eval_samples_per_second: 10.146
- eval_steps_per_second: 0.687
- epoch: 2.0
- step: 100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.909275911638729e-06
- train_batch_size: 8
- eval_batch_size: 16
- seed: 23
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,337 | [
[…] |
hanifnoerr/Kemenkeu-Sentiment-Classifier | 2023-04-08T06:29:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"id",
"doi:10.57967/hf/0520",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | hanifnoerr | null | null | hanifnoerr/Kemenkeu-Sentiment-Classifier | 0 | 2 | transformers | 2023-04-08T02:58:04 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: Kemenkeu-Sentiment-Classifier
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.66
- name: F1
type: f1
value: 0.6368
language:
- id
pipeline_tag: text-classification
widget:
- text: sudah beli makan buat sahur?
example_title: "contoh tidak relevan"
- text: Mengawal APBN, Indonesia Maju
example_title: "contoh kalimat"
---
# Kemenkeu-Sentiment-Classifier
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the MoF-DAC Mini Challenge#1 dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.66
- F1: 0.6368
Leaderboard score:
- Public score: 0.63733
- Private score: 0.65733
## Model description & limitations
- This model classifies text into one of four classes: [netral, tdk-relevan, negatif, positif]
- It is intended only for specific cases related to the Ministry of Finance of Indonesia
## How to use
You can use this model directly with a pipeline
```python
from transformers import pipeline

pretrained_name = "hanifnoerr/Kemenkeu-Sentiment-Classifier"
class_model = pipeline("text-classification", model=pretrained_name, tokenizer=pretrained_name)

test_data = "Mengawal APBN, Indonesia Maju"
class_model(test_data)
```
## Training and evaluation data
The following hyperparameters were used during training:
- learning_rate: 1e-05
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0131 | 1.0 | 500 | 0.8590 | 0.644 | 0.5964 |
| 0.7133 | 2.0 | 1000 | 0.8639 | 0.63 | 0.5924 |
| 0.5261 | 3.0 | 1500 | 0.9002 | 0.66 | 0.6368 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3 | 2,130 | [
[…] |
tielur/jeep-or-toyota | 2023-04-08T05:19:08.000Z | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | tielur | null | null | tielur/jeep-or-toyota | 0 | 2 | transformers | 2023-04-08T05:18:58 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: jeep-or-toyota
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666746139526
---
# jeep-or-toyota
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### jeep

#### toyota
 | 725 | [
Umesh/pulf-classifier | lastModified: 2023-04-08T18:33:28 | tags: transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us | pipeline: text-classification | author: Umesh | likes: 0 | downloads: 2 | library: transformers | created: 2023-04-08T06:21:26
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
model-index:
- name: pulf-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pulf-classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0233
- Accuracy: 0.9943
- F1-score: 0.9887
- Recall: 0.9910
- Precision: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
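With `lr_scheduler_type: linear` and no warmup specified, the learning rate decays linearly from 2e-5 to 0 over the run's 26,316 total steps. A hedged sketch of that schedule (mirroring, not reusing, the `transformers` scheduler):

```python
def linear_lr(step, base_lr=2e-5, total_steps=26316, warmup_steps=0):
    """Linear warmup (optional) then linear decay to zero, as in the run above."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

print(linear_lr(0))       # base rate at the start
print(linear_lr(13158))   # half the base rate midway
print(linear_lr(26316))   # zero at the final step
```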
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-score | Recall | Precision |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:------:|:---------:|
| 0.0243 | 1.0 | 8772 | 0.0228 | 0.9930 | 0.9861 | 0.9877 | 0.9846 |
| 0.0183 | 2.0 | 17544 | 0.0243 | 0.9937 | 0.9875 | 0.9927 | 0.9825 |
| 0.0124 | 3.0 | 26316 | 0.0233 | 0.9943 | 0.9887 | 0.9910 | 0.9863 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
soshi398/distilbert-base-uncased-finetuned-emotion | lastModified: 2023-04-08T08:50:45 | tags: transformers, pytorch, distilbert, text-classification, generated_from_trainer, dataset:emotion, license:apache-2.0, model-index, endpoints_compatible, region:us | pipeline: text-classification | author: soshi398 | likes: 0 | downloads: 2 | library: transformers | created: 2023-04-08T06:37:22
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9335
- name: F1
type: f1
value: 0.933606028609809
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1623
- Accuracy: 0.9335
- F1: 0.9336
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1791 | 1.0 | 250 | 0.1764 | 0.9335 | 0.9330 |
| 0.1135 | 2.0 | 500 | 0.1623 | 0.9335 | 0.9336 |
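The step counts in the table are consistent with the `emotion` train split: 250 optimizer steps per epoch at batch size 64 implies 16,000 training examples, which is indeed the split's size. A small sanity-check sketch (the helper name is ours, not from the Trainer):

```python
import math

def steps_per_epoch(num_examples, batch_size, drop_last=False):
    """Optimizer steps per epoch for a given dataset size and batch size."""
    if drop_last:
        return num_examples // batch_size
    return math.ceil(num_examples / batch_size)

# 16,000 is the size of the `emotion` train split
print(steps_per_epoch(16_000, 64))  # matches the 250 steps per epoch above
```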
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
Svetlana0303/Regression_albert_12_NO_aug | lastModified: 2023-04-08T16:35:05 | tags: transformers, pytorch, tensorboard, albert, text-classification, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us | pipeline: text-classification | author: Svetlana0303 | likes: 0 | downloads: 2 | library: transformers | created: 2023-04-08T09:05:18
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Regression_albert_12_NO_aug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Regression_albert_12_NO_aug
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6997
- Mse: 0.6997
- Mae: 0.7013
- R2: -0.2883
- Accuracy: 0.4211
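The Mse, Mae, and R2 columns reported here can be reproduced from raw predictions. A minimal sketch with toy numbers (not the model's actual outputs):

```python
def regression_metrics(y_true, y_pred):
    """MSE, MAE and R² as reported in the tables on this card."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    r2 = 1 - ss_res / ss_tot  # negative (as above) when worse than predicting the mean
    return mse, mae, r2

print(regression_metrics([0.0, 1.0, 2.0], [0.5, 1.0, 1.5]))
```

Note that R² can go below zero, as in every row of this card's results: the fine-tuned regressor predicts worse than the constant mean of the targets.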
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:--------:|
| No log | 1.0 | 33 | 0.3797 | 0.3797 | 0.5648 | -0.1345 | 0.3514 |
| No log | 2.0 | 66 | 0.4018 | 0.4018 | 0.5029 | -0.2005 | 0.4865 |
| No log | 3.0 | 99 | 0.4384 | 0.4384 | 0.5738 | -0.3100 | 0.4054 |
| No log | 4.0 | 132 | 0.6817 | 0.6817 | 0.6523 | -1.0370 | 0.5405 |
| No log | 5.0 | 165 | 0.4155 | 0.4155 | 0.4750 | -0.2415 | 0.5676 |
| No log | 6.0 | 198 | 0.5695 | 0.5695 | 0.5599 | -0.7017 | 0.5405 |
| No log | 7.0 | 231 | 0.5646 | 0.5646 | 0.5588 | -0.6869 | 0.5405 |
| No log | 8.0 | 264 | 0.5240 | 0.5240 | 0.5330 | -0.5656 | 0.5676 |
| No log | 9.0 | 297 | 0.4613 | 0.4613 | 0.4798 | -0.3783 | 0.5676 |
| No log | 10.0 | 330 | 0.6285 | 0.6285 | 0.6172 | -0.8778 | 0.5135 |
| No log | 11.0 | 363 | 0.6012 | 0.6012 | 0.5600 | -0.7964 | 0.5676 |
| No log | 12.0 | 396 | 0.4417 | 0.4417 | 0.4767 | -0.3198 | 0.5405 |
| No log | 13.0 | 429 | 0.5486 | 0.5486 | 0.5349 | -0.6392 | 0.5676 |
| No log | 14.0 | 462 | 0.5328 | 0.5328 | 0.5174 | -0.5919 | 0.5676 |
| No log | 15.0 | 495 | 0.5442 | 0.5442 | 0.5165 | -0.6259 | 0.5405 |
| 0.2088 | 16.0 | 528 | 0.4587 | 0.4587 | 0.4619 | -0.3705 | 0.5405 |
| 0.2088 | 17.0 | 561 | 0.5056 | 0.5056 | 0.4970 | -0.5107 | 0.5405 |
| 0.2088 | 18.0 | 594 | 0.4787 | 0.4787 | 0.4744 | -0.4304 | 0.5405 |
| 0.2088 | 19.0 | 627 | 0.4349 | 0.4349 | 0.4531 | -0.2995 | 0.5676 |
| 0.2088 | 20.0 | 660 | 0.4605 | 0.4605 | 0.4642 | -0.3759 | 0.5676 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
arumugamkasi/distilbert-base-uncased-finetuned-emotion | lastModified: 2023-04-16T09:05:13 | tags: transformers, pytorch, tensorboard, distilbert, text-classification, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us | pipeline: text-classification | author: arumugamkasi | likes: 0 | downloads: 2 | library: transformers | created: 2023-04-08T09:22:28
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2216
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8277 | 1.0 | 250 | 0.3140 | 0.9075 | 0.9055 |
| 0.2487 | 2.0 | 500 | 0.2216 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
TheYuriLover/llama-13b-pretrained-sft-do2-4bit-128g-TRITON | lastModified: 2023-04-08T19:03:11 | tags: transformers, pytorch, llama, text-generation, license:other, endpoints_compatible, text-generation-inference, region:us | pipeline: text-generation | author: TheYuriLover | likes: 2 | downloads: 2 | library: transformers | created: 2023-04-08T09:49:57
---
license: other
---
This is the GPTQ 4-bit quantization of this model:
https://huggingface.co/dvruette/llama-13b-pretrained-sft-do2
This quantization was made by using this repository:
https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton
I used the Triton branch with all the available GPTQ options enabled (true_sequential + act_order + groupsize 128):

```
CUDA_VISIBLE_DEVICES=0 python llama.py ./llama-13b-pretrained-sft-do2-4bit-128g-TRITON c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors llama-13b-pretrained-sft-do2-4bit-128g-TRITON.safetensors
```

To use the Triton model in oobabooga's web UI, refer to this issue for fixes to the errors you may encounter:
https://github.com/oobabooga/text-generation-webui/issues/734
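To make the flags concrete: `--wbits 4 --groupsize 128` means weights are stored as 4-bit integers with a separate scale per group of 128 values. The sketch below illustrates only that storage format with naive round-to-nearest; GPTQ itself chooses the rounding far more cleverly using second-order information, so this is not the GPTQ algorithm:

```python
def quantize_group(weights, bits=4):
    """Round-to-nearest n-bit quantization with one scale per group.

    Shows the storage format implied by --wbits 4 --groupsize 128;
    in the real model each group holds 128 weights.
    """
    qmax = 2 ** bits - 1                          # 15 levels for 4-bit
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax or 1.0               # guard against a constant group
    q = [round((w - lo) / scale) for w in weights]  # small integers in [0, qmax]
    dq = [lo + qi * scale for qi in q]              # dequantized approximation
    return q, dq, scale

q, dq, scale = quantize_group([-0.8, -0.1, 0.0, 0.3, 0.7])
print(q, scale)
```

Smaller groups (128 here, versus one scale per whole row) cost a little extra storage for the scales but track local weight ranges better, which is why groupsize 128 is a common accuracy/size trade-off.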
hoang14/distilbert-base-uncased-finetuned-emotion | lastModified: 2023-04-08T13:16:34 | tags: transformers, pytorch, distilbert, text-classification, generated_from_trainer, dataset:emotion, license:apache-2.0, model-index, endpoints_compatible, region:us | pipeline: text-classification | author: hoang14 | likes: 0 | downloads: 2 | library: transformers | created: 2023-04-08T11:15:21
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9345
- name: F1
type: f1
value: 0.9346363382551217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1638
- Accuracy: 0.9345
- F1: 0.9346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.2778 | 0.9105 | 0.9083 |
| 0.5097 | 2.0 | 500 | 0.1806 | 0.9215 | 0.9217 |
| 0.5097 | 3.0 | 750 | 0.1638 | 0.9345 | 0.9346 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
intanm/clickbait-classifier-20230408-001 | lastModified: 2023-04-08T11:53:23 | tags: transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, dataset:id_clickbait, license:mit, model-index, endpoints_compatible, region:us | pipeline: text-classification | author: intanm | likes: 1 | downloads: 2 | library: transformers | created: 2023-04-08T11:32:31
---
license: mit
tags:
- generated_from_trainer
datasets:
- id_clickbait
metrics:
- accuracy
model-index:
- name: clickbait-classifier-20230408-001
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: id_clickbait
type: id_clickbait
config: annotated
split: train
args: annotated
metrics:
- name: Accuracy
type: accuracy
value: 0.7991666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clickbait-classifier-20230408-001
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the id_clickbait dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7645
- Accuracy: 0.7992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4419 | 1.0 | 675 | 0.3934 | 0.8275 |
| 0.3611 | 2.0 | 1350 | 0.4369 | 0.8367 |
| 0.2017 | 3.0 | 2025 | 0.5936 | 0.8258 |
| 0.1369 | 4.0 | 2700 | 0.9894 | 0.8058 |
| 0.0941 | 5.0 | 3375 | 1.1425 | 0.82 |
| 0.0428 | 6.0 | 4050 | 1.3502 | 0.7958 |
| 0.0236 | 7.0 | 4725 | 1.4706 | 0.8058 |
| 0.0197 | 8.0 | 5400 | 1.6508 | 0.7975 |
| 0.0041 | 9.0 | 6075 | 1.7922 | 0.7967 |
| 0.0037 | 10.0 | 6750 | 1.7645 | 0.7992 |
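The table shows classic overfitting: training loss keeps falling while validation loss climbs after epoch 1. If per-epoch checkpoints were kept, the one to deploy is chosen from the history rather than the last epoch, e.g. (rows copied from the table above):

```python
# (epoch, validation_loss, accuracy) rows copied from the table above
history = [
    (1, 0.4419 and 0.3934, 0.8275), (2, 0.4369, 0.8367), (3, 0.5936, 0.8258),
    (4, 0.9894, 0.8058), (5, 1.1425, 0.82), (6, 1.3502, 0.7958),
    (7, 1.4706, 0.8058), (8, 1.6508, 0.7975), (9, 1.7922, 0.7967),
    (10, 1.7645, 0.7992),
]

best_by_loss = min(history, key=lambda row: row[1])
best_by_acc = max(history, key=lambda row: row[2])
print(best_by_loss)  # epoch 1 has the lowest validation loss
print(best_by_acc)   # epoch 2 has the highest accuracy
```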
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
ysige/distilbert-base-uncased-finetuned-emotion | lastModified: 2023-04-09T12:53:07 | tags: transformers, pytorch, tensorboard, distilbert, text-classification, generated_from_trainer, dataset:emotion, license:apache-2.0, model-index, endpoints_compatible, region:us | pipeline: text-classification | author: ysige | likes: 0 | downloads: 2 | library: transformers | created: 2023-04-08T11:35:21
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9219748629797122
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2181
- Accuracy: 0.922
- F1: 0.9220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7994 | 1.0 | 250 | 0.3069 | 0.906 | 0.9035 |
| 0.2443 | 2.0 | 500 | 0.2181 | 0.922 | 0.9220 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cpu
- Datasets 2.11.0
- Tokenizers 0.13.3
DonMakar/bert-base-banking77-pt2 | lastModified: 2023-05-16T19:48:32 | tags: transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us | pipeline: text-classification | author: DonMakar | likes: 0 | downloads: 2 | library: transformers | created: 2023-04-08T12:25:04
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5379
- F1: 0.5426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 7 | 1.8289 | 0.3806 |
| No log | 2.0 | 14 | 1.6058 | 0.5768 |
| No log | 3.0 | 21 | 1.5379 | 0.5426 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
MAsterIt/Classify_2.0-Stable | lastModified: 2023-04-08T17:34:56 | tags: transformers, pytorch, bert, text-classification, autotrain, en, dataset:Mulik/autotrain-data-classify-2.0, license:mit, co2_eq_emissions, endpoints_compatible, region:us | pipeline: text-classification | author: MAsterIt | likes: 0 | downloads: 2 | library: transformers | created: 2023-04-08T16:36:15
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: Tv for Television
datasets:
- Mulik/autotrain-data-classify-2.0
co2_eq_emissions:
emissions: 0.3718001513913416
license: mit
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 47871116935
- CO2 Emissions (in grams): 0.3718
## Validation Metrics
- Loss: 1.388
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000
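Macro, micro, and weighted F1 only coincide (as they all do above, at 1.000) in degenerate cases such as perfect predictions. A hedged sketch of how the three averages differ on imperfect toy labels (the label names are illustrative only):

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Per-class F1 averaged three ways: macro, micro, and support-weighted."""
    labels = sorted(set(y_true) | set(y_pred))
    support = Counter(y_true)
    per_class, tp_all = {}, 0
    for c in labels:
        tp = sum(t == p == c for t, p in zip(y_true, y_pred))
        fp = sum(p == c and t != c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        per_class[c] = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0
        tp_all += tp
    macro = sum(per_class.values()) / len(labels)      # every class counts equally
    micro = tp_all / len(y_true)                       # equals accuracy for single-label tasks
    weighted = sum(per_class[c] * support[c] for c in labels) / len(y_true)
    return macro, micro, weighted

print(f1_scores(["a", "a", "a", "b"], ["a", "a", "b", "b"]))
```

On imbalanced data the macro average is pulled down by rare classes while the weighted average tracks the common ones, which is why cards report all three.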
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Mulik/Classify-2.0_Stable
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Mulik/autotrain-classify-2.0-47871116935", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Mulik/Classify-2.0_Stable", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```