modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
douglch/dqn-SpaceInvadersNoFrameskip-v4 | 2023-05-24T07:43:39.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | douglch | null | null | douglch/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-05-24T07:42:55 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 662.00 +/- 198.67
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga douglch -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga douglch -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga douglch
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
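The `OrderedDict` above mixes constructor arguments for SB3's `DQN` with entries that (to my understanding of the RL Zoo) are consumed by the training script itself rather than the model class — `env_wrapper`, `frame_stack`, `n_timesteps`, and `normalize` configure the environment and training schedule. A minimal, assumption-labeled sketch of splitting the two groups (the `ZOO_ONLY` key set is my own guess at the zoo-handled entries, not something stated in this card):

```python
from collections import OrderedDict

hyperparams = OrderedDict([('batch_size', 32),
                           ('buffer_size', 100000),
                           ('env_wrapper',
                            ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
                           ('exploration_final_eps', 0.01),
                           ('exploration_fraction', 0.1),
                           ('frame_stack', 4),
                           ('gradient_steps', 1),
                           ('learning_rate', 0.0001),
                           ('learning_starts', 100000),
                           ('n_timesteps', 10000000.0),
                           ('optimize_memory_usage', False),
                           ('policy', 'CnnPolicy'),
                           ('target_update_interval', 1000),
                           ('train_freq', 4),
                           ('normalize', False)])

# Assumed set of keys handled by the RL Zoo training script
# (environment wrappers / training length), not by DQN.__init__.
ZOO_ONLY = {'env_wrapper', 'frame_stack', 'n_timesteps', 'normalize'}

# Everything else would be passed to the DQN constructor,
# with 'policy' as the first positional argument.
dqn_kwargs = {k: v for k, v in hyperparams.items() if k not in ZOO_ONLY}
policy = dqn_kwargs.pop('policy')

print(policy)                      # CnnPolicy
print(dqn_kwargs['buffer_size'])  # 100000
```

With a Gym Atari environment in hand, the resulting `policy` and `dqn_kwargs` could then be fed to `stable_baselines3.DQN(policy, env, **dqn_kwargs)`; in practice the `rl_zoo3.enjoy` command shown above handles all of this for you.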
| 2,689 | [
[
-0.04107666015625,
-0.036865234375,
0.022674560546875,
0.0252227783203125,
-0.0095062255859375,
-0.0184326171875,
0.01251220703125,
-0.01383209228515625,
0.0136260986328125,
0.02490234375,
-0.07086181640625,
-0.03558349609375,
-0.0276641845703125,
-0.0048408... |
tatsu-lab/alpaca-farm-reward-model-human-wdiff | 2023-05-31T04:13:29.000Z | [
"transformers",
"pytorch",
"reward_model",
"endpoints_compatible",
"region:us"
] | null | tatsu-lab | null | null | tatsu-lab/alpaca-farm-reward-model-human-wdiff | 1 | 2 | transformers | 2023-05-24T08:07:04 | Please see https://github.com/tatsu-lab/alpaca_farm#downloading-pre-tuned-alpacafarm-models for details on this model. | 118 | [
[
-0.051513671875,
-0.05340576171875,
0.029205322265625,
0.0187225341796875,
-0.0279998779296875,
-0.00044798851013183594,
0.01513671875,
-0.054473876953125,
0.0426025390625,
0.056427001953125,
-0.07025146484375,
-0.041229248046875,
-0.019989013671875,
0.00078... |
Sandrro/greenery_finder_model_v2 | 2023-05-24T11:01:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Sandrro | null | null | Sandrro/greenery_finder_model_v2 | 0 | 2 | transformers | 2023-05-24T10:11:21 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: greenery_finder_model_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# greenery_finder_model_v2
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1768
- F1: 0.9700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1936 | 1.0 | 896 | 0.2916 | 0.9500 |
| 0.3054 | 2.0 | 1792 | 0.1344 | 0.9700 |
| 0.1174 | 3.0 | 2688 | 0.1948 | 0.9700 |
| 0.0417 | 4.0 | 3584 | 0.1929 | 0.9700 |
| 0.1048 | 5.0 | 4480 | 0.1768 | 0.9700 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.1.0.dev20230523+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,620 | [
[
-0.02105712890625,
-0.04364013671875,
0.0159149169921875,
-0.0109100341796875,
-0.023651123046875,
-0.0232696533203125,
-0.021331787109375,
-0.0198516845703125,
0.0135650634765625,
0.021148681640625,
-0.056243896484375,
-0.039154052734375,
-0.0455322265625,
... |
kzhu/demo | 2023-05-24T14:18:25.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | kzhu | null | null | kzhu/demo | 0 | 2 | transformers | 2023-05-24T10:23:54 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: kzhu/demo
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kzhu/demo
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2912
- Validation Loss: 0.4064
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5067 | 0.4163 | 0 |
| 0.2912 | 0.4064 | 1 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,311 | [
[
-0.04046630859375,
-0.049530029296875,
0.0247039794921875,
0.004177093505859375,
-0.044647216796875,
-0.036346435546875,
-0.0164947509765625,
-0.0163726806640625,
0.00836181640625,
0.0191802978515625,
-0.05572509765625,
-0.04913330078125,
-0.047210693359375,
... |
tatsu-lab/alpaca-farm-reward-model-sim-wdiff | 2023-05-31T04:11:49.000Z | [
"transformers",
"pytorch",
"reward_model",
"endpoints_compatible",
"region:us"
] | null | tatsu-lab | null | null | tatsu-lab/alpaca-farm-reward-model-sim-wdiff | 0 | 2 | transformers | 2023-05-24T10:25:13 | Please see https://github.com/tatsu-lab/alpaca_farm#downloading-pre-tuned-alpacafarm-models for details on this model. | 118 | [
[
-0.051513671875,
-0.05340576171875,
0.029205322265625,
0.0187225341796875,
-0.0279998779296875,
-0.00044798851013183594,
0.01513671875,
-0.054473876953125,
0.0426025390625,
0.056427001953125,
-0.07025146484375,
-0.041229248046875,
-0.019989013671875,
0.00078... |
Middelz2/roberta-large-aphasia-picture-description-10e | 2023-05-24T12:58:10.000Z | [
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | Middelz2 | null | null | Middelz2/roberta-large-aphasia-picture-description-10e | 0 | 2 | transformers | 2023-05-24T10:45:20 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Middelz2/roberta-large-aphasia-picture-description-10e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Middelz2/roberta-large-aphasia-picture-description-10e
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0392
- Validation Loss: 0.9399
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5600 | 1.2996 | 0 |
| 1.3034 | 1.2214 | 1 |
| 1.2276 | 1.1589 | 2 |
| 1.1964 | 1.0836 | 3 |
| 1.1387 | 1.0659 | 4 |
| 1.1209 | 1.0436 | 5 |
| 1.0559 | 1.0221 | 6 |
| 1.0564 | 0.9269 | 7 |
| 1.0227 | 0.9755 | 8 |
| 1.0392 | 0.9399 | 9 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,718 | [
[
-0.036468505859375,
-0.054412841796875,
0.029327392578125,
-0.0005960464477539062,
-0.03045654296875,
-0.040283203125,
-0.0274505615234375,
-0.0255126953125,
0.01294708251953125,
0.01456451416015625,
-0.05718994140625,
-0.041656494140625,
-0.07086181640625,
... |
YakovElm/Hyperledger5Classic_Unbalance | 2023-05-24T11:48:21.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Hyperledger5Classic_Unbalance | 0 | 2 | transformers | 2023-05-24T11:47:04 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger5Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger5Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0929
- Train Accuracy: 0.9675
- Validation Loss: 0.7805
- Validation Accuracy: 0.8091
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4062 | 0.8506 | 0.4205 | 0.8361 | 0 |
| 0.3717 | 0.8568 | 0.4169 | 0.8309 | 1 |
| 0.2991 | 0.8755 | 0.4455 | 0.8008 | 2 |
| 0.1925 | 0.9204 | 0.6156 | 0.8205 | 3 |
| 0.0929 | 0.9675 | 0.7805 | 0.8091 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,963 | [
[
-0.04705810546875,
-0.03369140625,
0.01320648193359375,
0.0104522705078125,
-0.0307159423828125,
-0.01849365234375,
-0.007587432861328125,
-0.020263671875,
0.0154571533203125,
0.019439697265625,
-0.05474853515625,
-0.04742431640625,
-0.052032470703125,
-0.02... |
kzhu/bert-fine-tuned-cola | 2023-05-24T14:04:55.000Z | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | kzhu | null | null | kzhu/bert-fine-tuned-cola | 0 | 2 | transformers | 2023-05-24T12:23:54 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-fine-tuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2912
- Validation Loss: 0.4064
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5067 | 0.4163 | 0 |
| 0.2912 | 0.4064 | 1 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,333 | [
[
-0.037322998046875,
-0.06011962890625,
0.014739990234375,
0.01306915283203125,
-0.032806396484375,
-0.0209503173828125,
-0.0177764892578125,
-0.0202484130859375,
0.013824462890625,
0.00969696044921875,
-0.0556640625,
-0.034515380859375,
-0.052093505859375,
-... |
emresvd/u141 | 2023-05-24T12:44:19.000Z | [
"keras",
"region:us"
] | null | emresvd | null | null | emresvd/u141 | 0 | 2 | keras | 2023-05-24T12:44:14 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 841 | [
[
-0.037261962890625,
-0.040008544921875,
0.03192138671875,
0.00817108154296875,
-0.043243408203125,
-0.017730712890625,
0.01097869873046875,
-0.003368377685546875,
0.0204620361328125,
0.030548095703125,
-0.04376220703125,
-0.05120849609375,
-0.040008544921875,
... |
YakovElm/Hyperledger10Classic_Unbalance | 2023-05-24T12:48:34.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Hyperledger10Classic_Unbalance | 0 | 2 | transformers | 2023-05-24T12:47:44 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger10Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger10Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2891
- Train Accuracy: 0.8893
- Validation Loss: 0.3834
- Validation Accuracy: 0.8423
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3712 | 0.8762 | 0.3778 | 0.8600 | 0 |
| 0.3430 | 0.8838 | 0.3757 | 0.8600 | 1 |
| 0.3360 | 0.8834 | 0.3762 | 0.8600 | 2 |
| 0.3265 | 0.8834 | 0.3813 | 0.8600 | 3 |
| 0.2891 | 0.8893 | 0.3834 | 0.8423 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,965 | [
[
-0.046478271484375,
-0.036529541015625,
0.01288604736328125,
0.01114654541015625,
-0.0286407470703125,
-0.019378662109375,
-0.0113677978515625,
-0.0185699462890625,
0.020721435546875,
0.020843505859375,
-0.053497314453125,
-0.042755126953125,
-0.05364990234375,
... |
YakovElm/Hyperledger15Classic_Unbalance | 2023-05-24T13:49:17.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Hyperledger15Classic_Unbalance | 0 | 2 | transformers | 2023-05-24T13:48:12 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger15Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger15Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0959
- Train Accuracy: 0.9640
- Validation Loss: 0.5428
- Validation Accuracy: 0.8029
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3155 | 0.8959 | 0.3312 | 0.8807 | 0 |
| 0.2908 | 0.9031 | 0.3234 | 0.8807 | 1 |
| 0.2564 | 0.9038 | 0.3389 | 0.8579 | 2 |
| 0.1958 | 0.9229 | 0.4862 | 0.8797 | 3 |
| 0.0959 | 0.9640 | 0.5428 | 0.8029 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,965 | [
[
-0.0462646484375,
-0.038177490234375,
0.01110076904296875,
0.01247406005859375,
-0.031219482421875,
-0.020233154296875,
-0.0115814208984375,
-0.01873779296875,
0.01800537109375,
0.0182037353515625,
-0.05584716796875,
-0.045928955078125,
-0.051727294921875,
-... |
ogimgio/K-12BERT-reward-neurallinguisticpioneers | 2023-05-26T16:27:33.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ogimgio | null | null | ogimgio/K-12BERT-reward-neurallinguisticpioneers | 0 | 2 | transformers | 2023-05-24T13:53:46 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: K-12BERT-reward-neurallinguisticpioneers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# K-12BERT-reward-neurallinguisticpioneers
This model is a fine-tuned version of [vasugoel/K-12BERT](https://huggingface.co/vasugoel/K-12BERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0129 | 1.0 | 244 | 0.4501 |
| 0.5275 | 2.0 | 488 | 0.5272 |
| 0.3624 | 3.0 | 732 | 0.3435 |
| 0.3053 | 4.0 | 976 | 0.2740 |
| 0.2485 | 5.0 | 1220 | 0.2465 |
| 0.2157 | 6.0 | 1464 | 0.2992 |
| 0.1942 | 7.0 | 1708 | 0.2495 |
| 0.1751 | 8.0 | 1952 | 0.2605 |
| 0.175 | 9.0 | 2196 | 0.2192 |
| 0.1553 | 10.0 | 2440 | 0.2790 |
| 0.1449 | 11.0 | 2684 | 0.2566 |
| 0.1472 | 12.0 | 2928 | 0.2547 |
| 0.1443 | 13.0 | 3172 | 0.2600 |
| 0.1375 | 14.0 | 3416 | 0.3310 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,969 | [
[
-0.0267486572265625,
-0.02142333984375,
0.0033969879150390625,
0.022491455078125,
-0.00955963134765625,
-0.015869140625,
-0.0236663818359375,
-0.0106658935546875,
0.0217437744140625,
0.01468658447265625,
-0.054412841796875,
-0.03936767578125,
-0.059844970703125,... |
ogimgio/distilbert-base-cased-reward-neurallinguisticpioneers | 2023-05-26T14:33:14.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ogimgio | null | null | ogimgio/distilbert-base-cased-reward-neurallinguisticpioneers | 0 | 2 | transformers | 2023-05-24T14:35:35 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-cased-reward-neurallinguisticpioneers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-reward-neurallinguisticpioneers
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2411
- Mse: 3.7748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.4559 | 1.0 | 122 | 0.6534 | 3.4024 |
| 0.5476 | 2.0 | 244 | 0.5601 | 3.8827 |
| 0.4224 | 3.0 | 366 | 0.4717 | 3.8263 |
| 0.3534 | 4.0 | 488 | 0.3511 | 3.7530 |
| 0.2827 | 5.0 | 610 | 0.2960 | 3.8889 |
| 0.2541 | 6.0 | 732 | 0.2416 | 3.5817 |
| 0.2289 | 7.0 | 854 | 0.3085 | 4.0660 |
| 0.1997 | 8.0 | 976 | 0.3212 | 3.4440 |
| 0.1889 | 9.0 | 1098 | 0.2852 | 3.9351 |
| 0.1752 | 10.0 | 1220 | 0.2360 | 3.8505 |
| 0.1683 | 11.0 | 1342 | 0.2939 | 4.1039 |
| 0.1601 | 12.0 | 1464 | 0.3242 | 4.0499 |
| 0.155 | 13.0 | 1586 | 0.2297 | 3.8442 |
| 0.1478 | 14.0 | 1708 | 0.2707 | 3.8680 |
| 0.1439 | 15.0 | 1830 | 0.2582 | 3.8703 |
| 0.1462 | 16.0 | 1952 | 0.2411 | 3.7748 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,281 | [
[
-0.03192138671875,
-0.0316162109375,
0.00855255126953125,
0.0212554931640625,
-0.01152801513671875,
-0.01256561279296875,
-0.012603759765625,
-0.0008215904235839844,
0.0238800048828125,
0.01183319091796875,
-0.051422119140625,
-0.045196533203125,
-0.059722900390... |
YakovElm/Hyperledger20Classic_Unbalance | 2023-05-24T14:40:01.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Hyperledger20Classic_Unbalance | 0 | 2 | transformers | 2023-05-24T14:38:22 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger20Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger20Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1783
- Train Accuracy: 0.9315
- Validation Loss: 0.3472
- Validation Accuracy: 0.8776
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2858 | 0.9104 | 0.2927 | 0.8983 | 0 |
| 0.2677 | 0.9153 | 0.2946 | 0.8983 | 1 |
| 0.2325 | 0.9170 | 0.3256 | 0.8600 | 2 |
| 0.1783 | 0.9315 | 0.3472 | 0.8776 | 3 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,885 | [
[
-0.0467529296875,
-0.0377197265625,
0.0138092041015625,
0.01334381103515625,
-0.031036376953125,
-0.0196380615234375,
-0.0094451904296875,
-0.019927978515625,
0.0174560546875,
0.020660400390625,
-0.05487060546875,
-0.043701171875,
-0.05340576171875,
-0.02529... |
DataIntelligenceTeam/en_qspot_import_v3_240524 | 2023-05-24T15:19:41.000Z | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | token-classification | DataIntelligenceTeam | null | null | DataIntelligenceTeam/en_qspot_import_v3_240524 | 0 | 2 | spacy | 2023-05-24T15:18:43 | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_qspot_import_v3_240524
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9738694667
- name: NER Recall
type: recall
value: 0.977602108
- name: NER F Score
type: f_score
value: 0.9757322176
---
| Feature | Description |
| --- | --- |
| **Name** | `en_qspot_import_v3_240524` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.2,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (17 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `commodity`, `company`, `delivery_cap`, `delivery_location`, `delivery_port`, `delivery_state`, `incoterms`, `measures`, `package_type`, `pickup_cap`, `pickup_location`, `pickup_port`, `pickup_state`, `quantity`, `stackable`, `volume`, `weight` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 97.57 |
| `ENTS_P` | 97.39 |
| `ENTS_R` | 97.76 |
| `TOK2VEC_LOSS` | 62559.46 |
| `NER_LOSS` | 64506.23 | | 1,335 | [
[
-0.0260772705078125,
-0.01187896728515625,
0.024688720703125,
0.024444580078125,
-0.041900634765625,
0.01503753662109375,
0.0091400146484375,
-0.0102996826171875,
0.03448486328125,
0.0282135009765625,
-0.061981201171875,
-0.0712890625,
-0.041717529296875,
-0... |
satyamverma/distilbert-base-uncased-finetuned-Pre_requisite_finder_2 | 2023-05-24T16:57:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | satyamverma | null | null | satyamverma/distilbert-base-uncased-finetuned-Pre_requisite_finder_2 | 0 | 2 | transformers | 2023-05-24T16:49:53 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-Pre_requisite_finder_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-Pre_requisite_finder_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4182
- Accuracy: 0.8130
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.2534703769467627e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 37
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4523 | 1.0 | 863 | 0.4182 | 0.8130 |
| 0.4285 | 2.0 | 1726 | 0.4136 | 0.8130 |
| 0.4236 | 3.0 | 2589 | 0.4267 | 0.8130 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
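For reference, the `linear` scheduler listed above decays the learning rate from its initial value to zero over the full run. Assuming zero warmup steps (the card does not state a warmup value), the schedule can be sketched as:

```python
def linear_lr(step, total_steps, base_lr=2.2534703769467627e-05):
    """Learning rate after `step` optimizer steps under linear decay to zero.

    Mirrors the `linear` lr_scheduler_type above; zero warmup steps is an assumption.
    """
    return base_lr * max(0.0, (total_steps - step) / total_steps)

# 3 epochs x 863 steps/epoch = 2589 total optimizer steps, matching the table above.
final_lr = linear_lr(2589, 2589)
```

With these numbers, the learning rate reaches exactly zero on the last step of epoch 3.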
| 1,559 | [
[
-0.03338623046875,
-0.044097900390625,
0.01806640625,
0.01444244384765625,
-0.0243682861328125,
-0.0234222412109375,
-0.006748199462890625,
-0.00811004638671875,
0.0011320114135742188,
0.0187835693359375,
-0.054931640625,
-0.042877197265625,
-0.060821533203125,
... |
jjlmsy/distilbert-base-uncased-finetuned-emotion | 2023-05-31T03:31:40.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jjlmsy | null | null | jjlmsy/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-24T17:37:16 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.925214103163335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.925
- F1: 0.9252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8366 | 1.0 | 250 | 0.3248 | 0.902 | 0.8983 |
| 0.2521 | 2.0 | 500 | 0.2201 | 0.925 | 0.9252 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.8.0
- Datasets 2.12.0
- Tokenizers 0.11.0
| 1,839 | [
[
-0.038604736328125,
-0.041412353515625,
0.0162811279296875,
0.02166748046875,
-0.026641845703125,
-0.0203857421875,
-0.01285552978515625,
-0.00848388671875,
0.01027679443359375,
0.0090789794921875,
-0.056488037109375,
-0.052337646484375,
-0.059417724609375,
... |
bowphs/GreBerta | 2023-05-24T17:39:39.000Z | [
"transformers",
"pytorch",
"tf",
"roberta",
"fill-mask",
"grc",
"dataset:bowphs/internet_archive_filtered",
"arxiv:2305.13698",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | fill-mask | bowphs | null | null | bowphs/GreBerta | 0 | 2 | transformers | 2023-05-24T17:37:32 | ---
language: grc
license: apache-2.0
inference: false
datasets:
- bowphs/internet_archive_filtered
---
# GrεBerta
The paper [Exploring Language Models for Classical Philology](https://todo.com) is the first effort to systematically provide state-of-the-art language models for Classical Philology. GrεBerta is a RoBERTa-base-sized, monolingual, encoder-only variant. Further information can be found in our paper or in our [GitHub repository](https://github.com/Heidelberg-NLP/ancient-language-models).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('bowphs/GreBerta')
model = AutoModelForMaskedLM.from_pretrained('bowphs/GreBerta')
```
Please check out the awesome Hugging Face tutorials on how to fine-tune our models.
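Under the hood, `AutoModelForMaskedLM` produces a score for every vocabulary item at each masked position, and the fill-mask predictions are the top-k items after a softmax. That step can be sketched in plain Python (the logits below are made up for illustration, not real model output):

```python
import math

def softmax(scores):
    """Convert raw scores to probabilities (numerically stable)."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

def top_k(scores, k=3):
    """Return the k most probable tokens with their probabilities."""
    probs = softmax(scores)
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical logits for one masked position in an Ancient Greek sentence.
logits = {"λόγος": 4.1, "ἄνθρωπος": 3.2, "θεός": 2.7, "πόλις": 0.5}
predictions = top_k(logits, k=2)
```

The real pipeline does the same thing over the full subword vocabulary.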
## Evaluation Results
When fine-tuned on data from [Universal Dependencies 2.10](https://universaldependencies.org/), GrεBerta achieves the following results on the Ancient Greek Perseus dataset:
| Task | XPoS | UPoS | UAS | LAS |
|:--:|:--:|:--:|:--:|:--:|
| |95.83|91.09|88.20|83.98|
## Contact
If you have any questions or problems, feel free to [reach out](mailto:riemenschneider@cl.uni-heidelberg.de).
## Citation
```bibtex
@incollection{riemenschneiderfrank:2023,
address = "Toronto, Canada",
author = "Riemenschneider, Frederick and Frank, Anette",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL’23)",
note = "to appear",
pubType = "incollection",
publisher = "Association for Computational Linguistics",
title = "Exploring Large Language Models for Classical Philology",
url = "https://arxiv.org/abs/2305.13698",
year = "2023",
key = "riemenschneiderfrank:2023"
}
```
| 1,785 | [
[
-0.03277587890625,
-0.046478271484375,
0.02789306640625,
0.00843048095703125,
-0.023284912109375,
-0.022918701171875,
-0.035675048828125,
-0.032867431640625,
0.035736083984375,
0.0321044921875,
-0.03155517578125,
-0.05859375,
-0.05426025390625,
0.00219726562... |
Tron21/roberta-base | 2023-05-24T17:50:55.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"emoberta",
"en",
"dataset:MELD",
"dataset:IEMOCAP",
"arxiv:2108.12009",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Tron21 | null | null | Tron21/roberta-base | 0 | 2 | transformers | 2023-05-24T17:49:29 | ---
language: en
tags:
- emoberta
- roberta
license: mit
datasets:
- MELD
- IEMOCAP
---
Check https://github.com/tae898/erc for the details
[Watch a demo video!](https://youtu.be/qbr7fNd6J28)
# Emotion Recognition in Conversation (ERC)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on?p=emoberta-speaker-aware-emotion-recognition-in)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=emoberta-speaker-aware-emotion-recognition-in)
At the moment, we only use the text modality to correctly classify the emotion of the utterances. The experiments were carried out on two datasets (i.e., MELD and IEMOCAP).
## Prerequisites
1. An x86-64 Unix or Unix-like machine
1. Python 3.8 or higher
1. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't interfere with the system Python.
1. [`multimodal-datasets` repo](https://github.com/tae898/multimodal-datasets) (submodule)
1. pip install -r requirements.txt
## EmoBERTa training
First configure the hyperparameters and the dataset in `train-erc-text.yaml`, then run the command below in this directory. Running it in a virtualenv is recommended.
```sh
python train-erc-text.py
```
This will subsequently call `train-erc-text-hp.py` and `train-erc-text-full.py`.
## Results on the test split (weighted f1 scores)
| Model | | MELD | IEMOCAP |
| -------- | ------------------------------- | :-------: | :-------: |
| EmoBERTa | No past and future utterances | 63.46 | 56.09 |
| | Only past utterances | 64.55 | **68.57** |
| | Only future utterances | 64.23 | 66.56 |
| | Both past and future utterances | **65.61** | 67.42 |
| | → *without speaker names* | 65.07 | 64.02 |
The numbers above are mean values over five runs with different random seeds.
For more training and test details, check out `./results/`.
If you want to download the trained checkpoints and stuff, then [here](https://surfdrive.surf.nl/files/index.php/s/khREwk4MUI7MSnO/download) is where you can download them. It's a pretty big zip file.
## Deployment
### Huggingface
We have released our models on huggingface:
- [emoberta-base](https://huggingface.co/tae898/emoberta-base)
- [emoberta-large](https://huggingface.co/tae898/emoberta-large)
They are based on [RoBERTa-base](https://huggingface.co/roberta-base) and [RoBERTa-large](https://huggingface.co/roberta-large), respectively. They were trained on [both MELD and IEMOCAP datasets](utterance-ordered-MELD_IEMOCAP.json). Our deployed models are neither speaker-aware nor do they take previous utterances into account, meaning that they classify one utterance at a time without speaker information (e.g., "I love you").
### Flask app
You can either run the Flask RESTful server app as a docker container or just as a python script.
1. Running the app as a docker container **(recommended)**.
There are four images. Take what you need:
- `docker run -it --rm -p 10006:10006 tae898/emoberta-base`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-base-cuda`
- `docker run -it --rm -p 10006:10006 tae898/emoberta-large`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-large-cuda`
1. Running the app in your python environment:
This method is less recommended than the docker one.
Run `pip install -r requirements-deploy.txt` first.<br>
The [`app.py`](app.py) is a flask RESTful server. The usage is below:
```console
app.py [-h] [--host HOST] [--port PORT] [--device DEVICE] [--model-type MODEL_TYPE]
```
For example:
```sh
python app.py --host 0.0.0.0 --port 10006 --device cpu --model-type emoberta-base
```
### Client
Once the app is running, you can send a text to the server. First install the necessary packages: `pip install -r requirements-client.txt`, and then run [client.py](client.py). The usage is as below:
```console
client.py [-h] [--url-emoberta URL_EMOBERTA] --text TEXT
```
For example:
```sh
python client.py --text "Emotion recognition is so cool\!"
```
will give you:
```json
{
"neutral": 0.0049800905,
"joy": 0.96399665,
"surprise": 0.018937444,
"anger": 0.0071516023,
"sadness": 0.002021492,
"disgust": 0.001495996,
"fear": 0.0014167271
}
```
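If you prefer not to depend on `client.py`, a minimal stdlib-only client can be sketched as below. The route and the `{"text": ...}` payload shape are assumptions inferred from the usage above — check `client.py` for the exact request schema.

```python
import json
from urllib.request import Request, urlopen

def build_request(text, url="http://localhost:10006/"):
    """Build the POST request for the EmoBERTa server (assumed JSON body with a `text` field)."""
    body = json.dumps({"text": text}).encode("utf-8")
    return Request(url, data=body, headers={"Content-Type": "application/json"})

def classify(text, url="http://localhost:10006/"):
    """Send `text` to a running server and return the emotion-score dict shown above."""
    with urlopen(build_request(text, url)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

`classify("Emotion recognition is so cool!")` should then return a dict of emotion probabilities like the JSON above, provided the Docker container or `app.py` is listening on the given port.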
## Troubleshooting
The best way to find and solve your problems is to see in the github issue tab. If you can't find what you want, feel free to raise an issue. We are pretty responsive.
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
1. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
1. Run `make style && quality` in the root repo directory, to ensure code quality.
1. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
1. Push to the Branch (`git push origin feature/AmazingFeature`)
1. Open a Pull Request
## Cite our work
Check out the [paper](https://arxiv.org/abs/2108.12009).
```bibtex
@misc{kim2021emoberta,
title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa},
author={Taewoon Kim and Piek Vossen},
year={2021},
eprint={2108.12009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
[](https://zenodo.org/badge/latestdoi/328375452)<br>
## Authors
- [Taewoon Kim](https://taewoonkim.com/)
## License
[MIT](https://choosealicense.com/licenses/mit/)
| 6,025 | [
[
-0.047332763671875,
-0.08209228515625,
0.025543212890625,
0.02557373046875,
-0.0013570785522460938,
-0.024322509765625,
-0.0161285400390625,
-0.0438232421875,
0.047821044921875,
0.005779266357421875,
-0.0335693359375,
-0.03961181640625,
-0.032440185546875,
0... |
Tron21/roberta-large | 2023-05-24T17:53:20.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"emoberta",
"en",
"dataset:MELD",
"dataset:IEMOCAP",
"arxiv:2108.12009",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Tron21 | null | null | Tron21/roberta-large | 0 | 2 | transformers | 2023-05-24T17:52:26 | ---
language: en
tags:
- emoberta
- roberta
license: mit
datasets:
- MELD
- IEMOCAP
---
Check https://github.com/tae898/erc for the details
[Watch a demo video!](https://youtu.be/qbr7fNd6J28)
# Emotion Recognition in Conversation (ERC)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on?p=emoberta-speaker-aware-emotion-recognition-in)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=emoberta-speaker-aware-emotion-recognition-in)
At the moment, we only use the text modality to correctly classify the emotion of the utterances. The experiments were carried out on two datasets (i.e., MELD and IEMOCAP).
## Prerequisites
1. An x86-64 Unix or Unix-like machine
1. Python 3.8 or higher
1. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't interfere with the system Python.
1. [`multimodal-datasets` repo](https://github.com/tae898/multimodal-datasets) (submodule)
1. pip install -r requirements.txt
## EmoBERTa training
First configure the hyperparameters and the dataset in `train-erc-text.yaml`, then run the command below in this directory. Running it in a virtualenv is recommended.
```sh
python train-erc-text.py
```
This will subsequently call `train-erc-text-hp.py` and `train-erc-text-full.py`.
## Results on the test split (weighted f1 scores)
| Model | | MELD | IEMOCAP |
| -------- | ------------------------------- | :-------: | :-------: |
| EmoBERTa | No past and future utterances | 63.46 | 56.09 |
| | Only past utterances | 64.55 | **68.57** |
| | Only future utterances | 64.23 | 66.56 |
| | Both past and future utterances | **65.61** | 67.42 |
| | → *without speaker names* | 65.07 | 64.02 |
The numbers above are mean values over five runs with different random seeds.
For more training and test details, check out `./results/`.
If you want to download the trained checkpoints and stuff, then [here](https://surfdrive.surf.nl/files/index.php/s/khREwk4MUI7MSnO/download) is where you can download them. It's a pretty big zip file.
## Deployment
### Huggingface
We have released our models on huggingface:
- [emoberta-base](https://huggingface.co/tae898/emoberta-base)
- [emoberta-large](https://huggingface.co/tae898/emoberta-large)
They are based on [RoBERTa-base](https://huggingface.co/roberta-base) and [RoBERTa-large](https://huggingface.co/roberta-large), respectively. They were trained on [both MELD and IEMOCAP datasets](utterance-ordered-MELD_IEMOCAP.json). Our deployed models are neither speaker-aware nor do they take previous utterances into account, meaning that they classify one utterance at a time without speaker information (e.g., "I love you").
### Flask app
You can either run the Flask RESTful server app as a docker container or just as a python script.
1. Running the app as a docker container **(recommended)**.
There are four images. Take what you need:
- `docker run -it --rm -p 10006:10006 tae898/emoberta-base`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-base-cuda`
- `docker run -it --rm -p 10006:10006 tae898/emoberta-large`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-large-cuda`
1. Running the app in your python environment:
This method is less recommended than the docker one.
Run `pip install -r requirements-deploy.txt` first.<br>
The [`app.py`](app.py) is a flask RESTful server. The usage is below:
```console
app.py [-h] [--host HOST] [--port PORT] [--device DEVICE] [--model-type MODEL_TYPE]
```
For example:
```sh
python app.py --host 0.0.0.0 --port 10006 --device cpu --model-type emoberta-base
```
### Client
Once the app is running, you can send a text to the server. First install the necessary packages: `pip install -r requirements-client.txt`, and then run [client.py](client.py). The usage is as below:
```console
client.py [-h] [--url-emoberta URL_EMOBERTA] --text TEXT
```
For example:
```sh
python client.py --text "Emotion recognition is so cool\!"
```
will give you:
```json
{
"neutral": 0.0049800905,
"joy": 0.96399665,
"surprise": 0.018937444,
"anger": 0.0071516023,
"sadness": 0.002021492,
"disgust": 0.001495996,
"fear": 0.0014167271
}
```
## Troubleshooting
The best way to find and solve your problems is to see in the github issue tab. If you can't find what you want, feel free to raise an issue. We are pretty responsive.
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
1. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
1. Run `make style && quality` in the root repo directory, to ensure code quality.
1. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
1. Push to the Branch (`git push origin feature/AmazingFeature`)
1. Open a Pull Request
## Cite our work
Check out the [paper](https://arxiv.org/abs/2108.12009).
```bibtex
@misc{kim2021emoberta,
title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa},
author={Taewoon Kim and Piek Vossen},
year={2021},
eprint={2108.12009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
[](https://zenodo.org/badge/latestdoi/328375452)<br>
## Authors
- [Taewoon Kim](https://taewoonkim.com/)
## License
[MIT](https://choosealicense.com/licenses/mit/)
| 6,025 | [
[
-0.047332763671875,
-0.08209228515625,
0.0254974365234375,
0.0255889892578125,
-0.0013570785522460938,
-0.0242919921875,
-0.01611328125,
-0.0438232421875,
0.047821044921875,
0.005779266357421875,
-0.0335693359375,
-0.03961181640625,
-0.032440185546875,
0.013... |
aysusoenmez/criterion_1 | 2023-05-29T12:22:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:aysusoenmez/awareness_dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | aysusoenmez | null | null | aysusoenmez/criterion_1 | 0 | 2 | transformers | 2023-05-24T18:21:27 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: aware1
results: []
datasets:
- aysusoenmez/awareness_dataset
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model to classify criterion 1
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3 | 1,136 | [
[
-0.03826904296875,
-0.0341796875,
0.0225830078125,
-0.00019276142120361328,
-0.037811279296875,
-0.026885986328125,
-0.0012388229370117188,
-0.029388427734375,
0.01081085205078125,
0.0246124267578125,
-0.045562744140625,
-0.045623779296875,
-0.0574951171875,
... |
aysusoenmez/criterion_2 | 2023-05-29T12:24:39.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:aysusoenmez/awareness_dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | aysusoenmez | null | null | aysusoenmez/criterion_2 | 0 | 2 | transformers | 2023-05-24T18:21:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: aware2
results: []
datasets:
- aysusoenmez/awareness_dataset
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model to classify criterion 2
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3 | 1,136 | [
[
-0.027069091796875,
-0.03375244140625,
0.0212860107421875,
-0.00017368793487548828,
-0.039794921875,
-0.026702880859375,
-0.0015630722045898438,
-0.037261962890625,
0.0016794204711914062,
0.02288818359375,
-0.0364990234375,
-0.033660888671875,
-0.061309814453125... |
aysusoenmez/criterion_3 | 2023-05-29T12:25:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:aysusoenmez/awareness_dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | aysusoenmez | null | null | aysusoenmez/criterion_3 | 0 | 2 | transformers | 2023-05-24T18:21:48 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: aware3
results: []
datasets:
- aysusoenmez/awareness_dataset
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model to classify criterion 3
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3 | 1,136 | [
[
-0.033355712890625,
-0.03515625,
0.0304107666015625,
0.00481414794921875,
-0.037872314453125,
-0.0294036865234375,
0.005645751953125,
-0.036773681640625,
0.00196075439453125,
0.027374267578125,
-0.038726806640625,
-0.042510986328125,
-0.051544189453125,
0.02... |
aysusoenmez/criterion_4 | 2023-05-29T12:25:02.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:aysusoenmez/awareness_dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | aysusoenmez | null | null | aysusoenmez/criterion_4 | 0 | 2 | transformers | 2023-05-24T18:21:53 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: aware4
results: []
datasets:
- aysusoenmez/awareness_dataset
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model to classify criterion 4
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3 | 1,136 | [
[
-0.034820556640625,
-0.0276031494140625,
0.0275421142578125,
-0.0014066696166992188,
-0.03631591796875,
-0.020660400390625,
0.004245758056640625,
-0.034698486328125,
0.0057525634765625,
0.025482177734375,
-0.042327880859375,
-0.043243408203125,
-0.04690551757812... |
aysusoenmez/criterion_7 | 2023-05-29T12:25:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:aysusoenmez/awareness_dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | aysusoenmez | null | null | aysusoenmez/criterion_7 | 0 | 2 | transformers | 2023-05-24T18:21:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: aware7
results: []
datasets:
- aysusoenmez/awareness_dataset
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model to classify criterion 7
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3 | 1,136 | [
[
-0.038543701171875,
-0.02435302734375,
0.0225830078125,
-0.00377655029296875,
-0.04693603515625,
-0.0216217041015625,
0.00432586669921875,
-0.0295867919921875,
0.005279541015625,
0.032196044921875,
-0.035003662109375,
-0.046478271484375,
-0.05645751953125,
0... |
YakovElm/IntelDAOS5Classic_Unbalance | 2023-05-24T19:26:17.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/IntelDAOS5Classic_Unbalance | 0 | 2 | transformers | 2023-05-24T19:25:14 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS5Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS5Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3470
- Train Accuracy: 0.8740
- Validation Loss: 0.4514
- Validation Accuracy: 0.8438
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4064 | 0.8680 | 0.4348 | 0.8438 | 0 |
| 0.3819 | 0.8740 | 0.4280 | 0.8438 | 1 |
| 0.3813 | 0.8740 | 0.4331 | 0.8438 | 2 |
| 0.3712 | 0.8740 | 0.4334 | 0.8438 | 3 |
| 0.3470 | 0.8740 | 0.4514 | 0.8438 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,959 | [
[
-0.044403076171875,
-0.0287322998046875,
0.01081085205078125,
0.00616455078125,
-0.0328369140625,
-0.015716552734375,
-0.0090484619140625,
-0.023101806640625,
0.01922607421875,
0.01497650146484375,
-0.055267333984375,
-0.046234130859375,
-0.0518798828125,
-0... |
YakovElm/IntelDAOS10Classic_Unbalance | 2023-05-24T19:44:26.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/IntelDAOS10Classic_Unbalance | 0 | 2 | transformers | 2023-05-24T19:43:52 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS10Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS10Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2181
- Train Accuracy: 0.9200
- Validation Loss: 0.4534
- Validation Accuracy: 0.8739
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3256 | 0.9000 | 0.3765 | 0.8739 | 0 |
| 0.2675 | 0.9200 | 0.3868 | 0.8739 | 1 |
| 0.2492 | 0.9200 | 0.4025 | 0.8739 | 2 |
| 0.2181 | 0.9200 | 0.4534 | 0.8739 | 3 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,881 | [
[
-0.04400634765625,
-0.031341552734375,
0.010894775390625,
0.007534027099609375,
-0.032745361328125,
-0.0171661376953125,
-0.01085662841796875,
-0.022674560546875,
0.0216827392578125,
0.01494598388671875,
-0.053955078125,
-0.042877197265625,
-0.0523681640625,
... |
YakovElm/IntelDAOS15Classic_Unbalance | 2023-05-24T20:06:33.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/IntelDAOS15Classic_Unbalance | 0 | 2 | transformers | 2023-05-24T20:05:58 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS15Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS15Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0950
- Train Accuracy: 0.9720
- Validation Loss: 0.4458
- Validation Accuracy: 0.8408
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2610 | 0.9350 | 0.3755 | 0.8859 | 0 |
| 0.2019 | 0.9460 | 0.3724 | 0.8859 | 1 |
| 0.1809 | 0.9470 | 0.4223 | 0.8859 | 2 |
| 0.1403 | 0.9570 | 0.4860 | 0.8829 | 3 |
| 0.0950 | 0.9720 | 0.4458 | 0.8408 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,961 | [
[
-0.0435791015625,
-0.03302001953125,
0.008941650390625,
0.007843017578125,
-0.0340576171875,
-0.0176239013671875,
-0.0107421875,
-0.0211334228515625,
0.02081298828125,
0.01451873779296875,
-0.055572509765625,
-0.04547119140625,
-0.05120849609375,
-0.03074645... |
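The optimizer line repeated in these cards is a serialized Keras Adam config rendered as a Python dict literal. As a minimal sketch (plain Python, no TensorFlow required; the variable names here are illustrative, not part of the dump), such a line can be parsed back into a dict to recover individual hyperparameters:

```python
import ast

# A shortened copy of the serialized optimizer config that appears in the cards above.
optimizer_repr = (
    "{'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, "
    "'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, "
    "'epsilon': 1e-08, 'amsgrad': False}"
)

# ast.literal_eval safely evaluates literals (dicts, numbers, booleans) without
# executing arbitrary code, unlike eval().
config = ast.literal_eval(optimizer_repr)

print(config["name"], config["learning_rate"])  # → Adam 3e-05
```

This works because every value in the serialized config is a plain Python literal; configs containing `None` also parse, since `literal_eval` accepts it.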
YakovElm/IntelDAOS20Classic_Unbalance | 2023-05-24T20:24:34.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/IntelDAOS20Classic_Unbalance | 0 | 2 | transformers | 2023-05-24T20:24:00 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS20Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS20Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0966
- Train Accuracy: 0.9610
- Validation Loss: 0.4538
- Validation Accuracy: 0.9099
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2178 | 0.9390 | 0.3146 | 0.9099 | 0 |
| 0.1524 | 0.9610 | 0.3181 | 0.9099 | 1 |
| 0.1325 | 0.9610 | 0.3401 | 0.9099 | 2 |
| 0.0966 | 0.9610 | 0.4538 | 0.9099 | 3 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,881 | [
[
-0.04400634765625,
-0.030731201171875,
0.01091766357421875,
0.00860595703125,
-0.03271484375,
-0.0167694091796875,
-0.00921630859375,
-0.0231781005859375,
0.021209716796875,
0.01568603515625,
-0.055877685546875,
-0.043426513671875,
-0.05181884765625,
-0.0309... |
YakovElm/Jira5Classic_Unbalance | 2023-05-24T22:16:07.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Jira5Classic_Unbalance | 0 | 2 | transformers | 2023-05-24T22:15:29 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira5Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira5Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1840
- Train Accuracy: 0.9339
- Validation Loss: 0.6800
- Validation Accuracy: 0.6909
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5246 | 0.7660 | 0.6753 | 0.5205 | 0 |
| 0.4326 | 0.7870 | 1.0077 | 0.5047 | 1 |
| 0.2957 | 0.8814 | 0.9211 | 0.6467 | 2 |
| 0.1840 | 0.9339 | 0.6800 | 0.6909 | 3 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,869 | [
[
-0.035736083984375,
-0.032958984375,
0.00904083251953125,
0.0056915283203125,
-0.034393310546875,
-0.0131988525390625,
-0.0084075927734375,
-0.021209716796875,
0.021240234375,
0.015838623046875,
-0.0516357421875,
-0.04620361328125,
-0.051177978515625,
-0.030... |
YakovElm/Jira10Classic_Unbalance | 2023-05-24T22:33:10.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Jira10Classic_Unbalance | 0 | 2 | transformers | 2023-05-24T22:32:34 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira10Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira10Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2105
- Train Accuracy: 0.9328
- Validation Loss: 1.0382
- Validation Accuracy: 0.6782
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5119 | 0.7618 | 0.7059 | 0.5110 | 0 |
| 0.4274 | 0.7985 | 1.1838 | 0.4921 | 1 |
| 0.3413 | 0.8520 | 0.8121 | 0.6562 | 2 |
| 0.2105 | 0.9328 | 1.0382 | 0.6782 | 3 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,871 | [
[
-0.0369873046875,
-0.034881591796875,
0.00878143310546875,
0.00881195068359375,
-0.0350341796875,
-0.0157012939453125,
-0.0098114013671875,
-0.018585205078125,
0.0244293212890625,
0.015838623046875,
-0.050201416015625,
-0.041748046875,
-0.050689697265625,
-0... |
YakovElm/Jira15Classic_Unbalance | 2023-05-24T22:57:49.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Jira15Classic_Unbalance | 0 | 2 | transformers | 2023-05-24T22:57:13 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira15Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira15Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0539
- Train Accuracy: 0.9822
- Validation Loss: 1.0528
- Validation Accuracy: 0.7129
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4809 | 0.7817 | 0.6534 | 0.6057 | 0 |
| 0.3968 | 0.8132 | 1.0266 | 0.5394 | 1 |
| 0.2732 | 0.8877 | 0.6126 | 0.7413 | 2 |
| 0.1715 | 0.9454 | 1.0817 | 0.6814 | 3 |
| 0.1153 | 0.9601 | 0.7031 | 0.7413 | 4 |
| 0.0539 | 0.9822 | 1.0528 | 0.7129 | 5 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,031 | [
[
-0.038818359375,
-0.03521728515625,
0.007724761962890625,
0.006092071533203125,
-0.03314208984375,
-0.014984130859375,
-0.0091094970703125,
-0.0186004638671875,
0.02301025390625,
0.01515960693359375,
-0.052581787109375,
-0.04388427734375,
-0.0501708984375,
-... |
YakovElm/Jira20Classic_Unbalance | 2023-05-24T23:18:40.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Jira20Classic_Unbalance | 0 | 2 | transformers | 2023-05-24T23:18:03 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira20Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira20Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0438
- Train Accuracy: 0.9864
- Validation Loss: 0.4249
- Validation Accuracy: 0.9085
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3687 | 0.8730 | 0.2944 | 0.9338 | 0 |
| 0.2956 | 0.8741 | 0.2687 | 0.9338 | 1 |
| 0.2062 | 0.9119 | 0.2963 | 0.9243 | 2 |
| 0.1104 | 0.9622 | 0.3692 | 0.9085 | 3 |
| 0.0438 | 0.9864 | 0.4249 | 0.9085 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,951 | [
[
-0.03668212890625,
-0.034271240234375,
0.010040283203125,
0.0065460205078125,
-0.031951904296875,
-0.0131072998046875,
-0.00853729248046875,
-0.0188140869140625,
0.0243377685546875,
0.0172576904296875,
-0.0526123046875,
-0.043914794921875,
-0.050689697265625,
... |
YakovElm/Apache5Classic_256 | 2023-05-24T23:20:16.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache5Classic_256 | 0 | 2 | transformers | 2023-05-24T23:19:35 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache5Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache5Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2678
- Train Accuracy: 0.9131
- Validation Loss: 0.5122
- Validation Accuracy: 0.8194
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3098 | 0.9031 | 0.5071 | 0.8233 | 0 |
| 0.2939 | 0.9105 | 0.4952 | 0.8233 | 1 |
| 0.2678 | 0.9131 | 0.5122 | 0.8194 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,780 | [
[
-0.046661376953125,
-0.043609619140625,
0.0200958251953125,
0.005153656005859375,
-0.03411865234375,
-0.03167724609375,
-0.0169525146484375,
-0.027587890625,
0.0095977783203125,
0.01302337646484375,
-0.055389404296875,
-0.047119140625,
-0.052764892578125,
-0... |
YakovElm/MariaDB5Classic_Unbalance | 2023-05-25T00:52:31.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/MariaDB5Classic_Unbalance | 0 | 2 | transformers | 2023-05-25T00:51:54 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB5Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB5Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1482
- Train Accuracy: 0.9456
- Validation Loss: 0.3442
- Validation Accuracy: 0.9121
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3570 | 0.8762 | 0.2563 | 0.9322 | 0 |
| 0.2876 | 0.8946 | 0.2395 | 0.9322 | 1 |
| 0.2565 | 0.8937 | 0.2757 | 0.9322 | 2 |
| 0.2116 | 0.9121 | 0.3101 | 0.9322 | 3 |
| 0.1482 | 0.9456 | 0.3442 | 0.9121 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,955 | [
[
-0.042266845703125,
-0.033782958984375,
0.0104827880859375,
0.006832122802734375,
-0.03118896484375,
-0.0189361572265625,
-0.004604339599609375,
-0.019500732421875,
0.022369384765625,
0.0212554931640625,
-0.059173583984375,
-0.04931640625,
-0.047607421875,
-... |
YakovElm/MariaDB10Classic_Unbalance | 2023-05-25T01:27:53.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/MariaDB10Classic_Unbalance | 0 | 2 | transformers | 2023-05-25T01:27:15 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB10Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB10Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0400
- Train Accuracy: 0.9858
- Validation Loss: 0.3020
- Validation Accuracy: 0.9472
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3219 | 0.8971 | 0.1939 | 0.9523 | 0 |
| 0.2500 | 0.9163 | 0.2026 | 0.9523 | 1 |
| 0.2343 | 0.9155 | 0.1975 | 0.9523 | 2 |
| 0.1885 | 0.9331 | 0.1921 | 0.9523 | 3 |
| 0.1486 | 0.9381 | 0.2421 | 0.9523 | 4 |
| 0.1038 | 0.9506 | 0.2599 | 0.9372 | 5 |
| 0.0400 | 0.9858 | 0.3020 | 0.9472 | 6 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,117 | [
[
-0.041656494140625,
-0.034698486328125,
0.0099639892578125,
0.0084991455078125,
-0.032684326171875,
-0.0206756591796875,
-0.004581451416015625,
-0.017364501953125,
0.0243072509765625,
0.019622802734375,
-0.057464599609375,
-0.047698974609375,
-0.048675537109375,... |
YakovElm/MariaDB15Classic_Unbalance | 2023-05-25T01:53:33.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/MariaDB15Classic_Unbalance | 0 | 2 | transformers | 2023-05-25T01:52:55 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB15Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB15Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0951
- Train Accuracy: 0.9649
- Validation Loss: 0.2381
- Validation Accuracy: 0.9472
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2708 | 0.9146 | 0.1767 | 0.9598 | 0 |
| 0.2069 | 0.9280 | 0.1763 | 0.9598 | 1 |
| 0.1899 | 0.9305 | 0.1970 | 0.9598 | 2 |
| 0.1531 | 0.9364 | 0.1949 | 0.9598 | 3 |
| 0.0951 | 0.9649 | 0.2381 | 0.9472 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,957 | [
[
-0.0413818359375,
-0.036407470703125,
0.01015472412109375,
0.0109710693359375,
-0.033966064453125,
-0.0198211669921875,
-0.007213592529296875,
-0.01873779296875,
0.021575927734375,
0.0172119140625,
-0.057281494140625,
-0.04742431640625,
-0.049072265625,
-0.0... |
AustinCarthy/Onlyphish_100KP_BFall_fromP_10KGen_topP_0.75 | 2023-05-25T09:55:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Onlyphish_100KP_BFall_fromP_10KGen_topP_0.75 | 0 | 2 | transformers | 2023-05-25T02:12:13 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_100KP_BFall_fromP_10KGen_topP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_100KP_BFall_fromP_10KGen_topP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0209
- Accuracy: 0.9965
- F1: 0.9619
- Precision: 0.9996
- Recall: 0.927
- Roc Auc Score: 0.9635
- Tpr At Fpr 0.01: 0.9434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0085 | 1.0 | 72188 | 0.0459 | 0.9920 | 0.9096 | 0.9860 | 0.8442 | 0.9218 | 0.0 |
| 0.007 | 2.0 | 144376 | 0.0406 | 0.9939 | 0.9313 | 0.9991 | 0.8722 | 0.9361 | 0.8966 |
| 0.0017 | 3.0 | 216564 | 0.0273 | 0.9960 | 0.9561 | 0.9993 | 0.9164 | 0.9582 | 0.9216 |
| 0.0011 | 4.0 | 288752 | 0.0221 | 0.9969 | 0.9666 | 0.9985 | 0.9366 | 0.9683 | 0.938 |
| 0.0016 | 5.0 | 360940 | 0.0209 | 0.9965 | 0.9619 | 0.9996 | 0.927 | 0.9635 | 0.9434 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,256 | [
[
-0.041595458984375,
-0.042999267578125,
0.0084381103515625,
0.0102386474609375,
-0.0207061767578125,
-0.0232391357421875,
-0.007328033447265625,
-0.0175323486328125,
0.029022216796875,
0.0286407470703125,
-0.053009033203125,
-0.053619384765625,
-0.04916381835937... |
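The Trainer-based card above uses `lr_scheduler_type: linear` with no warmup steps listed. As an illustrative sketch (plain Python, no framework required; the function name and step counts are assumptions drawn from that card's table), the schedule decays the learning rate linearly from its initial value to zero over the total number of training steps:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05) -> float:
    """Linearly decay base_lr to 0 over total_steps (no warmup phase)."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return base_lr * remaining

# The card above reports 5 epochs ending at step 360940, so:
start = linear_lr(0, 360940)        # full base_lr at the first step
midpoint = linear_lr(180470, 360940)  # half of base_lr halfway through
end = linear_lr(360940, 360940)     # 0.0 at the final step
```

Warmup, when configured, would prepend a phase where the rate ramps up from 0 to `base_lr` before this decay begins; none of the cards here list warmup steps, so it is omitted.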
YakovElm/MariaDB20Classic_Unbalance | 2023-05-25T02:23:56.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/MariaDB20Classic_Unbalance | 0 | 2 | transformers | 2023-05-25T02:23:18 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB20Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB20Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1264
- Train Accuracy: 0.9531
- Validation Loss: 0.1792
- Validation Accuracy: 0.9573
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2700 | 0.9197 | 0.1479 | 0.9698 | 0 |
| 0.2198 | 0.9356 | 0.1380 | 0.9698 | 1 |
| 0.2087 | 0.9297 | 0.1265 | 0.9698 | 2 |
| 0.1787 | 0.9356 | 0.1502 | 0.9698 | 3 |
| 0.1664 | 0.9356 | 0.1463 | 0.9673 | 4 |
| 0.1264 | 0.9531 | 0.1792 | 0.9573 | 5 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,037 | [
[
-0.04180908203125,
-0.036712646484375,
0.00965118408203125,
0.00814056396484375,
-0.0330810546875,
-0.0187835693359375,
-0.003719329833984375,
-0.0180511474609375,
0.0257415771484375,
0.0213623046875,
-0.06005859375,
-0.04754638671875,
-0.04852294921875,
-0.... |
TirkNork/laptop_sentence_classfication | 2023-05-25T05:14:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | TirkNork | null | null | TirkNork/laptop_sentence_classfication | 0 | 2 | transformers | 2023-05-25T03:11:06 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: laptop_sentence_classfication
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# laptop_sentence_classfication
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6946
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 33 | 0.7876 | 0.6231 |
| No log | 2.0 | 66 | 0.6364 | 0.7308 |
| No log | 3.0 | 99 | 0.5647 | 0.7308 |
| No log | 4.0 | 132 | 0.5991 | 0.7846 |
| No log | 5.0 | 165 | 0.5773 | 0.7769 |
| No log | 6.0 | 198 | 0.5898 | 0.8 |
| No log | 7.0 | 231 | 0.7182 | 0.7769 |
| No log | 8.0 | 264 | 0.7451 | 0.7846 |
| No log | 9.0 | 297 | 0.7192 | 0.7923 |
| No log | 10.0 | 330 | 0.6946 | 0.8 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,918 | [
[
-0.03228759765625,
-0.045257568359375,
0.01068115234375,
0.00782012939453125,
-0.0170135498046875,
-0.016845703125,
-0.0058746337890625,
-0.00847625732421875,
0.009613037109375,
0.01544189453125,
-0.047149658203125,
-0.050506591796875,
-0.052398681640625,
-0... |
UchihaMadara/Thesis-SentimentAnalysis-3 | 2023-05-25T03:58:45.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | UchihaMadara | null | null | UchihaMadara/Thesis-SentimentAnalysis-3 | 0 | 2 | transformers | 2023-05-25T03:57:58 |
# Pretrained checkpoint: bert-base-uncased
# Training hyperparameters:
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- prompt_format: sentence aspect - sentiment
# Training results
|Epoch | Train loss| Subtask 3 f1 | Subtask 3 precision | Subtask 3 recall | Subtask 4 accuracy |
|:----:|:---------:|:------------:|:-------------------:|:----------------:|:-----------------:|
|1|305.5731324516237|0.8653648509763618|0.9142236699239956|0.8214634146341463|0.7921951219512195|
|2|160.19575848057866|0.8591029023746701|0.9356321839080459|0.7941463414634147|0.8009756097560976|
|3|101.52328581456095|0.8882175226586102|0.9177939646201873|0.8604878048780488|0.8321951219512195|
|4|63.44610589882359|0.8818737270875764|0.9222577209797657|0.8448780487804878|0.8282926829268292|
|5|43.48708916385658|0.8917835671342685|0.9165808444902163|0.8682926829268293|0.8214634146341463|
| 1,053 | [
[
-0.050262451171875,
-0.0236663818359375,
0.044586181640625,
0.019317626953125,
-0.03179931640625,
-0.01425933837890625,
-0.00868988037109375,
0.0064544677734375,
0.0225982666015625,
0.01325225830078125,
-0.0670166015625,
-0.03936767578125,
-0.04083251953125,
... |
zorbaalive/bert-test | 2023-05-25T05:40:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | zorbaalive | null | null | zorbaalive/bert-test | 0 | 2 | transformers | 2023-05-25T05:26:06 | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: bert-test
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
config: ynat
split: validation
args: ynat
metrics:
- name: F1
type: f1
value: 0.871822787948333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-test
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3693
- F1: 0.8718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2776 | 1.0 | 714 | 0.4056 | 0.8603 |
| 0.2862 | 2.0 | 1428 | 0.3693 | 0.8718 |
### Framework versions
- Transformers 4.27.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
YakovElm/Qt5Classic_Unbalance | 2023-05-25T05:48:58.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Qt5Classic_Unbalance | 0 | 2 | transformers | 2023-05-25T05:48:21 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt5Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt5Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1505
- Train Accuracy: 0.9470
- Validation Loss: 0.3218
- Validation Accuracy: 0.9067
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3372 | 0.8918 | 0.2536 | 0.9294 | 0 |
| 0.3193 | 0.8943 | 0.2479 | 0.9294 | 1 |
| 0.2871 | 0.8948 | 0.2818 | 0.9286 | 2 |
| 0.2276 | 0.9129 | 0.2921 | 0.9278 | 3 |
| 0.1505 | 0.9470 | 0.3218 | 0.9067 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
YakovElm/Apache10Classic_256 | 2023-05-25T06:41:26.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache10Classic_256 | 0 | 2 | transformers | 2023-05-25T06:40:49 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache10Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache10Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2060
- Train Accuracy: 0.9385
- Validation Loss: 0.4085
- Validation Accuracy: 0.8644
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2380 | 0.9348 | 0.4343 | 0.8644 | 0 |
| 0.2199 | 0.9383 | 0.3918 | 0.8644 | 1 |
| 0.2060 | 0.9385 | 0.4085 | 0.8644 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
Intel/deberta-v3-base-mrpc-int8-static | 2023-05-25T07:50:52.000Z | [
"transformers",
"onnx",
"deberta-v2",
"text-classification",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"neural-compressor",
"PostTrainingStatic",
"dataset:glue",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Intel | null | null | Intel/deberta-v3-base-mrpc-int8-static | 0 | 2 | transformers | 2023-05-25T07:22:11 | ---
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- neural-compressor
- PostTrainingStatic
- onnx
datasets:
- glue
metrics:
- f1
---
# INT8 deberta-v3-base-mrpc
## Post-training static quantization
### ONNX
This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [Intel/deberta-v3-base-mrpc](https://huggingface.co/Intel/deberta-v3-base-mrpc).
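Conceptually, post-training int8 quantization maps fp32 weights onto 8-bit integers via a scale factor, which is what shrinks the model roughly 2× below. A minimal symmetric-quantization sketch (illustrative only, not Intel® Neural Compressor's actual algorithm):

```python
def quantize_int8_symmetric(values):
    # map fp32 values onto signed int8 using a single shared scale (symmetric scheme)
    scale = max(abs(v) for v in values) / 127.0
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    # recover approximate fp32 values; error is bounded by the scale
    return [q * scale for q in quantized]
```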
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9185|0.9223|
| **Model size (MB)** |361|705|
#### Load ONNX model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained('Intel/deberta-v3-base-mrpc-int8-static')
```
YakovElm/Qt10Classic_Unbalance | 2023-05-25T07:30:35.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Qt10Classic_Unbalance | 0 | 2 | transformers | 2023-05-25T07:30:01 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt10Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt10Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2165
- Train Accuracy: 0.9208
- Validation Loss: 0.2313
- Validation Accuracy: 0.9416
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2917 | 0.9135 | 0.2157 | 0.9416 | 0 |
| 0.2674 | 0.9210 | 0.2150 | 0.9416 | 1 |
| 0.2591 | 0.9210 | 0.2200 | 0.9416 | 2 |
| 0.2376 | 0.9210 | 0.2135 | 0.9416 | 3 |
| 0.2393 | 0.9181 | 0.2232 | 0.9416 | 4 |
| 0.2564 | 0.9208 | 0.2213 | 0.9416 | 5 |
| 0.2165 | 0.9208 | 0.2313 | 0.9416 | 6 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
Intel/deberta-v3-base-mrpc-int8-dynamic | 2023-06-27T10:32:10.000Z | [
"transformers",
"onnx",
"deberta-v2",
"text-classification",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"neural-compressor",
"PostTrainingDynamic",
"dataset:glue",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Intel | null | null | Intel/deberta-v3-base-mrpc-int8-dynamic | 0 | 2 | transformers | 2023-05-25T07:39:02 | ---
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- neural-compressor
- PostTrainingDynamic
- onnx
datasets:
- glue
metrics:
- f1
---
# INT8 deberta-v3-base-mrpc
## Post-training Dynamic quantization
### ONNX
This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [Intel/deberta-v3-base-mrpc](https://huggingface.co/Intel/deberta-v3-base-mrpc).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9239|0.9223|
| **Model size (MB)** |350|705|
#### Load ONNX model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained('Intel/deberta-v3-base-mrpc-int8-dynamic')
```
SHENMU007/neunit_tts_BASE_V1.0 | 2023-05-26T02:10:39.000Z | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | SHENMU007 | null | null | SHENMU007/neunit_tts_BASE_V1.0 | 0 | 2 | transformers | 2023-05-25T08:16:48 | ---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
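The linear scheduler with 500 warmup steps over 4000 training steps ramps the learning rate up to 1e-05 and then decays it linearly to zero. A minimal sketch of that schedule (not the actual `transformers` scheduler implementation):

```python
def linear_lr(step, base_lr=1e-05, warmup_steps=500, total_steps=4000):
    # linear warmup to base_lr, then linear decay to zero
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```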
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
YakovElm/Apache5Classic_512 | 2023-05-25T08:21:13.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache5Classic_512 | 0 | 2 | transformers | 2023-05-25T08:20:36 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache5Classic_512
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache5Classic_512
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2480
- Train Accuracy: 0.9133
- Validation Loss: 0.5436
- Validation Accuracy: 0.8233
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3081 | 0.9079 | 0.5358 | 0.8233 | 0 |
| 0.2901 | 0.9094 | 0.5686 | 0.8233 | 1 |
| 0.2480 | 0.9133 | 0.5436 | 0.8233 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
YakovElm/Qt15Classic_Unbalance | 2023-05-25T08:43:47.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Qt15Classic_Unbalance | 0 | 2 | transformers | 2023-05-25T08:43:12 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt15Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt15Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0937
- Train Accuracy: 0.9686
- Validation Loss: 0.2791
- Validation Accuracy: 0.9424
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2526 | 0.9286 | 0.1931 | 0.9505 | 0 |
| 0.2277 | 0.9367 | 0.1823 | 0.9505 | 1 |
| 0.2120 | 0.9367 | 0.2099 | 0.9505 | 2 |
| 0.1642 | 0.9432 | 0.2405 | 0.9497 | 3 |
| 0.0937 | 0.9686 | 0.2791 | 0.9424 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
SADAF-IMAMU/train | 2023-07-16T08:54:59.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | SADAF-IMAMU | null | null | SADAF-IMAMU/train | 0 | 2 | transformers | 2023-05-25T09:54:23 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9948
- Macro F1: 0.7856
- Precision: 0.7820
- Recall: 0.7956
- Kappa: 0.6940
- Accuracy: 0.7956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 25
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
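Note how the gradient accumulation setting yields the effective batch size above: gradients from 2 micro-batches of 16 are averaged before each optimizer step, giving a total train batch size of 16 × 2 = 32. A minimal sketch of that accumulation step (illustrative, not the Trainer's internals):

```python
def accumulated_update(micro_batch_grads, accumulation_steps=2):
    # average gradients over `accumulation_steps` micro-batches
    # before applying a single optimizer step
    total = [0.0] * len(micro_batch_grads[0])
    for grads in micro_batch_grads[:accumulation_steps]:
        total = [t + g / accumulation_steps for t, g in zip(total, grads)]
    return total
```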
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Precision | Recall | Kappa | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 101 | 1.1562 | 0.6031 | 0.5561 | 0.7044 | 0.4967 | 0.7044 |
| No log | 2.0 | 203 | 0.9119 | 0.7151 | 0.7107 | 0.7672 | 0.6236 | 0.7672 |
| No log | 3.0 | 304 | 0.8493 | 0.7280 | 0.7139 | 0.7734 | 0.6381 | 0.7734 |
| No log | 4.0 | 406 | 0.8087 | 0.7455 | 0.7632 | 0.7648 | 0.6421 | 0.7648 |
| 0.9431 | 5.0 | 507 | 0.7735 | 0.7779 | 0.7741 | 0.7931 | 0.6858 | 0.7931 |
| 0.9431 | 6.0 | 609 | 0.8201 | 0.7753 | 0.7735 | 0.7869 | 0.6797 | 0.7869 |
| 0.9431 | 7.0 | 710 | 0.8564 | 0.7886 | 0.7883 | 0.8017 | 0.7004 | 0.8017 |
| 0.9431 | 8.0 | 812 | 0.8712 | 0.7799 | 0.7754 | 0.7894 | 0.6854 | 0.7894 |
| 0.9431 | 9.0 | 913 | 0.9142 | 0.7775 | 0.7751 | 0.7869 | 0.6811 | 0.7869 |
| 0.2851 | 10.0 | 1015 | 0.9007 | 0.7820 | 0.7764 | 0.7943 | 0.6913 | 0.7943 |
| 0.2851 | 11.0 | 1116 | 0.9425 | 0.7859 | 0.7825 | 0.7956 | 0.6940 | 0.7956 |
| 0.2851 | 12.0 | 1218 | 0.9798 | 0.7815 | 0.7797 | 0.7906 | 0.6869 | 0.7906 |
| 0.2851 | 13.0 | 1319 | 0.9895 | 0.7895 | 0.7860 | 0.7993 | 0.7003 | 0.7993 |
| 0.2851 | 14.0 | 1421 | 0.9872 | 0.7854 | 0.7813 | 0.7943 | 0.6935 | 0.7943 |
| 0.1273 | 14.93 | 1515 | 0.9948 | 0.7856 | 0.7820 | 0.7956 | 0.6940 | 0.7956 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
YakovElm/Qt20Classic_Unbalance | 2023-05-25T09:56:57.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Qt20Classic_Unbalance | 0 | 2 | transformers | 2023-05-25T09:56:20 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt20Classic_Unbalance
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt20Classic_Unbalance
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0744
- Train Accuracy: 0.9738
- Validation Loss: 0.2085
- Validation Accuracy: 0.9530
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2138 | 0.9462 | 0.1597 | 0.9586 | 0 |
| 0.1984 | 0.9462 | 0.1545 | 0.9586 | 1 |
| 0.1715 | 0.9459 | 0.1812 | 0.9586 | 2 |
| 0.1117 | 0.9584 | 0.2008 | 0.9570 | 3 |
| 0.0744 | 0.9738 | 0.2085 | 0.9530 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
gSperanza/wuensche_klassifikation | 2023-05-25T11:34:13.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | gSperanza | null | null | gSperanza/wuensche_klassifikation | 0 | 2 | sentence-transformers | 2023-05-25T11:33:06 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# gSperanza/wuensche_klassifikation
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
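The contrastive step pairs up labeled examples: same-label pairs become positives, different-label pairs negatives. A minimal sketch of that pair construction (an illustration of the idea, not SetFit's actual sampling code):

```python
from itertools import combinations

def contrastive_pairs(examples):
    # examples: list of (text, label) tuples; same-label pairs get
    # similarity target 1.0, different-label pairs get 0.0
    return [
        (t1, t2, 1.0 if l1 == l2 else 0.0)
        for (t1, l1), (t2, l2) in combinations(examples, 2)
    ]
```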
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("gSperanza/wuensche_klassifikation")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
Chakshu/conversation_terminator_classifier | 2023-05-25T17:16:34.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"en",
"dataset:Chakshu/conversation_ender",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Chakshu | null | null | Chakshu/conversation_terminator_classifier | 0 | 2 | transformers | 2023-05-25T12:11:21 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Chakshu/conversation_terminator_classifier
results: []
datasets:
- Chakshu/conversation_ender
language:
- en
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Chakshu/conversation_terminator_classifier
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0364
- Train Binary Accuracy: 0.9915
- Epoch: 8
## Example Usage
```py
from transformers import TFBertForSequenceClassification, BertTokenizer
import tensorflow as tf
model_name = 'Chakshu/conversation_terminator_classifier'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = TFBertForSequenceClassification.from_pretrained(model_name)
inputs = tokenizer("I will talk to you later", return_tensors="np", padding=True)
outputs = model(inputs.input_ids, inputs.attention_mask)
probabilities = tf.nn.sigmoid(outputs.logits)
# Round the probabilities to the nearest integer to get the class prediction
predicted_class = tf.round(probabilities)
print("The last message by the user indicates that the conversation has", "'ENDED'" if int(predicted_class.numpy()) == 1 else "'NOT ENDED'")
```
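The sigmoid-and-round step at the end of the snippet can also be expressed without TensorFlow. A plain-Python sketch of the same decision rule (threshold choice is an illustrative assumption):

```python
import math

def predict_ended(logit, threshold=0.5):
    # squash the raw logit to a probability, then threshold it
    # (1 = conversation ended, 0 = not ended)
    prob = 1.0 / (1.0 + math.exp(-logit))
    return 1 if prob >= threshold else 0
```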
## Model description
Classifies if the user is ending the conversation or wanting to continue it.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Binary Accuracy | Epoch |
|:----------:|:---------------------:|:-----:|
| 0.2552 | 0.9444 | 0 |
| 0.1295 | 0.9872 | 1 |
| 0.0707 | 0.9872 | 2 |
| 0.0859 | 0.9829 | 3 |
| 0.0484 | 0.9872 | 4 |
| 0.0363 | 0.9957 | 5 |
| 0.0209 | 1.0 | 6 |
| 0.0268 | 0.9957 | 7 |
| 0.0364 | 0.9915 | 8 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
minhtoan/DeBERTa-MLM-Vietnamese-Nom | 2023-05-25T15:13:22.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"fill-mask",
"nlp",
"lm",
"mlm",
"vi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | minhtoan | null | null | minhtoan/DeBERTa-MLM-Vietnamese-Nom | 0 | 2 | transformers | 2023-05-25T12:32:07 | ---
language:
- vi
pipeline_tag: fill-mask
widget:
- text: '[MASK]仍𠎬英䧺淑女'
tags:
- nlp
- lm
- mlm
---
# Pre-trained DeBERTaV2 Language Model for Vietnamese Nôm
DeBERTaV2 is an advanced variant of the DeBERTa model, and its `DebertaV2ForMaskedLM` head is optimized for masked language modeling (MLM) tasks. Built upon the success of DeBERTa, DeBERTaV2 incorporates further enhancements that improve the model's performance in understanding and generating natural language.
This is a pre-trained DeBERTa masked language model designed exclusively for Chữ Nôm, the traditional Vietnamese writing system.
The model was trained on literary works and poetry: Bai ca ran co bac, Buom hoa tan truyen, Chinh phu ngam, Gia huan ca, Ho Xuan Huong, Luc Van Tien, and the Tale of Kieu (1870, 1871, and 1902 editions), among others.
# Nôm language models
Chữ Nôm language models refer to language models specifically designed and trained to understand and generate text in Chữ Nôm, the traditional writing system used for Vietnamese prior to the 20th century. These language models are trained using large datasets of Chữ Nôm texts to learn the patterns, grammar, and vocabulary specific to this writing system.
# Develop Nôm language model
Developing a high-quality Chữ Nôm language model requires a substantial amount of specialized data and expertise. Here are the general steps involved in creating a Chữ Nôm language model:
1. Data Collection: Gather a sizable corpus of Chữ Nôm texts. This can include historical documents, literature, poetry, and other written materials in Chữ Nôm. It's essential to ensure the dataset covers a wide range of topics and genres.
2. Data Preprocessing: Clean and preprocess the Chữ Nôm dataset. This step involves tokenization, normalization, and segmentation of the text into individual words or characters. Additionally, special attention needs to be given to handling ambiguities, variant spellings, and character forms in Chữ Nôm.
3. Model Architecture: Select an appropriate neural network architecture for your Chữ Nôm language model. Popular choices include transformer-based architectures like BERT, GPT, or their variants, which have shown strong performance in various NLP tasks.
4. Model Training: Train the Chữ Nôm language model on your preprocessed dataset. This typically involves pretraining the model on a masked language modeling objective, where the model predicts masked or missing tokens in a sentence. Additionally, you can employ other pretraining tasks like next sentence prediction or document-level modeling to enhance the model's understanding of context.
5. Fine-tuning: Fine-tune the pretrained model on specific downstream tasks or domains relevant to Chữ Nôm. This step involves training the model on task-specific datasets or applying transfer learning techniques to adapt the model to more specific tasks.
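To make step 4 concrete, here is a minimal, hedged sketch of the masking procedure at the heart of the masked-language-modeling objective. The token IDs, the 15% masking rate, and the `-100` ignore index are illustrative defaults, not the configuration actually used for this model:

```python
import random

def mask_tokens(token_ids, mask_token_id, mlm_probability=0.15, seed=None):
    """Randomly replace a fraction of tokens with the mask token.

    Returns (masked_ids, labels): labels keep the original ID at masked
    positions and -100 elsewhere (the ignore index commonly used so the
    loss is computed only at masked positions).
    """
    rng = random.Random(seed)
    masked_ids, labels = [], []
    for tid in token_ids:
        if rng.random() < mlm_probability:
            masked_ids.append(mask_token_id)  # the model must predict this token
            labels.append(tid)
        else:
            masked_ids.append(tid)
            labels.append(-100)               # position ignored by the loss
    return masked_ids, labels

masked, labels = mask_tokens([5, 8, 13, 21, 34], mask_token_id=4, seed=1)
```

During pretraining, the model is trained to recover the original IDs at the masked positions; libraries such as Transformers provide an equivalent collator for this step.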
# How to use the model
~~~~
from transformers import RobertaTokenizerFast, RobertaForMaskedLM
import torch

# Load the tokenizer
tokenizer = RobertaTokenizerFast.from_pretrained('minhtoan/DeBERTa-MLM-Vietnamese-Nom')
# Load the model
model = RobertaForMaskedLM.from_pretrained('minhtoan/DeBERTa-MLM-Vietnamese-Nom')

# Example input sentence ending with a masked token
input_sentence = '想払𨀐' + tokenizer.mask_token

# Tokenize the sentence
input_tokens = tokenizer(input_sentence, return_tensors='pt')['input_ids']

# Locate the masked token
mask_token_index = (input_tokens[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

# Generate predictions
with torch.no_grad():
    outputs = model(input_tokens)
predictions = outputs.logits.argmax(dim=-1)

# Decode and print the predicted word
predicted_word = tokenizer.decode(predictions[0, mask_token_index])
print("Predicted word:", predicted_word)
~~~~
## Author
Phan Minh Toan
Mantas/autotrain-finbert-61675134842 | 2023-05-25T13:06:11.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:Mantas/autotrain-data-finbert",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Mantas | null | null | Mantas/autotrain-finbert-61675134842 | 0 | 2 | transformers | 2023-05-25T13:04:57 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- Mantas/autotrain-data-finbert
co2_eq_emissions:
emissions: 0.30123820373366667
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 61675134842
- CO2 Emissions (in grams): 0.3012
## Validation Metrics
- Loss: 0.130
- Accuracy: 0.960
- Precision: 0.949
- Recall: 0.972
- AUC: 0.992
- F1: 0.960
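F1 is the harmonic mean of precision and recall, so the reported values can be cross-checked against each other using the rounded numbers above:

```python
precision, recall = 0.949, 0.972
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.960, matching the reported F1
```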
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Mantas/autotrain-finbert-61675134842
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Mantas/autotrain-finbert-61675134842", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Mantas/autotrain-finbert-61675134842", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
kitrak-rev/dqn-SpaceInvadersNoFrameskip-v4 | 2023-05-25T13:21:04.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | kitrak-rev | null | null | kitrak-rev/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-05-25T13:20:32 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 566.00 +/- 95.83
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kitrak-rev -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kitrak-rev -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kitrak-rev
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
YakovElm/Apache15Classic_256 | 2023-05-25T14:03:02.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache15Classic_256 | 0 | 2 | transformers | 2023-05-25T14:02:24 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache15Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache15Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1801
- Train Accuracy: 0.9542
- Validation Loss: 0.3448
- Validation Accuracy: 0.8924
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1981 | 0.9477 | 0.3550 | 0.8924 | 0 |
| 0.1843 | 0.9542 | 0.3590 | 0.8924 | 1 |
| 0.1801 | 0.9542 | 0.3448 | 0.8924 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
TirkNork/laptop_sentence_classfication_BERT | 2023-05-25T17:36:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | TirkNork | null | null | TirkNork/laptop_sentence_classfication_BERT | 0 | 2 | transformers | 2023-05-25T16:44:04 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: laptop_sentence_classfication_BERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# laptop_sentence_classfication_BERT
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8406
- Accuracy: 0.8769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.4663 | 0.8077 |
| No log | 2.0 | 50 | 0.4100 | 0.8308 |
| No log | 3.0 | 75 | 0.4531 | 0.8615 |
| No log | 4.0 | 100 | 0.4976 | 0.8846 |
| No log | 5.0 | 125 | 0.6578 | 0.8385 |
| No log | 6.0 | 150 | 0.5496 | 0.8923 |
| No log | 7.0 | 175 | 0.5331 | 0.9 |
| No log | 8.0 | 200 | 0.6781 | 0.8538 |
| No log | 9.0 | 225 | 0.7478 | 0.8538 |
| No log | 10.0 | 250 | 0.8248 | 0.8462 |
| No log | 11.0 | 275 | 0.6933 | 0.8846 |
| No log | 12.0 | 300 | 0.7508 | 0.8846 |
| No log | 13.0 | 325 | 0.7998 | 0.8846 |
| No log | 14.0 | 350 | 0.8110 | 0.8846 |
| No log | 15.0 | 375 | 0.8330 | 0.8846 |
| No log | 16.0 | 400 | 0.8348 | 0.8692 |
| No log | 17.0 | 425 | 0.8406 | 0.8692 |
| No log | 18.0 | 450 | 0.8381 | 0.8615 |
| No log | 19.0 | 475 | 0.8391 | 0.8769 |
| 0.0826 | 20.0 | 500 | 0.8406 | 0.8769 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
cybersyn/mdeberta-homomex-track2 | 2023-05-29T11:54:47.000Z | [
"transformers",
"tf",
"deberta-v2",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | cybersyn | null | null | cybersyn/mdeberta-homomex-track2 | 0 | 2 | transformers | 2023-05-25T17:09:31 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: mdeberta-homomex-track2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mdeberta-homomex-track2
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamW', 'weight_decay': 0.004, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 115, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
kaitschorr/tutorial | 2023-05-25T19:56:19.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | kaitschorr | null | null | kaitschorr/tutorial | 0 | 2 | transformers | 2023-05-25T17:42:19 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
model-index:
- name: tutorial
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tutorial
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
RahulYadav/wav2vec2-xsl-r-300m-hinglish-model | 2023-05-26T14:04:01.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | RahulYadav | null | null | RahulYadav/wav2vec2-xsl-r-300m-hinglish-model | 0 | 2 | transformers | 2023-05-25T18:10:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xsl-r-300m-hinglish-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xsl-r-300m-hinglish-model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 53.3109
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 53.5422 | 2.0 | 2 | 54.4465 | 1.0 |
| 52.8519 | 4.0 | 4 | 54.4457 | 1.0 |
| 52.7079 | 6.0 | 6 | 54.4429 | 1.0 |
| 52.9959 | 8.0 | 8 | 54.4348 | 1.0 |
| 53.5864 | 10.0 | 10 | 54.4155 | 1.0 |
| 54.2708 | 12.0 | 12 | 54.3822 | 1.0 |
| 52.6333 | 14.0 | 14 | 54.3357 | 1.0 |
| 55.1505 | 16.0 | 16 | 54.2576 | 1.0 |
| 53.6833 | 18.0 | 18 | 54.2131 | 1.0 |
| 62.8162 | 20.0 | 20 | 54.1127 | 1.0 |
| 54.0794 | 22.0 | 22 | 53.9824 | 1.0 |
| 52.5195 | 24.0 | 24 | 53.8243 | 1.0 |
| 51.6922 | 26.0 | 26 | 53.6767 | 1.0 |
| 51.0235 | 28.0 | 28 | 53.5179 | 1.0 |
| 51.0729 | 30.0 | 30 | 53.3109 | 1.0 |
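The Wer column above is the word error rate: the word-level edit distance between the reference and hypothesis transcripts, divided by the reference length, so a WER of 1.0 means no reference word was recovered. A minimal sketch (not the evaluation code actually used here):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, sub)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

In practice libraries such as `jiwer` or the `evaluate` package provide this metric.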
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
AustinCarthy/Onlyphish_10K_fromB_BFall_10KGen_topP_0.75_noaddedB | 2023-05-25T20:57:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Onlyphish_10K_fromB_BFall_10KGen_topP_0.75_noaddedB | 0 | 2 | transformers | 2023-05-25T19:51:15 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_10K_fromB_BFall_10KGen_topP_0.75_noaddedB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_10K_fromB_BFall_10KGen_topP_0.75_noaddedB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0557
- Accuracy: 0.9950
- F1: 0.9452
- Precision: 0.9960
- Recall: 0.8994
- Roc Auc Score: 0.9496
- Tpr At Fpr 0.01: 0.8826
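"Tpr At Fpr 0.01" is the true-positive rate achieved at the best score threshold whose false-positive rate stays at or below 1%. A minimal sketch of how such a number can be computed from classifier scores (an illustration, not the evaluation code actually used):

```python
def tpr_at_fpr(pos_scores, neg_scores, max_fpr=0.01):
    """Best true-positive rate over thresholds whose FPR <= max_fpr."""
    best_tpr = 0.0
    for t in sorted(set(pos_scores) | set(neg_scores), reverse=True):
        fpr = sum(s >= t for s in neg_scores) / len(neg_scores)
        if fpr <= max_fpr:
            tpr = sum(s >= t for s in pos_scores) / len(pos_scores)
            best_tpr = max(best_tpr, tpr)
    return best_tpr

# Two of the 100 negatives score high, so only the strictest thresholds
# keep FPR <= 1%; just one of the three positives clears them.
rate = tpr_at_fpr([0.9, 0.8, 0.2], [0.95, 0.85] + [0.05] * 98)
```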
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0118 | 1.0 | 6875 | 0.0270 | 0.9930 | 0.9214 | 0.9947 | 0.8582 | 0.9290 | 0.8176 |
| 0.0063 | 2.0 | 13750 | 0.0301 | 0.9944 | 0.9383 | 0.9957 | 0.8872 | 0.9435 | 0.855 |
| 0.0023 | 3.0 | 20625 | 0.0342 | 0.9951 | 0.9468 | 0.9900 | 0.9072 | 0.9534 | 0.8402 |
| 0.0 | 4.0 | 27500 | 0.0426 | 0.9954 | 0.9500 | 0.9937 | 0.91 | 0.9549 | 0.8686 |
| 0.0 | 5.0 | 34375 | 0.0557 | 0.9950 | 0.9452 | 0.9960 | 0.8994 | 0.9496 | 0.8826 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
AustinCarthy/Onlyphish_10K_fromB_BFall_20KGen_topP_0.75_noaddedB | 2023-05-26T14:09:19.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Onlyphish_10K_fromB_BFall_20KGen_topP_0.75_noaddedB | 0 | 2 | transformers | 2023-05-25T20:57:26 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_10K_fromB_BFall_20KGen_topP_0.75_noaddedB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_10K_fromB_BFall_20KGen_topP_0.75_noaddedB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0515
- Accuracy: 0.9951
- F1: 0.9454
- Precision: 0.9973
- Recall: 0.8986
- Roc Auc Score: 0.9492
- Tpr At Fpr 0.01: 0.8868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0132 | 1.0 | 7188 | 0.0420 | 0.9915 | 0.9029 | 0.9945 | 0.8268 | 0.9133 | 0.7952 |
| 0.0034 | 2.0 | 14376 | 0.0398 | 0.9939 | 0.9322 | 0.9950 | 0.8768 | 0.9383 | 0.8162 |
| 0.0022 | 3.0 | 21564 | 0.0348 | 0.9955 | 0.9512 | 0.9937 | 0.9122 | 0.9560 | 0.886 |
| 0.0 | 4.0 | 28752 | 0.0360 | 0.9955 | 0.9507 | 0.9840 | 0.9196 | 0.9594 | 0.0 |
| 0.0 | 5.0 | 35940 | 0.0515 | 0.9951 | 0.9454 | 0.9973 | 0.8986 | 0.9492 | 0.8868 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
YakovElm/Apache20Classic_256 | 2023-05-25T21:22:47.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache20Classic_256 | 0 | 2 | transformers | 2023-05-25T21:21:27 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache20Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache20Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1516
- Train Accuracy: 0.9624
- Validation Loss: 0.3379
- Validation Accuracy: 0.9055
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1700 | 0.9587 | 0.3521 | 0.9055 | 0 |
| 0.1543 | 0.9624 | 0.3551 | 0.9055 | 1 |
| 0.1516 | 0.9624 | 0.3379 | 0.9055 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
TheBloke/Vigogne-Instruct-13B-GPTQ | 2023-08-21T13:57:39.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alpaca",
"LLM",
"fr",
"dataset:tatsu-lab/alpaca",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Vigogne-Instruct-13B-GPTQ | 2 | 2 | transformers | 2023-05-25T21:59:20 | ---
license: other
language:
- fr
pipeline_tag: text-generation
library_name: transformers
tags:
- alpaca
- llama
- LLM
datasets:
- tatsu-lab/alpaca
inference: false
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Vigogne Instruct 13B - A French instruction-following LLaMa model GPTQ
These files are GPTQ 4bit model files for [Vigogne Instruct 13B - A French instruction-following LLaMa model](https://huggingface.co/bofenghuang/vigogne-instruct-13b).
It is the result of merging the LoRA then quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## Other repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vigogne-Instruct-13B-GPTQ)
* [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Vigogne-Instruct-13B-GGML)
* [Unquantised fp16 model in HF format](https://huggingface.co/TheBloke/Vigogne-Instruct-13B-HF)
## How to easily download and use this model in text-generation-webui
Open the text-generation-webui UI as normal.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Vigogne-Instruct-13B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `Vigogne-Instruct-13B-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
## Provided files
**Compatible file - Vigogne-Instruct-13B-GPTQ-4bit-128g.no-act-order.safetensors**
In the `main` branch you will find `Vigogne-Instruct-13B-GPTQ-4bit-128g.no-act-order.safetensors`
This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility.
It was created with groupsize 128 to ensure higher quality inference, without `--act-order` parameter to maximise compatibility.
* `Vigogne-Instruct-13B-GPTQ-4bit-128g.no-act-order.safetensors`
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ
* Works with text-generation-webui one-click-installers
* Parameters: Groupsize = 128. No act-order.
* Command used to create the GPTQ:
```
python llama.py /workspace/process/TheBloke_Vigogne-Instruct-13B-GGML/HF wikitext2 --wbits 4 --true-sequential --groupsize 128 --save_safetensors /workspace/process/TheBloke_Vigogne-Instruct-13B-GGML/gptq/Vigogne-Instruct-13B-GPTQ-4bit-128g.no-act-order.safetensors
```
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card
<p align="center" width="100%">
<img src="https://huggingface.co/bofenghuang/vigogne-instruct-13b/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
</p>
# Vigogne-instruct-13b: A French Instruction-following LLaMA Model
Vigogne-instruct-13b is a LLaMA-13B model fine-tuned to follow instructions in French 🇫🇷.
For more information, please visit the GitHub repo: https://github.com/bofenghuang/vigogne
**Usage and License Notices**: Same as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), Vigogne is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
## Usage
This repo only contains the low-rank adapter. In order to access the complete model, you also need to load the base LLM model and tokenizer.
```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer
base_model_name_or_path = "name/or/path/to/hf/llama/13b/model"
lora_model_name_or_path = "bofenghuang/vigogne-instruct-13b"
tokenizer = LlamaTokenizer.from_pretrained(base_model_name_or_path, padding_side="right", use_fast=False)
model = LlamaForCausalLM.from_pretrained(
base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto",
)
model = PeftModel.from_pretrained(model, lora_model_name_or_path)
```
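Once the base model and adapter are loaded, prompts are usually wrapped in an Alpaca-style instruction template before being passed to `model.generate()`. The exact French template is defined in the Vigogne GitHub repo; the sketch below uses a generic English variant purely for illustration:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in an Alpaca-style template.

    The wording Vigogne actually uses is defined in its GitHub repo;
    this English variant is only illustrative.
    """
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

prompt = build_prompt("Expliquez la photosynthèse en une phrase.")
# The resulting string is then tokenized and fed to model.generate().
```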
You can run inference with this model using the following Google Colab notebook.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Limitations
Vigogne is still under development and has many limitations that remain to be addressed. Please note that the model may generate harmful or biased content, incorrect information, or generally unhelpful answers.

AustinCarthy/Onlyphish_10K_fromB_BFall_30KGen_topP_0.75_noaddedB | last modified: 2023-05-25T23:18:02.000Z | pipeline: text-classification | author: AustinCarthy | library: transformers | created: 2023-05-25T22:04:59 | likes: 0 | downloads: 2 | tags: transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us

---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_10K_fromB_BFall_30KGen_topP_0.75_noaddedB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_10K_fromB_BFall_30KGen_topP_0.75_noaddedB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0506
- Accuracy: 0.9949
- F1: 0.9434
- Precision: 0.9975
- Recall: 0.8948
- Roc Auc Score: 0.9473
- Tpr At Fpr 0.01: 0.8848
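As a quick consistency check on the numbers above, the reported F1 is the harmonic mean of the reported precision and recall:

```python
precision, recall = 0.9975, 0.8948

# F1 = harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9434, matching the reported F1
```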
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
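With `lr_scheduler_type: linear` and no warmup steps configured, the learning rate decays linearly from 2e-05 to zero over the total number of training steps. A minimal sketch of that schedule (illustrative, not the Trainer's internal implementation):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05) -> float:
    """Linearly decay the learning rate to zero over training (no warmup)."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

total = 5 * 7500  # 5 epochs x 7500 optimizer steps per epoch in this run
print(linear_lr(0, total))      # 2e-05 at the start of training
print(linear_lr(total, total))  # 0.0 at the end
```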
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0119 | 1.0 | 7500 | 0.0230 | 0.9947 | 0.9423 | 0.9869 | 0.9016 | 0.9505 | 0.7658 |
| 0.0067 | 2.0 | 15000 | 0.0320 | 0.9950 | 0.9447 | 0.9958 | 0.8986 | 0.9492 | 0.8786 |
| 0.0013 | 3.0 | 22500 | 0.0353 | 0.9953 | 0.9480 | 0.9945 | 0.9056 | 0.9527 | 0.8772 |
| 0.0007 | 4.0 | 30000 | 0.0373 | 0.9955 | 0.9509 | 0.9939 | 0.9114 | 0.9556 | 0.8862 |
| 0.0 | 5.0 | 37500 | 0.0506 | 0.9949 | 0.9434 | 0.9975 | 0.8948 | 0.9473 | 0.8848 |
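The "Tpr At Fpr 0.01" column reports the true-positive rate at the strictest decision threshold whose false-positive rate stays at or below 1%. An illustrative threshold sweep on toy scores (not the actual evaluation code):

```python
def tpr_at_fpr(scores_pos, scores_neg, max_fpr=0.01):
    """Highest TPR achievable while keeping FPR <= max_fpr (threshold sweep)."""
    best_tpr = 0.0
    for t in sorted(set(scores_pos) | set(scores_neg), reverse=True):
        fpr = sum(s >= t for s in scores_neg) / len(scores_neg)
        if fpr <= max_fpr:
            tpr = sum(s >= t for s in scores_pos) / len(scores_pos)
            best_tpr = max(best_tpr, tpr)
    return best_tpr

pos = [0.99, 0.95, 0.90, 0.40]  # scores for true phishing URLs (toy data)
neg = [0.30, 0.20, 0.10, 0.05]  # scores for benign URLs (toy data)
print(tpr_at_fpr(pos, neg))     # 1.0 on this toy data
```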
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2

AustinCarthy/Onlyphish_10K_fromB_BFall_40KGen_topP_0.75_noaddedB | last modified: 2023-05-26T14:22:32.000Z | pipeline: text-classification | author: AustinCarthy | library: transformers | created: 2023-05-25T23:18:25 | likes: 0 | downloads: 2 | tags: transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us

---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_10K_fromB_BFall_40KGen_topP_0.75_noaddedB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_10K_fromB_BFall_40KGen_topP_0.75_noaddedB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0500
- Accuracy: 0.9943
- F1: 0.9371
- Precision: 0.9955
- Recall: 0.8852
- Roc Auc Score: 0.9425
- Tpr At Fpr 0.01: 0.8404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0145 | 1.0 | 7813 | 0.0237 | 0.9946 | 0.9415 | 0.9760 | 0.9094 | 0.9541 | 0.8006 |
| 0.007 | 2.0 | 15626 | 0.0356 | 0.9943 | 0.9365 | 0.9953 | 0.8842 | 0.9420 | 0.8444 |
| 0.0023 | 3.0 | 23439 | 0.0402 | 0.9949 | 0.9435 | 0.9927 | 0.899 | 0.9493 | 0.8434 |
| 0.0019 | 4.0 | 31252 | 0.0453 | 0.9947 | 0.9412 | 0.9955 | 0.8924 | 0.9461 | 0.8592 |
| 0.0 | 5.0 | 39065 | 0.0500 | 0.9943 | 0.9371 | 0.9955 | 0.8852 | 0.9425 | 0.8404 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2

amanda-cristina/finetuning-sentiment-longform-4500 | last modified: 2023-05-25T23:27:46.000Z | pipeline: text-classification | author: amanda-cristina | library: transformers | created: 2023-05-25T23:20:27 | likes: 0 | downloads: 2 | tags: transformers, pytorch, tensorboard, longformer, text-classification, generated_from_trainer, license:cc-by-sa-4.0, endpoints_compatible, region:us

---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-longform-4500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-longform-4500
This model is a fine-tuned version of [kiddothe2b/longformer-mini-1024](https://huggingface.co/kiddothe2b/longformer-mini-1024) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8168
- F1: 0.8025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5864 | 1.0 | 563 | 0.4922 | 0.7614 | 0.7711 |
| 0.4896 | 2.0 | 1126 | 0.4363 | 0.8125 | 0.8120 |
| 0.4403 | 3.0 | 1689 | 0.4095 | 0.8168 | 0.8025 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3

AustinCarthy/MixGPT2_10K_fromB_BFall_10KGen_topP_0.75_noaddedB | last modified: 2023-05-26T01:21:59.000Z | pipeline: text-classification | author: AustinCarthy | library: transformers | created: 2023-05-26T00:16:09 | likes: 0 | downloads: 2 | tags: transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us

---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_10K_fromB_BFall_10KGen_topP_0.75_noaddedB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_10K_fromB_BFall_10KGen_topP_0.75_noaddedB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0624
- Accuracy: 0.9941
- F1: 0.9342
- Precision: 0.9971
- Recall: 0.8788
- Roc Auc Score: 0.9393
- Tpr At Fpr 0.01: 0.8718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0103 | 1.0 | 6875 | 0.0219 | 0.9942 | 0.9359 | 0.9807 | 0.895 | 0.9471 | 0.7034 |
| 0.0064 | 2.0 | 13750 | 0.0368 | 0.9942 | 0.9359 | 0.9922 | 0.8856 | 0.9426 | 0.8102 |
| 0.0019 | 3.0 | 20625 | 0.0487 | 0.9942 | 0.9355 | 0.9977 | 0.8806 | 0.9403 | 0.88 |
| 0.0005 | 4.0 | 27500 | 0.0574 | 0.9942 | 0.9352 | 0.9944 | 0.8826 | 0.9412 | 0.8494 |
| 0.0 | 5.0 | 34375 | 0.0624 | 0.9941 | 0.9342 | 0.9971 | 0.8788 | 0.9393 | 0.8718 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2

YakovElm/Apache10Classic_512 | last modified: 2023-05-26T00:54:30.000Z | pipeline: text-classification | author: YakovElm | library: transformers | created: 2023-05-26T00:53:51 | likes: 0 | downloads: 2 | tags: transformers, tf, bert, text-classification, generated_from_keras_callback, license:apache-2.0, endpoints_compatible, region:us

---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache10Classic_512
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache10Classic_512
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2149
- Train Accuracy: 0.9383
- Validation Loss: 0.4074
- Validation Accuracy: 0.8644
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
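The optimizer dictionary above is Keras's serialized Adam configuration (learning rate 3e-05, beta_1 0.9, beta_2 0.999, epsilon 1e-08, gradient clipping by norm 1.0). The update rule it encodes can be sketched for a single scalar parameter as follows (illustrative only):

```python
def adam_step(w, g, m, v, t, lr=3e-05, b1=0.9, b2=0.999, eps=1e-08):
    """One Adam update for a single scalar parameter w with gradient g."""
    m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)          # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (v_hat ** 0.5 + eps)
    return w, m, v

w, m, v = 1.0, 0.0, 0.0
w, m, v = adam_step(w, g=0.5, m=m, v=v, t=1)
print(w)  # first step moves by roughly lr, regardless of gradient scale
```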
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2398 | 0.9377 | 0.4290 | 0.8644 | 0 |
| 0.2231 | 0.9383 | 0.3830 | 0.8644 | 1 |
| 0.2149 | 0.9383 | 0.4074 | 0.8644 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3

thisisHJLee/distilbert-base-uncased-finetuned-emotion | last modified: 2023-05-26T01:15:40.000Z | pipeline: text-classification | author: thisisHJLee | library: transformers | created: 2023-05-26T01:10:45 | likes: 0 | downloads: 2 | tags: transformers, pytorch, tensorboard, distilbert, text-classification, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us

---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2190
- Accuracy: 0.925
- F1: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8292 | 1.0 | 250 | 0.3101 | 0.9095 | 0.9067 |
| 0.2482 | 2.0 | 500 | 0.2190 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3

franco1102/platzi-distilroberta-base-mrpc-glue-franco-medina | last modified: 2023-05-26T02:17:30.000Z | pipeline: text-classification | author: franco1102 | library: transformers | created: 2023-05-26T01:13:50 | likes: 0 | downloads: 2 | tags: transformers, pytorch, tensorboard, roberta, text-classification, generated_from_trainer, dataset:glue, license:apache-2.0, model-index, endpoints_compatible, region:us

---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
"Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["In a statement , Mr. Rowland said : As is the case with all appointees , Commissioner Anson is accountable to me .",
"As is the case with all appointees, Commissioner Anson is accountable to me, Rowland said ."]
example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-franco-medina
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8357843137254902
- name: F1
type: f1
value: 0.8718929254302105
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-franco-medina
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5966
- Accuracy: 0.8358
- F1: 0.8719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5343 | 1.09 | 500 | 0.4880 | 0.8309 | 0.8752 |
| 0.4025 | 2.18 | 1000 | 0.5966 | 0.8358 | 0.8719 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3

AustinCarthy/MixGPT2_10K_fromB_BFall_20KGen_topP_0.75_noaddedB | last modified: 2023-05-26T02:30:06.000Z | pipeline: text-classification | author: AustinCarthy | library: transformers | created: 2023-05-26T01:22:18 | likes: 0 | downloads: 2 | tags: transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us

---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_10K_fromB_BFall_20KGen_topP_0.75_noaddedB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_10K_fromB_BFall_20KGen_topP_0.75_noaddedB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0642
- Accuracy: 0.9943
- F1: 0.9367
- Precision: 0.9968
- Recall: 0.8834
- Roc Auc Score: 0.9416
- Tpr At Fpr 0.01: 0.8766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0121 | 1.0 | 7188 | 0.0325 | 0.9929 | 0.9206 | 0.9906 | 0.8598 | 0.9297 | 0.7518 |
| 0.0047 | 2.0 | 14376 | 0.0269 | 0.9943 | 0.9365 | 0.9962 | 0.8836 | 0.9417 | 0.8458 |
| 0.0032 | 3.0 | 21564 | 0.0412 | 0.9945 | 0.9385 | 0.9944 | 0.8886 | 0.9442 | 0.8502 |
| 0.0004 | 4.0 | 28752 | 0.0586 | 0.9938 | 0.9301 | 0.9966 | 0.872 | 0.9359 | 0.8558 |
| 0.0003 | 5.0 | 35940 | 0.0642 | 0.9943 | 0.9367 | 0.9968 | 0.8834 | 0.9416 | 0.8766 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2

YakovElm/Hyperledger5Classic_256 | last modified: 2023-05-26T02:03:23.000Z | pipeline: text-classification | author: YakovElm | library: transformers | created: 2023-05-26T02:02:46 | likes: 0 | downloads: 2 | tags: transformers, tf, bert, text-classification, generated_from_keras_callback, license:apache-2.0, endpoints_compatible, region:us

---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger5Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger5Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3507
- Train Accuracy: 0.8616
- Validation Loss: 0.4339
- Validation Accuracy: 0.8133
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4238 | 0.8478 | 0.4192 | 0.8361 | 0 |
| 0.3849 | 0.8547 | 0.4131 | 0.8361 | 1 |
| 0.3507 | 0.8616 | 0.4339 | 0.8133 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3

AustinCarthy/MixGPT2_10K_fromB_BFall_30KGen_topP_0.75_noaddedB | last modified: 2023-05-26T03:39:34.000Z | pipeline: text-classification | author: AustinCarthy | library: transformers | created: 2023-05-26T02:30:22 | likes: 0 | downloads: 2 | tags: transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us

---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_10K_fromB_BFall_30KGen_topP_0.75_noaddedB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_10K_fromB_BFall_30KGen_topP_0.75_noaddedB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0595
- Accuracy: 0.9942
- F1: 0.9358
- Precision: 0.9968
- Recall: 0.8818
- Roc Auc Score: 0.9408
- Tpr At Fpr 0.01: 0.8582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0134 | 1.0 | 7500 | 0.0271 | 0.9934 | 0.9272 | 0.9830 | 0.8774 | 0.9383 | 0.7528 |
| 0.0056 | 2.0 | 15000 | 0.0291 | 0.9946 | 0.9406 | 0.9907 | 0.8954 | 0.9475 | 0.8226 |
| 0.0038 | 3.0 | 22500 | 0.0312 | 0.9941 | 0.9341 | 0.9937 | 0.8812 | 0.9405 | 0.8302 |
| 0.0016 | 4.0 | 30000 | 0.0390 | 0.9951 | 0.9463 | 0.9945 | 0.9026 | 0.9512 | 0.852 |
| 0.0 | 5.0 | 37500 | 0.0595 | 0.9942 | 0.9358 | 0.9968 | 0.8818 | 0.9408 | 0.8582 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2

limcheekin/flan-t5-xxl-ct2 | last modified: 2023-05-30T12:15:05.000Z | pipeline: null | author: limcheekin | library: transformers | created: 2023-05-26T03:32:31 | likes: 0 | downloads: 2 | tags: transformers, ctranslate2, flan-t5-xxl, quantization, int8, en, license:apache-2.0, endpoints_compatible, region:us

---
license: apache-2.0
language:
- en
tags:
- ctranslate2
- flan-t5-xxl
- quantization
- int8
---
# Model Card for FLAN T5 XXL Q8
This model is an int8-quantized version of [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl).
## Model Details
### Model Description
The model was quantized using [CTranslate2](https://opennmt.net/CTranslate2/) with the following command:
```
ct2-transformers-converter --model google/flan-t5-xxl --output_dir google/flan-t5-xxl-ct2 --copy_files tokenizer.json tokenizer_config.json special_tokens_map.json spiece.model --quantization int8 --force --low_cpu_mem_usage
```
If you want to perform the quantization yourself, you need to install the following dependencies:
```
pip install -qU ctranslate2 transformers[torch] sentencepiece accelerate
```
- **Shared by:** Lim Chee Kin
- **License:** Apache 2.0
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import ctranslate2
import transformers
translator = ctranslate2.Translator("google/flan-t5-xxl-ct2")
tokenizer = transformers.AutoTokenizer.from_pretrained("google/flan-t5-xxl-ct2")
input_text = "translate English to German: The house is wonderful."
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(input_text))
results = translator.translate_batch([input_tokens])
output_tokens = results[0].hypotheses[0]
output_text = tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens))
print(output_text)
```
The code is taken from https://opennmt.net/CTranslate2/guides/transformers.html#t5.
The key method of the code above is `translate_batch`, you can find out [its supported parameters here](https://opennmt.net/CTranslate2/python/ctranslate2.Translator.html#ctranslate2.Translator.translate_batch).
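For intuition about what `--quantization int8` does: each weight tensor is mapped onto 8-bit integers via a scale factor, cutting weight storage roughly 4x at a small precision cost. A minimal symmetric per-tensor sketch (illustrative only; CTranslate2's actual quantization scheme is internal to the library):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~ q * scale, q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [qi * scale for qi in q]

w = [0.5, -1.27, 0.003, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, round(max_err, 4))  # per-weight error is bounded by scale / 2
```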

AustinCarthy/MixGPT2_10K_fromB_BFall_40KGen_topP_0.75_noaddedB | last modified: 2023-05-26T04:51:43.000Z | pipeline: text-classification | author: AustinCarthy | library: transformers | created: 2023-05-26T03:39:50 | likes: 0 | downloads: 2 | tags: transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us

---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_10K_fromB_BFall_40KGen_topP_0.75_noaddedB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_10K_fromB_BFall_40KGen_topP_0.75_noaddedB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0427
- Accuracy: 0.9953
- F1: 0.9488
- Precision: 0.9956
- Recall: 0.9062
- Roc Auc Score: 0.9530
- Tpr At Fpr 0.01: 0.8878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.014 | 1.0 | 7813 | 0.0293 | 0.9945 | 0.9402 | 0.9761 | 0.9068 | 0.9528 | 0.0 |
| 0.0053 | 2.0 | 15626 | 0.0322 | 0.9942 | 0.9360 | 0.9893 | 0.8882 | 0.9439 | 0.8134 |
| 0.0032 | 3.0 | 23439 | 0.0360 | 0.9953 | 0.9487 | 0.9924 | 0.9088 | 0.9542 | 0.8634 |
| 0.0 | 4.0 | 31252 | 0.0522 | 0.9940 | 0.9325 | 0.9975 | 0.8754 | 0.9376 | 0.8722 |
| 0.0 | 5.0 | 39065 | 0.0427 | 0.9953 | 0.9488 | 0.9956 | 0.9062 | 0.9530 | 0.8878 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,260 | [
[
-0.0457763671875,
-0.041717529296875,
0.00609588623046875,
0.015380859375,
-0.0231781005859375,
-0.0175018310546875,
-0.00754547119140625,
-0.0208282470703125,
0.0289764404296875,
0.0242462158203125,
-0.050994873046875,
-0.04742431640625,
-0.054443359375,
-0... |
wangsherpa/distilbert-base-uncased-finetuned-emotions | 2023-05-26T04:42:00.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | wangsherpa | null | null | wangsherpa/distilbert-base-uncased-finetuned-emotions | 0 | 2 | transformers | 2023-05-26T04:17:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotions
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9224293015994474
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2176
- Accuracy: 0.9225
- F1: 0.9224
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.817 | 1.0 | 250 | 0.3123 | 0.912 | 0.9091 |
| 0.2481 | 2.0 | 500 | 0.2176 | 0.9225 | 0.9224 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,850 | [
[
-0.038787841796875,
-0.040618896484375,
0.01422119140625,
0.0216217041015625,
-0.026580810546875,
-0.019683837890625,
-0.01395416259765625,
-0.00774383544921875,
0.00737762451171875,
0.007328033447265625,
-0.057525634765625,
-0.052459716796875,
-0.05914306640625... |
AustinCarthy/Onlyphish_100KP_BFall_fromP_10KGen_topP_0.75_noaddedB | 2023-05-26T11:48:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Onlyphish_100KP_BFall_fromP_10KGen_topP_0.75_noaddedB | 0 | 2 | transformers | 2023-05-26T04:52:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_100KP_BFall_fromP_10KGen_topP_0.75_noaddedB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_100KP_BFall_fromP_10KGen_topP_0.75_noaddedB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0211
- Accuracy: 0.9975
- F1: 0.9730
- Precision: 0.9994
- Recall: 0.948
- Roc Auc Score: 0.9740
- Tpr At Fpr 0.01: 0.9576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.004 | 1.0 | 65938 | 0.0210 | 0.9964 | 0.9613 | 0.9966 | 0.9284 | 0.9641 | 0.9244 |
| 0.003 | 2.0 | 131876 | 0.0195 | 0.9966 | 0.9630 | 0.9970 | 0.9312 | 0.9655 | 0.9268 |
| 0.0016 | 3.0 | 197814 | 0.0148 | 0.9977 | 0.9757 | 0.9983 | 0.954 | 0.9770 | 0.9554 |
| 0.0011 | 4.0 | 263752 | 0.0202 | 0.9970 | 0.9677 | 0.9989 | 0.9384 | 0.9692 | 0.9438 |
| 0.0005 | 5.0 | 329690 | 0.0211 | 0.9975 | 0.9730 | 0.9994 | 0.948 | 0.9740 | 0.9576 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,274 | [
[
-0.042938232421875,
-0.0428466796875,
0.0089874267578125,
0.009765625,
-0.0201416015625,
-0.0215911865234375,
-0.006954193115234375,
-0.0170745849609375,
0.0298309326171875,
0.0282135009765625,
-0.0531005859375,
-0.054443359375,
-0.04937744140625,
-0.0126113... |
dwancin/flag-classification | 2023-06-06T17:22:49.000Z | [
"transformers",
"pytorch",
"swin",
"image-classification",
"vision",
"flags",
"geography",
"dataset:dwancin/country-flags",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | dwancin | null | null | dwancin/flag-classification | 0 | 2 | transformers | 2023-05-26T05:19:35 | ---
tags:
- vision
- image-classification
- flags
- geography
datasets:
- dwancin/country-flags
widget:
- src: https://huggingface.co/dwancin/flag-classification/resolve/main/flag.png
example_title: German flag
- src: https://huggingface.co/dwancin/flag-classification/resolve/main/flag2.png
example_title: Danish flag
co2_eq_emissions:
emissions: 0.3886756137436338
---
# Country flag classification
This model has been trained on flags from the following countries.
- Austria
- Belgium
- Bulgaria
- Croatia
- Czech Republic
- Denmark
- Estonia
- Finland
- France
- Germany
- Greece
- Holland
- Hungary
- Ireland
- Italy
- Latvia
- Lithuania
- Luxembourg
- Malta
- Slovakia
- Slovenia
- South Cyprus
- Spain
- Sweden
## Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 61828134901
- CO2 Emissions (in grams): 0.3887
## Validation Metrics
- Loss: 0.157
- Accuracy: 0.947
- Macro F1: 0.938
- Micro F1: 0.947
- Weighted F1: 0.946
- Macro Precision: 0.951
- Micro Precision: 0.947
- Weighted Precision: 0.954
- Macro Recall: 0.938
- Micro Recall: 0.947
- Weighted Recall: 0.947
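The card reports both macro and micro averages: macro metrics weight every class equally, while micro metrics pool all predictions (for single-label classification, micro recall equals plain accuracy). A toy illustration with hypothetical labels, not the model's actual predictions:

```python
def micro_and_macro_recall(y_true, y_pred, classes):
    """Compute micro- and macro-averaged recall for single-label predictions."""
    per_class = []
    for c in classes:
        tp = sum(t == p == c for t, p in zip(y_true, y_pred))
        support = sum(t == c for t in y_true)
        per_class.append(tp / support)
    micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)  # == accuracy
    macro = sum(per_class) / len(per_class)
    return micro, macro

# Hypothetical flag predictions: three German flags and one Danish flag.
micro, macro = micro_and_macro_recall(
    ["Germany", "Germany", "Germany", "Denmark"],
    ["Germany", "Germany", "Denmark", "Denmark"],
    ["Germany", "Denmark"],
)
print(micro, macro)  # micro = 0.75, macro ≈ 0.83
```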
| 1,116 | [
[
-0.048095703125,
0.0026378631591796875,
0.03900146484375,
0.0007834434509277344,
-0.0164031982421875,
0.0260009765625,
0.01202392578125,
-0.0259246826171875,
-0.004421234130859375,
0.032012939453125,
-0.0460205078125,
-0.056976318359375,
-0.040252685546875,
... |
YakovElm/Hyperledger10Classic_256 | 2023-05-26T06:37:37.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Hyperledger10Classic_256 | 0 | 2 | transformers | 2023-05-26T06:36:59 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger10Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger10Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3173
- Train Accuracy: 0.8817
- Validation Loss: 0.3725
- Validation Accuracy: 0.8600
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3559 | 0.8834 | 0.3700 | 0.8600 | 0 |
| 0.3334 | 0.8838 | 0.3598 | 0.8600 | 1 |
| 0.3173 | 0.8817 | 0.3725 | 0.8600 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,792 | [
[
-0.049102783203125,
-0.04144287109375,
0.02154541015625,
0.00213623046875,
-0.028656005859375,
-0.02874755859375,
-0.0190277099609375,
-0.025054931640625,
0.01324462890625,
0.01371002197265625,
-0.053955078125,
-0.0467529296875,
-0.05316162109375,
-0.0199737... |
Mizuiro-sakura/bert-large-japanese-v2-finetuned-ner | 2023-07-21T14:10:18.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"ner",
"固有表現抽出",
"named entity recognition",
"named-entity-recognition",
"ja",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | Mizuiro-sakura | null | null | Mizuiro-sakura/bert-large-japanese-v2-finetuned-ner | 1 | 2 | transformers | 2023-05-26T09:38:08 | ---
license: mit
language: ja
tags:
- bert
- pytorch
- transformers
- ner
- 固有表現抽出
- named entity recognition
- named-entity-recognition
---
# This model is cl-tohoku/bert-large-japanese-v2 fine-tuned for named entity recognition (NER).
This model was created by fine-tuning cl-tohoku/bert-large-japanese-v2 on a Japanese named-entity-recognition dataset built from Wikipedia (Stockmark Inc., https://github.com/stockmarkteam/ner-wikipedia-dataset).
It can be used for NER tasks.
# This model is fine-tuned model for Named-Entity-Recognition(NER) which is based on cl-tohoku/bert-large-japanese-v2
This model is fine-tuned by using Wikipedia dataset.
You could use this model for NER tasks.
# Model accuracy
Overall: 0.8620626488367833
| | precision | recall | f1-score | support |
|---|----|----|----|----|
|その他の組織名 | 0.80 | 0.78 | 0.79| 238|
|イベント名 | 0.82| 0.88 | 0.85 | 215|
|人名 | 0.92 | 0.95 | 0.93 | 549|
|地名 | 0.90 | 0.89 | 0.89 | 446|
|政治的組織名 | 0.86 | 0.91 | 0.89 | 263|
|施設名 | 0.86 | 0.91 | 0.88 | 241|
|法人名 | 0.88 | 0.89 | 0.88 | 487|
|製品名 | 0.62 | 0.68 | 0.65 | 252|
|micro avg |0.85 | 0.87 | 0.86 | 2691|
|macro avg | 0.83 | 0.86 | 0.85 | 2691|
|weighted avg | 0.85 | 0.87 | 0.86 | 2691|
# How to use 使い方
Install fugashi, unidic_lite, and transformers (`pip install fugashi unidic_lite transformers`),
then run the following code to solve NER tasks.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/bert-large-japanese-v2-finetuned-ner')
model = AutoModelForTokenClassification.from_pretrained('Mizuiro-sakura/bert-large-japanese-v2-finetuned-ner')  # load the fine-tuned model

text = '昨日は東京で買い物をした'
ner = pipeline('ner', model=model, tokenizer=tokenizer)
result = ner(text)
print(result)
```
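The pipeline above returns one prediction per subword token. A small post-processing helper (a hypothetical sketch — B-/I- label prefixes are assumed, and the actual label names come from the fine-tuned model's config) can merge consecutive tokens of the same entity type into spans:

```python
def merge_entities(tokens):
    """Merge consecutive `pipeline('ner')` token predictions into entity spans.

    Each item in `tokens` is a dict like {"word": str, "entity": str, "score": float},
    as returned by the token-classification pipeline.
    """
    spans = []
    for tok in tokens:
        label = tok["entity"].split("-", 1)[-1]  # strip the B-/I- prefix
        new_span = tok["entity"].startswith("B-") or not spans or spans[-1]["type"] != label
        if new_span:
            spans.append({"type": label, "text": tok["word"], "scores": [tok["score"]]})
        else:
            # "##" marks a WordPiece continuation of the previous token.
            spans[-1]["text"] += tok["word"].lstrip("#")
            spans[-1]["scores"].append(tok["score"])
    return [{"type": s["type"], "text": s["text"],
             "score": sum(s["scores"]) / len(s["scores"])} for s in spans]
```

Depending on your transformers version, `pipeline('ner', ..., aggregation_strategy='simple')` provides similar grouping out of the box.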
| 2,018 | [
[
-0.03228759765625,
-0.046142578125,
0.0161895751953125,
0.0133514404296875,
-0.033935546875,
-0.008026123046875,
-0.0296478271484375,
-0.0264434814453125,
0.034912109375,
0.028656005859375,
-0.04583740234375,
-0.0343017578125,
-0.06549072265625,
0.0173492431... |
yanezh/twiiter_try14_fold1 | 2023-05-26T10:44:50.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | yanezh | null | null | yanezh/twiiter_try14_fold1 | 0 | 2 | transformers | 2023-05-26T10:11:32 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: twiiter_try14_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twiiter_try14_fold1
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2012
- F1: 0.9785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2079 | 1.0 | 500 | 0.1033 | 0.9684 |
| 0.0718 | 2.0 | 1000 | 0.2648 | 0.9503 |
| 0.036 | 3.0 | 1500 | 0.1545 | 0.9709 |
| 0.0228 | 4.0 | 2000 | 0.1603 | 0.9741 |
| 0.0092 | 5.0 | 2500 | 0.2108 | 0.9674 |
| 0.0089 | 6.0 | 3000 | 0.1471 | 0.9775 |
| 0.0056 | 7.0 | 3500 | 0.1388 | 0.9789 |
| 0.0059 | 8.0 | 4000 | 0.1555 | 0.9805 |
| 0.0046 | 9.0 | 4500 | 0.1683 | 0.9783 |
| 0.0 | 10.0 | 5000 | 0.1767 | 0.9809 |
| 0.0022 | 11.0 | 5500 | 0.1801 | 0.9785 |
| 0.0 | 12.0 | 6000 | 0.1942 | 0.9785 |
| 0.0 | 13.0 | 6500 | 0.1912 | 0.9799 |
| 0.0 | 14.0 | 7000 | 0.1921 | 0.9799 |
| 0.0005 | 15.0 | 7500 | 0.2012 | 0.9785 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,131 | [
[
-0.036041259765625,
-0.034332275390625,
0.01087188720703125,
0.0031986236572265625,
-0.0165863037109375,
-0.0293121337890625,
0.0001970529556274414,
-0.00982666015625,
0.0167083740234375,
0.02618408203125,
-0.059600830078125,
-0.048065185546875,
-0.0454711914062... |
YakovElm/Hyperledger15Classic_256 | 2023-05-26T11:12:44.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Hyperledger15Classic_256 | 0 | 2 | transformers | 2023-05-26T11:12:07 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger15Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger15Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2409
- Train Accuracy: 0.9097
- Validation Loss: 0.4492
- Validation Accuracy: 0.8766
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3241 | 0.8955 | 0.3393 | 0.8807 | 0 |
| 0.2856 | 0.9035 | 0.3414 | 0.8797 | 1 |
| 0.2409 | 0.9097 | 0.4492 | 0.8766 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,792 | [
[
-0.049407958984375,
-0.04248046875,
0.022125244140625,
0.00276947021484375,
-0.02947998046875,
-0.02935791015625,
-0.0179595947265625,
-0.0253143310546875,
0.01174163818359375,
0.01397705078125,
-0.05523681640625,
-0.04803466796875,
-0.052337646484375,
-0.01... |
yanezh/twiiter_try15_fold0 | 2023-05-26T11:58:10.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | yanezh | null | null | yanezh/twiiter_try15_fold0 | 0 | 2 | transformers | 2023-05-26T11:25:24 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: twiiter_try15_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twiiter_try15_fold0
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2122
- F1: 0.9766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2209 | 1.0 | 500 | 0.1609 | 0.9642 |
| 0.0596 | 2.0 | 1000 | 0.1312 | 0.9705 |
| 0.0274 | 3.0 | 1500 | 0.1583 | 0.9746 |
| 0.0128 | 4.0 | 2000 | 0.1524 | 0.9784 |
| 0.0098 | 5.0 | 2500 | 0.1748 | 0.9784 |
| 0.0101 | 6.0 | 3000 | 0.1385 | 0.9826 |
| 0.0047 | 7.0 | 3500 | 0.1709 | 0.9779 |
| 0.0032 | 8.0 | 4000 | 0.2081 | 0.9739 |
| 0.0018 | 9.0 | 4500 | 0.1727 | 0.9776 |
| 0.0013 | 10.0 | 5000 | 0.2054 | 0.9767 |
| 0.002 | 11.0 | 5500 | 0.1938 | 0.9762 |
| 0.0029 | 12.0 | 6000 | 0.2310 | 0.9743 |
| 0.0 | 13.0 | 6500 | 0.1994 | 0.9774 |
| 0.0 | 14.0 | 7000 | 0.2111 | 0.9761 |
| 0.0 | 15.0 | 7500 | 0.2122 | 0.9766 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,131 | [
[
-0.034332275390625,
-0.032318115234375,
0.0125579833984375,
0.00324249267578125,
-0.0169219970703125,
-0.02972412109375,
-0.0029354095458984375,
-0.0101318359375,
0.0156707763671875,
0.0231475830078125,
-0.0609130859375,
-0.048797607421875,
-0.04376220703125,
... |
yanezh/twiiter_try15_fold1 | 2023-05-26T12:33:14.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | yanezh | null | null | yanezh/twiiter_try15_fold1 | 0 | 2 | transformers | 2023-05-26T11:59:47 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: twiiter_try15_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twiiter_try15_fold1
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1718
- F1: 0.9816
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2154 | 1.0 | 500 | 0.0921 | 0.9763 |
| 0.0689 | 2.0 | 1000 | 0.1517 | 0.9646 |
| 0.0329 | 3.0 | 1500 | 0.0965 | 0.9821 |
| 0.0102 | 4.0 | 2000 | 0.1161 | 0.9819 |
| 0.0097 | 5.0 | 2500 | 0.1399 | 0.9784 |
| 0.0028 | 6.0 | 3000 | 0.2075 | 0.9725 |
| 0.006 | 7.0 | 3500 | 0.1767 | 0.9768 |
| 0.0059 | 8.0 | 4000 | 0.1750 | 0.9775 |
| 0.0001 | 9.0 | 4500 | 0.2467 | 0.9724 |
| 0.0073 | 10.0 | 5000 | 0.1923 | 0.9754 |
| 0.0026 | 11.0 | 5500 | 0.1645 | 0.9790 |
| 0.002 | 12.0 | 6000 | 0.1862 | 0.9801 |
| 0.0008 | 13.0 | 6500 | 0.1643 | 0.98 |
| 0.0 | 14.0 | 7000 | 0.1708 | 0.9816 |
| 0.0 | 15.0 | 7500 | 0.1718 | 0.9816 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,131 | [
[
-0.03558349609375,
-0.03271484375,
0.0115814208984375,
0.0029315948486328125,
-0.01519012451171875,
-0.0287017822265625,
-0.0007777214050292969,
-0.0089569091796875,
0.0167388916015625,
0.0240478515625,
-0.0601806640625,
-0.0482177734375,
-0.045623779296875,
... |
yanezh/twiiter_try15_fold2 | 2023-05-26T13:08:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | yanezh | null | null | yanezh/twiiter_try15_fold2 | 0 | 2 | transformers | 2023-05-26T12:33:42 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: twiiter_try15_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twiiter_try15_fold2
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1872
- F1: 0.9801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2295 | 1.0 | 500 | 0.1052 | 0.9689 |
| 0.0621 | 2.0 | 1000 | 0.1340 | 0.9727 |
| 0.0317 | 3.0 | 1500 | 0.1108 | 0.9776 |
| 0.0148 | 4.0 | 2000 | 0.1810 | 0.9738 |
| 0.0066 | 5.0 | 2500 | 0.1783 | 0.9743 |
| 0.0028 | 6.0 | 3000 | 0.1780 | 0.9776 |
| 0.0012 | 7.0 | 3500 | 0.1487 | 0.9826 |
| 0.0059 | 8.0 | 4000 | 0.1443 | 0.9805 |
| 0.0024 | 9.0 | 4500 | 0.1709 | 0.9795 |
| 0.0049 | 10.0 | 5000 | 0.1743 | 0.9781 |
| 0.0003 | 11.0 | 5500 | 0.1898 | 0.9785 |
| 0.0028 | 12.0 | 6000 | 0.2119 | 0.9773 |
| 0.0013 | 13.0 | 6500 | 0.1929 | 0.9786 |
| 0.0 | 14.0 | 7000 | 0.1863 | 0.9801 |
| 0.0 | 15.0 | 7500 | 0.1872 | 0.9801 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,131 | [
[
-0.03314208984375,
-0.034149169921875,
0.01041412353515625,
0.0034923553466796875,
-0.017486572265625,
-0.03021240234375,
-0.0018758773803710938,
-0.01134490966796875,
0.017242431640625,
0.024566650390625,
-0.05816650390625,
-0.047576904296875,
-0.04547119140625... |
yanezh/twiiter_try15_fold3 | 2023-05-26T13:41:22.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | yanezh | null | null | yanezh/twiiter_try15_fold3 | 0 | 2 | transformers | 2023-05-26T13:08:34 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: twiiter_try15_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twiiter_try15_fold3
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1796
- F1: 0.9805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2022 | 1.0 | 500 | 0.1547 | 0.9636 |
| 0.0612 | 2.0 | 1000 | 0.2014 | 0.9660 |
| 0.0211 | 3.0 | 1500 | 0.1204 | 0.9776 |
| 0.0107 | 4.0 | 2000 | 0.1797 | 0.9745 |
| 0.0073 | 5.0 | 2500 | 0.1931 | 0.9752 |
| 0.0128 | 6.0 | 3000 | 0.1808 | 0.9741 |
| 0.0088 | 7.0 | 3500 | 0.1756 | 0.9750 |
| 0.0088 | 8.0 | 4000 | 0.1726 | 0.9781 |
| 0.0012 | 9.0 | 4500 | 0.1707 | 0.9785 |
| 0.0004 | 10.0 | 5000 | 0.1794 | 0.9780 |
| 0.0031 | 11.0 | 5500 | 0.2156 | 0.9743 |
| 0.0012 | 12.0 | 6000 | 0.2106 | 0.9741 |
| 0.0 | 13.0 | 6500 | 0.1925 | 0.9796 |
| 0.0 | 14.0 | 7000 | 0.1903 | 0.9789 |
| 0.0008 | 15.0 | 7500 | 0.1796 | 0.9805 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,131 | [
[
-0.033599853515625,
-0.030914306640625,
0.01296234130859375,
0.00310516357421875,
-0.0166778564453125,
-0.03131103515625,
-0.0023212432861328125,
-0.01125335693359375,
0.01506805419921875,
0.0239715576171875,
-0.058441162109375,
-0.048919677734375,
-0.0437927246... |
yanezh/twiiter_try15_fold4 | 2023-05-26T14:14:19.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | yanezh | null | null | yanezh/twiiter_try15_fold4 | 0 | 2 | transformers | 2023-05-26T13:41:31 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: twiiter_try15_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twiiter_try15_fold4
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1791
- F1: 0.9805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2113 | 1.0 | 500 | 0.1149 | 0.9642 |
| 0.0638 | 2.0 | 1000 | 0.1456 | 0.9646 |
| 0.0179 | 3.0 | 1500 | 0.1507 | 0.9737 |
| 0.0171 | 4.0 | 2000 | 0.1835 | 0.9737 |
| 0.0096 | 5.0 | 2500 | 0.2713 | 0.9613 |
| 0.0072 | 6.0 | 3000 | 0.2221 | 0.9695 |
| 0.0073 | 7.0 | 3500 | 0.1639 | 0.9775 |
| 0.0049 | 8.0 | 4000 | 0.2184 | 0.9737 |
| 0.0018 | 9.0 | 4500 | 0.2568 | 0.9723 |
| 0.0062 | 10.0 | 5000 | 0.2106 | 0.9753 |
| 0.0001 | 11.0 | 5500 | 0.2204 | 0.9763 |
| 0.0 | 12.0 | 6000 | 0.2195 | 0.9761 |
| 0.0015 | 13.0 | 6500 | 0.1732 | 0.9795 |
| 0.0 | 14.0 | 7000 | 0.1739 | 0.9810 |
| 0.0011 | 15.0 | 7500 | 0.1791 | 0.9805 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,131 | [
[
-0.036651611328125,
-0.032623291015625,
0.0128936767578125,
0.0029201507568359375,
-0.01277923583984375,
-0.0245361328125,
0.0007910728454589844,
-0.008453369140625,
0.0210418701171875,
0.0263214111328125,
-0.058929443359375,
-0.04730224609375,
-0.04449462890625... |
ShayDuane/distilbert-base-uncased_emotion_ft_0526 | 2023-05-26T15:27:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | ShayDuane | null | null | ShayDuane/distilbert-base-uncased_emotion_ft_0526 | 0 | 2 | transformers | 2023-05-26T14:57:58 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
- precision
model-index:
- name: distilbert-base-uncased_emotion_ft_0526
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
- name: F1
type: f1
value: 0.937552703246777
- name: Precision
type: precision
value: 0.9169515578018389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_emotion_ft_0526
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2275
- Accuracy: 0.9375
- F1: 0.9376
- Precision: 0.9170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| 0.2131 | 1.0 | 2000 | 0.2301 | 0.93 | 0.9305 | 0.9008 |
| 0.1881 | 2.0 | 4000 | 0.1854 | 0.9385 | 0.9388 | 0.9080 |
| 0.1012 | 3.0 | 6000 | 0.2200 | 0.935 | 0.9353 | 0.9066 |
| 0.0642 | 4.0 | 8000 | 0.2275 | 0.9375 | 0.9376 | 0.9170 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,163 | [
[
-0.03466796875,
-0.03399658203125,
0.0135040283203125,
0.0179290771484375,
-0.02435302734375,
-0.018707275390625,
-0.01026153564453125,
-0.00627899169921875,
0.01073455810546875,
0.00907135009765625,
-0.055419921875,
-0.052459716796875,
-0.058868408203125,
-... |
YakovElm/Hyperledger20Classic_256 | 2023-05-26T15:47:04.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Hyperledger20Classic_256 | 0 | 2 | transformers | 2023-05-26T15:46:27 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger20Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger20Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2291
- Train Accuracy: 0.9173
- Validation Loss: 0.3351
- Validation Accuracy: 0.8932
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2939 | 0.9104 | 0.2896 | 0.8983 | 0 |
| 0.2623 | 0.9149 | 0.3026 | 0.8983 | 1 |
| 0.2291 | 0.9173 | 0.3351 | 0.8932 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,792 | [embedding vector truncated] |
juierror/whisper-tiny-thai | 2023-05-27T06:46:40.000Z | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"th",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | juierror | null | null | juierror/whisper-tiny-thai | 0 | 2 | transformers | 2023-05-26T16:15:31 | ---
license: apache-2.0
language:
- th
pipeline_tag: automatic-speech-recognition
---
# Whisper-tiny Thai fine-tuned
## 1) Environment Setup
```bash
# visit https://pytorch.org/get-started/locally/ to install pytorch
pip3 install transformers librosa
```
## 2) Usage
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
import librosa
device = "cuda" # cpu, cuda
model = WhisperForConditionalGeneration.from_pretrained("juierror/whisper-tiny-thai").to(device)
processor = WhisperProcessor.from_pretrained("juierror/whisper-tiny-thai", language="Thai", task="transcribe")
path = "/path/to/audio/file"
def inference(path: str) -> str:
"""
Get the transcription from audio path
Args:
path(str): path to audio file (can be load with librosa)
Returns:
str: transcription
"""
audio, sr = librosa.load(path, sr=16000)
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
generated_tokens = model.generate(
input_features=input_features.to(device),
max_new_tokens=255,
language="Thai"
).cpu()
transcriptions = processor.tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
return transcriptions[0]
print(inference(path=path))
```
## 3) Evaluation Results
This model has been trained and evaluated on three datasets:
- Common Voice 13
  - The Common Voice dataset has been cleaned and split into training, testing, and development sets, ensuring that no sentence appears in more than one set.
- [Gowajee Corpus](https://github.com/ekapolc/gowajee_corpus)
  - The Gowajee dataset is already pre-split into training, development, and testing sets, so those splits were used directly.
```
@techreport{gowajee,
title = {{Gowajee Corpus}},
author = {Ekapol Chuangsuwanich and Atiwong Suchato and Korrawe Karunratanakul and Burin Naowarat and Chompakorn CChaichot
and Penpicha Sangsa-nga and Thunyathon Anutarases and Nitchakran Chaipojjana},
year = {2020},
institution = {Chulalongkorn University, Faculty of Engineering, Computer Engineering Department},
month = {12},
Date-Added = {2021-07-20},
    url = {https://github.com/ekapolc/gowajee_corpus},
    note = {Version 0.9.2}
}
```
- [Thai Elderly Speech](https://github.com/VISAI-DATAWOW/Thai-Elderly-Speech-dataset/releases/tag/v1.0.0)
- As for the Thai Elderly Speech dataset, I performed a random split.
The Character Error Rate (CER) is computed after removing spaces from both the reference and the predicted text.
The Word Error Rate (WER) is computed after tokenizing both the reference and the predicted text with the PyThaiNLP newmm tokenizer.
The results:
| Dataset | WER | CER |
|-----------------------------------|-------|------|
| Common Voice 13 | 23.14 | 6.74 |
| Gowajee | 24.79 | 11.39 |
| Thai Elderly Speech (Smart Home) | 13.28 | 4.14 |
| Thai Elderly Speech (Health Care) | 12.99 | 3.41 |
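The CER/WER recipe above can be sketched in plain Python. The edit distance is written out explicitly here; in practice a library such as `jiwer` would typically be used, and the newmm word tokenization for WER is assumed to have happened beforehand (this is an illustrative sketch, not the exact evaluation script):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance; works on strings
    # (characters) for CER and on token lists for WER.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1]

def cer(label: str, pred: str) -> float:
    # Spaces are removed from both sides before computing CER, as described above.
    label, pred = label.replace(" ", ""), pred.replace(" ", "")
    return edit_distance(label, pred) / max(len(label), 1)

def wer(label_tokens, pred_tokens) -> float:
    # label_tokens / pred_tokens: word lists produced by the newmm tokenizer.
    return edit_distance(label_tokens, pred_tokens) / max(len(label_tokens), 1)
```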
| 3,140 | [embedding vector truncated] |
nixtasy/diaster_distilbert_base_uncased | 2023-05-26T17:02:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | nixtasy | null | null | nixtasy/diaster_distilbert_base_uncased | 0 | 2 | transformers | 2023-05-26T16:46:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: diaster_distilbert_base_uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# diaster_distilbert_base_uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0345
- Accuracy: 0.8076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 381 | 0.3926 | 0.8372 |
| 0.4214 | 2.0 | 762 | 0.4764 | 0.8234 |
| 0.3014 | 3.0 | 1143 | 0.4208 | 0.8352 |
| 0.2051 | 4.0 | 1524 | 0.5139 | 0.8280 |
| 0.2051 | 5.0 | 1905 | 0.8480 | 0.7840 |
| 0.1424 | 6.0 | 2286 | 0.8045 | 0.8155 |
| 0.1042 | 7.0 | 2667 | 0.9295 | 0.8188 |
| 0.075 | 8.0 | 3048 | 0.9241 | 0.8142 |
| 0.075 | 9.0 | 3429 | 1.0063 | 0.8083 |
| 0.0614 | 10.0 | 3810 | 1.0345 | 0.8076 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,925 | [embedding vector truncated] |
sahil2801/instruct-codegen-16B | 2023-05-29T07:27:43.000Z | [
"transformers",
"pytorch",
"codegen",
"text-generation",
"code",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | sahil2801 | null | null | sahil2801/instruct-codegen-16B | 19 | 2 | transformers | 2023-05-26T16:52:08 | ---
license: bsd-3-clause
metrics:
- code_eval
pipeline_tag: text-generation
tags:
- code
model-index:
- name: instruct-codegen-16B
results:
- task:
type: code-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.371
verified: false
---
# Model Card for instruct-codegen-16B
<!-- Provide a quick summary of what the model is/does. -->
Instruct-codegen-16B is an instruction-following codegen model based on [Salesforce codegen-16B-multi](https://huggingface.co/Salesforce/codegen-16B-multi), fine-tuned on a dataset of 250k instruction-following samples in the Alpaca format.
The data was not generated using any commercial LLM API.
The model achieves a result of 37.1% pass@1 on the HumanEval benchmark.
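For context, pass@1 is the fraction of HumanEval problems solved by a sampled completion. With `n` samples per problem of which `c` pass the tests, the standard unbiased pass@k estimator (from the Codex paper; included here for reference, not stated in this card) is 1 − C(n−c, k)/C(n, k):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Every size-k draw must contain at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```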
## Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "sahil2801/instruct-codegen-16B"
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).half().to(device)
instruction = "Write a function to scrape hacker news."
prompt = f"Below is an instruction that describes a task.\n Write a response that appropriately completes the request.\n\n ### Instruction:\n{instruction}\n\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, temperature=0.3, do_sample=True, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` | 1,610 | [embedding vector truncated] |
YakovElm/Apache15Classic_512 | 2023-05-26T17:04:32.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache15Classic_512 | 0 | 2 | transformers | 2023-05-26T17:03:55 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache15Classic_512
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache15Classic_512
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1701
- Train Accuracy: 0.9542
- Validation Loss: 0.3117
- Validation Accuracy: 0.8924
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1982 | 0.9540 | 0.3463 | 0.8924 | 0 |
| 0.1791 | 0.9542 | 0.3394 | 0.8924 | 1 |
| 0.1701 | 0.9542 | 0.3117 | 0.8924 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,782 | [embedding vector truncated] |
YakovElm/IntelDAOS5Classic_256 | 2023-05-26T17:28:21.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/IntelDAOS5Classic_256 | 0 | 2 | transformers | 2023-05-26T17:27:43 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS5Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS5Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3729
- Train Accuracy: 0.8740
- Validation Loss: 0.4307
- Validation Accuracy: 0.8438
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4026 | 0.8740 | 0.4333 | 0.8438 | 0 |
| 0.3844 | 0.8740 | 0.4434 | 0.8438 | 1 |
| 0.3729 | 0.8740 | 0.4307 | 0.8438 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,786 | [embedding vector truncated] |
jwoods/dqn-SpaceInvadersNoFrameskip-v4 | 2023-05-26T17:41:17.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | jwoods | null | null | jwoods/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-05-26T17:40:40 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 597.50 +/- 211.86
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jwoods -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jwoods -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jwoods
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
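The exploration settings above imply a linear ε-greedy schedule: ε decays from 1.0 to `exploration_final_eps` over the first `exploration_fraction` of training, then stays flat. A minimal sketch of that linear interpolation, using the values from the table (illustrative; not SB3's actual code):

```python
def epsilon(step: int, total_steps: int = 1_000_000,
            exploration_fraction: float = 0.1,
            final_eps: float = 0.01, initial_eps: float = 1.0) -> float:
    # Linear decay over the first `exploration_fraction` of training,
    # then held constant at `final_eps`.
    progress = step / (exploration_fraction * total_steps)
    if progress >= 1.0:
        return final_eps
    return initial_eps + progress * (final_eps - initial_eps)
```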
| 2,685 | [embedding vector truncated] |
cojocaruvicentiu/bert-finetuned-squad | 2023-05-27T13:41:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | cojocaruvicentiu | null | null | cojocaruvicentiu/bert-finetuned-squad | 0 | 2 | transformers | 2023-05-26T17:42:04 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,041 | [embedding vector truncated] |
YakovElm/IntelDAOS10Classic_256 | 2023-05-26T19:04:36.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/IntelDAOS10Classic_256 | 0 | 2 | transformers | 2023-05-26T19:03:59 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS10Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS10Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2668
- Train Accuracy: 0.9200
- Validation Loss: 0.3893
- Validation Accuracy: 0.8739
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2985 | 0.9160 | 0.3932 | 0.8739 | 0 |
| 0.2678 | 0.9200 | 0.3786 | 0.8739 | 1 |
| 0.2668 | 0.9200 | 0.3893 | 0.8739 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,788 | [embedding vector truncated] |