| modelId (string, 4–111 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 5–30 chars, nullable) | author (string, 2–34 chars, nullable) | config (null) | securityStatus (null) | id (string, 4–111 chars) | likes (int64, 0–9.53k) | downloads (int64, 2–73.6M) | library_name (string, 2–84 chars, nullable) | created (timestamp[us]) | card (string, 101–901k chars) | card_len (int64, 101–901k) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
CeroShrijver/m3e-base-text-classification | 2023-06-24T10:32:07.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | CeroShrijver | null | null | CeroShrijver/m3e-base-text-classification | 0 | 2 | transformers | 2023-06-16T19:32:41 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: m3e-base-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m3e-base-text-classification
This model is a fine-tuned version of [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6529
- Accuracy: 0.7826
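Below is a minimal inference sketch (not part of the original card): it assumes the standard `transformers` text-classification pipeline, and since the training dataset is undocumented, the label names it returns are unknown in advance.
```python
from transformers import pipeline

# Hypothetical usage sketch; the label set depends on the undocumented training data.
classifier = pipeline(
    "text-classification",
    model="CeroShrijver/m3e-base-text-classification",
)
# The m3e-base backbone is a Chinese text model, so a Chinese input is natural here.
print(classifier("这部电影很好看"))
```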
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.495 | 1.0 | 1009 | 0.5175 | 0.7783 |
| 0.3792 | 2.0 | 2018 | 0.5600 | 0.7748 |
| 0.2503 | 3.0 | 3027 | 0.6529 | 0.7826 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.6
| 1,446 | [
[
-0.032501220703125,
-0.0355224609375,
0.0176239013671875,
-0.00971221923828125,
-0.0251312255859375,
-0.0361328125,
0.0055999755859375,
-0.01412200927734375,
0.001674652099609375,
0.0310211181640625,
-0.0599365234375,
-0.054107666015625,
-0.06597900390625,
-... |
sam34738/mBERT | 2023-06-16T23:44:39.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | sam34738 | null | null | sam34738/mBERT | 0 | 2 | transformers | 2023-06-16T20:24:12 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: mbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9812
- Accuracy: 0.6583
- F1: 0.6948
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-05
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
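For reference, here is how the hyperparameters above map onto `transformers.TrainingArguments` — a sketch only, with the model, dataset, and `Trainer` wiring omitted and `output_dir` as a placeholder:
```python
from transformers import TrainingArguments

# Sketch of the reported configuration; not the author's original script.
training_args = TrainingArguments(
    output_dir="mbert",
    learning_rate=3e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
)
```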
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.749 | 1.0 | 2100 | 0.7068 | 0.4994 | 0.0131 |
| 0.7707 | 2.0 | 4200 | 0.9812 | 0.6583 | 0.6948 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| 1,473 | [
[
-0.03680419921875,
-0.05206298828125,
0.0157928466796875,
0.0197906494140625,
-0.032318115234375,
-0.01079559326171875,
-0.0165557861328125,
-0.01522064208984375,
0.01027679443359375,
0.02716064453125,
-0.042449951171875,
-0.043060302734375,
-0.0491943359375,
... |
mrjunos/depression-reddit-distilroberta-base | 2023-06-20T23:05:41.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"depression",
"reddit",
"generated_from_trainer",
"en",
"dataset:mrjunos/depression-reddit-cleaned",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | mrjunos | null | null | mrjunos/depression-reddit-distilroberta-base | 0 | 2 | transformers | 2023-06-17T01:45:38 | ---
license: apache-2.0
tags:
- text-classification
- depression
- reddit
- generated_from_trainer
datasets:
- mrjunos/depression-reddit-cleaned
metrics:
- accuracy
widget:
- text:
- >-
i just found out my boyfriend is depressed i really want to be there for him
but i feel like i ve only been saying the wrong thing how can i be there for
him help him and see him get better i m worried it will continue to the
point it will consume him i can already see his personality changing and i m
scared for the future what thing can i say or do to comfort or help
example_title: depression
- text:
- >-
i m getting more and more people asking where they can buy the ambients
album simple answer is quot not yet quot it ll be on itunes eventually
example_title: not_depression
model-index:
- name: depression-reddit-distilroberta-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: mrjunos/depression-reddit-cleaned
type: depression-reddit-cleaned
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9715578539107951
language:
- en
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Example Pipeline
```python
from transformers import pipeline
predict_task = pipeline(model="mrjunos/depression-reddit-distilroberta-base", task="text-classification")
predict_task("Stop listing your issues here, use forum instead or open ticket.")
```
```
[{'label': 'not_depression', 'score': 0.9813856482505798}]
```
Disclaimer: This machine learning model classifies texts related to depression, but I am not an expert or a mental health professional.
I do not intend to diagnose or offer medical advice. The information provided should not replace consultation with a qualified professional.
The results may not be accurate. Use this model at your own risk and seek professional advice if needed.
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the [mrjunos/depression-reddit-cleaned dataset](https://huggingface.co/datasets/mrjunos/depression-reddit-cleaned).
It achieves the following results on the evaluation set:
- Loss: 0.0821
- Accuracy: 0.9716
## Model description
This is a transformer-based model fine-tuned on a dataset of Reddit posts related to depression.
It can be used to classify a post as either depression or not depression.
## Intended uses & limitations
This model is intended to be used for research purposes. It is not yet ready for production use.
The model has been trained on a dataset of English-language posts, so it may not be accurate for other languages.
## Training and evaluation data
The model was trained on the mrjunos/depression-reddit-cleaned dataset, which contains approximately 7,000 labeled instances.
The data was split into train and test sets using:
```python
ds = ds['train'].train_test_split(test_size=0.2, seed=42)
```
The dataset consists of two main features: 'text' and 'label'. The 'text' feature contains the text data from Reddit posts related to depression, while the 'label' feature indicates whether a post is classified as depression or not.
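The snippet below (not part of the original card) sketches how to load the dataset and reproduce the split shown above:
```python
from datasets import load_dataset

# Load the cleaned Reddit depression dataset and apply the 80/20 split from above.
ds = load_dataset("mrjunos/depression-reddit-cleaned")
ds = ds["train"].train_test_split(test_size=0.2, seed=42)
print(ds["train"].features)  # inspect the 'text' and 'label' columns
```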
## Training procedure
You can find the steps I followed to train this model in this notebook:
https://github.com/mrjunos/machine_learning/blob/main/NLP-fine_tunning-hugging_face_model.ipynb
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1711 | 0.65 | 500 | 0.0821 | 0.9716 |
| 0.1022 | 1.29 | 1000 | 0.1148 | 0.9709 |
| 0.0595 | 1.94 | 1500 | 0.1178 | 0.9787 |
| 0.0348 | 2.59 | 2000 | 0.0951 | 0.9851 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3 | 4,381 | [
[
-0.0360107421875,
-0.06280517578125,
0.0238037109375,
0.033843994140625,
-0.01396942138671875,
-0.033050537109375,
-0.016510009765625,
-0.02154541015625,
0.0237579345703125,
0.014862060546875,
-0.0535888671875,
-0.05078125,
-0.0748291015625,
0.02577209472656... |
yo/tagger | 2023-06-18T08:56:18.000Z | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"en",
"dataset:cardiffnlp/tweet_topic_multi",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | yo | null | null | yo/tagger | 0 | 2 | transformers | 2023-06-17T11:59:17 | ---
language: en
widget:
- text: It is great to see athletes promoting awareness for climate change.
datasets:
- cardiffnlp/tweet_topic_multi
license: mit
metrics:
- f1
- accuracy
pipeline_tag: text-classification
---
# Lenster Tagger
<b>Labels</b>:
| <span style="font-weight:normal">0: arts\_&_culture</span> | <span style="font-weight:normal">5: fashion\_&_style</span> | <span style="font-weight:normal">10: learning\_&_educational</span> | <span style="font-weight:normal">15: science\_&_technology</span> |
| ---------------------------------------------------------- | ----------------------------------------------------------- | ------------------------------------------------------------------- | ----------------------------------------------------------------- |
| 1: business\_&_entrepreneurs | 6: film*tv*&\_video | 11: music | 16: sports |
| 2: celebrity\_&_pop_culture | 7: fitness\_&_health | 12: news\_&_social_concern | 17: travel\_&_adventure |
| 3: diaries\_&_daily_life | 8: food\_&_dining | 13: other_hobbies | 18: youth\_&_student_life |
| 4: family | 9: gaming | 14: relationships | |
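The card ships no inference code; a minimal sketch, assuming the standard text-classification pipeline (with `top_k=None` to score every topic), might look like:
```python
from transformers import pipeline

# Hypothetical usage sketch; returns a score for each of the 19 topic labels above.
tagger = pipeline("text-classification", model="yo/tagger", top_k=None)
scores = tagger("It is great to see athletes promoting awareness for climate change.")
print(scores[:3])  # highest-scoring topics first
```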
| 1,839 | [
[
-0.04498291015625,
-0.021484375,
0.0086517333984375,
0.0231475830078125,
-0.01299285888671875,
0.0157623291015625,
0.004505157470703125,
-0.01241302490234375,
0.0400390625,
0.005901336669921875,
-0.0635986328125,
-0.04644775390625,
-0.046173095703125,
0.0186... |
JvThunder/a2c-AntBulletEnv-v0 | 2023-07-20T09:02:28.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | JvThunder | null | null | JvThunder/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-06-17T19:07:25 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1480.48 +/- 111.99
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
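Until the TODO above is filled in, here is a minimal loading sketch; the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention, not something documented on this card.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it; filename is assumed.
checkpoint = load_from_hub(
    repo_id="JvThunder/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```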
| 791 | [
[
-0.026763916015625,
-0.044403076171875,
0.01068878173828125,
0.0208892822265625,
-0.003498077392578125,
0.0017833709716796875,
0.0187530517578125,
-0.01763916015625,
0.0193939208984375,
0.0265655517578125,
-0.052581787109375,
-0.0374755859375,
-0.044281005859375... |
arminmrm93/dqn-SpaceInvadersNoFrameskip-V4 | 2023-06-18T02:27:49.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | arminmrm93 | null | null | arminmrm93/dqn-SpaceInvadersNoFrameskip-V4 | 0 | 2 | stable-baselines3 | 2023-06-17T23:42:53 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 625.00 +/- 90.33
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga arminmrm93 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga arminmrm93 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga arminmrm93
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,764 | [
[
-0.044219970703125,
-0.03936767578125,
0.01910400390625,
0.0235443115234375,
-0.01139068603515625,
-0.016265869140625,
0.01061248779296875,
-0.01284027099609375,
0.0120697021484375,
0.0228424072265625,
-0.07196044921875,
-0.03472900390625,
-0.0258026123046875,
... |
edwardjjj/ppo-LunarLander-v2 | 2023-07-12T08:09:11.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | edwardjjj | null | null | edwardjjj/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-06-18T05:37:16 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.15 +/- 18.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
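Until the TODO above is filled in, here is a minimal sketch; the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention, and the rollout uses the classic `gym` API that was current when this model was published.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download and load the checkpoint; filename is assumed.
checkpoint = load_from_hub(
    repo_id="edwardjjj/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the trained policy.
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```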
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
antphb/DS-Chatbox-gpt2-vietnamese-V3 | 2023-06-19T11:13:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | antphb | null | null | antphb/DS-Chatbox-gpt2-vietnamese-V3 | 0 | 2 | transformers | 2023-06-18T07:40:12 | ---
tags:
- generated_from_trainer
model-index:
- name: DS-Chatbox-gpt2-vietnamese-V3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DS-Chatbox-gpt2-vietnamese-V3
This model is a fine-tuned version of [NlpHUST/gpt2-vietnamese](https://huggingface.co/NlpHUST/gpt2-vietnamese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7322
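A hypothetical generation sketch (the card documents no inference code; this assumes the standard text-generation pipeline):
```python
from transformers import pipeline

# Hypothetical usage sketch with a Vietnamese prompt ("Hello, how can I help you?").
generator = pipeline("text-generation", model="antphb/DS-Chatbox-gpt2-vietnamese-V3")
print(generator("Xin chào, tôi có thể giúp gì cho bạn?", max_new_tokens=40))
```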
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0015
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.9759 | 0.66 | 1400 | 2.6781 |
| 2.5019 | 1.31 | 2800 | 2.4921 |
| 2.3352 | 1.97 | 4200 | 2.3726 |
| 2.0759 | 2.62 | 5600 | 2.3240 |
| 1.9303 | 3.28 | 7000 | 2.3279 |
| 1.7867 | 3.93 | 8400 | 2.2556 |
| 1.5133 | 4.59 | 9800 | 2.3424 |
| 1.3726 | 5.25 | 11200 | 2.5290 |
| 1.1925 | 5.9 | 12600 | 2.5132 |
| 1.0211 | 6.56 | 14000 | 2.7322 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| 1,835 | [
[
-0.03021240234375,
-0.054290771484375,
0.01401519775390625,
0.01085662841796875,
-0.02447509765625,
-0.0236968994140625,
-0.0016984939575195312,
-0.01145172119140625,
0.0013284683227539062,
0.0321044921875,
-0.047698974609375,
-0.04571533203125,
-0.0502319335937... |
MUmairAB/English_to_French_Translation_Transformer | 2023-06-19T18:46:14.000Z | [
"keras",
"region:us"
] | null | MUmairAB | null | null | MUmairAB/English_to_French_Translation_Transformer | 0 | 2 | keras | 2023-06-18T08:50:01 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | RMSprop |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | 100 |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| rho | 0.9 |
| momentum | 0.0 |
| epsilon | 1e-07 |
| centered | False |
| training_precision | float32 |
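A minimal loading sketch (not from the original card), assuming the model was pushed with the standard Keras Hub mixin so that `from_pretrained_keras` can restore it; the input preprocessing the model expects is not documented here:
```python
from huggingface_hub import from_pretrained_keras

# Pull the saved Keras model from the Hub and inspect its layers.
model = from_pretrained_keras("MUmairAB/English_to_French_Translation_Transformer")
model.summary()
```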
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 840 | [
[
-0.035400390625,
-0.0357666015625,
0.02520751953125,
0.018096923828125,
-0.04638671875,
-0.028106689453125,
0.016632080078125,
0.0048980712890625,
0.0186767578125,
0.036712646484375,
-0.0443115234375,
-0.04931640625,
-0.03875732421875,
-0.00673675537109375,
... |
michaelfeil/ct2fast-e5-small | 2023-10-13T13:36:53.000Z | [
"sentence-transformers",
"bert",
"ctranslate2",
"int8",
"float16",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"en",
"arxiv:2212.03533",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | michaelfeil | null | null | michaelfeil/ct2fast-e5-small | 1 | 2 | sentence-transformers | 2023-06-18T11:41:56 | ---
tags:
- ctranslate2
- int8
- float16
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: e5-small
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.22388059701493
- type: ap
value: 40.27466219523129
- type: f1
value: 70.60533006025108
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 87.525775
- type: ap
value: 83.51063993897611
- type: f1
value: 87.49342736805572
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 42.611999999999995
- type: f1
value: 42.05088045932892
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.826
- type: map_at_10
value: 38.269
- type: map_at_100
value: 39.322
- type: map_at_1000
value: 39.344
- type: map_at_3
value: 33.428000000000004
- type: map_at_5
value: 36.063
- type: mrr_at_1
value: 24.253
- type: mrr_at_10
value: 38.425
- type: mrr_at_100
value: 39.478
- type: mrr_at_1000
value: 39.5
- type: mrr_at_3
value: 33.606
- type: mrr_at_5
value: 36.195
- type: ndcg_at_1
value: 23.826
- type: ndcg_at_10
value: 46.693
- type: ndcg_at_100
value: 51.469
- type: ndcg_at_1000
value: 52.002
- type: ndcg_at_3
value: 36.603
- type: ndcg_at_5
value: 41.365
- type: precision_at_1
value: 23.826
- type: precision_at_10
value: 7.383000000000001
- type: precision_at_100
value: 0.9530000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 15.268
- type: precision_at_5
value: 11.479000000000001
- type: recall_at_1
value: 23.826
- type: recall_at_10
value: 73.82600000000001
- type: recall_at_100
value: 95.306
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 45.804
- type: recall_at_5
value: 57.397
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.13995374767436
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 37.13950072624313
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 59.35843292105327
- type: mrr
value: 73.72312359846987
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.55140418324174
- type: cos_sim_spearman
value: 84.21637675860022
- type: euclidean_pearson
value: 81.26069614610006
- type: euclidean_spearman
value: 83.25069210421785
- type: manhattan_pearson
value: 80.17441422581014
- type: manhattan_spearman
value: 81.87596198487877
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 81.87337662337661
- type: f1
value: 81.76647866926402
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.80600542614507
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.86321613256603
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.054
- type: map_at_10
value: 40.699999999999996
- type: map_at_100
value: 41.818
- type: map_at_1000
value: 41.959999999999994
- type: map_at_3
value: 37.742
- type: map_at_5
value: 39.427
- type: mrr_at_1
value: 38.769999999999996
- type: mrr_at_10
value: 46.150000000000006
- type: mrr_at_100
value: 46.865
- type: mrr_at_1000
value: 46.925
- type: mrr_at_3
value: 43.705
- type: mrr_at_5
value: 45.214999999999996
- type: ndcg_at_1
value: 38.769999999999996
- type: ndcg_at_10
value: 45.778
- type: ndcg_at_100
value: 50.38
- type: ndcg_at_1000
value: 52.922999999999995
- type: ndcg_at_3
value: 41.597
- type: ndcg_at_5
value: 43.631
- type: precision_at_1
value: 38.769999999999996
- type: precision_at_10
value: 8.269
- type: precision_at_100
value: 1.278
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 19.266
- type: precision_at_5
value: 13.705
- type: recall_at_1
value: 32.054
- type: recall_at_10
value: 54.947
- type: recall_at_100
value: 74.79599999999999
- type: recall_at_1000
value: 91.40899999999999
- type: recall_at_3
value: 42.431000000000004
- type: recall_at_5
value: 48.519
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.035
- type: map_at_10
value: 38.007000000000005
- type: map_at_100
value: 39.125
- type: map_at_1000
value: 39.251999999999995
- type: map_at_3
value: 35.77
- type: map_at_5
value: 37.057
- type: mrr_at_1
value: 36.497
- type: mrr_at_10
value: 44.077
- type: mrr_at_100
value: 44.743
- type: mrr_at_1000
value: 44.79
- type: mrr_at_3
value: 42.123
- type: mrr_at_5
value: 43.308
- type: ndcg_at_1
value: 36.497
- type: ndcg_at_10
value: 42.986000000000004
- type: ndcg_at_100
value: 47.323
- type: ndcg_at_1000
value: 49.624
- type: ndcg_at_3
value: 39.805
- type: ndcg_at_5
value: 41.286
- type: precision_at_1
value: 36.497
- type: precision_at_10
value: 7.8340000000000005
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 19.023
- type: precision_at_5
value: 13.248
- type: recall_at_1
value: 29.035
- type: recall_at_10
value: 51.06
- type: recall_at_100
value: 69.64099999999999
- type: recall_at_1000
value: 84.49
- type: recall_at_3
value: 41.333999999999996
- type: recall_at_5
value: 45.663
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.239
- type: map_at_10
value: 47.873
- type: map_at_100
value: 48.842999999999996
- type: map_at_1000
value: 48.913000000000004
- type: map_at_3
value: 45.050000000000004
- type: map_at_5
value: 46.498
- type: mrr_at_1
value: 42.508
- type: mrr_at_10
value: 51.44
- type: mrr_at_100
value: 52.087
- type: mrr_at_1000
value: 52.129999999999995
- type: mrr_at_3
value: 49.164
- type: mrr_at_5
value: 50.343
- type: ndcg_at_1
value: 42.508
- type: ndcg_at_10
value: 53.31399999999999
- type: ndcg_at_100
value: 57.245000000000005
- type: ndcg_at_1000
value: 58.794000000000004
- type: ndcg_at_3
value: 48.295
- type: ndcg_at_5
value: 50.415
- type: precision_at_1
value: 42.508
- type: precision_at_10
value: 8.458
- type: precision_at_100
value: 1.133
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 21.191
- type: precision_at_5
value: 14.307
- type: recall_at_1
value: 37.239
- type: recall_at_10
value: 65.99000000000001
- type: recall_at_100
value: 82.99499999999999
- type: recall_at_1000
value: 94.128
- type: recall_at_3
value: 52.382
- type: recall_at_5
value: 57.648999999999994
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.039
- type: map_at_10
value: 29.694
- type: map_at_100
value: 30.587999999999997
- type: map_at_1000
value: 30.692999999999998
- type: map_at_3
value: 27.708
- type: map_at_5
value: 28.774
- type: mrr_at_1
value: 24.633
- type: mrr_at_10
value: 31.478
- type: mrr_at_100
value: 32.299
- type: mrr_at_1000
value: 32.381
- type: mrr_at_3
value: 29.435
- type: mrr_at_5
value: 30.446
- type: ndcg_at_1
value: 24.633
- type: ndcg_at_10
value: 33.697
- type: ndcg_at_100
value: 38.080000000000005
- type: ndcg_at_1000
value: 40.812
- type: ndcg_at_3
value: 29.654000000000003
- type: ndcg_at_5
value: 31.474000000000004
- type: precision_at_1
value: 24.633
- type: precision_at_10
value: 5.0729999999999995
- type: precision_at_100
value: 0.753
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 12.279
- type: precision_at_5
value: 8.452
- type: recall_at_1
value: 23.039
- type: recall_at_10
value: 44.275999999999996
- type: recall_at_100
value: 64.4
- type: recall_at_1000
value: 85.135
- type: recall_at_3
value: 33.394
- type: recall_at_5
value: 37.687
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.594999999999999
- type: map_at_10
value: 19.933999999999997
- type: map_at_100
value: 20.966
- type: map_at_1000
value: 21.087
- type: map_at_3
value: 17.749000000000002
- type: map_at_5
value: 19.156000000000002
- type: mrr_at_1
value: 17.662
- type: mrr_at_10
value: 24.407
- type: mrr_at_100
value: 25.385
- type: mrr_at_1000
value: 25.465
- type: mrr_at_3
value: 22.056
- type: mrr_at_5
value: 23.630000000000003
- type: ndcg_at_1
value: 17.662
- type: ndcg_at_10
value: 24.391
- type: ndcg_at_100
value: 29.681
- type: ndcg_at_1000
value: 32.923
- type: ndcg_at_3
value: 20.271
- type: ndcg_at_5
value: 22.621
- type: precision_at_1
value: 17.662
- type: precision_at_10
value: 4.44
- type: precision_at_100
value: 0.8200000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 9.577
- type: precision_at_5
value: 7.313
- type: recall_at_1
value: 13.594999999999999
- type: recall_at_10
value: 33.976
- type: recall_at_100
value: 57.43000000000001
- type: recall_at_1000
value: 80.958
- type: recall_at_3
value: 22.897000000000002
- type: recall_at_5
value: 28.714000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.683
- type: map_at_10
value: 35.068
- type: map_at_100
value: 36.311
- type: map_at_1000
value: 36.436
- type: map_at_3
value: 32.371
- type: map_at_5
value: 33.761
- type: mrr_at_1
value: 32.435
- type: mrr_at_10
value: 40.721000000000004
- type: mrr_at_100
value: 41.535
- type: mrr_at_1000
value: 41.593
- type: mrr_at_3
value: 38.401999999999994
- type: mrr_at_5
value: 39.567
- type: ndcg_at_1
value: 32.435
- type: ndcg_at_10
value: 40.538000000000004
- type: ndcg_at_100
value: 45.963
- type: ndcg_at_1000
value: 48.400999999999996
- type: ndcg_at_3
value: 36.048
- type: ndcg_at_5
value: 37.899
- type: precision_at_1
value: 32.435
- type: precision_at_10
value: 7.1129999999999995
- type: precision_at_100
value: 1.162
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 16.683
- type: precision_at_5
value: 11.684
- type: recall_at_1
value: 26.683
- type: recall_at_10
value: 51.517
- type: recall_at_100
value: 74.553
- type: recall_at_1000
value: 90.649
- type: recall_at_3
value: 38.495000000000005
- type: recall_at_5
value: 43.495
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.186
- type: map_at_10
value: 31.972
- type: map_at_100
value: 33.117000000000004
- type: map_at_1000
value: 33.243
- type: map_at_3
value: 29.423
- type: map_at_5
value: 30.847
- type: mrr_at_1
value: 29.794999999999998
- type: mrr_at_10
value: 36.767
- type: mrr_at_100
value: 37.645
- type: mrr_at_1000
value: 37.716
- type: mrr_at_3
value: 34.513
- type: mrr_at_5
value: 35.791000000000004
- type: ndcg_at_1
value: 29.794999999999998
- type: ndcg_at_10
value: 36.786
- type: ndcg_at_100
value: 41.94
- type: ndcg_at_1000
value: 44.830999999999996
- type: ndcg_at_3
value: 32.504
- type: ndcg_at_5
value: 34.404
- type: precision_at_1
value: 29.794999999999998
- type: precision_at_10
value: 6.518
- type: precision_at_100
value: 1.0659999999999998
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 15.296999999999999
- type: precision_at_5
value: 10.731
- type: recall_at_1
value: 24.186
- type: recall_at_10
value: 46.617
- type: recall_at_100
value: 68.75
- type: recall_at_1000
value: 88.864
- type: recall_at_3
value: 34.199
- type: recall_at_5
value: 39.462
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.22083333333333
- type: map_at_10
value: 31.606666666666662
- type: map_at_100
value: 32.6195
- type: map_at_1000
value: 32.739999999999995
- type: map_at_3
value: 29.37825
- type: map_at_5
value: 30.596083333333336
- type: mrr_at_1
value: 28.607916666666668
- type: mrr_at_10
value: 35.54591666666666
- type: mrr_at_100
value: 36.33683333333333
- type: mrr_at_1000
value: 36.40624999999999
- type: mrr_at_3
value: 33.526250000000005
- type: mrr_at_5
value: 34.6605
- type: ndcg_at_1
value: 28.607916666666668
- type: ndcg_at_10
value: 36.07966666666667
- type: ndcg_at_100
value: 40.73308333333333
- type: ndcg_at_1000
value: 43.40666666666666
- type: ndcg_at_3
value: 32.23525
- type: ndcg_at_5
value: 33.97083333333333
- type: precision_at_1
value: 28.607916666666668
- type: precision_at_10
value: 6.120333333333335
- type: precision_at_100
value: 0.9921666666666668
- type: precision_at_1000
value: 0.14091666666666666
- type: precision_at_3
value: 14.54975
- type: precision_at_5
value: 10.153166666666667
- type: recall_at_1
value: 24.22083333333333
- type: recall_at_10
value: 45.49183333333334
- type: recall_at_100
value: 66.28133333333332
- type: recall_at_1000
value: 85.16541666666667
- type: recall_at_3
value: 34.6485
- type: recall_at_5
value: 39.229749999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.842
- type: map_at_10
value: 27.573999999999998
- type: map_at_100
value: 28.410999999999998
- type: map_at_1000
value: 28.502
- type: map_at_3
value: 25.921
- type: map_at_5
value: 26.888
- type: mrr_at_1
value: 24.08
- type: mrr_at_10
value: 29.915999999999997
- type: mrr_at_100
value: 30.669
- type: mrr_at_1000
value: 30.746000000000002
- type: mrr_at_3
value: 28.349000000000004
- type: mrr_at_5
value: 29.246
- type: ndcg_at_1
value: 24.08
- type: ndcg_at_10
value: 30.898999999999997
- type: ndcg_at_100
value: 35.272999999999996
- type: ndcg_at_1000
value: 37.679
- type: ndcg_at_3
value: 27.881
- type: ndcg_at_5
value: 29.432000000000002
- type: precision_at_1
value: 24.08
- type: precision_at_10
value: 4.678
- type: precision_at_100
value: 0.744
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 11.860999999999999
- type: precision_at_5
value: 8.16
- type: recall_at_1
value: 21.842
- type: recall_at_10
value: 38.66
- type: recall_at_100
value: 59.169000000000004
- type: recall_at_1000
value: 76.887
- type: recall_at_3
value: 30.532999999999998
- type: recall_at_5
value: 34.354
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.145
- type: map_at_10
value: 22.729
- type: map_at_100
value: 23.574
- type: map_at_1000
value: 23.695
- type: map_at_3
value: 21.044
- type: map_at_5
value: 21.981
- type: mrr_at_1
value: 20.888
- type: mrr_at_10
value: 26.529000000000003
- type: mrr_at_100
value: 27.308
- type: mrr_at_1000
value: 27.389000000000003
- type: mrr_at_3
value: 24.868000000000002
- type: mrr_at_5
value: 25.825
- type: ndcg_at_1
value: 20.888
- type: ndcg_at_10
value: 26.457000000000004
- type: ndcg_at_100
value: 30.764000000000003
- type: ndcg_at_1000
value: 33.825
- type: ndcg_at_3
value: 23.483999999999998
- type: ndcg_at_5
value: 24.836
- type: precision_at_1
value: 20.888
- type: precision_at_10
value: 4.58
- type: precision_at_100
value: 0.784
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 10.874
- type: precision_at_5
value: 7.639
- type: recall_at_1
value: 17.145
- type: recall_at_10
value: 33.938
- type: recall_at_100
value: 53.672
- type: recall_at_1000
value: 76.023
- type: recall_at_3
value: 25.363000000000003
- type: recall_at_5
value: 29.023
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.275
- type: map_at_10
value: 30.438
- type: map_at_100
value: 31.489
- type: map_at_1000
value: 31.601000000000003
- type: map_at_3
value: 28.647
- type: map_at_5
value: 29.660999999999998
- type: mrr_at_1
value: 28.077999999999996
- type: mrr_at_10
value: 34.098
- type: mrr_at_100
value: 35.025
- type: mrr_at_1000
value: 35.109
- type: mrr_at_3
value: 32.4
- type: mrr_at_5
value: 33.379999999999995
- type: ndcg_at_1
value: 28.077999999999996
- type: ndcg_at_10
value: 34.271
- type: ndcg_at_100
value: 39.352
- type: ndcg_at_1000
value: 42.199
- type: ndcg_at_3
value: 30.978
- type: ndcg_at_5
value: 32.498
- type: precision_at_1
value: 28.077999999999996
- type: precision_at_10
value: 5.345
- type: precision_at_100
value: 0.897
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 13.526
- type: precision_at_5
value: 9.16
- type: recall_at_1
value: 24.275
- type: recall_at_10
value: 42.362
- type: recall_at_100
value: 64.461
- type: recall_at_1000
value: 84.981
- type: recall_at_3
value: 33.249
- type: recall_at_5
value: 37.214999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.358
- type: map_at_10
value: 30.062
- type: map_at_100
value: 31.189
- type: map_at_1000
value: 31.386999999999997
- type: map_at_3
value: 27.672
- type: map_at_5
value: 28.76
- type: mrr_at_1
value: 26.877000000000002
- type: mrr_at_10
value: 33.948
- type: mrr_at_100
value: 34.746
- type: mrr_at_1000
value: 34.816
- type: mrr_at_3
value: 31.884
- type: mrr_at_5
value: 33.001000000000005
- type: ndcg_at_1
value: 26.877000000000002
- type: ndcg_at_10
value: 34.977000000000004
- type: ndcg_at_100
value: 39.753
- type: ndcg_at_1000
value: 42.866
- type: ndcg_at_3
value: 30.956
- type: ndcg_at_5
value: 32.381
- type: precision_at_1
value: 26.877000000000002
- type: precision_at_10
value: 6.7
- type: precision_at_100
value: 1.287
- type: precision_at_1000
value: 0.215
- type: precision_at_3
value: 14.360999999999999
- type: precision_at_5
value: 10.119
- type: recall_at_1
value: 22.358
- type: recall_at_10
value: 44.183
- type: recall_at_100
value: 67.14
- type: recall_at_1000
value: 87.53999999999999
- type: recall_at_3
value: 32.79
- type: recall_at_5
value: 36.829
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.198999999999998
- type: map_at_10
value: 25.229000000000003
- type: map_at_100
value: 26.003
- type: map_at_1000
value: 26.111
- type: map_at_3
value: 23.442
- type: map_at_5
value: 24.343
- type: mrr_at_1
value: 21.072
- type: mrr_at_10
value: 27.02
- type: mrr_at_100
value: 27.735
- type: mrr_at_1000
value: 27.815
- type: mrr_at_3
value: 25.416
- type: mrr_at_5
value: 26.173999999999996
- type: ndcg_at_1
value: 21.072
- type: ndcg_at_10
value: 28.862
- type: ndcg_at_100
value: 33.043
- type: ndcg_at_1000
value: 36.003
- type: ndcg_at_3
value: 25.35
- type: ndcg_at_5
value: 26.773000000000003
- type: precision_at_1
value: 21.072
- type: precision_at_10
value: 4.436
- type: precision_at_100
value: 0.713
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 10.659
- type: precision_at_5
value: 7.32
- type: recall_at_1
value: 19.198999999999998
- type: recall_at_10
value: 38.376
- type: recall_at_100
value: 58.36900000000001
- type: recall_at_1000
value: 80.92099999999999
- type: recall_at_3
value: 28.715000000000003
- type: recall_at_5
value: 32.147
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.9319999999999995
- type: map_at_10
value: 10.483
- type: map_at_100
value: 11.97
- type: map_at_1000
value: 12.171999999999999
- type: map_at_3
value: 8.477
- type: map_at_5
value: 9.495000000000001
- type: mrr_at_1
value: 13.094
- type: mrr_at_10
value: 21.282
- type: mrr_at_100
value: 22.556
- type: mrr_at_1000
value: 22.628999999999998
- type: mrr_at_3
value: 18.218999999999998
- type: mrr_at_5
value: 19.900000000000002
- type: ndcg_at_1
value: 13.094
- type: ndcg_at_10
value: 15.811
- type: ndcg_at_100
value: 23.035
- type: ndcg_at_1000
value: 27.089999999999996
- type: ndcg_at_3
value: 11.905000000000001
- type: ndcg_at_5
value: 13.377
- type: precision_at_1
value: 13.094
- type: precision_at_10
value: 5.225
- type: precision_at_100
value: 1.2970000000000002
- type: precision_at_1000
value: 0.203
- type: precision_at_3
value: 8.86
- type: precision_at_5
value: 7.309
- type: recall_at_1
value: 5.9319999999999995
- type: recall_at_10
value: 20.305
- type: recall_at_100
value: 46.314
- type: recall_at_1000
value: 69.612
- type: recall_at_3
value: 11.21
- type: recall_at_5
value: 14.773
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.674
- type: map_at_10
value: 17.822
- type: map_at_100
value: 24.794
- type: map_at_1000
value: 26.214
- type: map_at_3
value: 12.690999999999999
- type: map_at_5
value: 15.033
- type: mrr_at_1
value: 61.75000000000001
- type: mrr_at_10
value: 71.58
- type: mrr_at_100
value: 71.923
- type: mrr_at_1000
value: 71.932
- type: mrr_at_3
value: 70.125
- type: mrr_at_5
value: 71.038
- type: ndcg_at_1
value: 51
- type: ndcg_at_10
value: 38.637
- type: ndcg_at_100
value: 42.398
- type: ndcg_at_1000
value: 48.962
- type: ndcg_at_3
value: 43.29
- type: ndcg_at_5
value: 40.763
- type: precision_at_1
value: 61.75000000000001
- type: precision_at_10
value: 30.125
- type: precision_at_100
value: 9.53
- type: precision_at_1000
value: 1.9619999999999997
- type: precision_at_3
value: 45.583
- type: precision_at_5
value: 38.95
- type: recall_at_1
value: 8.674
- type: recall_at_10
value: 23.122
- type: recall_at_100
value: 47.46
- type: recall_at_1000
value: 67.662
- type: recall_at_3
value: 13.946
- type: recall_at_5
value: 17.768
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.86000000000001
- type: f1
value: 41.343580452760776
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.609
- type: map_at_10
value: 47.552
- type: map_at_100
value: 48.283
- type: map_at_1000
value: 48.321
- type: map_at_3
value: 44.869
- type: map_at_5
value: 46.509
- type: mrr_at_1
value: 39.214
- type: mrr_at_10
value: 50.434999999999995
- type: mrr_at_100
value: 51.122
- type: mrr_at_1000
value: 51.151
- type: mrr_at_3
value: 47.735
- type: mrr_at_5
value: 49.394
- type: ndcg_at_1
value: 39.214
- type: ndcg_at_10
value: 53.52400000000001
- type: ndcg_at_100
value: 56.997
- type: ndcg_at_1000
value: 57.975
- type: ndcg_at_3
value: 48.173
- type: ndcg_at_5
value: 51.05800000000001
- type: precision_at_1
value: 39.214
- type: precision_at_10
value: 7.573
- type: precision_at_100
value: 0.9440000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 19.782
- type: precision_at_5
value: 13.453000000000001
- type: recall_at_1
value: 36.609
- type: recall_at_10
value: 69.247
- type: recall_at_100
value: 84.99600000000001
- type: recall_at_1000
value: 92.40899999999999
- type: recall_at_3
value: 54.856
- type: recall_at_5
value: 61.797000000000004
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.466
- type: map_at_10
value: 27.060000000000002
- type: map_at_100
value: 28.511999999999997
- type: map_at_1000
value: 28.693
- type: map_at_3
value: 22.777
- type: map_at_5
value: 25.086000000000002
- type: mrr_at_1
value: 32.716
- type: mrr_at_10
value: 41.593999999999994
- type: mrr_at_100
value: 42.370000000000005
- type: mrr_at_1000
value: 42.419000000000004
- type: mrr_at_3
value: 38.143
- type: mrr_at_5
value: 40.288000000000004
- type: ndcg_at_1
value: 32.716
- type: ndcg_at_10
value: 34.795
- type: ndcg_at_100
value: 40.58
- type: ndcg_at_1000
value: 43.993
- type: ndcg_at_3
value: 29.573
- type: ndcg_at_5
value: 31.583
- type: precision_at_1
value: 32.716
- type: precision_at_10
value: 9.937999999999999
- type: precision_at_100
value: 1.585
- type: precision_at_1000
value: 0.22
- type: precision_at_3
value: 19.496
- type: precision_at_5
value: 15.247
- type: recall_at_1
value: 16.466
- type: recall_at_10
value: 42.886
- type: recall_at_100
value: 64.724
- type: recall_at_1000
value: 85.347
- type: recall_at_3
value: 26.765
- type: recall_at_5
value: 33.603
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.025
- type: map_at_10
value: 47.343
- type: map_at_100
value: 48.207
- type: map_at_1000
value: 48.281
- type: map_at_3
value: 44.519
- type: map_at_5
value: 46.217000000000006
- type: mrr_at_1
value: 66.05
- type: mrr_at_10
value: 72.94699999999999
- type: mrr_at_100
value: 73.289
- type: mrr_at_1000
value: 73.30499999999999
- type: mrr_at_3
value: 71.686
- type: mrr_at_5
value: 72.491
- type: ndcg_at_1
value: 66.05
- type: ndcg_at_10
value: 56.338
- type: ndcg_at_100
value: 59.599999999999994
- type: ndcg_at_1000
value: 61.138000000000005
- type: ndcg_at_3
value: 52.034000000000006
- type: ndcg_at_5
value: 54.352000000000004
- type: precision_at_1
value: 66.05
- type: precision_at_10
value: 11.693000000000001
- type: precision_at_100
value: 1.425
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 32.613
- type: precision_at_5
value: 21.401999999999997
- type: recall_at_1
value: 33.025
- type: recall_at_10
value: 58.467
- type: recall_at_100
value: 71.242
- type: recall_at_1000
value: 81.452
- type: recall_at_3
value: 48.92
- type: recall_at_5
value: 53.504
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 75.5492
- type: ap
value: 69.42911637216271
- type: f1
value: 75.39113704261024
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.173
- type: map_at_10
value: 35.453
- type: map_at_100
value: 36.573
- type: map_at_1000
value: 36.620999999999995
- type: map_at_3
value: 31.655
- type: map_at_5
value: 33.823
- type: mrr_at_1
value: 23.868000000000002
- type: mrr_at_10
value: 36.085
- type: mrr_at_100
value: 37.15
- type: mrr_at_1000
value: 37.193
- type: mrr_at_3
value: 32.376
- type: mrr_at_5
value: 34.501
- type: ndcg_at_1
value: 23.854
- type: ndcg_at_10
value: 42.33
- type: ndcg_at_100
value: 47.705999999999996
- type: ndcg_at_1000
value: 48.91
- type: ndcg_at_3
value: 34.604
- type: ndcg_at_5
value: 38.473
- type: precision_at_1
value: 23.854
- type: precision_at_10
value: 6.639
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.685
- type: precision_at_5
value: 10.782
- type: recall_at_1
value: 23.173
- type: recall_at_10
value: 63.441
- type: recall_at_100
value: 88.25
- type: recall_at_1000
value: 97.438
- type: recall_at_3
value: 42.434
- type: recall_at_5
value: 51.745
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.05426356589147
- type: f1
value: 91.88068588063942
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.23985408116735
- type: f1
value: 55.858906745287506
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.21923335574984
- type: f1
value: 70.0174116204253
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.77673167451245
- type: f1
value: 75.44811354778666
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.340414710728737
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.196676760061578
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 29.564149683482206
- type: mrr
value: 30.28995474250486
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.93
- type: map_at_10
value: 12.828000000000001
- type: map_at_100
value: 15.501000000000001
- type: map_at_1000
value: 16.791
- type: map_at_3
value: 9.727
- type: map_at_5
value: 11.318999999999999
- type: mrr_at_1
value: 47.678
- type: mrr_at_10
value: 55.893
- type: mrr_at_100
value: 56.491
- type: mrr_at_1000
value: 56.53
- type: mrr_at_3
value: 54.386
- type: mrr_at_5
value: 55.516
- type: ndcg_at_1
value: 45.975
- type: ndcg_at_10
value: 33.928999999999995
- type: ndcg_at_100
value: 30.164
- type: ndcg_at_1000
value: 38.756
- type: ndcg_at_3
value: 41.077000000000005
- type: ndcg_at_5
value: 38.415
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 24.365000000000002
- type: precision_at_100
value: 7.344
- type: precision_at_1000
value: 1.994
- type: precision_at_3
value: 38.184000000000005
- type: precision_at_5
value: 33.003
- type: recall_at_1
value: 5.93
- type: recall_at_10
value: 16.239
- type: recall_at_100
value: 28.782999999999998
- type: recall_at_1000
value: 60.11
- type: recall_at_3
value: 10.700999999999999
- type: recall_at_5
value: 13.584
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.163000000000004
- type: map_at_10
value: 51.520999999999994
- type: map_at_100
value: 52.449
- type: map_at_1000
value: 52.473000000000006
- type: map_at_3
value: 47.666
- type: map_at_5
value: 50.043000000000006
- type: mrr_at_1
value: 40.266999999999996
- type: mrr_at_10
value: 54.074
- type: mrr_at_100
value: 54.722
- type: mrr_at_1000
value: 54.739000000000004
- type: mrr_at_3
value: 51.043000000000006
- type: mrr_at_5
value: 52.956
- type: ndcg_at_1
value: 40.238
- type: ndcg_at_10
value: 58.73199999999999
- type: ndcg_at_100
value: 62.470000000000006
- type: ndcg_at_1000
value: 63.083999999999996
- type: ndcg_at_3
value: 51.672
- type: ndcg_at_5
value: 55.564
- type: precision_at_1
value: 40.238
- type: precision_at_10
value: 9.279
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.078000000000003
- type: precision_at_5
value: 16.176
- type: recall_at_1
value: 36.163000000000004
- type: recall_at_10
value: 77.88199999999999
- type: recall_at_100
value: 93.83399999999999
- type: recall_at_1000
value: 98.465
- type: recall_at_3
value: 59.857000000000006
- type: recall_at_5
value: 68.73599999999999
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.344
- type: map_at_10
value: 83.907
- type: map_at_100
value: 84.536
- type: map_at_1000
value: 84.557
- type: map_at_3
value: 80.984
- type: map_at_5
value: 82.844
- type: mrr_at_1
value: 81.02000000000001
- type: mrr_at_10
value: 87.158
- type: mrr_at_100
value: 87.268
- type: mrr_at_1000
value: 87.26899999999999
- type: mrr_at_3
value: 86.17
- type: mrr_at_5
value: 86.87
- type: ndcg_at_1
value: 81.02000000000001
- type: ndcg_at_10
value: 87.70700000000001
- type: ndcg_at_100
value: 89.004
- type: ndcg_at_1000
value: 89.139
- type: ndcg_at_3
value: 84.841
- type: ndcg_at_5
value: 86.455
- type: precision_at_1
value: 81.02000000000001
- type: precision_at_10
value: 13.248999999999999
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.963
- type: precision_at_5
value: 24.33
- type: recall_at_1
value: 70.344
- type: recall_at_10
value: 94.75099999999999
- type: recall_at_100
value: 99.30499999999999
- type: recall_at_1000
value: 99.928
- type: recall_at_3
value: 86.506
- type: recall_at_5
value: 91.083
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 42.873718018378305
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 56.39477366450528
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.868
- type: map_at_10
value: 9.611
- type: map_at_100
value: 11.087
- type: map_at_1000
value: 11.332
- type: map_at_3
value: 6.813
- type: map_at_5
value: 8.233
- type: mrr_at_1
value: 19
- type: mrr_at_10
value: 28.457
- type: mrr_at_100
value: 29.613
- type: mrr_at_1000
value: 29.695
- type: mrr_at_3
value: 25.55
- type: mrr_at_5
value: 27.29
- type: ndcg_at_1
value: 19
- type: ndcg_at_10
value: 16.419
- type: ndcg_at_100
value: 22.817999999999998
- type: ndcg_at_1000
value: 27.72
- type: ndcg_at_3
value: 15.379000000000001
- type: ndcg_at_5
value: 13.645
- type: precision_at_1
value: 19
- type: precision_at_10
value: 8.540000000000001
- type: precision_at_100
value: 1.7819999999999998
- type: precision_at_1000
value: 0.297
- type: precision_at_3
value: 14.267
- type: precision_at_5
value: 12.04
- type: recall_at_1
value: 3.868
- type: recall_at_10
value: 17.288
- type: recall_at_100
value: 36.144999999999996
- type: recall_at_1000
value: 60.199999999999996
- type: recall_at_3
value: 8.688
- type: recall_at_5
value: 12.198
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.96614722598582
- type: cos_sim_spearman
value: 78.9003023008781
- type: euclidean_pearson
value: 81.01829384436505
- type: euclidean_spearman
value: 78.93248416788914
- type: manhattan_pearson
value: 81.1665428926402
- type: manhattan_spearman
value: 78.93264116287453
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 83.54613363895993
- type: cos_sim_spearman
value: 75.1883451602451
- type: euclidean_pearson
value: 79.70320886899894
- type: euclidean_spearman
value: 74.5917140136796
- type: manhattan_pearson
value: 79.82157067185999
- type: manhattan_spearman
value: 74.74185720594735
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.30430156721782
- type: cos_sim_spearman
value: 81.79962989974364
- type: euclidean_pearson
value: 80.89058823224924
- type: euclidean_spearman
value: 81.35929372984597
- type: manhattan_pearson
value: 81.12204370487478
- type: manhattan_spearman
value: 81.6248963282232
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.13064504403134
- type: cos_sim_spearman
value: 78.48371403924872
- type: euclidean_pearson
value: 80.16794919665591
- type: euclidean_spearman
value: 78.29216082221699
- type: manhattan_pearson
value: 80.22308565207301
- type: manhattan_spearman
value: 78.37829229948022
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.52918899541099
- type: cos_sim_spearman
value: 87.49276894673142
- type: euclidean_pearson
value: 86.77440570164254
- type: euclidean_spearman
value: 87.5753295736756
- type: manhattan_pearson
value: 86.86098573892133
- type: manhattan_spearman
value: 87.65848591821947
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.86805307244882
- type: cos_sim_spearman
value: 84.58066253757511
- type: euclidean_pearson
value: 84.38377000876991
- type: euclidean_spearman
value: 85.1837278784528
- type: manhattan_pearson
value: 84.41903291363842
- type: manhattan_spearman
value: 85.19023736251052
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.77218560282436
- type: cos_sim_spearman
value: 87.94243515296604
- type: euclidean_pearson
value: 88.22800939214864
- type: euclidean_spearman
value: 87.91106839439841
- type: manhattan_pearson
value: 88.17063269848741
- type: manhattan_spearman
value: 87.72751904126062
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.40731554300387
- type: cos_sim_spearman
value: 63.76300532966479
- type: euclidean_pearson
value: 62.94727878229085
- type: euclidean_spearman
value: 63.678039531461216
- type: manhattan_pearson
value: 63.00661039863549
- type: manhattan_spearman
value: 63.6282591984376
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.92731569745344
- type: cos_sim_spearman
value: 86.36336704300167
- type: euclidean_pearson
value: 86.09122224841195
- type: euclidean_spearman
value: 86.2116149319238
- type: manhattan_pearson
value: 86.07879456717032
- type: manhattan_spearman
value: 86.2022069635119
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.75976311752326
- type: mrr
value: 94.15782837351466
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 51.193999999999996
- type: map_at_10
value: 61.224999999999994
- type: map_at_100
value: 62.031000000000006
- type: map_at_1000
value: 62.066
- type: map_at_3
value: 59.269000000000005
- type: map_at_5
value: 60.159
- type: mrr_at_1
value: 53.667
- type: mrr_at_10
value: 62.74999999999999
- type: mrr_at_100
value: 63.39399999999999
- type: mrr_at_1000
value: 63.425
- type: mrr_at_3
value: 61.389
- type: mrr_at_5
value: 61.989000000000004
- type: ndcg_at_1
value: 53.667
- type: ndcg_at_10
value: 65.596
- type: ndcg_at_100
value: 68.906
- type: ndcg_at_1000
value: 69.78999999999999
- type: ndcg_at_3
value: 62.261
- type: ndcg_at_5
value: 63.453
- type: precision_at_1
value: 53.667
- type: precision_at_10
value: 8.667
- type: precision_at_100
value: 1.04
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 24.556
- type: precision_at_5
value: 15.6
- type: recall_at_1
value: 51.193999999999996
- type: recall_at_10
value: 77.156
- type: recall_at_100
value: 91.43299999999999
- type: recall_at_1000
value: 98.333
- type: recall_at_3
value: 67.994
- type: recall_at_5
value: 71.14399999999999
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81485148514851
- type: cos_sim_ap
value: 95.28896513388551
- type: cos_sim_f1
value: 90.43478260869566
- type: cos_sim_precision
value: 92.56544502617801
- type: cos_sim_recall
value: 88.4
- type: dot_accuracy
value: 99.30594059405941
- type: dot_ap
value: 61.6432597455472
- type: dot_f1
value: 59.46481665014866
- type: dot_precision
value: 58.93909626719057
- type: dot_recall
value: 60
- type: euclidean_accuracy
value: 99.81980198019802
- type: euclidean_ap
value: 95.21411049527
- type: euclidean_f1
value: 91.06090373280944
- type: euclidean_precision
value: 89.47876447876449
- type: euclidean_recall
value: 92.7
- type: manhattan_accuracy
value: 99.81782178217821
- type: manhattan_ap
value: 95.32449994414968
- type: manhattan_f1
value: 90.86395233366436
- type: manhattan_precision
value: 90.23668639053254
- type: manhattan_recall
value: 91.5
- type: max_accuracy
value: 99.81980198019802
- type: max_ap
value: 95.32449994414968
- type: max_f1
value: 91.06090373280944
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 59.08045614613064
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.297802606804748
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.12801740706292
- type: mrr
value: 50.05592956879722
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.523347880124497
- type: cos_sim_spearman
value: 31.388214436391014
- type: dot_pearson
value: 24.55403435439901
- type: dot_spearman
value: 23.50153210841191
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.243
- type: map_at_10
value: 1.886
- type: map_at_100
value: 10.040000000000001
- type: map_at_1000
value: 23.768
- type: map_at_3
value: 0.674
- type: map_at_5
value: 1.079
- type: mrr_at_1
value: 88
- type: mrr_at_10
value: 93.667
- type: mrr_at_100
value: 93.667
- type: mrr_at_1000
value: 93.667
- type: mrr_at_3
value: 93.667
- type: mrr_at_5
value: 93.667
- type: ndcg_at_1
value: 83
- type: ndcg_at_10
value: 76.777
- type: ndcg_at_100
value: 55.153
- type: ndcg_at_1000
value: 47.912
- type: ndcg_at_3
value: 81.358
- type: ndcg_at_5
value: 80.74799999999999
- type: precision_at_1
value: 88
- type: precision_at_10
value: 80.80000000000001
- type: precision_at_100
value: 56.02
- type: precision_at_1000
value: 21.51
- type: precision_at_3
value: 86
- type: precision_at_5
value: 86
- type: recall_at_1
value: 0.243
- type: recall_at_10
value: 2.0869999999999997
- type: recall_at_100
value: 13.014000000000001
- type: recall_at_1000
value: 44.433
- type: recall_at_3
value: 0.6910000000000001
- type: recall_at_5
value: 1.1440000000000001
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.066
- type: map_at_10
value: 10.615
- type: map_at_100
value: 16.463
- type: map_at_1000
value: 17.815
- type: map_at_3
value: 5.7860000000000005
- type: map_at_5
value: 7.353999999999999
- type: mrr_at_1
value: 38.775999999999996
- type: mrr_at_10
value: 53.846000000000004
- type: mrr_at_100
value: 54.37
- type: mrr_at_1000
value: 54.37
- type: mrr_at_3
value: 48.980000000000004
- type: mrr_at_5
value: 51.735
- type: ndcg_at_1
value: 34.694
- type: ndcg_at_10
value: 26.811
- type: ndcg_at_100
value: 37.342999999999996
- type: ndcg_at_1000
value: 47.964
- type: ndcg_at_3
value: 30.906
- type: ndcg_at_5
value: 27.77
- type: precision_at_1
value: 38.775999999999996
- type: precision_at_10
value: 23.878
- type: precision_at_100
value: 7.632999999999999
- type: precision_at_1000
value: 1.469
- type: precision_at_3
value: 31.973000000000003
- type: precision_at_5
value: 26.939
- type: recall_at_1
value: 3.066
- type: recall_at_10
value: 17.112
- type: recall_at_100
value: 47.723
- type: recall_at_1000
value: 79.50500000000001
- type: recall_at_3
value: 6.825
- type: recall_at_5
value: 9.584
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 72.76460000000002
- type: ap
value: 14.944240012137053
- type: f1
value: 55.89805777266571
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 63.30503678551217
- type: f1
value: 63.57492701921179
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 37.51066495006874
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.07021517553794
- type: cos_sim_ap
value: 74.15520712370555
- type: cos_sim_f1
value: 68.64321608040201
- type: cos_sim_precision
value: 65.51558752997602
- type: cos_sim_recall
value: 72.0844327176781
- type: dot_accuracy
value: 80.23484532395541
- type: dot_ap
value: 54.298763810214176
- type: dot_f1
value: 53.22254659779924
- type: dot_precision
value: 46.32525410476936
- type: dot_recall
value: 62.532981530343015
- type: euclidean_accuracy
value: 86.04637301066937
- type: euclidean_ap
value: 73.85333854233123
- type: euclidean_f1
value: 68.77723660599845
- type: euclidean_precision
value: 66.87437686939182
- type: euclidean_recall
value: 70.79155672823218
- type: manhattan_accuracy
value: 85.98676759849795
- type: manhattan_ap
value: 73.56016090035973
- type: manhattan_f1
value: 68.48878539036647
- type: manhattan_precision
value: 63.9505607690547
- type: manhattan_recall
value: 73.7203166226913
- type: max_accuracy
value: 86.07021517553794
- type: max_ap
value: 74.15520712370555
- type: max_f1
value: 68.77723660599845
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.92769821865176
- type: cos_sim_ap
value: 85.78879502899773
- type: cos_sim_f1
value: 78.14414083990464
- type: cos_sim_precision
value: 74.61651607480563
- type: cos_sim_recall
value: 82.0218663381583
- type: dot_accuracy
value: 84.95750378390964
- type: dot_ap
value: 75.80219641857563
- type: dot_f1
value: 70.13966179585681
- type: dot_precision
value: 65.71140262361251
- type: dot_recall
value: 75.20788420080073
- type: euclidean_accuracy
value: 88.93546008460433
- type: euclidean_ap
value: 85.72056428301667
- type: euclidean_f1
value: 78.14387902598124
- type: euclidean_precision
value: 75.3376688344172
- type: euclidean_recall
value: 81.16723129042192
- type: manhattan_accuracy
value: 88.96262661543835
- type: manhattan_ap
value: 85.76605136314335
- type: manhattan_f1
value: 78.26696165191743
- type: manhattan_precision
value: 75.0990659496179
- type: manhattan_recall
value: 81.71388974437943
- type: max_accuracy
value: 88.96262661543835
- type: max_ap
value: 85.78879502899773
- type: max_f1
value: 78.26696165191743
language:
- en
license: mit
---
# Fast-Inference with Ctranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
This is a quantized version of [intfloat/e5-small](https://huggingface.co/intfloat/e5-small).
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-e5-small"
model_name_orig="intfloat/e5-small"
from hf_hub_ctranslate2 import EncoderCT2fromHfHub
model = EncoderCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16"
)
outputs = model.generate(
text=["I like soccer", "I like tennis", "The eiffel tower is in Paris"],
max_length=64,
) # perform downstream tasks on outputs
outputs["pooler_output"]
outputs["last_hidden_state"]
outputs["attention_mask"]
# alternative, use SentenceTransformer Mix-In
# for end-to-end Sentence embeddings generation
# (not pulling from this CT2fast-HF repo)
from hf_hub_ctranslate2 import CT2SentenceTransformer
model = CT2SentenceTransformer(
model_name_orig, compute_type="int8_float16", device="cuda"
)
embeddings = model.encode(
["I like soccer", "I like tennis", "The eiffel tower is in Paris"],
batch_size=32,
convert_to_numpy=True,
normalize_embeddings=True,
)
print(embeddings.shape, embeddings)
scores = (embeddings @ embeddings.T) * 100
# Hint: you can also host this code via REST API and
# via github.com/michaelfeil/infinity
```
Checkpoint compatible to [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
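For CPU-only hosts, the same loader shown above can be reused with the `int8` compute type; a minimal sketch using only the `EncoderCT2fromHfHub` API from this card:
```python
from hf_hub_ctranslate2 import EncoderCT2fromHfHub

# Same repo as above, but loaded in int8 on CPU.
model = EncoderCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-e5-small",
    device="cpu",
    compute_type="int8",
)
outputs = model.generate(
    text=["query: how much protein should a female eat"],
    max_length=64,
)
```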
Converted on 2023-10-13 using
```
LLama-2 -> removed <pad> token.
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original huggingface repo.
# Original description
# E5-small
**News (May 2023): please switch to [e5-small-v2](https://huggingface.co/intfloat/e5-small-v2), which has better performance and the same method of usage.**
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 12 layers and the embedding size is 384.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small')
model = AutoModel.from_pretrained('intfloat/e5-small')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Support for Sentence Transformers
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/e5-small')
input_texts = [
'query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
Package requirements
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefix "query: " and "passage: " to input texts?**
Yes, this is how the model was trained; otherwise you will see a performance degradation.
Here are some rules of thumb:
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores cluster between 0.7 and 1.0?**
This is known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
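A small numeric sketch of this point: the compressed score range does not change which passage ranks first (the values below are illustrative, not model outputs):
```python
import numpy as np

scores = np.array([82.1, 74.3, 91.7])   # one query vs. three passages
ranking = np.argsort(-scores)           # indices from best to worst match
print(ranking)                          # [2 0 1]  (only the order matters)
```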
## Citation
If you find our paper or models helpful, please consider citing as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
| 70,093 | [
[
-0.01509857177734375,
-0.057373046875,
0.0219573974609375,
0.02423095703125,
-0.01934814453125,
-0.0249176025390625,
-0.00811004638671875,
-0.03155517578125,
0.00974273681640625,
0.0209197998046875,
-0.028656005859375,
-0.0447998046875,
-0.0672607421875,
0.0... |
ManuD/speecht5_finetuned_voxpopuli_de | 2023-06-18T13:54:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | ManuD | null | null | ManuD/speecht5_finetuned_voxpopuli_de | 0 | 2 | transformers | 2023-06-18T11:52:20 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_de
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5307 | 2.26 | 1000 | 0.4842 |
| 0.5081 | 4.52 | 2000 | 0.4712 |
| 0.505 | 6.79 | 3000 | 0.4646 |
| 0.4986 | 9.05 | 4000 | 0.4636 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
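The card does not include inference code; a possible sketch, assuming the standard `transformers` SpeechT5 API and an x-vector speaker embedding (the `Matthijs/cmu-arctic-xvectors` dataset and the index used are assumptions, not part of this card):
```python
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("ManuD/speecht5_finetuned_voxpopuli_de")
model = SpeechT5ForTextToSpeech.from_pretrained("ManuD/speecht5_finetuned_voxpopuli_de")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 needs a 512-dim speaker embedding; this dataset is one common source.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Guten Tag, wie geht es Ihnen?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```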
| 1,565 | [
[
-0.031341552734375,
-0.042694091796875,
-0.00270843505859375,
0.007808685302734375,
-0.019989013671875,
-0.022979736328125,
-0.01422119140625,
-0.00980377197265625,
-0.0108489990234375,
0.019989013671875,
-0.04766845703125,
-0.049652099609375,
-0.0435791015625,
... |
mazeinmouse/dqn-SpaceInvadersNoFrameskip-v | 2023-06-18T19:57:55.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | mazeinmouse | null | null | mazeinmouse/dqn-SpaceInvadersNoFrameskip-v | 0 | 2 | stable-baselines3 | 2023-06-18T19:57:10 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 585.00 +/- 142.99
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mazeinmouse -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mazeinmouse -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mazeinmouse
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
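Besides the RL Zoo CLI above, the checkpoint can also be loaded programmatically; a sketch assuming the `huggingface_sb3` helper and the RL Zoo's usual checkpoint filename:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# The filename follows the RL Zoo convention; adjust it if the repo differs.
checkpoint = load_from_hub(
    repo_id="mazeinmouse/dqn-SpaceInvadersNoFrameskip-v",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```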
| 2,768 | [
[
-0.041900634765625,
-0.040008544921875,
0.0179290771484375,
0.0244903564453125,
-0.005931854248046875,
-0.01800537109375,
0.0129547119140625,
-0.0131683349609375,
0.013458251953125,
0.0206451416015625,
-0.0736083984375,
-0.0341796875,
-0.025665283203125,
-0.... |
gevis1/distilbert-base-cased-finetuned-financial-csv-gevis1 | 2023-06-20T03:27:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gevis1 | null | null | gevis1/distilbert-base-cased-finetuned-financial-csv-gevis1 | 0 | 2 | transformers | 2023-06-18T22:14:46 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-cased-finetuned-financial-csv-gevis1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-financial-csv-gevis1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.0
- Tokenizers 0.11.0
| 1,109 | [
[
-0.0253753662109375,
-0.059844970703125,
0.007640838623046875,
0.0175628662109375,
-0.0308074951171875,
-0.003997802734375,
-0.00994110107421875,
-0.0016298294067382812,
0.006885528564453125,
0.03045654296875,
-0.04736328125,
-0.045074462890625,
-0.0588989257812... |
NasimB/gpt2_left_out_gutenberg | 2023-06-19T13:03:02.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | NasimB | null | null | NasimB/gpt2_left_out_gutenberg | 0 | 2 | transformers | 2023-06-19T09:06:05 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2_left_out_gutenberg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_left_out_gutenberg
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9287
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.8917 | 0.26 | 500 | 5.0150 |
| 4.6559 | 0.53 | 1000 | 4.6338 |
| 4.3512 | 0.79 | 1500 | 4.4091 |
| 4.1461 | 1.06 | 2000 | 4.2691 |
| 3.9654 | 1.32 | 2500 | 4.1719 |
| 3.8972 | 1.59 | 3000 | 4.0869 |
| 3.8271 | 1.85 | 3500 | 4.0113 |
| 3.6889 | 2.12 | 4000 | 3.9762 |
| 3.586 | 2.38 | 4500 | 3.9376 |
| 3.5724 | 2.65 | 5000 | 3.8870 |
| 3.5435 | 2.91 | 5500 | 3.8480 |
| 3.3888 | 3.17 | 6000 | 3.8520 |
| 3.3327 | 3.44 | 6500 | 3.8282 |
| 3.3538 | 3.7 | 7000 | 3.8039 |
| 3.3427 | 3.97 | 7500 | 3.7743 |
| 3.1287 | 4.23 | 8000 | 3.8093 |
| 3.1293 | 4.5 | 8500 | 3.7959 |
| 3.1508 | 4.76 | 9000 | 3.7735 |
| 3.1169 | 5.03 | 9500 | 3.7815 |
| 2.8937 | 5.29 | 10000 | 3.8078 |
| 2.9281 | 5.56 | 10500 | 3.7999 |
| 2.9357 | 5.82 | 11000 | 3.7869 |
| 2.8489 | 6.08 | 11500 | 3.8165 |
| 2.6858 | 6.35 | 12000 | 3.8367 |
| 2.7074 | 6.61 | 12500 | 3.8300 |
| 2.7252 | 6.88 | 13000 | 3.8234 |
| 2.5862 | 7.14 | 13500 | 3.8661 |
| 2.4957 | 7.41 | 14000 | 3.8772 |
| 2.5091 | 7.67 | 14500 | 3.8791 |
| 2.5155 | 7.94 | 15000 | 3.8773 |
| 2.3794 | 8.2 | 15500 | 3.9064 |
| 2.349 | 8.47 | 16000 | 3.9130 |
| 2.3595 | 8.73 | 16500 | 3.9154 |
| 2.3579 | 8.99 | 17000 | 3.9160 |
| 2.2743 | 9.26 | 17500 | 3.9268 |
| 2.2753 | 9.52 | 18000 | 3.9287 |
| 2.2734 | 9.79 | 18500 | 3.9287 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
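The card stops at the training log; a minimal generation sketch for this checkpoint, assuming the standard `transformers` causal-LM API:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NasimB/gpt2_left_out_gutenberg")
model = AutoModelForCausalLM.from_pretrained("NasimB/gpt2_left_out_gutenberg")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```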
| 3,211 | [
[
-0.046661376953125,
-0.037567138671875,
0.0172119140625,
0.00284576416015625,
-0.00933074951171875,
-0.006168365478515625,
0.005802154541015625,
-0.00511932373046875,
0.0254974365234375,
0.0278778076171875,
-0.046112060546875,
-0.04095458984375,
-0.0498352050781... |
IIC/XLM-R_Galen | 2023-06-19T11:05:59.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"beto",
"galen",
"es",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | IIC | null | null | IIC/XLM-R_Galen | 0 | 2 | transformers | 2023-06-19T10:51:30 | ---
language: es
tags:
- beto
- galen
license: mit
---
# XLM-R Galén
This is a third-party reupload of the original XLM-R Galén model, available on [GitHub](https://github.com/guilopgar/ClinicalCodingTransformerES).
Please refer to the original publication for more information.
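A minimal loading sketch, assuming this repo id and the standard `transformers` feature-extraction API (the example sentence is illustrative):
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("IIC/XLM-R_Galen")
model = AutoModel.from_pretrained("IIC/XLM-R_Galen")

inputs = tokenizer("El paciente presenta fiebre y tos.", return_tensors="pt")
outputs = model(**inputs)  # outputs.last_hidden_state holds the token embeddings
```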
## BibTeX entry and citation info
```bibtex
@article{9430499,
author={López-García, Guillermo and Jerez, José M. and Ribelles, Nuria and Alba, Emilio and Veredas, Francisco J.},
journal={IEEE Access},
title={Transformers for Clinical Coding in Spanish},
year={2021},
volume={9},
number={},
pages={72387-72397},
doi={10.1109/ACCESS.2021.3080085}}
```
| 652 | [
[
0.00957489013671875,
-0.023651123046875,
0.0496826171875,
0.0301666259765625,
-0.0266265869140625,
-0.01306915283203125,
0.00826263427734375,
-0.00345611572265625,
0.0253143310546875,
0.0540771484375,
-0.03582763671875,
-0.05694580078125,
-0.04827880859375,
... |
emresvd/u198 | 2023-06-19T14:05:39.000Z | [
"keras",
"region:us"
] | null | emresvd | null | null | emresvd/u198 | 0 | 2 | keras | 2023-06-19T14:05:37 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 840 | [
[
-0.03759765625,
-0.0401611328125,
0.0321044921875,
0.007656097412109375,
-0.0433349609375,
-0.017974853515625,
0.01090240478515625,
-0.0037326812744140625,
0.020172119140625,
0.0307464599609375,
-0.043670654296875,
-0.051025390625,
-0.039306640625,
0.0002460... |
yo/locale-detector | 2023-06-19T14:23:41.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:common_language",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | yo | null | null | yo/locale-detector | 0 | 2 | transformers | 2023-06-19T14:14:56 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: language-detection-fine-tuned-on-xlm-roberta-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: common_language
type: common_language
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.9738386718094919
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-detection-fine-tuned-on-xlm-roberta-base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [common_language](https://huggingface.co/datasets/common_language) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1886
- Accuracy: 0.9738
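A minimal inference sketch, assuming this repo id and the standard `transformers` pipeline API (the returned label names depend on the fine-tuned config):
```python
from transformers import pipeline

detector = pipeline("text-classification", model="yo/locale-detector")
print(detector("Das ist ein Beispielsatz."))  # e.g. [{'label': ..., 'score': ...}]
```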
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1 | 1.0 | 22194 | 0.1886 | 0.9738 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
### Notebook
[notebook](https://github.com/IvanLauLinTiong/language-detector/blob/main/xlm_roberta_base_commonlanguage_language_detector.ipynb) | 1,748 | [
[
-0.03619384765625,
-0.056671142578125,
0.022674560546875,
0.00873565673828125,
-0.024383544921875,
-0.01412200927734375,
-0.045318603515625,
-0.0258941650390625,
0.006450653076171875,
0.04095458984375,
-0.03851318359375,
-0.061004638671875,
-0.060546875,
0.0... |
mun33b/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-19T18:14:14.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | mun33b | null | null | mun33b/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-19T15:53:18 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 523.50 +/- 90.11
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mun33b -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mun33b -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mun33b
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,752 | [
[
-0.043426513671875,
-0.038848876953125,
0.0196685791015625,
0.0250091552734375,
-0.010772705078125,
-0.017486572265625,
0.01039886474609375,
-0.0129241943359375,
0.0133514404296875,
0.022552490234375,
-0.072509765625,
-0.034423828125,
-0.0252227783203125,
-0... |
namedotpg/dqn-SpaceInvadersTraining | 2023-06-19T21:26:39.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | namedotpg | null | null | namedotpg/dqn-SpaceInvadersTraining | 0 | 2 | stable-baselines3 | 2023-06-19T21:26:01 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 488.50 +/- 158.24
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga namedotpg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga namedotpg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga namedotpg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,760 | [
[
-0.042694091796875,
-0.039947509765625,
0.0198211669921875,
0.0258026123046875,
-0.0109405517578125,
-0.0172576904296875,
0.01076507568359375,
-0.01319122314453125,
0.01245880126953125,
0.0225982666015625,
-0.072021484375,
-0.0350341796875,
-0.0253143310546875,
... |
vladimirchabanov/mnist_decoder | 2023-06-20T13:30:39.000Z | [
"keras",
"region:us"
] | null | vladimirchabanov | null | null | vladimirchabanov/mnist_decoder | 0 | 2 | keras | 2023-06-20T13:24:14 | ---
library_name: keras
---
# Decoder half of an autoencoder, trained on the mnist dataset
Input shape: `(49,)`
Output shape: `(28, 28, 1)`
Output layer activation function: `sigmoid` | 186 | [
[
-0.00678253173828125,
-0.0377197265625,
0.03741455078125,
0.0015249252319335938,
-0.04071044921875,
0.0101776123046875,
0.04254150390625,
0.0183258056640625,
0.050048828125,
0.0198822021484375,
-0.053009033203125,
-0.045379638671875,
-0.046142578125,
-0.0016... |
vladimirchabanov/fashion_mnist_decoder | 2023-06-20T13:32:45.000Z | [
"keras",
"region:us"
] | null | vladimirchabanov | null | null | vladimirchabanov/fashion_mnist_decoder | 0 | 2 | keras | 2023-06-20T13:32:29 | ---
library_name: keras
---
# Decoder half of an autoencoder, trained on the fashion_mnist dataset
Input shape: `(49,)`
Output shape: `(28, 28, 1)`
Output layer activation function: `sigmoid` | 194 | [
[
-0.0019521713256835938,
-0.0419921875,
0.025726318359375,
0.00995635986328125,
-0.048248291015625,
0.015838623046875,
0.036651611328125,
-0.00025010108947753906,
0.0443115234375,
0.01149749755859375,
-0.064208984375,
-0.055206298828125,
-0.031402587890625,
-... |
kchen621/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-20T13:54:28.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | kchen621 | null | null | kchen621/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-20T13:53:48 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 598.00 +/- 294.47
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kchen621 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kchen621 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kchen621
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,759 | [
[
-0.043365478515625,
-0.039154052734375,
0.0200347900390625,
0.0252532958984375,
-0.0109405517578125,
-0.017974853515625,
0.00994110107421875,
-0.01275634765625,
0.012969970703125,
0.0233917236328125,
-0.0723876953125,
-0.035003662109375,
-0.02569580078125,
-... |
SotirisLegkas/Socratic-GODEL-instruct | 2023-06-20T14:54:20.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | SotirisLegkas | null | null | SotirisLegkas/Socratic-GODEL-instruct | 0 | 2 | transformers | 2023-06-20T13:54:02 | ---
pipeline_tag: text2text-generation
---
Instruction: given a context, reply as in a Socratic dialogue. | 105 | [
[
0.021728515625,
-0.046966552734375,
0.0299224853515625,
0.018646240234375,
-0.030181884765625,
-0.01247406005859375,
-0.002147674560546875,
0.018402099609375,
0.01352691650390625,
0.0645751953125,
-0.059295654296875,
-0.007228851318359375,
-0.0233917236328125,
... |
venomdenom/MarkModel | 2023-06-20T15:56:31.000Z | [
"keras",
"dataset:mnist",
"region:us"
] | null | venomdenom | null | null | venomdenom/MarkModel | 0 | 2 | keras | 2023-06-20T14:34:00 | ---
datasets:
- mnist
metrics:
- accuracy
library_name: keras
---
## Task:
Given the mnist dataset, determine the digit from an input image;

## Total number of trainable parameters: 269,322
## Algorithms used:
adam_optimizer - the optimization algorithm
sparse_categorical_crossentropy - categorical cross-entropy, the loss function
## Dataset sizes:
training - 10000
test - 10000
## Results
training -
Training loss: 0.14755813777446747
Training accuracy: 0.9786666631698608
test -
Validation loss: 0.1685849279165268
Validation accuracy: 0.9717000126838684
## Colab link:
https://colab.research.google.com/drive/1TnfNRwHOqq5NjewGWZ3v1B7iEiS-iuFG?usp=sharing | 732 | [
[
-0.0308380126953125,
-0.058441162109375,
0.01064300537109375,
0.0075225830078125,
-0.031402587890625,
0.024505615234375,
0.007038116455078125,
-0.003833770751953125,
0.034332275390625,
-0.01311492919921875,
-0.057647705078125,
-0.049957275390625,
-0.053253173828... |
IlyaHtuePav/ForExam | 2023-06-20T17:48:18.000Z | [
"keras",
"region:us"
] | null | IlyaHtuePav | null | null | IlyaHtuePav/ForExam | 0 | 2 | keras | 2023-06-20T14:59:21 | ---
library_name: keras
---
Task statement: "1. Given the mnist dataset, determine the digit from an input image"
1. This neural network model is intended for digit recognition.
2. Layer-by-layer architecture of the network: see the figure below.
3. Total number of trainable parameters: see the figure below.
4. Optimization algorithm: Adam
5. Loss function: sparse_categorical_crossentropy
6. Training dataset size: 60000 samples.
Validation dataset size: 5000 samples.
Test dataset size: 5000 samples.
7. Training loss: 0.1968870609998703
Training accuracy: 0.9866499900817871
Validation loss: 0.2491597682237625
Validation accuracy: 0.9675999879837036
Test loss: 0.19332264363765717
Test accuracy: 0.98580002784729


| 823 | [
[
-0.0285797119140625,
-0.04937744140625,
0.01160430908203125,
0.0175933837890625,
-0.043182373046875,
0.0019741058349609375,
0.009796142578125,
-0.01251983642578125,
0.04931640625,
-0.005279541015625,
-0.051727294921875,
-0.0452880859375,
-0.04241943359375,
0... |
Maksimk04/Digits_autoencoder_mnist | 2023-06-20T17:04:16.000Z | [
"keras",
"dataset:mnist",
"region:us"
] | null | Maksimk04 | null | null | Maksimk04/Digits_autoencoder_mnist | 0 | 2 | keras | 2023-06-20T15:00:59 | ---
datasets:
- mnist
---
This network is essentially a variational autoencoder (VAE): it takes a 28x28 image as input
and returns a modified image of the same digit.
Model structure:

The total number of parameters is 249247 (124233 for the encoder and 125014 for the decoder).
The standard 'adam' optimizer from keras was used.
The loss function is mse (mean squared error).
(Going forward, the loss should be replaced with a VAE-specific one.)
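A hedged sketch of what such a VAE-specific loss could look like (MSE reconstruction plus a KL term; `z_mean` and `z_log_var` stand for the encoder outputs and are assumptions here):
```python
import tensorflow as tf

def vae_loss(y_true, y_pred, z_mean, z_log_var):
    # Reconstruction term: summed squared error over the 28x28x1 image.
    mse = tf.reduce_sum(tf.square(y_true - y_pred), axis=[1, 2, 3])
    # KL divergence between the approximate posterior and a unit Gaussian.
    kl = -0.5 * tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1)
    return tf.reduce_mean(mse + kl)
```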
The training and test dataset sizes are standard:
60k training
10k test
During training, a validation set is additionally split off from the training set at a 1:5 ratio (0.2),
so the final training set size is 48k and the validation set is 12k.
After training (10 epochs):
training loss 0.334
validation loss 0.335
test loss 0.336
Choosing an accuracy metric for this kind of network is very difficult;
the standard accuracy metric was used,
which, accordingly, gave rather uninformative results:
training 0.0092
validation 0.0093
test 0.0074
An example of the network generating the digit 7
 | 1,187 | [
[
-0.028900146484375,
-0.032318115234375,
0.0298614501953125,
0.005374908447265625,
-0.0361328125,
-0.0109100341796875,
0.01187896728515625,
-0.0040130615234375,
0.048431396484375,
0.0033473968505859375,
-0.04119873046875,
-0.0548095703125,
-0.051544189453125,
... |
jxssx/autoencoder | 2023-06-20T16:40:13.000Z | [
"keras",
"region:us"
] | null | jxssx | null | null | jxssx/autoencoder | 0 | 2 | keras | 2023-06-20T15:05:31 | Данная нейронная сеть восстанавливает входное изображение из "скрытого" состояния. Таким образом, на выходе получается новое изображение.

Алгоритм оптимизации: Adam.
Функция ошибки выглядит так:
def loss(y, z):
y = K.reshape(y, shape = (batch_size, 28*28))
z = K.reshape(z, shape = (batch_size, 28*28))
mse = K.sum(K.square(y - z), axis = 1)
kl = -.5 * K.sum(1 + loss_z_log_var - K.square(loss_z_mean) - K.exp(loss_z_log_var), axis = 1)
return mse
Длина тренировочного и тестового датасетов: 60000 и 10000 соответственно.
Потери в процессе обучения:

| 592 | [
[
-0.01239013671875,
-0.044097900390625,
0.03460693359375,
0.003856658935546875,
-0.0306549072265625,
-0.022003173828125,
0.00577545166015625,
0.00939178466796875,
0.050750732421875,
0.022857666015625,
-0.0660400390625,
-0.041168212890625,
-0.0309600830078125,
... |
Elvis120/95points | 2023-06-20T15:30:22.000Z | [
"keras",
"region:us"
] | null | Elvis120 | null | null | Elvis120/95points | 0 | 2 | keras | 2023-06-20T15:25:38 | ---
library_name: keras
---
# My model for digit recognition
Trained on the mnist dataset | 128 | [
[
-0.00965118408203125,
-0.051849365234375,
0.0159454345703125,
0.003971099853515625,
-0.0562744140625,
0.04150390625,
0.0282440185546875,
0.01212310791015625,
0.06866455078125,
0.028167724609375,
-0.032379150390625,
-0.044464111328125,
-0.054168701171875,
-0.... |
IIC/mdeberta-v3-base-caresA | 2023-06-20T15:54:52.000Z | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"biomedical",
"clinical",
"spanish",
"mdeberta-v3-base",
"es",
"dataset:chizhikchi/CARES",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | IIC | null | null | IIC/mdeberta-v3-base-caresA | 0 | 2 | transformers | 2023-06-20T15:27:49 | ---
language: es
tags:
- biomedical
- clinical
- spanish
- mdeberta-v3-base
license: mit
datasets:
- "chizhikchi/CARES"
metrics:
- f1
model-index:
- name: IIC/mdeberta-v3-base-caresA
results:
- task:
type: multi-label-classification
dataset:
name: Cares Area
type: chizhikchi/CARES
split: test
metrics:
- name: f1
type: f1
value: 0.993
pipeline_tag: text-classification
---
# mdeberta-v3-base-caresA
This model is a finetuned version of mdeberta-v3-base for the Cares Area dataset used in a benchmark in the paper TODO. The model has an F1 of 0.993
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 4e-05 |
| classifier dropout | 0.2 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
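Since the benchmark task is multi-label classification, per-class scores are typically obtained with a sigmoid over the logits; a sketch assuming this repo id and the standard `transformers` API (the example text and the 0.5 threshold are assumptions):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("IIC/mdeberta-v3-base-caresA")
model = AutoModelForSequenceClassification.from_pretrained("IIC/mdeberta-v3-base-caresA")

inputs = tokenizer("Texto clínico de ejemplo.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)        # independent per-label probabilities
predicted = (probs > 0.5).nonzero()  # labels above the assumed 0.5 threshold
```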
## BibTeX entry and citation info
```bibtex
TODO
```
| 1,158 | [
[
-0.0287628173828125,
-0.021942138671875,
0.0428466796875,
0.03265380859375,
-0.04803466796875,
-0.0247344970703125,
0.00926971435546875,
-0.018707275390625,
0.021728515625,
0.03466796875,
-0.058441162109375,
-0.04345703125,
-0.05023193359375,
-0.014953613281... |
CyberTea/neuro5_fashion_mnist | 2023-06-20T20:09:15.000Z | [
"keras",
"region:us"
] | null | CyberTea | null | null | CyberTea/neuro5_fashion_mnist | 0 | 2 | keras | 2023-06-20T15:34:05 | # Распознавание класса изображений на датасете mnist.
# Задача НС
Модель распознаёт к какому классу из 3 (0 - одежда, 1 - обувь, 2 - сумка) относится изображение
## Изображение послойной архитектуры:

## Общее количество обучаемых параметров
Обучаемых параметров: 16,547
## Используемые алгоритмы оптимизации и функция ошибки
Алгоритм оптимизации - `adam`
Функция ошибки - `categorical_crossentropy`
## Размеры тренировочного, валидационного и тестового датасетов:
Тренировочный: 60000
Тестовый: 10000
Валидационный(тестовый): 10000
## Результаты обучения модели: loss и accuracy на всех трёх датасетах:
Train Loss: 0.002967413514852524
Train Accuracy: 0.9993500113487244
Test Loss: 0.016184156760573387
Test Accuracy: 0.9958000183105469
Validation Loss: 0.016184156760573387
Validation Accuracy: 0.9958000183105469
## Results of the program and the network:
 | 934 | [
[
-0.024810791015625,
-0.039154052734375,
0.0188446044921875,
0.0174407958984375,
-0.0394287109375,
0.0019969940185546875,
0.016082763671875,
-0.01303863525390625,
0.03240966796875,
-0.00676727294921875,
-0.038787841796875,
-0.03741455078125,
-0.047821044921875,
... |
IIC/xlm-roberta-large-caresA | 2023-06-20T15:39:00.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"biomedical",
"clinical",
"spanish",
"xlm-roberta-large",
"es",
"dataset:chizhikchi/CARES",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | IIC | null | null | IIC/xlm-roberta-large-caresA | 0 | 2 | transformers | 2023-06-20T15:35:16 | ---
language: es
tags:
- biomedical
- clinical
- spanish
- xlm-roberta-large
license: mit
datasets:
- "chizhikchi/CARES"
metrics:
- f1
model-index:
- name: IIC/xlm-roberta-large-caresA
results:
- task:
type: multi-label-classification
dataset:
name: Cares Area
type: chizhikchi/CARES
split: test
metrics:
- name: f1
type: f1
value: 0.994
pipeline_tag: text-classification
---
# xlm-roberta-large-caresA
This model is a finetuned version of xlm-roberta-large for the Cares Area dataset used in a benchmark in the paper TODO. The model has an F1 of 0.994
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 3e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
| 1,163 | [
[
-0.023193359375,
-0.0478515625,
0.058837890625,
0.01203155517578125,
-0.034454345703125,
-0.04022216796875,
-0.0022430419921875,
-0.0169830322265625,
0.005558013916015625,
0.046173095703125,
-0.054443359375,
-0.042938232421875,
-0.052947998046875,
-0.0090026... |
Elvis120/95point | 2023-06-20T16:05:20.000Z | [
"keras",
"region:us"
] | null | Elvis120 | null | null | Elvis120/95point | 0 | 2 | keras | 2023-06-20T15:37:36 | ---
library_name: keras
---
# My model for recognizing digits and determining the remainder of dividing the digit by 2
# Task description
The goal of this neural network is to determine the remainder of dividing a digit by 2, given an input image from the MNIST dataset.
# Layer-by-layer network architecture

# Total number of trainable parameters
In total, the network has (28*28 + 1) * 128 + (128 + 1) * 1 = 100,609 trainable parameters.
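The arithmetic can be verified with a minimal Keras sketch (the layer layout is an assumption consistent with the parameter count: one Dense(128, relu) hidden layer on a flattened 28x28 input and one sigmoid output unit):
```python
from tensorflow import keras

# Minimal sketch of the described architecture: flatten -> Dense(128, relu) -> Dense(1, sigmoid).
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),   # (784 + 1) * 128 = 100,480 parameters
    keras.layers.Dense(1, activation="sigmoid"),  # (128 + 1) * 1 = 129 parameters
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()  # should report 100,609 trainable parameters in total
```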
# Optimization algorithm and loss function
Optimization algorithm: Adam
Loss function: binary_crossentropy
# Sizes of the training, validation and test datasets
Training dataset size: 48,000 images.
Validation dataset size: 12,000 images.
Test dataset size: 10,000 images.
# Training results
Training set - Loss: 0.01 Accuracy: 0.99
Test set - Loss: 0.04 Accuracy: 0.98 | 910 | [
[
-0.023956298828125,
-0.035552978515625,
0.0216217041015625,
0.01500701904296875,
-0.04949951171875,
0.0302886962890625,
0.0169677734375,
-0.0232086181640625,
0.039459228515625,
0.0024547576904296875,
-0.03863525390625,
-0.03851318359375,
-0.05517578125,
-0.0... |
IIC/BETO_Galen-caresA | 2023-08-02T06:23:15.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"biomedical",
"clinical",
"spanish",
"BETO_Galen",
"es",
"dataset:chizhikchi/CARES",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | IIC | null | null | IIC/BETO_Galen-caresA | 0 | 2 | transformers | 2023-06-20T15:39:02 | ---
language: es
tags:
- biomedical
- clinical
- spanish
- BETO_Galen
license: mit
datasets:
- "chizhikchi/CARES"
metrics:
- f1
model-index:
- name: IIC/BETO_Galen-caresA
results:
- task:
type: multi-label-classification
dataset:
name: Cares Area
type: chizhikchi/CARES
split: test
metrics:
- name: f1
type: f1
value: 0.977
pipeline_tag: text-classification
---
# BETO_Galen-caresA
This model is a fine-tuned version of BETO_Galen for the Cares Area dataset, used in a benchmark in the paper TODO. The model has an F1 of 0.977
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 3e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
| 1,135 | [
[
-0.0149993896484375,
-0.036865234375,
0.0501708984375,
0.01427459716796875,
-0.04315185546875,
-0.039825439453125,
0.01641845703125,
-0.0091552734375,
0.01186370849609375,
0.029998779296875,
-0.044769287109375,
-0.040374755859375,
-0.03558349609375,
-0.02151... |
Yandexxxx/zachet_python | 2023-06-20T17:04:05.000Z | [
"keras",
"region:us"
] | null | Yandexxxx | null | null | Yandexxxx/zachet_python | 0 | 2 | keras | 2023-06-20T16:13:33 | ---
library_name: keras
---
A digit-recognition model that outputs each number modulo 2 (%2), trained on the mnist dataset

The total number of trainable parameters is obtained with .summary and equals 209,826.
.summary prints a summary of the machine-learning model built in the project. It shows the number of layers, the number of neurons in each layer,
the activation functions and other model parameters. This helps determine what data enters the model, what outputs are produced,
which parameters are used and which loss functions are used when training the model.

In this work I use the categorical_crossentropy loss function, which is used for multi-class classification.
As the optimizer I use adam, one of the most popular optimizers for training neural networks.
Since this work uses MNIST, which contains 70,000 handwritten digits (10,000 test and 60,000 training, of which 20% are held out for validation),
the final split is 10,000 test, 12,000 validation and 48,000 training samples.
Below are images showing loss and accuracy on all three datasets.
Accuracy for the validation and training sets

Loss for the validation and training sets

Accuracy and loss for the test set

| 1,443 | [
[
-0.03662109375,
-0.035247802734375,
0.0298614501953125,
0.002532958984375,
-0.033905029296875,
-0.0014867782592773438,
0.0085296630859375,
-0.0175018310546875,
0.037811279296875,
0.00887298583984375,
-0.03216552734375,
-0.049072265625,
-0.038482666015625,
-0... |
Dugoss/qwerty | 2023-06-20T17:30:10.000Z | [
"keras",
"region:us"
] | null | Dugoss | null | null | Dugoss/qwerty | 0 | 2 | keras | 2023-06-20T16:23:31 | We built a model and trained it on most of the digit data, so that the model can be given 28×28-pixel images of digits and output the value of the digit.

The model is built from ordinary fully connected layers with different numbers of units. The relu activation function is used on the input and intermediate layers; the output layer uses a sigmoid activation.

Adam was chosen as the optimizer.
The X_train array contains 60,000 images, and the y_train array contains the same number of corresponding labels. The test data X_test and y_test contain 10,000 elements each.
```
Epoch 1/5
96/96 [==============================] - 43s 429ms/step - loss: 0.1776 - binary_accuracy: 0.9385 - val_loss: 0.0580 - val_binary_accuracy: 0.9812
Epoch 2/5
96/96 [==============================] - 40s 417ms/step - loss: 0.0492 - binary_accuracy: 0.9838 - val_loss: 0.0376 - val_binary_accuracy: 0.9880
Epoch 3/5
96/96 [==============================] - 40s 419ms/step - loss: 0.0370 - binary_accuracy: 0.9881 - val_loss: 0.0347 - val_binary_accuracy: 0.9892
Epoch 4/5
96/96 [==============================] - 41s 423ms/step - loss: 0.0327 - binary_accuracy: 0.9893 - val_loss: 0.0327 - val_binary_accuracy: 0.9896
Epoch 5/5
96/96 [==============================] - 41s 427ms/step - loss: 0.0295 - binary_accuracy: 0.9905 - val_loss: 0.0312 - val_binary_accuracy: 0.9903
```
Training the model for 5 epochs produced a very low loss and high accuracy! | 1,573 | [
[
-0.0382080078125,
-0.023773193359375,
0.033447265625,
0.01009368896484375,
-0.034942626953125,
-0.01114654541015625,
0.004871368408203125,
-0.01132965087890625,
0.0309600830078125,
0.007965087890625,
-0.046600341796875,
-0.0413818359375,
-0.038818359375,
-0.... |
Andrey13rasfasf/task | 2023-06-20T17:08:20.000Z | [
"keras",
"region:us"
] | null | Andrey13rasfasf | null | null | Andrey13rasfasf/task | 0 | 2 | keras | 2023-06-20T16:25:39 | ---
library_name: keras
---
Network characteristics:
Architecture: the autoencoder has two hidden layers; the first has 128 neurons and the second has 64. The output layer has 784 neurons, corresponding to the size of the original MNIST image.
Activation functions: the autoencoder uses the "ReLU" activation for the hidden layers and "sigmoid" for the output layer.
Loss function: the network uses mean squared error (MSE) as the loss, which helps minimize the reconstruction error when restoring the original image from the compressed representation.
Optimization algorithm: the network uses stochastic gradient descent with a small learning rate.
Data size and type: the network processes 28x28 MNIST images, which are grayscale (single-channel).
Training schedule: 10 training epochs with a batch size of 128
Number of neurons and network size: the network has 97,280 trainable parameters; the hidden layers contain 16,512 and 8,256 parameters respectively, and the output layer 50,240 parameters.
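A minimal Keras sketch of the autoencoder as described (assumptions: a flat 784-dimensional input; the per-layer parameter counts quoted above suggest the author's exact layout may have differed):
```python
from tensorflow import keras

# Sketch of the described autoencoder: 784 -> 128 -> 64 -> 784, ReLU hidden layers, sigmoid output.
inp = keras.Input(shape=(784,))
h1 = keras.layers.Dense(128, activation="relu")(inp)
h2 = keras.layers.Dense(64, activation="relu")(h1)
out = keras.layers.Dense(784, activation="sigmoid")(h2)
autoencoder = keras.Model(inp, out)
autoencoder.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01), loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=128)  # per the schedule above
```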
 | 1,087 | [
[
-0.040313720703125,
-0.03118896484375,
0.021636962890625,
0.014862060546875,
-0.037750244140625,
0.00489044189453125,
0.01235198974609375,
-0.0140228271484375,
0.03411865234375,
0.0076751708984375,
-0.0300750732421875,
-0.0570068359375,
-0.03985595703125,
0.... |
Andysoeasy/fashion_detects | 2023-06-20T16:48:53.000Z | [
"keras",
"region:us"
] | null | Andysoeasy | null | null | Andysoeasy/fashion_detects | 0 | 2 | keras | 2023-06-20T16:35:34 | ---
library_name: keras
---
# Image recognition model.
Trained on the fashion_mnist dataset
The neural network predicts the image class and from that concludes which item it is: clothing, footwear or a bag.
Model structure

Total number of trainable parameters - 242,762.
Optimization algorithm - adam
Loss function - sparse_categorical_crossentropy.
Dataset sizes:
- training: (60000, 28, 28) - images, (60000, ) - labels;
- validation: (100, 28, 28) - images, (100, ) - labels;
- test: (10000, 28, 28) - images, (10000, ) - labels.
Training results:
- training: loss: 0.4489, accuracy: 0.8598;
- validation: val_loss: 0.4829, val_accuracy: 0.8535;
- test: loss: 58.6129 - accuracy: 0.6714.
| 811 | [
[
-0.01776123046875,
-0.045013427734375,
0.0172271728515625,
0.00388336181640625,
-0.048492431640625,
0.0184173583984375,
0.01259613037109375,
-0.0154876708984375,
0.05340576171875,
-0.01039886474609375,
-0.06329345703125,
-0.065185546875,
-0.035308837890625,
... |
SaiderNN/Task | 2023-06-20T19:43:33.000Z | [
"keras",
"region:us"
] | null | SaiderNN | null | null | SaiderNN/Task | 0 | 2 | keras | 2023-06-20T16:51:25 | # Image reconstruction model
The network is an autoencoder that takes a 28*28 image as input.
Its task is to compress the image and then reconstruct it.
Total number of trainable parameters: 4,385
Optimization algorithm: Adamax; loss function: mse
Dataset sizes:
training - 48,000 images,
validation - 12,000 images,
test - 10,000 images
Training results after 10 epochs:
(the SSIM metric was used to compute accuracy)
Training dataset: loss - 0.01716545782983303, SSIM - 0.8874326
Validation dataset: loss - 0.017233747988939285, SSIM - 0.8873238
Test dataset: loss - 0.01724238507449627, SSIM - 0.88665247
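Since accuracy here is reported via SSIM, the following sketch shows how such a metric can be computed (an assumption: originals and reconstructions are float tensors in [0, 1] with shape (batch, 28, 28, 1)):
```python
import tensorflow as tf

# Mean structural similarity between originals and reconstructions.
def mean_ssim(originals, reconstructions):
    return tf.reduce_mean(tf.image.ssim(originals, reconstructions, max_val=1.0))
```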

| 735 | [
[
-0.0265655517578125,
-0.04376220703125,
0.034881591796875,
0.00670623779296875,
-0.03765869140625,
0.00614166259765625,
0.0223846435546875,
-0.00934600830078125,
0.0298919677734375,
0.0023441314697265625,
-0.04327392578125,
-0.0516357421875,
-0.045166015625,
... |
Piun/Zachet | 2023-06-20T17:43:53.000Z | [
"keras",
"region:us"
] | null | Piun | null | null | Piun/Zachet | 0 | 2 | keras | 2023-06-20T17:16:33 | # Image recognition model.
Trained on the mnist dataset
The neural network predicts digits, and from the prediction outputs the remainder of dividing the digit by 3.
Model structure

Total number of trainable parameters - 111,146.
Optimization algorithm - adam
Loss function - sparse_categorical_crossentropy.
Dataset sizes:
training: (60000, 28, 28) - images, (60000, ) - labels;
validation: (100, 28, 28) - images, (100, ) - labels;
test: (10000, 28, 28) - images, (10000, ) - labels.
Training results:
training: loss: 0.2079, accuracy: 0.9695;
validation: val_loss: 0.2054, val_accuracy: 0.9690;
test: loss: 14.7035 - accuracy: 0.9470. | 739 | [
[
-0.023895263671875,
-0.0479736328125,
0.0209197998046875,
0.00925445556640625,
-0.0391845703125,
0.019073486328125,
0.00826263427734375,
-0.007457733154296875,
0.056640625,
-0.01079559326171875,
-0.04931640625,
-0.057861328125,
-0.045928955078125,
0.00128936... |
Bobiiii/FinalNumRemindByThree | 2023-06-20T19:45:41.000Z | [
"keras",
"region:us"
] | null | Bobiiii | null | null | Bobiiii/FinalNumRemindByThree | 0 | 2 | keras | 2023-06-20T17:27:02 | # Model description
The model takes digits from the `mnist` dataset, recognizes the number and outputs the remainder of dividing that number by 3.
The model consists of two parts.
The first recognizes the number and passes that value to the second part of the model.
The second divides the recognized number by three.
The output looks like an array of three elements.
The index of the maximum element corresponds to the desired value.
For example: `[0,0,1] - 2`
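A sketch of how the two Functional sub-models can be chained in Keras (the layer sizes are assumptions; the summary below implies a somewhat larger first part):
```python
from tensorflow import keras

# First part (hypothetical layer sizes): image -> 10-way digit distribution.
img_in = keras.Input(shape=(28, 28))
x = keras.layers.Flatten()(img_in)
x = keras.layers.Dense(128, activation="relu")(x)
digit = keras.layers.Dense(10, activation="softmax")(x)
img_to_num = keras.Model(img_in, digit, name="ImgToNum")

# Second part: digit distribution -> remainder mod 3 (3-way softmax).
num_in = keras.Input(shape=(10,))
remainder = keras.layers.Dense(3, activation="softmax")(num_in)
num_to_remainder = keras.Model(num_in, remainder, name="NumToRemainder")

# Chain both parts into the end-to-end model, as in the summary below.
full_in = keras.Input(shape=(28, 28), name="MnistImg")
full_out = num_to_remainder(img_to_num(full_in))
full_model = keras.Model(full_in, full_out, name="ImageToRemainder")
full_model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```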
Example of the model in action:

As we can see, the model handles the task well and predicts the result accurately.
# Model architecture

# Summary
Model: "ImageToRemainder"
| Layer (type) | Output Shape | Param # |
|-----------------------------|------------------|---------|
| MnistImg (InputLayer) | [(None, 28, 28)] | 0 |
| ImgToNum (Functional) | (None, 10) | 124310 |
| NumToRemainder (Functional) | (None, 3) | 155 |
Total params: `124,465`
Trainable params: `124,465`
Non-trainable params: `0`
# Optimization algorithm and loss function
Optimization algorithm: `adam`
Loss function: `categorical_crossentropy`
Validation: `validation_split=0.3`
# Sizes of the training, validation and test datasets
Train shape: `42000`
Validation shape: `18000`
Test shape: `10000`
# Training results: loss and accuracy.
Training history of `accuracy` and `loss` for `train` and `validation`


Post-training check on the `test` data:
- Test loss: `0.07424477487802505`
- Test accuracy: `0.9800999760627747`
| 1,891 | [
[
-0.031829833984375,
-0.042816162109375,
0.0276336669921875,
0.0218658447265625,
-0.03375244140625,
-0.01425933837890625,
0.0098114013671875,
-0.02337646484375,
0.04071044921875,
0.00933074951171875,
-0.052154541015625,
-0.032562255859375,
-0.0445556640625,
-... |
mariabashkeva/Exam | 2023-06-20T19:40:20.000Z | [
"keras",
"region:us"
] | null | mariabashkeva | null | null | mariabashkeva/Exam | 0 | 2 | keras | 2023-06-20T17:38:23 | 1. Description of the task the network performs;
Given the mnist dataset, build an autoencoder that takes an image of a digit as input
and produces the same image as output;
2. Layer-by-layer architecture diagram of the network, showing layer sizes and
activation functions;

3. Total number of trainable parameters;
131457
4. Optimization algorithm and loss function;
adam, mean_squared_error
5. Sizes of the training, validation and test datasets;
Training: 60000
Test: 10000
6. Training results: loss and accuracy on all three datasets.
 | 114,018 | [
[
-0.0673828125,
-0.06207275390625,
0.035736083984375,
-0.0016183853149414062,
-0.01546478271484375,
0.0004055500030517578,
0.0235748291015625,
-0.0281219482421875,
0.05181884765625,
0.036163330078125,
-0.020172119140625,
-0.026611328125,
-0.047332763671875,
0... |
Disskretnost/neuro9_ashion_mnist | 2023-06-20T18:06:29.000Z | [
"keras",
"region:us"
] | null | Disskretnost | null | null | Disskretnost/neuro9_ashion_mnist | 0 | 2 | keras | 2023-06-20T17:47:26 | # Image generation on the fashion_mnist dataset.
# Network task
Generate an image resembling an item from the fashion_mnist set
## Layer-by-layer architecture diagrams:
### Full network:

### Encoder:

## Total number of trainable parameters
Trainable parameters: 54,410
## Optimization algorithm and loss function
Optimization algorithm - `adam`
Loss function - `mse`
## Sizes of the training, validation and test datasets:
Training: 60000
Test: 10000
Validation (test): 10000
## Training results: loss and accuracy on all three datasets:
Train Loss: 0.06076487898826599
Train Accuracy: 0.49122941493988037
Test Loss: 0.06062548980116844
Test Accuracy: 0.4893147945404053
Validation Loss: 0.06062548980116844
Validation Accuracy: 0.4893147945404053
## Program and network output:
 | 914 | [
[
-0.0304107666015625,
-0.032196044921875,
0.0104217529296875,
0.01488494873046875,
-0.051055908203125,
0.00562286376953125,
0.01117706298828125,
-0.0184478759765625,
0.035430908203125,
-0.002124786376953125,
-0.051971435546875,
-0.043792724609375,
-0.042846679687... |
Au3609/Exam | 2023-06-20T19:16:03.000Z | [
"keras",
"region:us"
] | null | Au3609 | null | null | Au3609/Exam | 0 | 2 | keras | 2023-06-20T17:53:51 | Given the mnist dataset, determine the digit from an input image
Total params: 118,282
Optimization algorithm used: Adam. Loss function: sparse categorical cross-entropy

LOSS

ACCURACY
 | 248 | [
[
-0.01529693603515625,
-0.04290771484375,
0.041656494140625,
0.00662994384765625,
-0.060638427734375,
0.005588531494140625,
0.0228729248046875,
0.0181732177734375,
0.05462646484375,
0.02142333984375,
-0.0433349609375,
-0.0701904296875,
-0.0582275390625,
-0.00... |
Aleksandra131325425/zachet_python_3 | 2023-06-20T18:12:33.000Z | [
"keras",
"region:us"
] | null | Aleksandra131325425 | null | null | Aleksandra131325425/zachet_python_3 | 0 | 2 | keras | 2023-06-20T17:55:45 | ---
library_name: keras
---
A digit-recognition model that outputs each number modulo 3 (%3), trained on the mnist dataset

The total number of trainable parameters is 209,826

In this work I used the categorical_crossentropy loss function, which is used for multi-class classification.
As the optimizer I used adam.
Since this work uses MNIST, the split is 10,000 test, 12,000 validation and 48,000 training samples.
Below are images showing loss and accuracy on all three datasets.
Accuracy and loss for the test set

Accuracy and loss for the validation and training sets
 | 772 | [
[
-0.03173828125,
-0.0379638671875,
0.027435302734375,
0.00476837158203125,
-0.036956787109375,
0.01187896728515625,
0.01480865478515625,
-0.00936126708984375,
0.03631591796875,
0.00597381591796875,
-0.0272369384765625,
-0.04718017578125,
-0.048614501953125,
-... |
msproper/PR6 | 2023-06-21T04:36:55.000Z | [
"keras",
"region:us"
] | null | msproper | null | null | msproper/PR6 | 0 | 2 | keras | 2023-06-20T18:07:32 | Given the fashion_mnist dataset and a trained neural network.
They were used to generate an image resembling an item from the fashion_mnist set.
Per the assignment, the weights of the provided network must not be changed during further training.
Optimizer: Adam; loss: mean squared error
Total params: 54,699
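A sketch of keeping the provided network's weights fixed during further training (the file name is an assumption; load whichever model was given with the assignment):
```python
from tensorflow import keras

# Freeze the provided network so its weights stay fixed, per the assignment.
pretrained = keras.models.load_model("pretrained_fashion_mnist.h5")  # hypothetical path
pretrained.trainable = False  # freeze all layers at once
pretrained.compile(optimizer="adam", loss="mse")  # recompile so the freeze takes effect
```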

 | 390 | [
[
-0.027679443359375,
-0.045623779296875,
0.03387451171875,
0.013824462890625,
-0.060699462890625,
0.00931549072265625,
0.0159454345703125,
-0.0228118896484375,
0.058441162109375,
0.0011777877807617188,
-0.05609130859375,
-0.0615234375,
-0.027191162109375,
0.0... |
ChilNik/PR_digits | 2023-06-21T11:23:24.000Z | [
"keras",
"code",
"dataset:mnist",
"region:us"
] | null | ChilNik | null | null | ChilNik/PR_digits | 0 | 2 | keras | 2023-06-20T18:17:48 | ---
datasets:
- mnist
library_name: keras
tags:
- code
---
The model takes an image (in this case from mnist), determines which digit is shown, divides that digit by 2 and outputs the remainder.

Optimizer: Adam
Training dataset size: 60000
Validation dataset size: 6000
Test dataset size: 10000
Training results: Loss: 0.045721635222435, Accuracy: 0.9848999977111816 | 525 | [
[
-0.0170135498046875,
-0.054779052734375,
0.0316162109375,
-0.004955291748046875,
-0.042236328125,
0.01364898681640625,
0.01436614990234375,
-0.020538330078125,
0.0615234375,
0.01043701171875,
-0.055145263671875,
-0.0399169921875,
-0.035125732421875,
0.000208... |
Neitha/fashion_mnist | 2023-06-20T19:16:49.000Z | [
"keras",
"region:us"
] | null | Neitha | null | null | Neitha/fashion_mnist | 0 | 2 | keras | 2023-06-20T18:38:09 | At the stage of connecting the given decoder and encoder, an error occurred that could not be resolved even after a long time.
Code:
```python
input_dec = Input(shape=(49,))
x = input_dec
x = model.layers[1](input_dec)
x = model.layers[2](x)
decoded = Reshape((28, 28, 1))(x)
decoder = keras.Model(input_dec, decoded, name='decoder')
vae = keras.Model(input_img, decoder(encoder), name='vae')
```
Error:
```
Inputs to a layer should be tensors. Got '<keras.engine.functional.Functional object at 0x7f2e04012590>'
(of type <class 'keras.engine.functional.Functional'>) as input for layer 'decoder'.
```
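A possible fix, sketched under the assumption that `encoder`, `decoder` and `input_img` are defined as in the snippet above: `encoder` is a `keras.Model` object, but a model call expects a tensor, so the encoder must first be called on the input.
```python
latent = encoder(input_img)          # a tensor of shape (None, 49), not the Model object itself
reconstructed = decoder(latent)      # a tensor of shape (None, 28, 28, 1)
vae = keras.Model(input_img, reconstructed, name='vae')
```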
1. A generative model is a model that, given input data, creates new data with the characteristics specified during training.
An autoencoder is a model that does not require labeled data. Data first enters the input layer, then passes through the hidden layers,
then leaves the encoder's output layer with a lower dimensionality than the original data. Each layer has its own weights and activation function.
From the encoder, the data goes into the decoder, which reproduces it as closely as possible. The training objective is to teach the encoder to encode, and
the decoder to reconstruct, so that the resulting data differs minimally from the original.
2. 
3. Total number of trainable parameters: 635796
4. Optimizer: adam; loss function:

5. Dataset sizes:
training: 57000, 28, 28, 1
validation: 3000, 28, 28, 1
test: 9000, 28, 28, 1
6. Loss on the datasets:
test loss: 29.885684967041016 train loss: 30.035560607910156 validation loss: 29.61772918701172
Accuracy is not applicable to autoencoders, since they do not classify data into classes but are generative networks. | 1,768 | [
[
-0.037322998046875,
-0.035980224609375,
0.0271148681640625,
0.00832366943359375,
-0.0299530029296875,
-0.01324462890625,
0.0011882781982421875,
-0.007228851318359375,
0.04022216796875,
0.00830078125,
-0.03741455078125,
-0.0491943359375,
-0.052764892578125,
0... |
Rage4/Gasilin_var8 | 2023-06-20T20:05:17.000Z | [
"keras",
"region:us"
] | null | Rage4 | null | null | Rage4/Gasilin_var8 | 0 | 2 | keras | 2023-06-20T19:14:31 | 1. The neural network generates digits resembling those from the mnist dataset.
2. 
3. Total number of trainable parameters: 54160
4. Optimization algorithm and loss function: adam and categorical_crossentropy.
5. Sizes of the training, validation and test datasets: training: 60000, validation: 10000, test: 10000
6. Training results: loss and accuracy on all three datasets: training: loss: 2554.3391, accuracy: 0.7287; validation: loss: 2521.8169, accuracy: 0.7296; test: loss: 2570.7542, accuracy: 0.7292 | 607 | [
[
-0.0145263671875,
-0.04644775390625,
0.034027099609375,
0.0017681121826171875,
-0.032440185546875,
0.0175018310546875,
0.0113983154296875,
-0.02984619140625,
0.044677734375,
-0.0018825531005859375,
-0.051483154296875,
-0.048004150390625,
-0.035369873046875,
... |
pln-fing-udelar/robertuito-HUHU-task1 | 2023-06-22T22:25:41.000Z | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | pln-fing-udelar | null | null | pln-fing-udelar/robertuito-HUHU-task1 | 0 | 2 | transformers | 2023-06-20T20:13:45 | ---
tags:
- generated_from_keras_callback
model-index:
- name: robertuito-HUHU-task1
results: []
widget:
- text: "El español es un idioma muy hablado en el mundo."
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robertuito-HUHU-task1
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the HUHU Shared Task at IberLEF 2023. It was trained on a partition of the train set provided by the organizers.
## Model description
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) for the task of classifying a tweet (considered to be hurtful or conveying prejudice in some way) into humorous or non-humorous.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
| 1,600 | [
[
-0.0302581787109375,
-0.0458984375,
0.021881103515625,
0.0108795166015625,
-0.039794921875,
-0.0214080810546875,
-0.019073486328125,
-0.031585693359375,
0.01526641845703125,
0.0240936279296875,
-0.055755615234375,
-0.0457763671875,
-0.064697265625,
-0.010421... |
emresvd/u203 | 2023-06-20T20:38:17.000Z | [
"keras",
"region:us"
] | null | emresvd | null | null | emresvd/u203 | 0 | 2 | keras | 2023-06-20T20:38:11 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 841 | [
[
-0.037200927734375,
-0.03997802734375,
0.031890869140625,
0.0081634521484375,
-0.043243408203125,
-0.0177154541015625,
0.01097869873046875,
-0.0033969879150390625,
0.0204620361328125,
0.030517578125,
-0.04376220703125,
-0.05120849609375,
-0.040008544921875,
... |
dickreuter/poker-card-classification | 2023-06-20T20:58:10.000Z | [
"keras",
"poker-card-classification",
"pokerbot",
"region:us"
] | null | dickreuter | null | null | dickreuter/poker-card-classification | 1 | 2 | keras | 2023-06-20T20:54:16 | ---
library_name: keras
tags:
- poker-card-classification
- pokerbot
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 0.0010000000474974513 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
| 601 | [
[
-0.03021240234375,
-0.042022705078125,
0.0220947265625,
0.0024738311767578125,
-0.0287017822265625,
-0.020599365234375,
0.0006575584411621094,
-0.0090484619140625,
0.016754150390625,
0.0216217041015625,
-0.034820556640625,
-0.052154541015625,
-0.03778076171875,
... |
akira225/deberta-v3-base-ECE | 2023-06-21T08:46:41.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"deberta-v3-base",
"deberta-v3",
"deberta",
"token-classification",
"emotion",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | token-classification | akira225 | null | null | akira225/deberta-v3-base-ECE | 0 | 2 | transformers | 2023-06-21T02:18:24 | ---
license: apache-2.0
language: en
tags:
- deberta-v3-base
- deberta-v3
- deberta
- token-classification
- emotion
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for DeBERTa-v3-base-ECE
This is [DeBERTa-v3](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli) fine-tuned for the Emotion Cause Extraction (ECE) task.
Given an input text, i.e. a sequence of tokens describing a situation with emotional coloring, the task is to determine which subset of tokens justifies the speaker's emotional state. Formally speaking, it is convenient to view the problem as binary token classification, where a one means that the corresponding token belongs to the desired subset.
## Training
The code used to train this model is available on my [GitHub](https://github.com/akira225/emotion-cause-detection)
## Evaluation
It has the following results on [EmoCause](https://github.com/skywalker023/focused-empathy) and [EmpatheticDialogues](https://github.com/facebookresearch/EmpatheticDialogues):
| Accuracy | Top-1 Recall | Top-3 Recall | Top-5 Recall |
| ------------- | ------------- | ------------- | ------------- |
| 0.59 | 0.249 | 0.623 | 0.806 |
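As a usage illustration (a sketch, not from the original card; the example sentence is invented), the checkpoint can be run through a `transformers` token-classification pipeline, where positively labeled tokens mark the predicted emotion cause:
```python
from transformers import pipeline

# Binary token classification: positive tokens belong to the emotion-cause subset.
ece = pipeline("token-classification", model="akira225/deberta-v3-base-ECE")
print(ece("I finally got the job I had been dreaming about for years."))
```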
[
-0.03265380859375,
-0.044769287109375,
0.04803466796875,
0.02960205078125,
-0.0217132568359375,
-0.0241241455078125,
0.0027065277099609375,
-0.034576416015625,
0.027740478515625,
0.012420654296875,
-0.0574951171875,
-0.053985595703125,
-0.05841064453125,
0.0... |
IIC/bert-base-spanish-wwm-cased-ctebmsp | 2023-07-18T07:10:29.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"biomedical",
"clinical",
"spanish",
"bert-base-spanish-wwm-cased",
"token-classification",
"es",
"dataset:lcampillos/ctebmsp",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | token-classification | IIC | null | null | IIC/bert-base-spanish-wwm-cased-ctebmsp | 0 | 2 | transformers | 2023-06-21T06:46:59 | ---
language: es
tags:
- biomedical
- clinical
- spanish
- bert-base-spanish-wwm-cased
license: cc-by-4.0
datasets:
- "lcampillos/ctebmsp"
metrics:
- f1
model-index:
- name: IIC/bert-base-spanish-wwm-cased-ctebmsp
results:
- task:
type: token-classification
dataset:
name: CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish)
type: lcampillos/ctebmsp
split: test
metrics:
- name: f1
type: f1
value: 0.88
pipeline_tag: token-classification
---
# bert-base-spanish-wwm-cased-ctebmsp
This model is a fine-tuned version of bert-base-spanish-wwm-cased for the CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish) dataset, used in a benchmark in the paper TODO. The model has an F1 of 0.88
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 4e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
| 1,318 | [
[
-0.0283050537109375,
-0.04351806640625,
0.024200439453125,
0.03302001953125,
-0.038055419921875,
-0.0296783447265625,
-0.0153045654296875,
-0.015289306640625,
0.0209808349609375,
0.037445068359375,
-0.054168701171875,
-0.052001953125,
-0.049835205078125,
-0.... |
IIC/mdeberta-v3-base-ctebmsp | 2023-06-21T06:54:01.000Z | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"biomedical",
"clinical",
"spanish",
"mdeberta-v3-base",
"token-classification",
"es",
"dataset:lcampillos/ctebmsp",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | token-classification | IIC | null | null | IIC/mdeberta-v3-base-ctebmsp | 0 | 2 | transformers | 2023-06-21T06:47:50 | ---
language: es
tags:
- biomedical
- clinical
- spanish
- mdeberta-v3-base
license: mit
datasets:
- "lcampillos/ctebmsp"
metrics:
- f1
model-index:
- name: IIC/mdeberta-v3-base-ctebmsp
results:
- task:
type: token-classification
dataset:
name: CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish)
type: lcampillos/ctebmsp
split: test
metrics:
- name: f1
type: f1
value: 0.902
pipeline_tag: token-classification
---
# mdeberta-v3-base-ctebmsp
This model is a fine-tuned version of mdeberta-v3-base for the CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish) dataset, used in a benchmark in the paper TODO. The model has an F1 of 0.902
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 32 |
| learning rate | 4e-05 |
| classifier dropout | 0.2 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
| 1,272 | [
[
-0.0268096923828125,
-0.03466796875,
0.0396728515625,
0.0338134765625,
-0.03662109375,
-0.0221710205078125,
0.003582000732421875,
-0.0087432861328125,
0.027984619140625,
0.046295166015625,
-0.04461669921875,
-0.053009033203125,
-0.049652099609375,
-0.0034847... |
predictia/europe_reanalysis_downscaler_convbaseline | 2023-07-01T03:01:00.000Z | [
"transformers",
"pytorch",
"tensorboard",
"convbilinear",
"climate",
"super-resolution",
"image-to-image",
"es",
"en",
"dataset:openclimatefix/era5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-to-image | predictia | null | null | predictia/europe_reanalysis_downscaler_convbaseline | 0 | 2 | transformers | 2023-06-21T08:01:26 | ---
license: apache-2.0
datasets:
- openclimatefix/era5
language:
- es
- en
metrics:
- mse
library_name: transformers
pipeline_tag: image-to-image
tags:
- climate
- transformers
- super-resolution
---
# Europe Reanalysis Super Resolution
The aim of the project is to create a machine learning (ML) model that can generate high-resolution regional reanalysis data (similar to that produced by CERRA) by downscaling global reanalysis data from ERA5.
This will be accomplished by using state-of-the-art Deep Learning (DL) techniques like U-Net, conditional GAN, and diffusion models (among others). Additionally, an ingestion module will be implemented to assess the possible benefit of using CERRA pseudo-observations as extra predictors. Once the model is designed and trained, a detailed validation framework comes into play.
It combines classical deterministic error metrics with in-depth validations, including time series, maps, spatio-temporal correlations, and computer vision metrics, disaggregated by months, seasons, and geographical regions, to evaluate the effectiveness of the model in reducing errors and representing physical processes. This level of granularity allows for a more comprehensive and accurate assessment, which is critical for ensuring that the model is effective in practice.
Moreover, tools for interpretability of DL models can be used to understand the inner workings and decision-making processes of these complex structures by analyzing the activations of different neurons and the importance of different features in the input data.
This work is funded by [Code for Earth 2023](https://codeforearth.ecmwf.int/) initiative. | 1,673 | [
[
-0.042266845703125,
-0.0447998046875,
0.034759521484375,
-0.026947021484375,
-0.014617919921875,
-0.0086212158203125,
0.00411224365234375,
-0.0511474609375,
0.01444244384765625,
0.0509033203125,
-0.06085205078125,
-0.048431396484375,
-0.029632568359375,
0.02... |
IIC/bert-base-spanish-wwm-cased-distemist | 2023-08-30T07:26:01.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"biomedical",
"clinical",
"spanish",
"bert-base-spanish-wwm-cased",
"token-classification",
"es",
"dataset:bigbio/distemist",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | token-classification | IIC | null | null | IIC/bert-base-spanish-wwm-cased-distemist | 0 | 2 | transformers | 2023-06-21T09:25:32 | ---
language: es
tags:
- biomedical
- clinical
- spanish
- bert-base-spanish-wwm-cased
license: cc-by-4.0
datasets:
- "bigbio/distemist"
metrics:
- f1
model-index:
- name: IIC/bert-base-spanish-wwm-cased-distemist
results:
- task:
type: token-classification
dataset:
name: distemist
type: bigbio/distemist
split: test
metrics:
- name: f1
type: f1
value: 0.801
pipeline_tag: token-classification
---
# bert-base-spanish-wwm-cased-distemist
This model is a fine-tuned version of bert-base-spanish-wwm-cased for the distemist dataset, used in a benchmark in the paper TODO. The model has an F1 of 0.801
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 4e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
| 1,206 | [
[
-0.0478515625,
-0.04052734375,
0.024322509765625,
0.024658203125,
-0.04071044921875,
-0.007358551025390625,
-0.0055084228515625,
-0.01131439208984375,
0.007568359375,
0.01605224609375,
-0.06268310546875,
-0.0404052734375,
-0.053314208984375,
-0.0189666748046... |
pollner/distilhubert-finetuned-ravdess | 2023-06-21T12:36:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:xbgoose/ravdess",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | pollner | null | null | pollner/distilhubert-finetuned-ravdess | 2 | 2 | transformers | 2023-06-21T10:33:05 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xbgoose/ravdess
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-ravdess
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-ravdess
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the RAVDESS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2810
- Accuracy: 0.9236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7599 | 1.0 | 162 | 1.7350 | 0.3264 |
| 1.3271 | 2.0 | 324 | 1.1987 | 0.5972 |
| 0.8845 | 3.0 | 486 | 0.8824 | 0.7639 |
| 0.6083 | 4.0 | 648 | 0.5919 | 0.8403 |
| 0.4952 | 5.0 | 810 | 0.4469 | 0.8611 |
| 0.1386 | 6.0 | 972 | 0.3736 | 0.8681 |
| 0.1028 | 7.0 | 1134 | 0.3645 | 0.8819 |
| 0.053 | 8.0 | 1296 | 0.3079 | 0.9028 |
| 0.0149 | 9.0 | 1458 | 0.2723 | 0.9236 |
| 0.0154 | 10.0 | 1620 | 0.2810 | 0.9236 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.0
- Tokenizers 0.13.3
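A brief usage sketch (not part of the auto-generated card; the audio file name is hypothetical):
```python
from transformers import pipeline

# Classify the emotion expressed in a RAVDESS-style speech recording.
classifier = pipeline("audio-classification", model="pollner/distilhubert-finetuned-ravdess")
print(classifier("speech_sample.wav"))  # hypothetical local file
```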
| 1,975 | [
[
-0.03582763671875,
-0.04412841796875,
0.0049285888671875,
0.00655364990234375,
-0.0171051025390625,
-0.0212860107421875,
-0.0029811859130859375,
-0.0153350830078125,
0.0095977783203125,
0.0210418701171875,
-0.051177978515625,
-0.044281005859375,
-0.05322265625,
... |
dg845/diffusers-ct_imagenet64 | 2023-09-01T07:27:08.000Z | [
"diffusers",
"generative model",
"unconditional image generation",
"arxiv:2303.01469",
"arxiv:1506.03365",
"arxiv:1512.00567",
"license:mit",
"diffusers:ConsistencyModelPipeline",
"region:us"
] | null | dg845 | null | null | dg845/diffusers-ct_imagenet64 | 0 | 2 | diffusers | 2023-06-21T11:08:15 | ---
license: mit
tags:
- generative model
- unconditional image generation
---
Consistency models are a new class of generative models introduced in ["Consistency Models"](https://arxiv.org/abs/2303.01469) ([paper](https://arxiv.org/pdf/2303.01469.pdf), [code](https://github.com/openai/consistency_models)) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.
From the paper abstract:
> Diffusion models have significantly advanced the fields of image, audio, and video generation, but
they depend on an iterative sampling process that causes slow generation. To overcome this limitation,
we propose consistency models, a new family of models that generate high quality samples by directly
mapping noise to data. They support fast one-step generation by design, while still allowing multistep
sampling to trade compute for sample quality. They also support zero-shot data editing, such as image
inpainting, colorization, and super-resolution, without requiring explicit training on these tasks.
Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone
generative models altogether. Through extensive experiments, we demonstrate that they outperform
existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new
state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64 x 64 for one-step generation. When
trained in isolation, consistency models become a new family of generative models that can outperform
existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet
64 x 64 and LSUN 256 x 256.
Intuitively, a consistency model can be thought of as a model which, when evaluated on a noisy image and timestep, returns an output image sample similar to that which would be returned by running a sampling algorithm on a diffusion model.
Consistency models can be parameterized by any neural network whose input has the same dimensionality as its output, such as a U-Net.
More precisely, given a teacher diffusion model and fixed sampler, we can train ("distill") a consistency model such that when it is given a noisy image and its corresponding timestep, the output sample of the consistency model will be close to the output that would result by using the sampler on the diffusion model to produce a sample, starting at the same noisy image and timestep.
The authors call this procedure "consistency distillation (CD)".
Consistency models can also be trained from scratch to generate clean images from a noisy image and timestep, which the authors call "consistency training (CT)".
This model is a `diffusers`-compatible version of the [ct_imagenet64.pt](https://github.com/openai/consistency_models#pre-trained-models) checkpoint from the [original code and model release](https://github.com/openai/consistency_models).
This model was trained on the ImageNet 64x64 dataset using the consistency training (CT) algorithm.
See the [original model card](https://github.com/openai/consistency_models/blob/main/model-card.md) for more information.
## Download
The original PyTorch model checkpoint can be downloaded from the [original code and model release](https://github.com/openai/consistency_models#pre-trained-models).
The `diffusers` pipeline for the `ct_imagenet64` model can be downloaded as follows:
```python
from diffusers import ConsistencyModelPipeline
pipe = ConsistencyModelPipeline.from_pretrained("dg845/diffusers-ct_imagenet64")
```
## Usage
The original model checkpoint can be used with the [original consistency models codebase](https://github.com/openai/consistency_models).
Here is an example of using the `ct_imagenet64` checkpoint with `diffusers`:
```python
import torch
from diffusers import ConsistencyModelPipeline
device = "cuda"
# Load the ct_imagenet64 checkpoint.
model_id_or_path = "dg845/diffusers-ct_imagenet64"
pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)
# Onestep Sampling
image = pipe(num_inference_steps=1).images[0]
image.save("ct_imagenet64_onestep_sample.png")
# Onestep sampling, class-conditional image generation
# ImageNet-64 class label 145 corresponds to king penguins
image = pipe(num_inference_steps=1, class_labels=145).images[0]
image.save("ct_imagenet64_onestep_sample_penguin.png")
# Multistep sampling, class-conditional image generation
# Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo:
# https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L80
image = pipe(num_inference_steps=None, timesteps=[106, 0], class_labels=145).images[0]
image.save("ct_imagenet64_multistep_sample_penguin.png")
```
## Model Details
- **Model type:** Consistency model for unconditional image generation
- **Dataset:** ImageNet 64x64
- **License:** MIT
- **Model Description:** This model performs unconditional image generation. Its main component is a U-Net, which parameterizes the consistency model. This model was trained by the Consistency Model authors.
- **Resources for more information:** [Paper](https://arxiv.org/abs/2303.01469), [GitHub Repository](https://github.com/openai/consistency_models), [Original Model Card](/openai/consistency_models/blob/main/model-card.md)
## Datasets
_Note: This section is taken from the ["Datasets" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#datasets)_.
The models that we are making available have been trained on the [ILSVRC 2012 subset of ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) or on individual categories from [LSUN](https://arxiv.org/abs/1506.03365). Here we outline the characteristics of these datasets that influence the behavior of the models:
**ILSVRC 2012 subset of ImageNet**: This dataset was curated in 2012 and has around a million pictures, each of which belongs to one of 1,000 categories. A significant number of the categories in this dataset are animals, plants, and other naturally occurring objects. Although many photographs include humans, these humans are typically not represented by the class label (for example, the category "Tench, tinca tinca" includes many photographs of individuals holding fish).
**LSUN**: This dataset was collected in 2015 by a combination of human labeling via Amazon Mechanical Turk and automated data labeling. Both classes that we consider have more than a million images. The dataset creators discovered that when assessed by trained experts, the label accuracy was approximately 90% throughout the entire LSUN dataset. The pictures are gathered from the internet, and those in the cat class often follow a "meme" format. Occasionally, people, including faces, appear in these photographs.
## Performance
_Note: This section is taken from the ["Performance" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#performance)_.
These models are intended to generate samples consistent with their training distributions.
This has been measured in terms of FID, Inception Score, Precision, and Recall.
These metrics all rely on the representations of a [pre-trained Inception-V3 model](https://arxiv.org/abs/1512.00567),
which was trained on ImageNet, and so is likely to focus more on the ImageNet classes (such as animals) than on other visual features (such as human faces).
## Intended Use
_Note: This section is taken from the ["Intended Use" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#intended-use)_.
These models are intended to be used for research purposes only. In particular, they can be used as a baseline for generative modeling research, or as a starting point for advancing such research. These models are not intended to be commercially deployed. Additionally, they are not intended to be used to create propaganda or offensive imagery.
## Limitations
_Note: This section is taken from the ["Limitations" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#limitations)_.
These models sometimes produce highly unrealistic outputs, particularly when generating images containing human faces.
This may stem from ImageNet's emphasis on non-human objects.
In consistency distillation and training, minimizing LPIPS results in better sample quality, as evidenced by improved FID and Inception scores. However, it also carries the risk of overestimating model performance, because LPIPS uses a VGG network pre-trained on ImageNet, while FID and Inception scores also rely on convolutional neural networks (the Inception network in particular) pre-trained on the same ImageNet dataset. Although these two convolutional neural networks do not share the same architecture and we extract latents from them in substantially different ways, knowledge leakage is still plausible which can undermine the fidelity of FID and Inception scores.
Because ImageNet and LSUN contain images from the internet, they include photos of real people, and the model may have memorized some of the information contained in these photos. However, these images are already publicly available, and existing generative models trained on ImageNet have not demonstrated significant leakage of this information. | 9,397 | [
[
-0.02587890625,
-0.0277252197265625,
0.00946044921875,
0.0017747879028320312,
-0.01006317138671875,
-0.0538330078125,
-0.004940032958984375,
-0.043792724609375,
-0.00530242919921875,
0.0374755859375,
-0.011871337890625,
-0.0236663818359375,
-0.0562744140625,
... |
Hollway/gpt2_finetune | 2023-06-29T20:24:47.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"zh",
"en",
"dataset:TigerResearch/tigerbot-zhihu-zh-10k",
"dataset:TigerResearch/tigerbot-book-qa-1k",
"dataset:TigerResearch/sft_zh",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Hollway | null | null | Hollway/gpt2_finetune | 1 | 2 | transformers | 2023-06-21T11:34:27 | ---
language:
- zh
- en
license: mit
datasets:
- TigerResearch/tigerbot-zhihu-zh-10k
- TigerResearch/tigerbot-book-qa-1k
- TigerResearch/sft_zh
pipeline_tag: text-generation
---
# Chinese Text Generation
## 1 Usage
### 1.1 Initialization
`!pip install transformers[torch]`
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = GPT2Tokenizer.from_pretrained('Hollway/gpt2_finetune')
model = GPT2LMHeadModel.from_pretrained('Hollway/gpt2_finetune').to(device)
```
### 1.2 Inference (basic continuation prediction)
```
def generate(text): # basic continuation (next-text) prediction task
inputs = tokenizer(text, return_tensors="pt").to(device)
with torch.no_grad():
tokens = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
pad_token_id=tokenizer.pad_token_id,
)
return tokenizer.decode(tokens[0], skip_special_tokens=True)
generate("派蒙是应急食品,但是不能吃派蒙,请分析不能吃的原因。")
```
### 1.3 Chatbot (multi-turn chat mode)
```
def chat(turns=5): # multi-turn dialogue mode, implemented via string concatenation
for step in range(turns):
query = input(">> 用户:")
new_user_input_ids = tokenizer.encode(
f"用户: {query}\n\n系统: ", return_tensors='pt').to(device)
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
base_tokens = bot_input_ids.shape[-1]
chat_history_ids = model.generate(
bot_input_ids,
            max_length=base_tokens+64, # maximum number of tokens per single reply
do_sample=True,
pad_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(
chat_history_ids[:, bot_input_ids.shape[-1]:][0],
skip_special_tokens=True)
print(f"系统: {response}\n")
chat(turns=5)
``` | 1,789 | [
[
-0.0174560546875,
-0.07305908203125,
0.0090789794921875,
0.0276031494140625,
-0.0197601318359375,
-0.0181732177734375,
-0.014617919921875,
-0.0085296630859375,
-0.001697540283203125,
0.02264404296875,
-0.038604736328125,
-0.039764404296875,
-0.04193115234375,
... |
IIC/mdeberta-v3-base-livingner1 | 2023-06-21T15:28:01.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"biomedical",
"clinical",
"spanish",
"mdeberta-v3-base",
"token-classification",
"es",
"dataset:IIC/livingner1",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | token-classification | IIC | null | null | IIC/mdeberta-v3-base-livingner1 | 0 | 2 | transformers | 2023-06-21T15:06:45 | ---
language: es
tags:
- biomedical
- clinical
- spanish
- mdeberta-v3-base
license: mit
datasets:
- "IIC/livingner1"
metrics:
- f1
model-index:
- name: IIC/mdeberta-v3-base-livingner1
results:
- task:
type: token-classification
dataset:
name: livingner1
type: IIC/livingner1
split: test
metrics:
- name: f1
type: f1
value: 0.953
pipeline_tag: token-classification
---
# mdeberta-v3-base-livingner1
This model is a fine-tuned version of mdeberta-v3-base for the livingner1 dataset, used in a benchmark in the paper TODO. The model has an F1 of 0.953
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 4e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
| 1,158 | [
[
-0.039337158203125,
-0.0406494140625,
0.0229034423828125,
0.027191162109375,
-0.0390625,
-0.0157318115234375,
0.00966644287109375,
-0.01081085205078125,
0.02105712890625,
0.0289306640625,
-0.0572509765625,
-0.027191162109375,
-0.036895751953125,
-0.008705139... |
IIC/bert-base-spanish-wwm-cased-meddocan | 2023-06-21T15:41:33.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"biomedical",
"clinical",
"spanish",
"bert-base-spanish-wwm-cased",
"token-classification",
"es",
"dataset:bigbio/meddocan",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | token-classification | IIC | null | null | IIC/bert-base-spanish-wwm-cased-meddocan | 0 | 2 | transformers | 2023-06-21T15:40:42 | ---
language: es
tags:
- biomedical
- clinical
- spanish
- bert-base-spanish-wwm-cased
license: cc-by-4.0
datasets:
- "bigbio/meddocan"
metrics:
- f1
model-index:
- name: IIC/bert-base-spanish-wwm-cased-meddocan
results:
- task:
type: token-classification
dataset:
name: meddocan
type: bigbio/meddocan
split: test
metrics:
- name: f1
type: f1
value: 0.957
pipeline_tag: token-classification
---
# bert-base-spanish-wwm-cased-meddocan
This model is a fine-tuned version of bert-base-spanish-wwm-cased for the meddocan dataset, used in a benchmark in the paper TODO. The model has an F1 of 0.957
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 3e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
| 1,202 | [
[
-0.045440673828125,
-0.039825439453125,
0.032012939453125,
0.019622802734375,
-0.03973388671875,
-0.02142333984375,
-0.0186309814453125,
-0.015777587890625,
0.01186370849609375,
0.041015625,
-0.058563232421875,
-0.045684814453125,
-0.0404052734375,
-0.014907... |
IIC/mdeberta-v3-base-pharmaconer | 2023-06-21T16:11:42.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"biomedical",
"clinical",
"spanish",
"mdeberta-v3-base",
"token-classification",
"es",
"dataset:PlanTL-GOB-ES/pharmaconer",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | token-classification | IIC | null | null | IIC/mdeberta-v3-base-pharmaconer | 0 | 2 | transformers | 2023-06-21T16:09:43 | ---
language: es
tags:
- biomedical
- clinical
- spanish
- mdeberta-v3-base
license: mit
datasets:
- "PlanTL-GOB-ES/pharmaconer"
metrics:
- f1
model-index:
- name: IIC/mdeberta-v3-base-pharmaconer
results:
- task:
type: token-classification
dataset:
name: pharmaconer
type: PlanTL-GOB-ES/pharmaconer
split: test
metrics:
- name: f1
type: f1
value: 0.922
pipeline_tag: token-classification
widget:
- text: "Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente), atribuidos al exceso de suplementación de vitamina D."
- text: " Por el hallazgo de múltiples fracturas por estrés, se procedió a estudio en nuestras consultas, realizándose análisis con función renal, calcio sérico y urinario, calcio iónico, magnesio y PTH, que fueron normales."
- text: "Se solicitó una analítica que incluía hemograma, bioquímica, anticuerpos antinucleares (ANA) y serologías, examen de orina, así como biopsia de la lesión. Los resultados fueron normales, con ANA, anti-Sm, anti-RNP, anti-SSA, anti-SSB, anti-Jo1 y anti-Scl70 negativos."
---
# mdeberta-v3-base-pharmaconer
This model is a finetuned version of mdeberta-v3-base for the pharmaconer dataset used in a benchmark in the paper TODO. The model has an F1 of 0.922.
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 32 |
| learning rate | 1e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
| 1,884 | [
[
-0.0209503173828125,
-0.0303802490234375,
0.044708251953125,
0.01153564453125,
-0.0287017822265625,
-0.02435302734375,
0.007640838623046875,
-0.00531768798828125,
0.01267242431640625,
0.046112060546875,
-0.037109375,
-0.0380859375,
-0.047119140625,
-0.002601... |
IIC/xlm-roberta-large-pharmaconer | 2023-06-26T07:27:29.000Z | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"biomedical",
"clinical",
"spanish",
"xlm-roberta-large",
"token-classification",
"es",
"dataset:PlanTL-GOB-ES/pharmaconer",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | token-classification | IIC | null | null | IIC/xlm-roberta-large-pharmaconer | 0 | 2 | transformers | 2023-06-21T16:15:06 | ---
language: es
tags:
- biomedical
- clinical
- spanish
- xlm-roberta-large
license: mit
datasets:
- "PlanTL-GOB-ES/pharmaconer"
metrics:
- f1
model-index:
- name: IIC/xlm-roberta-large-pharmaconer
results:
- task:
type: token-classification
dataset:
name: pharmaconer
type: PlanTL-GOB-ES/pharmaconer
split: test
metrics:
- name: f1
type: f1
value: 0.924
pipeline_tag: token-classification
widget:
- text: "Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente), atribuidos al exceso de suplementación de vitamina D."
- text: " Por el hallazgo de múltiples fracturas por estrés, se procedió a estudio en nuestras consultas, realizándose análisis con función renal, calcio sérico y urinario, calcio iónico, magnesio y PTH, que fueron normales."
- text: "Se solicitó una analítica que incluía hemograma, bioquímica, anticuerpos antinucleares (ANA) y serologías, examen de orina, así como biopsia de la lesión. Los resultados fueron normales, con ANA, anti-Sm, anti-RNP, anti-SSA, anti-SSB, anti-Jo1 y anti-Scl70 negativos."
---
# xlm-roberta-large-pharmaconer
This model is a finetuned version of xlm-roberta-large for the pharmaconer dataset used in a benchmark in the paper TODO. The model has an F1 of 0.924.
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 64 |
| learning rate | 3e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
| 1,888 | [
[
-0.01580810546875,
-0.03717041015625,
0.048553466796875,
-0.007904052734375,
-0.0264434814453125,
-0.030059814453125,
-0.0160980224609375,
-0.00815582275390625,
0.0011301040649414062,
0.04803466796875,
-0.032501220703125,
-0.0408935546875,
-0.061309814453125,
... |
UnHolyTrinity/eng_quotes_model | 2023-06-22T05:20:22.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | UnHolyTrinity | null | null | UnHolyTrinity/eng_quotes_model | 0 | 2 | transformers | 2023-06-21T16:53:44 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: eng_quotes_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng_quotes_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
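A sketch of how these settings map onto `transformers.TrainingArguments` (the output directory is an assumption; the Adam betas and epsilon above match the library defaults):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is assumed
training_args = TrainingArguments(
    output_dir="eng_quotes_model",
    learning_rate=1e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```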
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 201 | 3.3414 |
| No log | 2.0 | 402 | 3.3122 |
| 3.4251 | 3.0 | 603 | 3.3079 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| 1,321 | [
[
-0.02484130859375,
-0.047271728515625,
0.0207061767578125,
0.00786590576171875,
-0.0287628173828125,
-0.03778076171875,
-0.0037288665771484375,
-0.01763916015625,
-0.0109710693359375,
0.023193359375,
-0.0513916015625,
-0.040740966796875,
-0.050506591796875,
... |
koreadaeil/my_awesome_model | 2023-06-24T14:15:51.000Z | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | koreadaeil | null | null | koreadaeil/my_awesome_model | 0 | 2 | transformers | 2023-06-21T19:11:38 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: rotten_tomatoes
type: rotten_tomatoes
config: default
split: train[:3000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the rotten_tomatoes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0100
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 80
- eval_batch_size: 80
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 30 | 0.0223 | 1.0 |
| No log | 2.0 | 60 | 0.0100 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,714 | [
[
-0.0318603515625,
-0.0477294921875,
0.0234375,
0.00102996826171875,
-0.02056884765625,
-0.019866943359375,
-0.0030040740966796875,
-0.00933074951171875,
0.0120391845703125,
0.0250244140625,
-0.0457763671875,
-0.050018310546875,
-0.05767822265625,
-0.01335906... |
agustinl/ppo-LunarLander-v2 | 2023-07-19T01:52:39.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | agustinl | null | null | agustinl/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-06-21T22:38:34 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.93 +/- 12.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual `huggingface_sb3` upload convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed from the standard push_to_hub naming for this repo
checkpoint = load_from_hub("agustinl/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
aroot/mbart-finetuned-eng-guj | 2023-06-30T14:30:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | aroot | null | null | aroot/mbart-finetuned-eng-guj | 0 | 2 | transformers | 2023-06-22T00:44:05 | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-guj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-guj
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5996
- Bleu: 1.8882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
| 1,178 | [
[
-0.043853759765625,
-0.053375244140625,
0.01464080810546875,
0.0159759521484375,
-0.0269775390625,
-0.035919189453125,
-0.0175933837890625,
-0.01094818115234375,
0.0095977783203125,
0.0235748291015625,
-0.054595947265625,
-0.03131103515625,
-0.044647216796875,
... |
NanoIsTrash/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-22T05:11:35.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | NanoIsTrash | null | null | NanoIsTrash/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-22T05:10:57 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 670.00 +/- 224.01
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NanoIsTrash -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NanoIsTrash -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga NanoIsTrash
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,768 | [
[
-0.04498291015625,
-0.040679931640625,
0.020263671875,
0.0227813720703125,
-0.01092529296875,
-0.0161590576171875,
0.0098114013671875,
-0.0125885009765625,
0.01215362548828125,
0.0208740234375,
-0.0716552734375,
-0.033294677734375,
-0.02508544921875,
-0.0026... |
rudzhehdehd/To_my_Love | 2023-06-22T08:40:50.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | rudzhehdehd | null | null | rudzhehdehd/To_my_Love | 0 | 2 | transformers | 2023-06-22T06:42:45 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: To_my_Love
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# To_my_Love
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2757 | 1.0 | 860 | 1.8783 |
| 1.8982 | 2.0 | 1720 | 1.7536 |
| 1.8221 | 3.0 | 2580 | 1.7184 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| 1,328 | [
[
-0.03497314453125,
-0.0416259765625,
0.01445770263671875,
0.018157958984375,
-0.02850341796875,
-0.03533935546875,
-0.00348663330078125,
-0.00777435302734375,
-0.0004673004150390625,
0.0174102783203125,
-0.053558349609375,
-0.0380859375,
-0.05487060546875,
-... |
bandrocks/my_awesome_weeknd_clm-model | 2023-06-22T08:32:11.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | bandrocks | null | null | bandrocks/my_awesome_weeknd_clm-model | 0 | 2 | transformers | 2023-06-22T07:47:58 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: my_awesome_weeknd_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_weeknd_clm-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6655 | 1.0 | 821 | 1.2559 |
| 1.3353 | 2.0 | 1642 | 1.1820 |
| 1.2908 | 3.0 | 2463 | 1.1618 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| 1,343 | [
[
-0.040740966796875,
-0.0440673828125,
0.0221099853515625,
0.009002685546875,
-0.029296875,
-0.031646728515625,
-0.00452423095703125,
-0.0228729248046875,
0.00955963134765625,
0.0276641845703125,
-0.061126708984375,
-0.050323486328125,
-0.047607421875,
-0.012... |
madiltalay/layoutlmv2-base-uncased_finetuned_docvqa | 2023-06-26T10:11:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | document-question-answering | madiltalay | null | null | madiltalay/layoutlmv2-base-uncased_finetuned_docvqa | 0 | 2 | transformers | 2023-06-22T11:36:16 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-base-uncased_finetuned_docvqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased_finetuned_docvqa
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.326 | 0.22 | 50 | 4.4949 |
| 4.292 | 0.44 | 100 | 3.9510 |
| 3.9419 | 0.66 | 150 | 3.9100 |
| 3.6895 | 0.88 | 200 | 3.5035 |
| 3.4052 | 1.11 | 250 | 3.4030 |
| 3.1405 | 1.33 | 300 | 3.2100 |
| 2.8966 | 1.55 | 350 | 2.9803 |
| 2.7874 | 1.77 | 400 | 2.7811 |
| 2.5385 | 1.99 | 450 | 2.4748 |
| 2.1532 | 2.21 | 500 | 2.5843 |
| 1.994 | 2.43 | 550 | 2.5459 |
| 1.8322 | 2.65 | 600 | 2.2316 |
| 1.7005 | 2.88 | 650 | 2.1888 |
| 1.4758 | 3.1 | 700 | 2.4578 |
| 1.3543 | 3.32 | 750 | 2.3368 |
| 1.1939 | 3.54 | 800 | 2.9737 |
| 1.294 | 3.76 | 850 | 2.4907 |
| 1.4519 | 3.98 | 900 | 1.9276 |
| 1.0517 | 4.2 | 950 | 2.9981 |
| 0.8171 | 4.42 | 1000 | 2.5618 |
| 1.0456 | 4.65 | 1050 | 2.3139 |
| 0.9222 | 4.87 | 1100 | 2.4243 |
| 0.758 | 5.09 | 1150 | 2.8167 |
| 0.7203 | 5.31 | 1200 | 2.9342 |
| 0.6748 | 5.53 | 1250 | 2.6396 |
| 0.6821 | 5.75 | 1300 | 2.5629 |
| 0.5898 | 5.97 | 1350 | 3.0276 |
| 0.3135 | 6.19 | 1400 | 3.2611 |
| 0.4407 | 6.42 | 1450 | 3.1793 |
| 0.5303 | 6.64 | 1500 | 3.0511 |
| 0.5294 | 6.86 | 1550 | 3.1106 |
| 0.3149 | 7.08 | 1600 | 3.2933 |
| 0.199 | 7.3 | 1650 | 3.4207 |
| 0.164 | 7.52 | 1700 | 3.4379 |
| 0.5258 | 7.74 | 1750 | 3.1339 |
| 0.336 | 7.96 | 1800 | 3.2394 |
| 0.3294 | 8.19 | 1850 | 3.0956 |
| 0.1587 | 8.41 | 1900 | 3.4282 |
| 0.2375 | 8.63 | 1950 | 3.3718 |
| 0.117 | 8.85 | 2000 | 3.5646 |
| 0.2873 | 9.07 | 2050 | 3.5213 |
| 0.2206 | 9.29 | 2100 | 3.5387 |
| 0.2503 | 9.51 | 2150 | 3.5683 |
| 0.0763 | 9.73 | 2200 | 3.6119 |
| 0.1344 | 9.96 | 2250 | 3.6030 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 3,580 | [
[
-0.037353515625,
-0.031585693359375,
0.0155792236328125,
0.0126953125,
-0.00888824462890625,
-0.016387939453125,
0.00897979736328125,
-0.0016241073608398438,
0.0269775390625,
0.0275115966796875,
-0.045440673828125,
-0.04888916015625,
-0.0421142578125,
-0.021... |
HarshV9/finetuning-sentiment-model-8-labels | 2023-06-23T12:21:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | HarshV9 | null | null | HarshV9/finetuning-sentiment-model-8-labels | 0 | 2 | transformers | 2023-06-22T16:07:12 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-8-labels
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-8-labels
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1854
- eval_accuracy: 0.5598
- eval_f1: 0.5598
- eval_runtime: 190.081
- eval_samples_per_second: 198.205
- eval_steps_per_second: 6.197
- epoch: 2.88
- step: 13550
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu116
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,341 | [
[
-0.043914794921875,
-0.05303955078125,
0.0141754150390625,
0.022491455078125,
-0.043701171875,
-0.0257415771484375,
-0.0219268798828125,
-0.0079193115234375,
0.00974273681640625,
0.019378662109375,
-0.047576904296875,
-0.0543212890625,
-0.055511474609375,
-0... |
bluemoonwj/movie_title_predictor | 2023-06-22T17:53:17.000Z | [
"transformers",
"pytorch",
"tensorboard",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | bluemoonwj | null | null | bluemoonwj/movie_title_predictor | 0 | 2 | transformers | 2023-06-22T16:58:53 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: movie_title_predictor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# movie_title_predictor
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0373 | 1.0 | 821 | 1.7633 |
| 1.7272 | 2.0 | 1642 | 1.6852 |
| 1.6767 | 3.0 | 2463 | 1.6553 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| 1,359 | [
[
-0.0265045166015625,
-0.04632568359375,
0.017242431640625,
-0.0027256011962890625,
-0.0202789306640625,
-0.023406982421875,
0.005184173583984375,
-0.00901031494140625,
0.0140380859375,
0.034759521484375,
-0.06597900390625,
-0.03857421875,
-0.04559326171875,
... |
battelle/FupBERT | 2023-09-05T16:43:16.000Z | [
"transformers",
"pytorch",
"FupBERT",
"feature-extraction",
"custom_code",
"license:gpl-2.0",
"has_space",
"region:us"
] | feature-extraction | battelle | null | null | battelle/FupBERT | 0 | 2 | transformers | 2023-06-22T17:47:56 | ---
license: gpl-2.0
---
# Model Card for FupBERT
A descriptor-free approach to predicting the fraction unbound in human plasma.
## Model Details
### Model Description
Chemical-specific parameters are either measured _in vitro_ or estimated using quantitative
structure–activity relationship (QSAR) models. The existing body of QSAR work relies on extracting a
set of descriptors or fingerprints, subset selection, and training a machine learning model. In this work,
we used a state-of-the-art natural language processing model, Bidirectional Encoder Representations from Transformers
(BERT), that allowed us to circumvent the need for calculation of these chemical descriptors. In this approach,
simplified molecular-input line-entry system (SMILES) strings were embedded in a high dimensional space using a
two-stage training approach. The model was first pre-trained on a masked SMILES token task and then fine-tuned on
a QSAR prediction task. The pre-training task learned meaningful high dimensional embeddings based upon the relationships
between the chemical tokens in the SMILES strings derived from the "in-stock" portion of the ZINC 15 dataset – a
large dataset of commercially available chemicals. The fine-tuning task then perturbed the pre-trained embeddings
to facilitate prediction of a specific QSAR endpoint of interest. The power of this model stems from the ability
to reuse the pre-trained model for multiple different fine-tuning tasks, reducing the computational burden of developing
multiple models for different endpoints. We used our framework to develop a predictive model for fraction unbound
in human plasma (fup). This approach is flexible, requires minimum domain expertise, and can be generalized for
other parameters of interest for rapid and accurate estimation of absorption, distribution, metabolism, excretion, and toxicity (ADMET).
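As a rough illustration of the two-stage recipe (not the authors' code — the tokenizer, vocabulary size, and training loops are assumptions):
```python
from transformers import BertConfig, BertForMaskedLM, BertForSequenceClassification

# Stage 1: pre-train on the masked SMILES-token task
config = BertConfig(vocab_size=1024)  # SMILES-token vocabulary size is assumed
mlm = BertForMaskedLM(config)
# ... train `mlm` on masked SMILES tokens from the ZINC 15 "in-stock" set ...

# Stage 2: fine-tune the shared encoder as a single-output regressor for fup
config.num_labels = 1  # one continuous target -> regression head (MSE loss)
qsar = BertForSequenceClassification(config)
qsar.bert.load_state_dict(mlm.bert.state_dict(), strict=False)  # carry over weights
```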
- **Developed by:** Michael Riedl, Sayak Mukherjee, and Mitch Gauthier
- **Model type:** BERT
### Model Sources
<!-- Provide the basic links for the model. -->
- **Paper:** Riedl, Michael, Sayak Mukherjee, and Mitch Gauthier. "Descriptor-Free Deep Learning QSAR Model for the Fraction Unbound in Human Plasma." Molecular Pharmaceutics (2023).
- **Demo:** https://huggingface.co/spaces/battelle/FupBERT_Space
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{riedl2023descriptor,
title={Descriptor-Free Deep Learning QSAR Model for the Fraction Unbound in Human Plasma},
author={Riedl, Michael and Mukherjee, Sayak and Gauthier, Mitch},
journal={Molecular Pharmaceutics},
publisher={ACS Publications}
}
```
## Model Card Contact
riedl@battelle.org
| 2,774 | [
[
-0.034423828125,
-0.016448974609375,
0.024871826171875,
-0.01392364501953125,
-0.030364990234375,
0.007251739501953125,
0.01468658447265625,
-0.02777099609375,
-0.0023193359375,
0.04425048828125,
-0.03814697265625,
-0.04150390625,
-0.038787841796875,
0.00235... |
valerio-unifei/ppo-Huggy | 2023-06-22T18:44:53.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | valerio-unifei | null | null | valerio-unifei/ppo-Huggy | 0 | 2 | ml-agents | 2023-06-22T18:44:46 | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: valerio-unifei/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,324 | [
[
-0.04150390625,
-0.046142578125,
0.01690673828125,
0.004009246826171875,
-0.01561737060546875,
0.0162353515625,
0.0140533447265625,
-0.02294921875,
0.0419921875,
0.0341796875,
-0.048614501953125,
-0.046234130859375,
-0.03009033203125,
-0.017486572265625,
... |
gaiamolinaro/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-23T04:37:52.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | gaiamolinaro | null | null | gaiamolinaro/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-23T04:37:14 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 676.50 +/- 216.14
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gaiamolinaro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gaiamolinaro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga gaiamolinaro
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,771 | [
[
-0.043426513671875,
-0.039306640625,
0.019378662109375,
0.0257415771484375,
-0.01125335693359375,
-0.017608642578125,
0.01020050048828125,
-0.0135345458984375,
0.0129547119140625,
0.022064208984375,
-0.0718994140625,
-0.034271240234375,
-0.02508544921875,
-0... |
rahmas/abusive_content_identification | 2023-06-23T07:54:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | rahmas | null | null | rahmas/abusive_content_identification | 0 | 2 | transformers | 2023-06-23T07:47:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: abusive_content_identification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# abusive_content_identification
This model is a fine-tuned version of [indolem/indobertweet-base-uncased](https://huggingface.co/indolem/indobertweet-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0073
- Accuracy: 0.9982
- Precision: 0.9963
- Recall: 1.0
- F1: 0.9981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.0666 | 1.0 | 547 | 0.0149 | 0.9973 | 0.9944 | 1.0 | 0.9972 |
| 0.0086 | 2.0 | 1094 | 0.0073 | 0.9982 | 0.9963 | 1.0 | 0.9981 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,637 | [
[
-0.0283966064453125,
-0.035430908203125,
0.0051727294921875,
0.022918701171875,
-0.0305023193359375,
-0.0293426513671875,
-0.01458740234375,
-0.016571044921875,
0.01390838623046875,
0.021026611328125,
-0.046783447265625,
-0.04559326171875,
-0.0499267578125,
... |
elsliew/autotrain-skillsync2-69166137722 | 2023-06-23T10:58:06.000Z | [
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:elsliew/autotrain-data-skillsync2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | elsliew | null | null | elsliew/autotrain-skillsync2-69166137722 | 0 | 2 | transformers | 2023-06-23T10:56:13 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- elsliew/autotrain-data-skillsync2
co2_eq_emissions:
emissions: 0.3593924337756782
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 69166137722
- CO2 Emissions (in grams): 0.3594
## Validation Metrics
- Loss: 0.884
- Accuracy: 0.685
- Macro F1: 0.643
- Micro F1: 0.685
- Weighted F1: 0.677
- Macro Precision: 0.677
- Micro Precision: 0.685
- Weighted Precision: 0.689
- Macro Recall: 0.642
- Micro Recall: 0.685
- Weighted Recall: 0.685
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/elsliew/autotrain-skillsync2-69166137722
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("elsliew/autotrain-skillsync2-69166137722", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("elsliew/autotrain-skillsync2-69166137722", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,282 | [
[
-0.027923583984375,
-0.0222320556640625,
0.005931854248046875,
0.0108795166015625,
0.005779266357421875,
0.00817108154296875,
-0.0007505416870117188,
-0.0218963623046875,
0.001583099365234375,
0.00467681884765625,
-0.055267333984375,
-0.034515380859375,
-0.05432... |
heon98/my_awesome_pokemon_model | 2023-06-23T13:50:10.000Z | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:pokemon-classification",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | heon98 | null | null | heon98/my_awesome_pokemon_model | 0 | 2 | transformers | 2023-06-23T11:40:02 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pokemon-classification
metrics:
- accuracy
model-index:
- name: my_awesome_pokemon_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: pokemon-classification
type: pokemon-classification
config: full
split: train
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.5852156057494866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_pokemon_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the pokemon-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3447
- Accuracy: 0.5852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.7732 | 1.0 | 61 | 4.7448 | 0.1992 |
| 4.443 | 2.0 | 122 | 4.4606 | 0.4897 |
| 4.2705 | 3.0 | 183 | 4.3447 | 0.5852 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,942 | [
[
-0.0304412841796875,
-0.0428466796875,
0.01085662841796875,
0.00885772705078125,
-0.023040771484375,
-0.03143310546875,
-0.004604339599609375,
-0.0190582275390625,
0.02777099609375,
0.018951416015625,
-0.043792724609375,
-0.042205810546875,
-0.04400634765625,
... |
4i-ai/BERT_disfluency_cls | 2023-08-25T08:09:58.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"disfluency identification",
"en",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | 4i-ai | null | null | 4i-ai/BERT_disfluency_cls | 0 | 2 | transformers | 2023-06-23T14:27:16 | ---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- disfluency identification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This BERT model classifies a dialogue system's user utterance as fluent or disfluent.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** 4i Intelligent Insights
- **Model type:** BERT base cased
- **Language(s) (NLP):** English
- **License:** cc-by-nc-sa-4.0
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** http://research.4i.ai/code/BERT_disfluency_cls
- **Paper:** https://aclanthology.org/2023.findings-acl.728/
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is intended to be used for classifying English utterances of users interacting with a dialogue system. In our evaluation, the user utterances were speech transcriptions.
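A minimal usage sketch (this assumes the checkpoint exposes a standard sequence-classification head; the label names come from the model's config):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="4i-ai/BERT_disfluency_cls")
# A disfluent utterance with a filler and a self-repair
print(clf("I want to uh I mean I would like to book a flight"))
```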
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
This model has not been evaluated for use on machine-generated text.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model may not be accurate with non-native English speakers.
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model has been fine-tuned on the Fisher English Corpus:
http://github.com/joshua-decoder/fisher-callhome-corpus | 1,767 | [
[
-0.017547607421875,
-0.0447998046875,
0.0112762451171875,
0.0231781005859375,
-0.01154327392578125,
0.00206756591796875,
-0.0006957054138183594,
-0.047454833984375,
0.0023097991943359375,
0.03424072265625,
-0.0428466796875,
-0.048431396484375,
-0.038543701171875... |
michaelfeil/ct2fast-mpt-30b | 2023-06-28T22:14:21.000Z | [
"transformers",
"mpt",
"text-generation",
"ctranslate2",
"int8",
"float16",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"custom_code",
"dataset:allenai/c4",
"dataset:mc4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack-dedup",
"dataset:allenai/... | text-generation | michaelfeil | null | null | michaelfeil/ct2fast-mpt-30b | 2 | 2 | transformers | 2023-06-23T15:55:16 | ---
license: apache-2.0
tags:
- ctranslate2
- int8
- float16
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- allenai/c4
- mc4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack-dedup
- allenai/s2orc
inference: false
---
# Fast-Inference with Ctranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
Quantized version of [mosaicml/mpt-30b](https://huggingface.co/mosaicml/mpt-30b).
```bash
pip install "hf-hub-ctranslate2>=2.12.0" "ctranslate2>=3.16.0"
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-mpt-30b"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-06-23 using
```
ct2-transformers-converter --model mosaicml/mpt-30b --output_dir ~/tmp-ct2fast-mpt-30b --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
# Original description
# MPT-30B
MPT-30B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-30B is part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
MPT-30B comes with special features that differentiate it from other LLMs, including an 8k token context window (which can be further extended via finetuning; see [MPT-7B-StoryWriter](https://huggingface.co/mosaicml/mpt-7b-storywriter)), support for context-length extrapolation via [ALiBi](https://arxiv.org/abs/2108.12409), and efficient inference + training via FlashAttention. It also has strong coding abilities thanks to its pretraining mix. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
The size of MPT-30B was also specifically chosen to make it easy to deploy on a single GPU—either 1xA100-80GB in 16-bit precision or 1xA100-40GB in 8-bit precision.
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-30B is:
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-30B:
The following models are finetuned on MPT-30B:
* [MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct): a model for short-form instruction following.
Built by finetuning MPT-30B on several carefully curated datasets.
* License: _CC-By-NC-SA-3.0_
* [MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-30B on [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai),
[GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets.
* License: _CC-By-NC-SA-4.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-30b-chat)
## Model Date
June 22, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: MPT-30B: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially with a sequence length of 4096 with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is identical to the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings (a sketch of the bias follows this list)
* It does not use biases
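To make the ALiBi point concrete, here is a minimal sketch of the additive attention bias (following the paper's slope schedule, which assumes a power-of-two head count, as with the 64 heads here; an illustration, not MPT's internal implementation):
```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    """Additive attention-logit bias of shape (n_heads, seq_len, seq_len)."""
    # Head-specific slopes form a geometric sequence, as in the ALiBi paper
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])
    pos = torch.arange(seq_len)
    dist = (pos[:, None] - pos[None, :]).clamp(min=0).float()  # query-key distance
    return -slopes[:, None, None] * dist  # added to logits before the causal softmax
```
Because the penalty is a fixed linear function of distance, it extrapolates to sequence lengths not seen in training, which is what allows the `max_seq_len` override shown earlier.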
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 29.95B |
| n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
### Data Mix
The model was trained for 1T tokens on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English (200+ words) | 2417.99 B | 33.50% | 335 B | 0.14 |
| c4 - English - SemDedup 80% | 100.42 B | 29.90% | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 8.50% | 85 B | 0.097 |
| The Stack - Selected Languages | 463.78 B | 10.00% | 100 B | 0.22 |
| RedPajama - Wikipedia | 4.87 B | 4.00% | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 4.50% | 45 B | 0.42 |
| Semantic Scholar ORC | 48.95 B | 3.30% | 33 B | 0.67 |
| RedPajama - Books | 26.02 B | 3.00% | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 1.90% | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 1.40% | 14 B |0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the sequence length. To build 8k support into MPT-30B efficiently, we first pre-trained on 1T tokens using sequences that were 2k tokens long, and then trained for an additional 50B tokens using sequences that were 8k tokens long.
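Schematically, the per-example source selection reduces to weighted sampling over the mix (a sketch; the short names stand in for the sources in the table, with weights from the Proportion column):
```python
import random

mix = {
    "mC4-en": 0.335, "c4-en": 0.299, "rpj-commoncrawl": 0.085,
    "stack-selected": 0.100, "rpj-wikipedia": 0.040, "stack-markdown": 0.045,
    "s2orc": 0.033, "rpj-books": 0.030, "rpj-arxiv": 0.019, "rpj-stackexchange": 0.014,
}
source = random.choices(list(mix), weights=list(mix.values()), k=1)[0]  # one draw per example
```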
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)).
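Point (3) is easy to check directly (a quick illustration; the exact token strings depend on the tokenizer version):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
indented = "def f():\n        return 1"
# Runs of spaces map to dedicated repeated-space tokens,
# so indented code compresses into comparatively few tokens.
print(tok.tokenize(indented), len(tok.tokenize(indented)))
```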
### Training Configuration
The model was trained in three stages using the [MosaicML Platform](https://www.mosaicml.com/platform):
(i) First it was trained on 440 A100-40GBs with a batch size of 1760.
(ii) Then, on 216 A100-40GBs with a batch size of 1728.
(iii) Training was completed on 256 H100-80GBs with a batch size of 512 with 8k context length and 50B tokens.
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-30B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
``` | 13,748 | [
[
-0.038909912109375,
-0.040374755859375,
0.020538330078125,
0.035491943359375,
-0.0240020751953125,
0.00080108642578125,
-0.01275634765625,
-0.0254058837890625,
-0.008941650390625,
0.0203704833984375,
-0.03826904296875,
-0.0369873046875,
-0.0445556640625,
-0.... |
dnzblgn/BERT_Text_Classification | 2023-06-23T18:09:17.000Z | [
"keras",
"region:us"
] | null | dnzblgn | null | null | dnzblgn/BERT_Text_Classification | 0 | 2 | keras | 2023-06-23T16:53:59 | ---
{}
---
# BERT Text Classification
This is a BERT-based text classification model trained on the "socialmedia-disaster-tweets" dataset. It performs binary classification to label tweets as "Relevant" or "Not Relevant" to a disaster event.
## Model Description
The model uses the BERT (Bidirectional Encoder Representations from Transformers) architecture to generate embeddings for the input text. These embeddings are then fed into a sequential Keras model with a dense hidden layer and a sigmoid output layer for binary classification.
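A sketch of what such a head could look like in Keras; the hidden width and training settings are illustrative guesses, since the card does not specify them:
```python
import tensorflow as tf

# Classifier head over 768-dim BERT sentence embeddings. Layer sizes are
# assumptions, not the checkpoint's actual configuration.
head = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(768,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # outputs P(class 1)
])
head.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```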
## Intended Use
This model is intended to be used for text classification on short text snippets, specifically tweets related to disaster events. It can help in identifying relevant tweets for further analysis and response.
## Limitations and Ethical Considerations
- The model's performance heavily relies on the quality and representativeness of the training data. If the training data is biased or limited, the model's predictions may be biased or inaccurate.
- The model may not generalize well to tweets from domains or topics that significantly differ from the training data.
- Text classification models may not capture the full complexity of human sentiment and can be sensitive to variations in language use.
- It's important to use the model as a tool to aid human decision-making rather than relying solely on its predictions. Human review and context awareness are essential in interpreting and acting upon the model's output.
## Usage
Here's an example of how to use the model for inference:
```python
from transformers import TFAutoModel, AutoTokenizer
import tensorflow as tf

# Load the pre-trained model and tokenizer
model = TFAutoModel.from_pretrained("dnzblgn/BERT_Text_Classification")
tokenizer = AutoTokenizer.from_pretrained("dnzblgn/BERT_Text_Classification")

# Preprocess the input sentence (kept in `text` so it can be printed later)
text = "Horrible Accident | Man Died In Wings of Airplane (29-07-2015)"
inputs = tokenizer.encode_plus(
    text,
    add_special_tokens=True,
    max_length=512,  # BERT supports at most 512 tokens
    padding="longest",
    truncation=True,
    return_attention_mask=True,
    return_tensors="tf",
)

# Make the prediction: threshold the sigmoid output at 0.5
# (class 0 corresponds to "Relevant")
prediction = model.predict(inputs)[0][0]
label = "Relevant" if prediction < 0.5 else "Not Relevant"

print("Input Sentence:", text)
print("Prediction:", label) | 2,387 | [
[
-0.0183868408203125,
-0.0504150390625,
0.0222320556640625,
0.033233642578125,
-0.0191497802734375,
0.0016651153564453125,
-0.0062408447265625,
-0.0243072509765625,
0.0037384033203125,
0.0138092041015625,
-0.044097900390625,
-0.03436279296875,
-0.055908203125,
... |
Xenova/deeplabv3-mobilevit-small | 2023-09-01T23:55:22.000Z | [
"transformers.js",
"onnx",
"mobilevit",
"image-segmentation",
"region:us"
] | image-segmentation | Xenova | null | null | Xenova/deeplabv3-mobilevit-small | 0 | 2 | transformers.js | 2023-06-23T18:47:06 | ---
library_name: transformers.js
pipeline_tag: image-segmentation
---
https://huggingface.co/apple/deeplabv3-mobilevit-small with ONNX weights to be compatible with Transformers.js.
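These ONNX weights come from exporting the original checkpoint; a Python sketch of that export with 🤗 Optimum follows (the task class and the `export=True` flag are assumptions that may vary by Optimum version):
```python
from optimum.onnxruntime import ORTModelForSemanticSegmentation

# Export the original PyTorch checkpoint to ONNX; `export=True` triggers
# the conversion on load.
ort_model = ORTModelForSemanticSegmentation.from_pretrained(
    "apple/deeplabv3-mobilevit-small", export=True
)
ort_model.save_pretrained("deeplabv3-mobilevit-small/onnx")
```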
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 541 | [
[
-0.040435791015625,
0.01444244384765625,
0.03155517578125,
0.0411376953125,
-0.0130767822265625,
0.0043182373046875,
0.0022068023681640625,
-0.001354217529296875,
0.0267333984375,
0.03118896484375,
-0.05078125,
-0.03399658203125,
-0.0322265625,
-0.0084381103... |
Xenova/deeplabv3-mobilevit-x-small | 2023-09-01T23:56:02.000Z | [
"transformers.js",
"onnx",
"mobilevit",
"image-segmentation",
"region:us"
] | image-segmentation | Xenova | null | null | Xenova/deeplabv3-mobilevit-x-small | 0 | 2 | transformers.js | 2023-06-23T18:47:10 | ---
library_name: transformers.js
pipeline_tag: image-segmentation
---
https://huggingface.co/apple/deeplabv3-mobilevit-x-small with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 543 | [
[
-0.04083251953125,
0.0188751220703125,
0.032989501953125,
0.03948974609375,
-0.013153076171875,
0.00848388671875,
0.0026302337646484375,
-0.0037689208984375,
0.0277557373046875,
0.03106689453125,
-0.05322265625,
-0.03265380859375,
-0.03271484375,
-0.00999450... |
Xenova/deeplabv3-mobilevit-xx-small | 2023-09-01T23:55:46.000Z | [
"transformers.js",
"onnx",
"mobilevit",
"image-segmentation",
"region:us"
] | image-segmentation | Xenova | null | null | Xenova/deeplabv3-mobilevit-xx-small | 0 | 2 | transformers.js | 2023-06-23T18:47:12 | ---
library_name: transformers.js
pipeline_tag: image-segmentation
---
https://huggingface.co/apple/deeplabv3-mobilevit-xx-small with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 544 | [
[
-0.041534423828125,
0.0157470703125,
0.031005859375,
0.04266357421875,
-0.012298583984375,
0.00556182861328125,
0.004726409912109375,
-0.0027942657470703125,
0.024810791015625,
0.033172607421875,
-0.0518798828125,
-0.0340576171875,
-0.032440185546875,
-0.007... |
cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020 | 2023-06-23T20:57:35.000Z | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"dataset:tner/tweetner7",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | cardiffnlp | null | null | cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020 | 0 | 2 | transformers | 2023-06-23T20:41:42 | ---
datasets:
- tner/tweetner7
metrics:
- f1
- precision
- recall
model-index:
- name: cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tner/tweetner7
type: tner/tweetner7
args: tner/tweetner7
metrics:
- name: F1
type: f1
value: 0.6528115974857014
- name: Precision
type: precision
value: 0.6396626345577627
- name: Recall
type: recall
value: 0.6665124884366328
- name: F1 (macro)
type: f1_macro
value: 0.6049985470954377
- name: Precision (macro)
type: precision_macro
value: 0.5897437616700211
- name: Recall (macro)
type: recall_macro
value: 0.6233545992999288
- name: F1 (entity span)
type: f1_entity_span
value: 0.7878581945860234
- name: Precision (entity span)
type: precision_entity_span
value: 0.7719454000665853
- name: Recall (entity span)
type: recall_entity_span
value: 0.804440846536371
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-large-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m) on the
[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.6528115974857014
- Precision (micro): 0.6396626345577627
- Recall (micro): 0.6665124884366328
- F1 (macro): 0.6049985470954377
- Precision (macro): 0.5897437616700211
- Recall (macro): 0.6233545992999288
The per-entity breakdown of the F1 score on the test set is as follows:
- corporation: 0.5229050279329609
- event: 0.4694835680751174
- group: 0.6115595737810786
- location: 0.651814131126671
- person: 0.8390510948905111
- product: 0.6531234128999492
- work_of_art: 0.4870530209617756
For F1 scores, the confidence interval is obtained by bootstrap; see the metric files linked below for the full evaluation.
Full evaluation can be found at [metric file of NER](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate the model as below.
```python
from tner import TransformersNER
model = TransformersNER("cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/tweetner7']
- dataset_split: train_2020
- dataset_name: None
- local_dataset: None
- model: cardiffnlp/twitter-roberta-large-2022-154m
- crf: True
- max_length: 128
- epoch: 30
- batch_size: 32
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 1
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.3
- max_grad_norm: 10
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
| 5,732 | [
[
-0.03631591796875,
-0.04833984375,
0.018707275390625,
0.0207366943359375,
-0.0135345458984375,
0.0022430419921875,
-0.044708251953125,
-0.03790283203125,
0.036102294921875,
0.0177154541015625,
-0.043182373046875,
-0.04730224609375,
-0.055999755859375,
0.0200... |
cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020 | 2023-06-23T20:54:51.000Z | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"dataset:tner/tweetner7",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | cardiffnlp | null | null | cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020 | 0 | 2 | transformers | 2023-06-23T20:41:43 | ---
datasets:
- tner/tweetner7
metrics:
- f1
- precision
- recall
model-index:
- name: cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tner/tweetner7
type: tner/tweetner7
args: tner/tweetner7
metrics:
- name: F1
type: f1
value: 0.6419150543257219
- name: Precision
type: precision
value: 0.6451010159990658
- name: Recall
type: recall
value: 0.6387604070305273
- name: F1 (macro)
type: f1_macro
value: 0.5829431071584856
- name: Precision (macro)
type: precision_macro
value: 0.5886989381701707
- name: Recall (macro)
type: recall_macro
value: 0.5796110916728531
- name: F1 (entity span)
type: f1_entity_span
value: 0.7753631609529343
- name: Precision (entity span)
type: precision_entity_span
value: 0.7791661800770758
- name: Recall (entity span)
type: recall_entity_span
value: 0.7715970856944605
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m) on the
[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.6419150543257219
- Precision (micro): 0.6451010159990658
- Recall (micro): 0.6387604070305273
- F1 (macro): 0.5829431071584856
- Precision (macro): 0.5886989381701707
- Recall (macro): 0.5796110916728531
The per-entity breakdown of the F1 score on the test set is as follows:
- corporation: 0.5127020785219399
- event: 0.43384759233286585
- group: 0.6000666000666002
- location: 0.6535326086956522
- person: 0.8390577234310376
- product: 0.6386386386386387
- work_of_art: 0.40275650842266464
For F1 scores, the confidence interval is obtained by bootstrap; see the metric files linked below for the full evaluation.
Full evaluation can be found at [metric file of NER](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate the model as below.
```python
from tner import TransformersNER
model = TransformersNER("cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/tweetner7']
- dataset_split: train_2020
- dataset_name: None
- local_dataset: None
- model: cardiffnlp/twitter-roberta-base-2022-154m
- crf: True
- max_length: 128
- epoch: 30
- batch_size: 32
- lr: 0.0001
- random_seed: 42
- gradient_accumulation_steps: 1
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.3
- max_grad_norm: 10
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
| 5,728 | [
[
-0.034149169921875,
-0.048004150390625,
0.0171051025390625,
0.021087646484375,
-0.013580322265625,
0.00260162353515625,
-0.042388916015625,
-0.03485107421875,
0.0340576171875,
0.0180511474609375,
-0.044464111328125,
-0.04937744140625,
-0.05584716796875,
0.01... |
koreadaeil/my_awesome_model5 | 2023-06-24T07:43:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | koreadaeil | null | null | koreadaeil/my_awesome_model5 | 0 | 2 | transformers | 2023-06-24T07:41:58 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: my_awesome_model5
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: wnli
split: train[:635]
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.4251968503937008
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7141
- Accuracy: 0.4252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
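Mirrored as code, the settings above correspond roughly to the following `TrainingArguments` (a sketch; the output directory is arbitrary and everything not listed keeps its default):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my_awesome_model5",      # arbitrary local directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```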
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 32 | 0.7115 | 0.4173 |
| No log | 2.0 | 64 | 0.7141 | 0.4252 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,683 | [
[
-0.027069091796875,
-0.04736328125,
0.0157318115234375,
0.0182037353515625,
-0.0230560302734375,
-0.0240325927734375,
-0.0051422119140625,
-0.0122833251953125,
0.0093841552734375,
0.01216888427734375,
-0.045867919921875,
-0.046905517578125,
-0.057037353515625,
... |
chennaiai/my-hotdog-not-hotdog | 2023-06-24T08:39:55.000Z | [
"transformers",
"pytorch",
"tensorboard",
"coreml",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | chennaiai | null | null | chennaiai/my-hotdog-not-hotdog | 0 | 2 | transformers | 2023-06-24T08:35:45 | ---
tags:
- image-classification
- huggingpics
metrics:
- accuracy
model-index:
- name: hotdog-not-hotdog
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.824999988079071
---
# hotdog-not-hotdog
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### hot dog

#### not hot dog
 | 748 | [
[
-0.0504150390625,
-0.051361083984375,
0.004901885986328125,
0.038787841796875,
-0.03424072265625,
0.002971649169921875,
0.00739288330078125,
-0.0213470458984375,
0.049346923828125,
0.0179290771484375,
-0.0211944580078125,
-0.05572509765625,
-0.044525146484375,
... |
SSSIN/my_segment_news_1 | 2023-06-24T11:18:18.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | SSSIN | null | null | SSSIN/my_segment_news_1 | 0 | 2 | transformers | 2023-06-24T11:06:29 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_segment_news_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_segment_news_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3054
- Accuracy: 0.7046
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 47 | 0.8568 | 0.6555 |
| No log | 2.0 | 94 | 0.7703 | 0.7128 |
| No log | 3.0 | 141 | 0.9174 | 0.7115 |
| No log | 4.0 | 188 | 0.9764 | 0.7268 |
| No log | 5.0 | 235 | 1.1855 | 0.6945 |
| No log | 6.0 | 282 | 1.1718 | 0.7071 |
| No log | 7.0 | 329 | 1.1631 | 0.7246 |
| No log | 8.0 | 376 | 1.2950 | 0.7029 |
| No log | 9.0 | 423 | 1.3254 | 0.7019 |
| No log | 10.0 | 470 | 1.3054 | 0.7046 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,897 | [
[
-0.033599853515625,
-0.04095458984375,
0.01322174072265625,
0.00852203369140625,
-0.0232086181640625,
-0.0196380615234375,
-0.00498199462890625,
-0.0082550048828125,
0.00969696044921875,
0.01678466796875,
-0.051513671875,
-0.0533447265625,
-0.057769775390625,
... |
jeremyvictor/t5-v1_1-base-gramatika-e8-b16 | 2023-06-24T13:26:46.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | jeremyvictor | null | null | jeremyvictor/t5-v1_1-base-gramatika-e8-b16 | 0 | 2 | transformers | 2023-06-24T11:49:44 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-v1_1-base-gramatika-e8-b16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-v1_1-base-gramatika-e8-b16
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2980
- Rouge1: 37.8004
- Rouge2: 25.1687
- Rougel: 37.0767
- Rougelsum: 37.065
- Gen Len: 18.9591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 4.776 | 0.09 | 74 | 1.0632 | 32.6953 | 20.0972 | 31.9469 | 31.9621 | 18.7484 |
| 1.2729 | 0.18 | 148 | 0.7526 | 36.7533 | 23.3303 | 35.6567 | 35.6663 | 18.9461 |
| 0.9446 | 0.26 | 222 | 0.6354 | 37.1264 | 23.6467 | 36.0249 | 36.0251 | 18.9532 |
| 0.7947 | 0.35 | 296 | 0.5734 | 37.1871 | 23.6899 | 36.1041 | 36.1107 | 18.9479 |
| 0.7537 | 0.44 | 370 | 0.5584 | 37.1245 | 23.4797 | 36.0896 | 36.1022 | 18.9520 |
| 0.6918 | 0.53 | 444 | 0.5143 | 37.3209 | 23.6466 | 36.2475 | 36.2523 | 18.9509 |
| 0.6461 | 0.61 | 518 | 0.4959 | 37.362 | 23.9226 | 36.3161 | 36.3077 | 18.9550 |
| 0.6208 | 0.7 | 592 | 0.4934 | 37.3042 | 23.895 | 36.279 | 36.2776 | 18.9550 |
| 0.578 | 0.79 | 666 | 0.4600 | 36.9323 | 23.2291 | 35.8836 | 35.9033 | 18.9526 |
| 0.5595 | 0.88 | 740 | 0.4325 | 37.3255 | 23.9018 | 36.2997 | 36.2994 | 18.9544 |
| 0.5341 | 0.96 | 814 | 0.4401 | 37.6132 | 24.1158 | 36.5666 | 36.5629 | 18.9473 |
| 0.4909 | 1.05 | 888 | 0.4288 | 37.4095 | 23.9467 | 36.3822 | 36.3773 | 18.9556 |
| 0.484 | 1.14 | 962 | 0.4112 | 37.1324 | 23.6944 | 36.1397 | 36.146 | 18.9562 |
| 0.4529 | 1.23 | 1036 | 0.4173 | 37.3368 | 23.6993 | 36.3614 | 36.3581 | 18.9485 |
| 0.4491 | 1.31 | 1110 | 0.4031 | 37.6721 | 24.3716 | 36.6349 | 36.6283 | 18.9580 |
| 0.4649 | 1.4 | 1184 | 0.3850 | 37.1553 | 23.726 | 36.1654 | 36.1631 | 18.9568 |
| 0.4388 | 1.49 | 1258 | 0.3802 | 37.4997 | 24.1832 | 36.4843 | 36.4895 | 18.9597 |
| 0.436 | 1.58 | 1332 | 0.3751 | 37.7226 | 24.25 | 36.6127 | 36.6266 | 18.9562 |
| 0.4338 | 1.66 | 1406 | 0.3746 | 37.5729 | 24.1241 | 36.5254 | 36.5372 | 18.9562 |
| 0.4226 | 1.75 | 1480 | 0.3648 | 37.4497 | 24.2013 | 36.5387 | 36.5329 | 18.9556 |
| 0.4215 | 1.84 | 1554 | 0.3603 | 37.3854 | 23.9057 | 36.4769 | 36.4907 | 18.9556 |
| 0.4107 | 1.93 | 1628 | 0.3608 | 37.4492 | 24.2621 | 36.5402 | 36.5518 | 18.9574 |
| 0.3955 | 2.01 | 1702 | 0.3555 | 36.899 | 23.6411 | 36.0131 | 36.0335 | 18.9603 |
| 0.3615 | 2.1 | 1776 | 0.3516 | 36.8815 | 23.6418 | 36.0194 | 36.0134 | 18.9568 |
| 0.3641 | 2.19 | 1850 | 0.3494 | 37.6507 | 24.5903 | 36.7702 | 36.7744 | 18.9580 |
| 0.347 | 2.28 | 1924 | 0.3475 | 37.2491 | 23.94 | 36.3766 | 36.3915 | 18.9556 |
| 0.345 | 2.36 | 1998 | 0.3448 | 37.7311 | 24.7039 | 36.8714 | 36.8805 | 18.9597 |
| 0.3447 | 2.45 | 2072 | 0.3428 | 37.3581 | 24.439 | 36.5772 | 36.5706 | 18.9532 |
| 0.3513 | 2.54 | 2146 | 0.3449 | 37.5704 | 24.503 | 36.6679 | 36.6694 | 18.9532 |
| 0.3425 | 2.63 | 2220 | 0.3307 | 37.2403 | 24.0095 | 36.3901 | 36.4088 | 18.9538 |
| 0.3451 | 2.71 | 2294 | 0.3413 | 37.8927 | 24.9543 | 37.0627 | 37.0752 | 18.9515 |
| 0.337 | 2.8 | 2368 | 0.3295 | 37.2903 | 24.0792 | 36.4794 | 36.4851 | 18.9562 |
| 0.3411 | 2.89 | 2442 | 0.3279 | 37.5595 | 24.4696 | 36.6409 | 36.634 | 18.9586 |
| 0.3352 | 2.98 | 2516 | 0.3246 | 37.8787 | 24.9008 | 37.0554 | 37.0518 | 18.9520 |
| 0.2922 | 3.07 | 2590 | 0.3284 | 37.7723 | 24.8132 | 36.9398 | 36.9411 | 18.9556 |
| 0.2877 | 3.15 | 2664 | 0.3263 | 37.8679 | 24.9922 | 37.0879 | 37.086 | 18.9515 |
| 0.2821 | 3.24 | 2738 | 0.3272 | 38.1672 | 25.4381 | 37.3518 | 37.35 | 18.9562 |
| 0.2999 | 3.33 | 2812 | 0.3250 | 37.8501 | 25.0341 | 37.0643 | 37.053 | 18.9556 |
| 0.2953 | 3.42 | 2886 | 0.3223 | 37.8668 | 24.8381 | 37.0085 | 37.0079 | 18.9574 |
| 0.2892 | 3.5 | 2960 | 0.3180 | 37.7468 | 24.8882 | 36.9065 | 36.9151 | 18.9574 |
| 0.2997 | 3.59 | 3034 | 0.3154 | 37.5096 | 24.6657 | 36.6896 | 36.6843 | 18.9591 |
| 0.2924 | 3.68 | 3108 | 0.3153 | 37.8218 | 25.0111 | 37.0717 | 37.0657 | 18.9526 |
| 0.2891 | 3.77 | 3182 | 0.3125 | 37.9909 | 25.1394 | 37.185 | 37.1986 | 18.9532 |
| 0.2836 | 3.85 | 3256 | 0.3142 | 37.9429 | 25.2072 | 37.2037 | 37.2072 | 18.9591 |
| 0.2829 | 3.94 | 3330 | 0.3058 | 37.4522 | 24.6425 | 36.7227 | 36.7314 | 18.9556 |
| 0.2698 | 4.03 | 3404 | 0.3147 | 37.9525 | 25.2168 | 37.1852 | 37.1746 | 18.9562 |
| 0.2472 | 4.12 | 3478 | 0.3156 | 37.8397 | 24.8158 | 37.0507 | 37.0609 | 18.9544 |
| 0.2454 | 4.2 | 3552 | 0.3147 | 37.8964 | 25.1594 | 37.1437 | 37.1277 | 18.9568 |
| 0.2486 | 4.29 | 3626 | 0.3176 | 37.8525 | 25.0361 | 37.0716 | 37.0948 | 18.9568 |
| 0.2419 | 4.38 | 3700 | 0.3171 | 37.8339 | 25.1664 | 37.0724 | 37.0811 | 18.9580 |
| 0.2482 | 4.47 | 3774 | 0.3162 | 37.8943 | 25.2648 | 37.1299 | 37.1326 | 18.9574 |
| 0.2438 | 4.55 | 3848 | 0.3124 | 37.8348 | 25.1174 | 37.0646 | 37.0685 | 18.9538 |
| 0.2546 | 4.64 | 3922 | 0.3116 | 37.7776 | 25.0245 | 37.009 | 37.0062 | 18.9526 |
| 0.2399 | 4.73 | 3996 | 0.3100 | 37.7403 | 24.8735 | 36.9705 | 36.9589 | 18.9538 |
| 0.2439 | 4.82 | 4070 | 0.3063 | 37.6132 | 24.8849 | 36.8696 | 36.8678 | 18.9568 |
| 0.2399 | 4.9 | 4144 | 0.3047 | 38.0775 | 25.4368 | 37.3176 | 37.331 | 18.9538 |
| 0.2453 | 4.99 | 4218 | 0.2980 | 37.8004 | 25.1687 | 37.0767 | 37.065 | 18.9591 |
| 0.2113 | 5.08 | 4292 | 0.3156 | 37.8066 | 25.2105 | 37.0718 | 37.0732 | 18.9568 |
| 0.2112 | 5.17 | 4366 | 0.3140 | 37.9331 | 25.1857 | 37.2142 | 37.2266 | 18.9538 |
| 0.2073 | 5.25 | 4440 | 0.3130 | 37.7596 | 25.0255 | 37.0438 | 37.0355 | 18.9515 |
| 0.2088 | 5.34 | 4514 | 0.3089 | 37.6381 | 24.9435 | 36.9008 | 36.9068 | 18.9562 |
| 0.2096 | 5.43 | 4588 | 0.3133 | 37.6629 | 24.8797 | 36.9224 | 36.9201 | 18.9550 |
| 0.2105 | 5.52 | 4662 | 0.3077 | 37.6381 | 24.8911 | 36.9154 | 36.9082 | 18.9515 |
| 0.2137 | 5.6 | 4736 | 0.3107 | 37.9448 | 25.2433 | 37.1702 | 37.191 | 18.9538 |
| 0.2149 | 5.69 | 4810 | 0.3036 | 37.887 | 25.3403 | 37.1722 | 37.1505 | 18.9574 |
| 0.2113 | 5.78 | 4884 | 0.3071 | 37.75 | 25.2014 | 37.0775 | 37.061 | 18.9568 |
| 0.2112 | 5.87 | 4958 | 0.3055 | 37.9112 | 25.3054 | 37.2048 | 37.1822 | 18.9562 |
| 0.2207 | 5.96 | 5032 | 0.3043 | 37.7232 | 25.0175 | 36.9981 | 36.9904 | 18.9562 |
| 0.1931 | 6.04 | 5106 | 0.3146 | 37.6859 | 24.8467 | 36.9791 | 36.9622 | 18.9532 |
| 0.1794 | 6.13 | 5180 | 0.3192 | 37.6117 | 24.9014 | 36.9037 | 36.8909 | 18.9544 |
| 0.1809 | 6.22 | 5254 | 0.3174 | 37.6985 | 25.0269 | 37.0038 | 36.9698 | 18.9556 |
| 0.187 | 6.31 | 5328 | 0.3179 | 37.905 | 25.2766 | 37.1956 | 37.1917 | 18.9556 |
| 0.1857 | 6.39 | 5402 | 0.3121 | 37.7023 | 25.1466 | 37.0309 | 37.0343 | 18.9532 |
| 0.1852 | 6.48 | 5476 | 0.3160 | 37.9916 | 25.3421 | 37.2952 | 37.2883 | 18.9526 |
| 0.1901 | 6.57 | 5550 | 0.3130 | 37.7959 | 25.1191 | 37.108 | 37.1069 | 18.9550 |
| 0.1746 | 6.66 | 5624 | 0.3149 | 37.8307 | 25.1864 | 37.1278 | 37.111 | 18.9544 |
| 0.1797 | 6.74 | 5698 | 0.3133 | 37.7555 | 25.071 | 37.1049 | 37.0749 | 18.9562 |
| 0.1868 | 6.83 | 5772 | 0.3109 | 37.907 | 25.3167 | 37.2214 | 37.197 | 18.9532 |
| 0.1853 | 6.92 | 5846 | 0.3096 | 37.8557 | 25.2451 | 37.1764 | 37.1619 | 18.9538 |
| 0.1775 | 7.01 | 5920 | 0.3100 | 37.8791 | 25.1896 | 37.1719 | 37.1602 | 18.9532 |
| 0.159 | 7.09 | 5994 | 0.3183 | 37.6891 | 24.9679 | 37.0226 | 36.9983 | 18.9532 |
| 0.1633 | 7.18 | 6068 | 0.3191 | 37.8515 | 25.2206 | 37.1993 | 37.1785 | 18.9556 |
| 0.1623 | 7.27 | 6142 | 0.3178 | 37.7481 | 25.0795 | 37.0553 | 37.037 | 18.9562 |
| 0.1657 | 7.36 | 6216 | 0.3172 | 37.7833 | 25.1949 | 37.1478 | 37.1191 | 18.9532 |
| 0.1607 | 7.44 | 6290 | 0.3192 | 37.9413 | 25.3067 | 37.2541 | 37.2406 | 18.9526 |
| 0.1625 | 7.53 | 6364 | 0.3179 | 37.8266 | 25.2507 | 37.1517 | 37.1373 | 18.9532 |
| 0.1621 | 7.62 | 6438 | 0.3180 | 37.753 | 25.1062 | 37.1077 | 37.0825 | 18.9556 |
| 0.162 | 7.71 | 6512 | 0.3193 | 37.8685 | 25.3361 | 37.2299 | 37.1984 | 18.9526 |
| 0.1598 | 7.79 | 6586 | 0.3189 | 37.8672 | 25.2207 | 37.1865 | 37.1632 | 18.9526 |
| 0.1554 | 7.88 | 6660 | 0.3192 | 37.9556 | 25.3004 | 37.2645 | 37.2502 | 18.9526 |
| 0.1644 | 7.97 | 6734 | 0.3188 | 37.8834 | 25.2903 | 37.2138 | 37.1836 | 18.9526 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.11.0a0+b6df043
- Datasets 2.12.0
- Tokenizers 0.13.3
| 10,787 | [
[
-0.050750732421875,
-0.033111572265625,
0.026947021484375,
0.007720947265625,
-0.00635528564453125,
0.00589752197265625,
0.006671905517578125,
0.002716064453125,
0.053741455078125,
0.0279541015625,
-0.044525146484375,
-0.042755126953125,
-0.042449951171875,
... |
romgrelier/drl_course_dqn | 2023-06-24T17:07:11.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | romgrelier | null | null | romgrelier/drl_course_dqn | 0 | 2 | stable-baselines3 | 2023-06-24T17:06:16 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 968.00 +/- 218.30
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga romgrelier -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga romgrelier -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga romgrelier
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,766 | [
[
-0.043304443359375,
-0.039520263671875,
0.0195770263671875,
0.024993896484375,
-0.01067352294921875,
-0.0179595947265625,
0.01055908203125,
-0.01264190673828125,
0.01255035400390625,
0.022216796875,
-0.07183837890625,
-0.034423828125,
-0.0251617431640625,
-0... |
RogerioFreitas/whisper-medium-portuguese | 2023-06-24T18:39:22.000Z | [
"transformers",
"pytorch",
"jax",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"whisper-event",
"pt",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | RogerioFreitas | null | null | RogerioFreitas/whisper-medium-portuguese | 0 | 2 | transformers | 2023-06-24T17:42:08 | ---
language: pt
license: apache-2.0
tags:
- generated_from_trainer
- whisper-event
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: openai/whisper-medium
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: pt
split: test
args: pt
metrics:
- name: Wer
type: wer
value: 6.598745817992301
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Pierre's Flax Model for Portuguese Speech Recognition (ASR)
This repository is a fork of the original repository created by [Pierre Guillou](https://github.com/piegu). It contains a converted version of OpenAI's Whisper model, fine-tuned on the `common_voice_11_0` dataset for Portuguese.
## Results
The model achieves the following results on the evaluation set:
- Loss: 0.2628
- Word Error Rate (WER): 6.5987
For more information about this model, see the author's blog post: [Speech-to-Text & IA | Transcreva qualquer áudio para o português com o Whisper (OpenAI)... sem nenhum custo!](https://medium.com/@pierre_guillou).
This model, dubbed "Portuguese Medium Whisper", outperforms OpenAI's original Whisper Medium model at transcribing Portuguese audio (and even beats the Whisper Large model, which has a WER of 7.1).
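A minimal transcription sketch using the `transformers` pipeline (the audio file name is a placeholder):
```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="RogerioFreitas/whisper-medium-portuguese",
)
print(transcriber("audio.mp3")["text"])  # any Portuguese audio file
```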
## Training
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0333 | 2.07 | 1500 | 0.2073 | 6.9770 |
| 0.0061 | 5.05 | 3000 | 0.2628 | 6.5987 |
| 0.0007 | 8.03 | 4500 | 0.2960 | 6.6979 |
| 0.0004 | 11.0 | 6000 | 0.3212 | 6.6794 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2 | 2,155 | [
[
-0.0194549560546875,
-0.05230712890625,
0.00673675537109375,
0.0191497802734375,
-0.019256591796875,
-0.0289764404296875,
-0.0225677490234375,
-0.033172607421875,
0.01387786865234375,
0.0382080078125,
-0.04290771484375,
-0.0501708984375,
-0.048919677734375,
... |
97jmlr/ppo-SnowballTarget | 2023-06-24T22:43:03.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 97jmlr | null | null | 97jmlr/ppo-SnowballTarget | 0 | 2 | ml-agents | 2023-06-24T22:42:57 | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: 97jmlr/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
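If you want to inspect the exported policy outside the browser, the `.onnx` file can be opened with `onnxruntime` (a sketch; the exact file name inside this repo is an assumption):
```python
import onnxruntime as ort

sess = ort.InferenceSession("SnowballTarget.onnx")  # assumed file name
for tensor in sess.get_inputs():
    print(tensor.name, tensor.shape)  # observation inputs the policy expects
```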
| 1,361 | [
[
-0.0311431884765625,
-0.040313720703125,
0.0084991455078125,
0.00614166259765625,
-0.021514892578125,
0.0227203369140625,
0.0126190185546875,
-0.0158538818359375,
0.026397705078125,
0.033294677734375,
-0.055694580078125,
-0.05401611328125,
-0.03662109375,
-0... |
anas21/keras-dummy-sequential-demo | 2023-06-30T22:06:42.000Z | [
"keras",
"region:us"
] | null | anas21 | null | null | anas21/keras-dummy-sequential-demo | 0 | 2 | keras | 2023-06-24T23:14:55 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 841 | [
[
-0.037200927734375,
-0.03997802734375,
0.03192138671875,
0.00814056396484375,
-0.043243408203125,
-0.017730712890625,
0.01097869873046875,
-0.0033893585205078125,
0.0204620361328125,
0.030548095703125,
-0.043731689453125,
-0.051177978515625,
-0.040008544921875,
... |
anas21/keras-dummy-functional-demo | 2023-06-25T09:07:24.000Z | [
"keras",
"region:us"
] | null | anas21 | null | null | anas21/keras-dummy-functional-demo | 0 | 2 | keras | 2023-06-24T23:19:07 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 841 | [
[
-0.037200927734375,
-0.03997802734375,
0.03192138671875,
0.00814056396484375,
-0.043243408203125,
-0.017730712890625,
0.01097869873046875,
-0.0033893585205078125,
0.0204620361328125,
0.030548095703125,
-0.043731689453125,
-0.051177978515625,
-0.040008544921875,
... |
97jmlr/pyramids | 2023-06-24T23:32:30.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 97jmlr | null | null | 97jmlr/pyramids | 0 | 2 | ml-agents | 2023-06-24T23:32:23 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: 97jmlr/pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,327 | [
[
-0.041168212890625,
-0.03497314453125,
0.001468658447265625,
0.01450347900390625,
-0.01024627685546875,
0.012237548828125,
0.015960693359375,
-0.01519775390625,
0.033203125,
0.0299530029296875,
-0.040985107421875,
-0.05035400390625,
-0.029449462890625,
-0.01... |
Smaraa/bart-text-simplification_1e4_adafactor_newsela | 2023-06-25T17:52:49.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | Smaraa | null | null | Smaraa/bart-text-simplification_1e4_adafactor_newsela | 0 | 2 | transformers | 2023-06-25T11:51:17 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-text-simplification_1e4_adafactor_newsela
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-text-simplification_1e4_adafactor_newsela
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5221
- Rouge1: 53.696
- Rouge2: 36.5456
- Rougel: 50.0629
- Rougelsum: 50.0673
- Gen Len: 18.558
## Model description
More information needed
## Intended uses & limitations
More information needed
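In the absence of further detail, a minimal inference sketch with the `transformers` pipeline (the input sentence is only an example):
```python
from transformers import pipeline

simplify = pipeline(
    "text2text-generation",
    model="Smaraa/bart-text-simplification_1e4_adafactor_newsela",
)
result = simplify("The committee deliberated at length before reaching a verdict.")
print(result[0]["generated_text"])  # simplified sentence
```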
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.7479 | 1.0 | 803 | 0.3428 | 55.7433 | 39.7505 | 52.5585 | 52.6043 | 18.5474 |
| 0.2505 | 2.0 | 1606 | 0.3552 | 54.8713 | 38.517 | 51.9121 | 51.9413 | 18.4364 |
| 0.213 | 3.0 | 2409 | 0.3733 | 55.0367 | 38.8217 | 51.5907 | 51.6237 | 18.8225 |
| 0.167 | 4.0 | 3212 | 0.3933 | 55.0962 | 38.7575 | 51.9311 | 51.9376 | 18.7433 |
| 0.1412 | 5.0 | 4015 | 0.4097 | 54.8308 | 38.2353 | 51.5186 | 51.5117 | 18.611 |
| 0.1193 | 6.0 | 4818 | 0.4258 | 53.8669 | 37.2692 | 50.4845 | 50.4928 | 18.6443 |
| 0.1039 | 7.0 | 5621 | 0.4395 | 54.1498 | 37.7107 | 50.9405 | 50.9451 | 18.5728 |
| 0.0928 | 8.0 | 6424 | 0.4502 | 53.9131 | 37.1201 | 50.6696 | 50.6776 | 18.5488 |
| 0.0801 | 9.0 | 7227 | 0.4594 | 53.8123 | 37.0674 | 50.4964 | 50.4957 | 18.4986 |
| 0.0734 | 10.0 | 8030 | 0.4733 | 53.8377 | 36.8825 | 50.3857 | 50.3775 | 18.4569 |
| 0.0648 | 11.0 | 8833 | 0.4747 | 53.3192 | 36.0006 | 49.724 | 49.7651 | 18.4844 |
| 0.0601 | 12.0 | 9636 | 0.4888 | 54.0952 | 36.8581 | 50.6073 | 50.6233 | 18.5714 |
| 0.0558 | 13.0 | 10439 | 0.4903 | 53.2469 | 36.1195 | 49.7181 | 49.7835 | 18.4123 |
| 0.0506 | 14.0 | 11242 | 0.4987 | 53.3193 | 36.3095 | 49.7999 | 49.8537 | 18.4958 |
| 0.0484 | 15.0 | 12045 | 0.5051 | 53.297 | 36.1379 | 49.5479 | 49.5797 | 18.4144 |
| 0.0444 | 16.0 | 12848 | 0.5134 | 53.696 | 36.768 | 50.0134 | 50.0706 | 18.5813 |
| 0.042 | 17.0 | 13651 | 0.5162 | 53.4729 | 36.5564 | 49.8635 | 49.8709 | 18.5269 |
| 0.0404 | 18.0 | 14454 | 0.5165 | 53.5562 | 36.4654 | 49.9419 | 49.9367 | 18.524 |
| 0.0376 | 19.0 | 15257 | 0.5195 | 53.3768 | 36.359 | 49.7394 | 49.7357 | 18.5877 |
| 0.0365 | 20.0 | 16060 | 0.5221 | 53.696 | 36.5456 | 50.0629 | 50.0673 | 18.558 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 3,564 | [
[
-0.049072265625,
-0.047943115234375,
0.01396942138671875,
0.00531768798828125,
-0.0098724365234375,
-0.0005850791931152344,
-0.001209259033203125,
-0.00304412841796875,
0.054443359375,
0.0293426513671875,
-0.048004150390625,
-0.0487060546875,
-0.042755126953125,... |
AlexK-PL/speecht5_tts_fine-tuned_voxpopuli_nl | 2023-06-25T14:39:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"fine_tuned",
"generated_from_trainer",
"nl",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | AlexK-PL | null | null | AlexK-PL/speecht5_tts_fine-tuned_voxpopuli_nl | 0 | 2 | transformers | 2023-06-25T12:16:58 | ---
language:
- nl
license: mit
tags:
- fine_tuned
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4572
## Model description
More information needed
## Intended uses & limitations
More information needed
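A usage sketch following the standard SpeechT5 recipe (the speaker x-vector dataset and index are the usual demo values, not something this card specifies):
```python
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "AlexK-PL/speecht5_tts_fine-tuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)  # demo speaker
speech = model.generate_speech(inputs["input_ids"], speaker, vocoder=vocoder)
# `speech` is a 16 kHz waveform tensor ready to save or play.
```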
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.52 | 4.3 | 1000 | 0.4763 |
| 0.5046 | 8.6 | 2000 | 0.4633 |
| 0.4938 | 12.9 | 3000 | 0.4579 |
| 0.4965 | 17.2 | 4000 | 0.4572 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,582 | [
[
-0.032379150390625,
-0.041259765625,
-0.004489898681640625,
0.01525115966796875,
-0.0233917236328125,
-0.020172119140625,
-0.0177154541015625,
-0.0189056396484375,
-0.0004925727844238281,
0.0209808349609375,
-0.043060302734375,
-0.05224609375,
-0.050201416015625... |
nsanghi/distilhubert-finetuned-gtzan | 2023-07-01T15:25:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | nsanghi | null | null | nsanghi/distilhubert-finetuned-gtzan | 0 | 2 | transformers | 2023-06-25T14:44:02 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8042
- Accuracy: 0.86
## Model description
More information needed
## Intended uses & limitations
More information needed
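A minimal inference sketch (the audio path is a placeholder; GTZAN clips are 30-second music excerpts):
```python
from transformers import pipeline

genre_clf = pipeline(
    "audio-classification",
    model="nsanghi/distilhubert-finetuned-gtzan",
)
print(genre_clf("some_song.wav"))  # top genre predictions with scores
```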
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0168 | 1.0 | 113 | 2.0642 | 0.45 |
| 1.4374 | 2.0 | 226 | 1.4358 | 0.64 |
| 1.1551 | 3.0 | 339 | 0.9743 | 0.74 |
| 0.7756 | 4.0 | 452 | 0.7805 | 0.81 |
| 0.4436 | 5.0 | 565 | 0.6117 | 0.81 |
| 0.3047 | 6.0 | 678 | 0.7366 | 0.79 |
| 0.2288 | 7.0 | 791 | 0.5297 | 0.86 |
| 0.2728 | 8.0 | 904 | 0.5677 | 0.87 |
| 0.1072 | 9.0 | 1017 | 0.6887 | 0.86 |
| 0.137 | 10.0 | 1130 | 0.9238 | 0.8 |
| 0.021 | 11.0 | 1243 | 0.7738 | 0.84 |
| 0.007 | 12.0 | 1356 | 0.7002 | 0.86 |
| 0.0047 | 13.0 | 1469 | 0.7805 | 0.86 |
| 0.0039 | 14.0 | 1582 | 0.7624 | 0.85 |
| 0.0034 | 15.0 | 1695 | 0.7892 | 0.85 |
| 0.0031 | 16.0 | 1808 | 0.7806 | 0.85 |
| 0.0029 | 17.0 | 1921 | 0.8005 | 0.85 |
| 0.0028 | 18.0 | 2034 | 0.7942 | 0.85 |
| 0.0025 | 19.0 | 2147 | 0.8138 | 0.86 |
| 0.0025 | 20.0 | 2260 | 0.8042 | 0.86 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
| 2,585 | [
[
-0.0406494140625,
-0.039093017578125,
0.0117340087890625,
0.0030498504638671875,
-0.013763427734375,
-0.01476287841796875,
-0.0028820037841796875,
-0.007843017578125,
0.027252197265625,
0.0198822021484375,
-0.05413818359375,
-0.05047607421875,
-0.049285888671875... |