modelId stringlengths 4 112 | sha stringlengths 40 40 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringclasses 29 values | private bool 1 class | author stringlengths 2 38 ⌀ | config null | id stringlengths 4 112 | downloads float64 0 36.8M ⌀ | likes float64 0 712 ⌀ | library_name stringclasses 17 values | __index_level_0__ int64 0 38.5k | readme stringlengths 0 186k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingtweets/spideythefifth | ab8d2397fda8276b59f4b0860c1af15da6f6cfef | 2022-04-26T02:13:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/spideythefifth | 0 | null | transformers | 37,100 | ---
language: en
thumbnail: http://www.huggingtweets.com/spideythefifth/1650939169930/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1505089505757384712/M9ehrLtd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🕹🏳️‍⚧️🏳️‍🌈 Gandalf the Gay🏳️‍⚧️🏳️‍🌈☠️</div>
<div style="text-align: center; font-size: 14px;">@spideythefifth</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🕹🏳️‍⚧️🏳️‍🌈 Gandalf the Gay🏳️‍⚧️🏳️‍🌈☠️.
| Data | 🕹🏳️‍⚧️🏳️‍🌈 Gandalf the Gay🏳️‍⚧️🏳️‍🌈☠️ |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 289 |
| Short tweets | 1301 |
| Tweets kept | 1654 |
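As a quick consistency check (a sketch added here, not part of the original card), the counts in the table line up: huggingtweets drops retweets and short tweets before training, so the kept count is the difference:

```python
# Sanity-check the training-data table: kept = downloaded - retweets - short.
downloaded, retweets, short_tweets = 3244, 289, 1301
tweets_kept = downloaded - retweets - short_tweets
print(tweets_kept)  # 1654, matching the "Tweets kept" row
```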
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/og5nwknk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @spideythefifth's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2trdlzgq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2trdlzgq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/spideythefifth')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/lustfulliberal-pg13scottwatson | 1eef86a128b72631e6dbece1da82fac2ff122c49 | 2022-04-26T02:59:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lustfulliberal-pg13scottwatson | 0 | null | transformers | 37,101 | ---
language: en
thumbnail: http://www.huggingtweets.com/lustfulliberal-pg13scottwatson/1650941946890/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1231999409916764162/mo9U0uNT_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1114620037300654082/KcWDPQsE_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">(18+ ONLY) The Lustful Liberal - Scorny on Main & The Loony Liberal - Too Old for These Bulltweets</div>
<div style="text-align: center; font-size: 14px;">@lustfulliberal-pg13scottwatson</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from (18+ ONLY) The Lustful Liberal - Scorny on Main & The Loony Liberal - Too Old for These Bulltweets.
| Data | (18+ ONLY) The Lustful Liberal - Scorny on Main | The Loony Liberal - Too Old for These Bulltweets |
| --- | --- | --- |
| Tweets downloaded | 3242 | 3240 |
| Retweets | 696 | 749 |
| Short tweets | 333 | 294 |
| Tweets kept | 2213 | 2197 |
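The same bookkeeping holds for both accounts in this card (a hedged sketch, not part of the original pipeline): the kept count in each column is the downloaded count minus retweets and short tweets:

```python
# Per-account sanity check: kept = downloaded - retweets - short.
counts = {
    "lustfulliberal":  {"downloaded": 3242, "retweets": 696, "short": 333},
    "pg13scottwatson": {"downloaded": 3240, "retweets": 749, "short": 294},
}
kept = {name: c["downloaded"] - c["retweets"] - c["short"] for name, c in counts.items()}
print(kept)  # {'lustfulliberal': 2213, 'pg13scottwatson': 2197}, matching the table
```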
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/r02ekev3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lustfulliberal-pg13scottwatson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29dxdiwg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29dxdiwg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lustfulliberal-pg13scottwatson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
negfir/bert_uncased_L-10_H-128_A-2wiki103 | 7b07db6e95a74473dcc7abe040fdff2dc6b70cdc | 2022-04-26T07:49:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-10_H-128_A-2wiki103 | 0 | null | transformers | 37,102 | Entry not found |
peggyhuang/t5-base-canard | ca5abd2b7f31ade290002fe0f3cefc6d7afc3390 | 2022-04-26T09:45:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | peggyhuang | null | peggyhuang/t5-base-canard | 0 | null | transformers | 37,103 | Entry not found |
negfir/bert_uncased_L-6_H-128_A-2wiki103 | c485941bdf136683d985e7b791c419fd974cb44a | 2022-04-26T10:19:39.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-6_H-128_A-2wiki103 | 0 | null | transformers | 37,104 | Entry not found |
sameearif88/wav2vec2-base-timit-demo-colab | 7c978d9a11e6cfee6ed2c6a4cb592cb0edaf9815 | 2022-04-30T13:08:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sameearif88 | null | sameearif88/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 37,105 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
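The linear scheduler with 1,000 warmup steps ramps the learning rate from 0 up to the peak value over the warmup, then decays it linearly back to 0 by the final step. A minimal sketch of that shape (the total step count here is hypothetical; the Trainer computes it from the dataset size and epoch count):

```python
def linear_warmup_lr(step, peak_lr=1e-4, warmup_steps=1000, total_steps=10_000):
    """Linear warmup to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(500))     # halfway through warmup: 5e-05
print(linear_warmup_lr(1000))    # peak learning rate: 0.0001
print(linear_warmup_lr(10_000))  # end of training: 0.0
```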
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
nz/RITA_s | b98e56e26c4a9d63d4ab92490155e7108f5b6c2a | 2022-04-26T14:13:19.000Z | [
"pytorch",
"rita",
"transformers"
] | null | false | nz | null | nz/RITA_s | 0 | null | transformers | 37,106 | Entry not found |
negfir/bert_uncased_L-4_H-768_A-12wiki103 | 3657fe61daede3aaaa3896a85c7884181daf7213 | 2022-04-26T12:54:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-4_H-768_A-12wiki103 | 0 | null | transformers | 37,107 | Entry not found |
hbruce11216/april26-finetuned-mlm | d3aa9a3e43b984344d31716366955146a0d8c1ec | 2022-04-26T13:14:25.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | hbruce11216 | null | hbruce11216/april26-finetuned-mlm | 0 | null | transformers | 37,108 | Entry not found |
Saisam/Inquirer_ner | 896a0da2add37b60196cd8a9da218fe12f8a3718 | 2022-04-26T14:51:41.000Z | [
"pytorch",
"en",
"dataset:conll2003",
"flair",
"license:afl-3.0"
] | null | false | Saisam | null | Saisam/Inquirer_ner | 0 | null | flair | 37,109 | ---
tags:
- flair
language: en
datasets:
- conll2003
license: afl-3.0
---
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("Saisam/Inquirer_ner")
# make example sentence
sentence = Sentence("George Washington went to Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
``` |
Saisam/Inquirer_ner_loc | 1f6b6a2aaa28696557ef83450fcaa8b50a7e7d1b | 2022-04-28T14:01:12.000Z | [
"pytorch",
"en",
"dataset:conll2003",
"flair"
] | null | false | Saisam | null | Saisam/Inquirer_ner_loc | 0 | null | flair | 37,110 | ---
tags:
- flair
language: en
datasets:
- conll2003
---
# Flair NER fine-tuned on Private Dataset
This model is specifically designed for locations; the emitted tag is `<unk>`.
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("Saisam/Inquirer_ner_loc")
# make example sentence
sentence = Sentence("George Washington went to Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
``` |
negfir/bert_uncased_L-4_H-512_A-8wiki103 | c384556956d2bea89cacb94bed58e2ffa6826a26 | 2022-04-26T14:38:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-4_H-512_A-8wiki103 | 0 | null | transformers | 37,111 | Entry not found |
negfir/bert_uncased_L-4_H-256_A-4wiki103 | d55c87b9b4da2dc229198df54d4d1a2b3d3e90e2 | 2022-04-26T15:46:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-4_H-256_A-4wiki103 | 0 | null | transformers | 37,112 | Entry not found |
negfir/bert_uncased_L-4_H-128_A-2wiki103 | 8c6118291e98181553081915c28e9f6cf36c7457 | 2022-04-26T16:41:24.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-4_H-128_A-2wiki103 | 0 | null | transformers | 37,113 | Entry not found |
lsb/wav2vec2-base-pem23-oldvocab-la | 4231c90505a3ff4eb024023e43698ff3e4b02eca | 2022-04-26T22:22:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lsb | null | lsb/wav2vec2-base-pem23-oldvocab-la | 0 | null | transformers | 37,114 | Entry not found |
ofirzaf/bert-large-uncased-squad | 57953ddeaa2307e97a82ece0f361d892fc938cb3 | 2022-04-26T23:10:06.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ofirzaf | null | ofirzaf/bert-large-uncased-squad | 0 | null | transformers | 37,115 | Entry not found |
nizamudma/t5-small-finetuned-cnn-3 | c68f1be9c6a73ed37e1e0a68d94c479448ade540 | 2022-04-27T08:55:11.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nizamudma | null | nizamudma/t5-small-finetuned-cnn-3 | 0 | null | transformers | 37,116 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.5495
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6633
- Rouge1: 24.5495
- Rouge2: 11.8286
- Rougel: 20.2968
- Rougelsum: 23.1682
- Gen Len: 18.9993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.7951 | 1.0 | 35890 | 1.6633 | 24.5495 | 11.8286 | 20.2968 | 23.1682 | 18.9993 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
rahulgkatre/DialoGPT-homer | dd2771d187da22af38c3d00239853fcea5686ee9 | 2022-04-27T02:55:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | rahulgkatre | null | rahulgkatre/DialoGPT-homer | 0 | null | transformers | 37,117 | Entry not found |
rahulgkatre/DialoGPT-bart | 636495665f64302e2386197bf7ce2c2479436455 | 2022-04-27T03:45:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | rahulgkatre | null | rahulgkatre/DialoGPT-bart | 0 | null | transformers | 37,118 | Entry not found |
faisalahmad/summarizer1 | aa087032f064e18b70a35fdc4fe34da594049ba8 | 2022-04-27T15:53:08.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:faisalahmad/autotrain-data-nsut-nlp-project-textsummarization",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | faisalahmad | null | faisalahmad/summarizer1 | 0 | null | transformers | 37,119 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- faisalahmad/autotrain-data-nsut-nlp-project-textsummarization
co2_eq_emissions: 736.9366247330848
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 791824379
- CO2 Emissions (in grams): 736.9366247330848
## Validation Metrics
- Loss: 1.7805895805358887
- Rouge1: 37.8222
- Rouge2: 16.7598
- RougeL: 31.2959
- RougeLsum: 31.3048
- Gen Len: 19.7213
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/faisalahmad/autotrain-nsut-nlp-project-textsummarization-791824379
``` |
faisalahmad/summarizer2 | 3088d5fa10a312c52d9d7eb1d8118c08e2ffc51e | 2022-04-28T17:48:14.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:faisalahmad/autotrain-data-nsut-nlp-project-textsummarization",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | faisalahmad | null | faisalahmad/summarizer2 | 0 | null | transformers | 37,120 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- faisalahmad/autotrain-data-nsut-nlp-project-textsummarization
co2_eq_emissions: 4444.804304528572
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 791824381
- CO2 Emissions (in grams): 4444.804304528572
## Validation Metrics
- Loss: 1.4599040746688843
- Rouge1: 46.5461
- Rouge2: 23.8595
- RougeL: 38.526
- RougeLsum: 38.5219
- Gen Len: 23.468
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/faisalahmad/autotrain-nsut-nlp-project-textsummarization-791824381
``` |
huggingtweets/pollinations_ai | 144da4869e18909febbac2d87d3680842ce583e1 | 2022-04-27T09:18:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/pollinations_ai | 0 | null | transformers | 37,121 | ---
language: en
thumbnail: http://www.huggingtweets.com/pollinations_ai/1651051095670/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1417602105192468480/UZFqVCxA_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pollinations</div>
<div style="text-align: center; font-size: 14px;">@pollinations_ai</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pollinations.
| Data | Pollinations |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 32 |
| Short tweets | 783 |
| Tweets kept | 2435 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3663gbqn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pollinations_ai's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ds23cvg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ds23cvg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pollinations_ai')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nz/RITA_xl | 78dcfeabecbcbe9146823030874370e6b721ef15 | 2022-04-27T16:32:43.000Z | [
"pytorch",
"rita",
"transformers"
] | null | false | nz | null | nz/RITA_xl | 0 | null | transformers | 37,122 | Entry not found |
dbmdz/flair-hipe-2022-ajmc-de-64k | 4741a4f6208bb9afa3ef76b63065378e59ffe6e6 | 2022-04-27T13:07:45.000Z | [
"pytorch",
"license:mit"
] | null | false | dbmdz | null | dbmdz/flair-hipe-2022-ajmc-de-64k | 0 | null | null | 37,123 | ---
license: mit
---
|
kvnaraya/DialoGPT-small-michael | e277c9ffb0a58c1b22cb68d3f23b88b3587c0ece | 2022-04-27T14:05:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kvnaraya | null | kvnaraya/DialoGPT-small-michael | 0 | null | transformers | 37,124 | ---
tags:
- conversational
---
# Michael Scott DialoGPT Model |
dbmdz/flair-hipe-2022-ajmc-en-64k | 1bb4b88f059731561a58989b7fc3085233a0ea68 | 2022-04-27T14:03:04.000Z | [
"pytorch",
"license:mit"
] | null | false | dbmdz | null | dbmdz/flair-hipe-2022-ajmc-en-64k | 0 | null | null | 37,125 | ---
license: mit
---
|
kvnaraya/DialoGPT-small-jim | 717cf5cebce75b47240fdaaf0a2546112ab43a00 | 2022-04-27T15:22:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | kvnaraya | null | kvnaraya/DialoGPT-small-jim | 0 | null | transformers | 37,126 | Entry not found |
stevems1/bert-base-uncased-French123 | 7ec127167519e7c14df284f32d0f887f4408f373 | 2022-04-27T14:55:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | stevems1 | null | stevems1/bert-base-uncased-French123 | 0 | null | transformers | 37,127 | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-French123
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-French123
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
obokkkk/mbart-large-cc25-finetuned-en-to-ko2 | 913fd60d91c2f4c1820ca069d8dd4bf5fcd35b2d | 2022-04-27T17:49:20.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | obokkkk | null | obokkkk/mbart-large-cc25-finetuned-en-to-ko2 | 0 | null | transformers | 37,128 | ---
tags:
- generated_from_trainer
model-index:
- name: mbart-large-cc25-finetuned-en-to-ko2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-finetuned-en-to-ko2
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
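The total train batch size reported above is the per-device batch size multiplied by the gradient accumulation steps (assuming a single device, as the card implies):

```python
train_batch_size = 16
gradient_accumulation_steps = 128
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 2048, matching the value reported above
```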
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
obokkkk/wav2vec2-base-960h-finetuned_common_voice2 | 53d5b71135ac105a0217e7388fce3c21feb5b028 | 2022-04-27T18:42:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | obokkkk | null | obokkkk/wav2vec2-base-960h-finetuned_common_voice2 | 0 | null | transformers | 37,129 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-960h-finetuned_common_voice2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-960h-finetuned_common_voice2
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dbmdz/flair-hipe-2022-ajmc-fr-64k | 18d8999cf6fc9ecddfc1bc7add36590b982a954e | 2022-04-27T18:48:55.000Z | [
"pytorch",
"license:mit"
] | null | false | dbmdz | null | dbmdz/flair-hipe-2022-ajmc-fr-64k | 0 | null | null | 37,130 | ---
license: mit
---
|
SerdarHelli/Brain-MRI-GAN | fb54244836038e5489116bbd56e282b10fd320d8 | 2022-04-27T20:32:07.000Z | [
"brainMRI",
"GAN",
"medicalimaging",
"pytorch"
] | null | false | SerdarHelli | null | SerdarHelli/Brain-MRI-GAN | 0 | null | null | 37,131 | ---
tags:
- brainMRI
- GAN
- medicalimaging
- pytorch
metrics:
- fid50k
---
The model's kernels and other source code come from https://github.com/NVlabs/stylegan3 |
zasheza/wav2vec2-base-timit-demo-colab | ef7b7b356e7440dc45f4a4e1fc05a78e281e13ad | 2022-04-30T00:09:46.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | zasheza | null | zasheza/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 37,132 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
jiobiala24/wav2vec2-large-cv | 7c73042f93e1dc07f73c3bd435d0c1f8d3dd5744 | 2022-04-30T01:32:35.000Z | [
"pytorch"
] | null | false | jiobiala24 | null | jiobiala24/wav2vec2-large-cv | 0 | null | null | 37,133 | |
inhee/opus-mt-ko-en-finetuned-ko-to-en | 693c48f8c436edcc845ef8f92720608a2a2d2b2c | 2022-04-28T04:20:08.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | inhee | null | inhee/opus-mt-ko-en-finetuned-ko-to-en | 0 | null | transformers | 37,134 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ko-en-finetuned-ko-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned-ko-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2159
- Bleu: 43.3502
- Gen Len: 3.5474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
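One detail worth noting in the list above: `total_train_batch_size` is not an independent setting, it is the product of the per-device batch size and the gradient accumulation steps. A quick sanity check in plain Python (values copied from the list above):

```python
# Gradient accumulation sums gradients over several forward passes before
# each optimizer update, so the effective batch size is the product below.
train_batch_size = 16
gradient_accumulation_steps = 128
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 2048
```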
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 0.96 | 20 | 1.3139 | 37.8375 | 3.5612 |
| No log | 1.96 | 40 | 1.2849 | 40.9049 | 3.5566 |
| No log | 2.96 | 60 | 1.2653 | 40.3399 | 3.565 |
| No log | 3.96 | 80 | 1.2516 | 42.7497 | 3.5563 |
| No log | 4.96 | 100 | 1.2395 | 42.5064 | 3.5478 |
| No log | 5.96 | 120 | 1.2311 | 43.2749 | 3.5477 |
| No log | 6.96 | 140 | 1.2232 | 42.0691 | 3.5472 |
| No log | 7.96 | 160 | 1.2193 | 43.5797 | 3.5525 |
| No log | 8.96 | 180 | 1.2169 | 43.2313 | 3.547 |
| No log | 9.96 | 200 | 1.2159 | 43.3502 | 3.5474 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
juierror/thai-news-summarization | 93287aed749b3d54f88476b215586c178b01cdaf | 2022-05-06T14:39:25.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | juierror | null | juierror/thai-news-summarization | 0 | null | transformers | 37,135 | ---
license: mit
---
# How to use
```python
import torch
from transformers import MT5Tokenizer, MT5ForConditionalGeneration
tokenizer = MT5Tokenizer.from_pretrained('juierror/thai-news-summarization')
model = MT5ForConditionalGeneration.from_pretrained('juierror/thai-news-summarization')
text = "some news with head line"
tokenized_text = tokenizer(text, truncation=True, padding=True, return_tensors='pt')
source_ids = tokenized_text['input_ids'].to("cpu", dtype = torch.long)
source_mask = tokenized_text['attention_mask'].to("cpu", dtype = torch.long)
generated_ids = model.generate(
input_ids = source_ids,
attention_mask = source_mask,
max_length=512,
num_beams=5,
repetition_penalty=1,
length_penalty=1,
early_stopping=True,
no_repeat_ngram_size=2
)
pred = tokenizer.decode(generated_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
``` |
pfactorial/checkpoint-50-epoch-2 | 00ce62284063e75156ed49757e8f0c3c2b4bcabe | 2022-04-29T13:04:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | pfactorial | null | pfactorial/checkpoint-50-epoch-2 | 0 | null | transformers | 37,136 | --- |-
Model card metadata documentation and specifications moved to https://github.com/huggingface/huggingface_hub/
The canonical documentation about model cards is now located at https://huggingface.co/docs/hub/model-repos and you can open a PR to improve the docs in the same repository https://github.com/huggingface/huggingface_hub/tree/main/docs/hub
You can also find a spec of the metadata at https://github.com/huggingface/huggingface_hub/blob/main/README.md.
|
moma1820/xxmlr | 40193b918b89ebe11a2dccf471a776d3427e86dc | 2022-04-28T17:45:46.000Z | [
"pytorch",
"xlm-roberta-xl",
"feature-extraction",
"transformers"
] | feature-extraction | false | moma1820 | null | moma1820/xxmlr | 0 | null | transformers | 37,137 | Entry not found |
it5/it5-efficient-small-el32-formal-to-informal | 32f9b4ae22aca13119dfb7a947042d7c3e718712 | 2022-04-29T14:19:40.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:yahoo/xformal_it",
"arxiv:2203.03759",
"arxiv:2109.10686",
"transformers",
"italian",
"sequence-to-sequence",
"style-transfer",
"efficient",
"formality-style-transfer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | it5 | null | it5/it5-efficient-small-el32-formal-to-informal | 0 | null | transformers | 37,138 | ---
language:
- it
license: apache-2.0
tags:
- italian
- sequence-to-sequence
- style-transfer
- efficient
- formality-style-transfer
datasets:
- yahoo/xformal_it
widget:
- text: "Questa performance è a dir poco spiacevole."
- text: "In attesa di un Suo cortese riscontro, Le auguriamo un piacevole proseguimento di giornata."
- text: "Questa visione mi procura una goduria indescrivibile."
- text: "qualora ciò possa interessarti, ti pregherei di contattarmi."
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-formal-to-informal
results:
- task:
type: formality-style-transfer
name: "Formal-to-informal Style Transfer"
dataset:
type: xformal_it
name: "XFORMAL (Italian Subset)"
metrics:
- type: rouge1
value: 0.459
name: "Avg. Test Rouge1"
- type: rouge2
value: 0.244
name: "Avg. Test Rouge2"
- type: rougeL
value: 0.435
name: "Avg. Test RougeL"
- type: bertscore
value: 0.739
name: "Avg. Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
---
# IT5 Cased Small Efficient EL32 for Formal-to-informal Style Transfer 🤗
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32)
model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performances while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performances over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
f2i = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-formal-to-informal')
f2i("Vi ringrazio infinitamente per vostra disponibilità")
>>> [{"generated_text": "e grazie per la vostra disponibilità!"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-formal-to-informal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-formal-to-informal")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3 |
it5/it5-efficient-small-el32-news-summarization | b2bbc59818a75fdabd0fd41368b956241631965d | 2022-04-29T15:18:38.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:ARTeLab/fanpage",
"dataset:ARTeLab/ilpost",
"arxiv:2203.03759",
"arxiv:2109.10686",
"transformers",
"italian",
"sequence-to-sequence",
"fanpage",
"efficient",
"ilpost",
"summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | it5 | null | it5/it5-efficient-small-el32-news-summarization | 0 | 1 | transformers | 37,139 | ---
language:
- it
license: apache-2.0
datasets:
- ARTeLab/fanpage
- ARTeLab/ilpost
tags:
- italian
- sequence-to-sequence
- fanpage
- efficient
- ilpost
- summarization
widget:
- text: "Non lo vuole sposare. Eโ quanto emerge allโinterno dellโultima intervista di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo fidanzato, rimanda lโidea del matrimonio per qualche anno ancora. La soubrette, che รจ stata recentemente protagonista di una dedica di Supermario, non ha ancora intenzione di accasarsi perchรฉ รจ sicura che per mettersi la fede al dito ci sia ancora tempo. Nonostante il suo Mario sia uno degli sportivi piรน desiderati al mondo, lโex protagonista del Grande Fratello non ha alcuna intenzione di cedere seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo lโultima bravata di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, perรฒ, si รจ sbagliato. A mettere le cose bene in chiaro รจ la Fico che, intervistata dallโemittente radiofonica Rtl 102.5, dice: ร presto per sposarsi, siamo ancora molto giovani. ร giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perchรฉ no, ci si puรฒ anche pensare. Quando si รจ giovani capita di fare qualche pazzia, quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita privata quando poi dovrebbero interessarsi di piรน di quello che fa sul campo. Lui non fa le cose con cattiveria, ma quando si รจ giovani si fanno determinate cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi puntati addosso: piรน per la sua vita privata che come giocatore. Per me puรฒ anche andare in uno strip club, se non fa niente di male, con gli amici, perรฒ devo dire che alla fine torna sempre da me, sono la sua preferita."
- text: "Valerio รจ giovanissimo ma giร una star. Fuori dallโAriston ragazzine e meno ragazzine passano ore anche sotto la pioggia per vederlo. Lui รจ forte del suo talento e sicuro. Partecipa in gara tra i โbigโ di diritto, per essere arrivato in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu รจ stato eliminato. Ma non รจ detta l'ultima parola: il duetto di questa sera con Alessandra Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa รจ successo alla giuria visto che sei stato eliminato anche se lโesibizione era perfetta? Nn lo so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento ma ho cantato bene. Non sono passato e stasera ci sarร il ballottaggioโฆ Quali sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara a salire sul palco di amici. A Sanremo ci devi arrivareโฆ ho fatto piรน di sessanta serate nel tour estivo, poi promozione del secondo disco. Una bella palestra. Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico trasmette. Lโumiltร ? Prima di tutto. Sennรฒ non sarei qui."
- text: "Lโazienda statunitense Broadcom, uno dei piรน grandi produttori di semiconduttori al mondo, ha presentato unโofferta per acquisire Qualcomm, altra grande societร degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130 miliardi se si comprendono 25 miliardi di debiti netti) . Se lโoperazione dovesse essere approvata, sarebbe una delle piรน grandi acquisizioni di sempre nella storia del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la sua proposta di acquisto e, secondo i media statunitensi, avrebbe giร preso contatti con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque opporsi allโacquisizione perchรฉ il prezzo offerto รจ di poco superiore a quello dellโattuale valore delle azioni dellโazienda. Ci potrebbero essere inoltre complicazioni sul piano dellโantitrust da valutare, prima di unโeventuale acquisizione."
- text: "Dal 31 maggio รจ infine partita la piattaforma ITsART, a piรน di un anno da quando โ durante il primo lockdown โ il ministro della Cultura Dario Franceschini ne aveva parlato come di ยซuna sorta di Netflix della culturaยป, pensata per ยซoffrire a tutto il mondo la cultura italiana a pagamentoยป. ร presto per dare giudizi definitivi sulla piattaforma, e di certo sarร difficile farlo anche piรน avanti senza numeri precisi. Al momento, lโunica cosa che si puรฒ fare รจ guardare comโรจ fatto il sito, contare quanti contenuti ci sono (circa 700 โtitoliโ, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietร . Intanto, una cosa notata da piรน parti รจ che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente."
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-news-summarization
results:
- task:
type: news-summarization
name: "News Summarization"
dataset:
type: newssum-it
name: "NewsSum-IT"
metrics:
- type: rouge1
value: 0.354
name: "Test Rouge1"
- type: rouge2
value: 0.172
name: "Test Rouge2"
- type: rougeL
value: 0.278
name: "Test RougeL"
- type: bertscore
value: 0.410
name: "Avg. Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Cased Small Efficient EL32 for News Summarization ✂️🗞️ 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performances while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performances over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
newsum = pipeline("summarization", model='it5/it5-efficient-small-el32-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l'unica cosa che si può fare è guardare com'è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-news-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
|
it5/it5-efficient-small-el32-question-generation | 54a1566817defee9c8cab4617ef5a0125a82bd0d | 2022-04-29T14:34:01.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:squad_it",
"arxiv:2203.03759",
"arxiv:2109.10686",
"transformers",
"Italian",
"efficient",
"sequence-to-sequence",
"question-generation",
"squad_it",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | it5 | null | it5/it5-efficient-small-el32-question-generation | 0 | null | transformers | 37,140 | ---
language:
- it
license: apache-2.0
datasets:
- squad_it
tags:
- Italian
- efficient
- sequence-to-sequence
- question-generation
- squad_it
- text2text-generation
widget:
- text: "Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto piรน autorevole di allora รจ venuto dalla facoltร di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causรฒ una \"grande pestilenza nell' aria\". Questa relazione รจ diventata la prima e piรน diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria piรน accettata. Oggi, questo รจ conosciuto come la teoria di Miasma. La parola \"peste\" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che รจ diventato il termine medico. Risposta: re di Francia"
- text: "Il 14 aprile 2011, ABC ha annullato le lunghe opere di sapone All My Children e One Life to Live dopo 41 e 43 anni in onda, rispettivamente (in seguito al contraccolpo dei tifosi, ABC ha venduto i diritti ad entrambi gli spettacoli a Prospect Park, che alla fine ha rilanciato i saponi su Hulu per un' ulteriore stagione nel 2013 e con entrambe le societร che si citano in giudizio per accuse di interferenza con il processo di rilancio degli spettacoli, mancato pagamento delle tasse di licenza. Il talk/lifestyle show che ha sostituito One Life to Live, The Revolution, non รจ riuscito a generare giudizi soddisfacenti ed รจ stato a sua volta annullato dopo soli sette mesi. La stagione 2011-12 ha visto l' ABC cadere al quarto posto nel 18-49 demografico nonostante rinnovando una manciata di nuovi spettacoli (compresi i drammi matricole Scandal, Revenge e Once Upon a Time) per la seconda stagione. Risposta: Hulu"
- text: "L' American Broadcasting Company (ABC) (stlized nel suo logo come abc dal 1957) รจ una rete televisiva commerciale americana trasmissione televisiva che รจ di proprietร del Disney-ABC Television Group, una controllata della divisione Disney Media Networks di The Walt Disney Company. La rete fa parte delle grandi reti televisive Big Three. La rete ha sede a Columbus Avenue e West 66th Street a Manhattan, con ulteriori uffici e stabilimenti di produzione a New York City, Los Angeles e Burbank, California. Risposta: Manhattan"
- text: "La disobbedienza civile non rivoluzionaria รจ una semplice disobbedienza delle leggi sulla base del fatto che sono giudicate \"sbagliate\" da una coscienza individuale, o come parte di uno sforzo per rendere alcune leggi inefficaci, per causarne l' abrogazione, o per esercitare pressioni per ottenere i propri desideri politici su qualche altra questione. La disobbedienza civile rivoluzionaria รจ piรน che altro un tentativo attivo di rovesciare un governo (o di cambiare le tradizioni culturali, i costumi sociali, le credenze religiose, ecc. La rivoluzione non deve necessariamente essere politica, cioรจ \"rivoluzione culturale\", implica semplicemente un cambiamento radicale e diffuso in una sezione del tessuto sociale). Gli atti di Gandhi sono stati descritti come disobbedienza civile rivoluzionaria. ร stato affermato che gli ungheresi sotto Ferenc Deรกk hanno diretto una disobbedienza civile rivoluzionaria contro il governo austriaco. Thoreau ha anche scritto di disobbedienza civile realizzando \"rivoluzione pacifica\". Howard Zinn, Harvey Wheeler e altri hanno identificato il diritto sposato nella Dichiarazione d' Indipendenza di \"alterare o abolire\" un governo ingiusto come principio di disobbedienza civile. Risposta: Ferenc Deรกk"
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-question-generation
results:
- task:
type: question-generation
name: "Question generation"
dataset:
type: squad_it
name: "SQuAD-IT"
metrics:
- type: rouge1
value: 0.382
name: "Test Rouge1"
- type: rouge2
value: 0.201
name: "Test Rouge2"
- type: rougeL
value: 0.357
name: "Test RougeL"
- type: bertscore
value: 0.517
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
---
# IT5 Cased Small Efficient EL32 for Question Generation 💭 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on question generation on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performances while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performances over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
qg = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-question-generation')
qg("Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una \"grande pestilenza nell' aria\". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola \"peste\" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia")
>>> [{"generated_text": "Per chi è stato redatto il referto medico?"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-question-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-question-generation")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
it5/it5-efficient-small-el32-ilgiornale-to-repubblica | d1daa4a17f3c89ca6119b66a969126051cff5847 | 2022-04-29T14:43:32.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:gsarti/change_it",
"arxiv:2203.03759",
"arxiv:2109.10686",
"transformers",
"italian",
"sequence-to-sequence",
"newspaper",
"efficient",
"ilgiornale",
"repubblica",
"style-transfer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | it5 | null | it5/it5-efficient-small-el32-ilgiornale-to-repubblica | 0 | null | transformers | 37,141 | ---
language:
- it
license: apache-2.0
datasets:
- gsarti/change_it
tags:
- italian
- sequence-to-sequence
- newspaper
- efficient
- ilgiornale
- repubblica
- style-transfer
widget:
- text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarร formalizzata oggi dal dipartimento di stato e sarร accompagnata da nuove e piรน severe sanzioni. 'Il livello piรน alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilitร dell'attuale crisi sull'amministrazione Obama. Poi si รจ scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento รจ all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord รจ giร pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson รจ solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perchรฉ gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo รจ un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. 
Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirร a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che รจ vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."
- text: "ROMA - Una nuova droga killer รจ stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto piรน economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle puรฒ provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto piรน devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una cittร del centro Italia: รจ stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina รจ quasi 'acqua fresca', anzi, proprio per la sua economicitร , in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attivitร investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltร di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicitร รจ molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. 
La nuova sostanza verrร ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."
- text: "Fragile come il burro. Il nostro territorio รจ precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari allโ82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta รจ stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata lโarea dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni allโanno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima รจ la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, lโUmbria, la Valle dโAosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico cโรจ lโazione dellโuomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."
- text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrร nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si รจ trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perchรฉ dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi รจ stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perchรฉ rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , Marรญa Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."
metrics:
- rouge
- bertscore
- headline-headline-consistency-classifier
- headline-article-consistency-classifier
model-index:
- name: it5-efficient-small-el32-ilgiornale-to-repubblica
results:
- task:
type: headline-style-transfer-ilgiornale-to-repubblica
name: "Headline style transfer (Il Giornale to Repubblica)"
dataset:
type: gsarti/change_it
name: "CHANGE-IT"
metrics:
- type: rouge1
value: 0.286
name: "Test Rouge1"
- type: rouge2
value: 0.099
name: "Test Rouge2"
- type: rougeL
value: 0.253
name: "Test RougeL"
- type: bertscore
value: 0.422
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
- type: headline-headline-consistency-classifier
value: 0.836
name: "Test Headline-Headline Consistency Accuracy"
- type: headline-article-consistency-classifier
value: 0.763
name: "Test Headline-Article Consistency Accuracy"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Cased Small Efficient EL32 for News Headline Style Transfer (Il Giornale to Repubblica) ๐๏ธโก๏ธ๐๏ธ ๐ฎ๐น
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news headline style transfer in the Il Giornale to Repubblica direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) that improves performance while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
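For intuition on the Rouge values reported in the metadata above, ROUGE-1 is a unigram-overlap F-measure between the generated and the reference headline. The following is a toy sketch, not the evaluation script used for the paper, and the headline pair in the example is made up:

```python
from collections import Counter

def rouge1_f1(hypothesis: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between two whitespace-tokenized strings."""
    hyp = Counter(hypothesis.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((hyp & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical headline pair, for illustration only
print(rouge1_f1("polizia sequestra nuova droga killer",
                "la polizia sequestra una nuova droga killer"))
```

Real evaluations also apply language-specific preprocessing and report ROUGE-2 and ROUGE-L, but the overlap idea is the same.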
## Using the model
The model is trained to generate a headline in the style of Repubblica from the full body of an article written in the style of Il Giornale. Model checkpoints are available for use in TensorFlow, PyTorch, and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
g2r = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-ilgiornale-to-repubblica')
g2r("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrร nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si รจ trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perchรฉ dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi รจ stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perchรฉ rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , Marรญa Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-ilgiornale-to-repubblica")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-ilgiornale-to-repubblica")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
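For reference, a single update of the Adam optimizer configured above (betas=(0.9,0.999), epsilon=1e-08, learning rate 3e-4) can be sketched for one scalar parameter. This is a toy illustration of the update rule, not the Trainer's actual optimizer loop:

```python
def adam_step(param, grad, m, v, step,
              lr=3e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter (step counts from 1)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment EMA
    m_hat = m / (1 - beta1 ** step)             # bias correction
    v_hat = v / (1 - beta2 ** step)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v
```

With a unit gradient, the bias-corrected first step moves the parameter by almost exactly `lr`.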
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
stevems1/bert-base-uncased-ShreeGanesh | 08414391b08667dd05043fbb66ab169b5deb483e | 2022-04-28T15:16:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | stevems1 | null | stevems1/bert-base-uncased-ShreeGanesh | 0 | null | transformers | 37,142 | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-ShreeGanesh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-ShreeGanesh
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dbmdz/flair-hipe-2022-ajmc-de | 1d2a58da0bf07de9d2862981f47cc298fef0b6f4 | 2022-04-28T14:28:45.000Z | [
"pytorch",
"license:mit"
] | null | false | dbmdz | null | dbmdz/flair-hipe-2022-ajmc-de | 0 | null | null | 37,143 | ---
license: mit
---
|
princeton-nlp/efficient_mlm_m0.30 | e3ad93a3a8b53a50e38ac007282e865d5162c0cb | 2022-04-28T18:57:39.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.08005",
"transformers",
"autotrain_compatible"
] | fill-mask | false | princeton-nlp | null | princeton-nlp/efficient_mlm_m0.30 | 0 | null | transformers | 37,144 | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre-layer normalization, which is not supported by HuggingFace Transformers. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
huggingtweets/inversebrah | 2800ff0ae48c29bc3c241bcdfdcd7f1b5baf273a | 2022-04-28T20:06:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/inversebrah | 0 | null | transformers | 37,145 | ---
language: en
thumbnail: http://www.huggingtweets.com/inversebrah/1651176371994/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1497137019880804355/71KiqAN1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">smolting (wassie, verse)</div>
<div style="text-align: center; font-size: 14px;">@inversebrah</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from smolting (wassie, verse).
| Data | smolting (wassie, verse) |
| --- | --- |
| Tweets downloaded | 3229 |
| Retweets | 1700 |
| Short tweets | 816 |
| Tweets kept | 713 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/540r5fzt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @inversebrah's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2oz9x9co) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2oz9x9co/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/inversebrah')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
phosseini/atomic-bert-large-full | 829c3607d790efd4b4270ae9f4fa410c54b3bcd2 | 2022-04-28T21:56:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | phosseini | null | phosseini/atomic-bert-large-full | 0 | null | transformers | 37,146 | Entry not found |
huggingtweets/usmnt | bb1bffc21c43f292853a572b6ff22865c2676667 | 2022-05-04T16:09:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/usmnt | 0 | null | transformers | 37,147 | ---
language: en
thumbnail: http://www.huggingtweets.com/usmnt/1651680543545/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1410587808666955776/mWkKWw1U_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">USMNT</div>
<div style="text-align: center; font-size: 14px;">@usmnt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from USMNT.
| Data | USMNT |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 600 |
| Short tweets | 215 |
| Tweets kept | 2435 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/22ipg0a6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @usmnt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2nbn1lat) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2nbn1lat/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/usmnt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mT0/mt0_large_translated_t0_ckpt_1012500 | 919b87869b58b10056ca1d2d98a6d6e7aed81160 | 2022-04-29T05:17:12.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mT0 | null | mT0/mt0_large_translated_t0_ckpt_1012500 | 0 | null | transformers | 37,148 | Entry not found |
norefly/opus-mt-ko-en-finetuned-ko-to-en3 | bb42420bb23f483bdbc632bef6f11173b2e7ef2c | 2022-04-29T11:48:26.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | norefly | null | norefly/opus-mt-ko-en-finetuned-ko-to-en3 | 0 | null | transformers | 37,149 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ko-en-finetuned-ko-to-en3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned-ko-to-en3
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1864
- Bleu: 0.7037
- Gen Len: 11.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 0.99 | 119 | 4.4541 | 0.0 | 5.0 |
| No log | 1.99 | 238 | 2.4214 | 0.3414 | 16.0 |
| No log | 2.99 | 357 | 2.2158 | 0.3212 | 15.0 |
| No log | 3.99 | 476 | 2.1737 | 0.3283 | 12.0 |
| 3.2958 | 4.99 | 595 | 2.1864 | 0.7037 | 11.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mT0/mt0_large_translated_t0_ckpt_1025000 | 21b2dcd4898fe36ba301b351ac6f8730ec2f1a4f | 2022-04-29T05:48:55.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mT0 | null | mT0/mt0_large_translated_t0_ckpt_1025000 | 0 | null | transformers | 37,150 | Entry not found |
momo/MOTOD-large | 9397c29aa60f660267f920040cd5d61d6160b636 | 2022-04-29T07:06:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | momo | null | momo/MOTOD-large | 0 | null | transformers | 37,151 | Entry not found |
inhee/m2m100_418M-finetuned-ko-to-en3 | e8fa94c8a6e884d7a78e66ad718472eacf3e8ea9 | 2022-04-29T14:42:44.000Z | [
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | inhee | null | inhee/m2m100_418M-finetuned-ko-to-en3 | 0 | null | transformers | 37,152 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M-finetuned-ko-to-en3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-ko-to-en3
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5175
- Bleu: 75.215
- Gen Len: 9.726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
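Note that the `total_train_batch_size` of 1024 above is simply `train_batch_size × gradient_accumulation_steps` (4 × 256 on a single device). The accumulation trick itself can be sketched in plain Python: summing un-normalized micro-batch gradients and normalizing once at the end reproduces the full-batch mean gradient.

```python
def mean_grad(grads):
    """Full-batch mean gradient (what a single large batch would compute)."""
    return sum(grads) / len(grads)

def accumulated_mean_grad(grads, micro_batch_size):
    """Accumulate per-micro-batch gradient sums, normalize once at the end."""
    total, count = 0.0, 0
    for i in range(0, len(grads), micro_batch_size):
        micro = grads[i:i + micro_batch_size]
        total += sum(micro)   # accumulate without normalizing
        count += len(micro)
    return total / count      # single normalization = full-batch mean

grads = [0.5, -1.0, 2.0, 0.25, -0.75, 1.5, 0.0, 3.0]  # toy per-example gradients
print(accumulated_mean_grad(grads, 2), mean_grad(grads))
```

This is why accumulation lets a 4-example-per-step run behave like a 1024-example batch, at the cost of 256 forward/backward passes per optimizer step.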
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 0.99 | 103 | 2.7756 | 8.9955 | 9.425 |
| No log | 1.99 | 206 | 0.7248 | 63.7645 | 9.6421 |
| No log | 2.99 | 309 | 0.5175 | 75.215 | 9.726 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/cokedupoptions-greg16676935420-parikpatelcfa | 3873f7ade7f6743c9506cd5a4798ef77e9cd7f68 | 2022-04-29T15:09:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/cokedupoptions-greg16676935420-parikpatelcfa | 0 | null | transformers | 37,153 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1514648481281056772/ACunKh0I_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484924573032148993/qdB7hbSU_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1341030286386192386/TzEiVCaJ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI CYBORG ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">greg & John W. Rich (Fake Tech Exec) & Dr. Parik Patel, BA, CFA, ACCA Esq. (drpatel.eth)</div>
<div style="text-align: center; font-size: 14px;">@cokedupoptions-greg16676935420-parikpatelcfa</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from greg & John W. Rich (Fake Tech Exec) & Dr. Parik Patel, BA, CFA, ACCA Esq. (drpatel.eth).
| Data | greg | John W. Rich (Fake Tech Exec) | Dr. Parik Patel, BA, CFA, ACCA Esq. (drpatel.eth) |
| --- | --- | --- | --- |
| Tweets downloaded | 3247 | 3247 | 3250 |
| Retweets | 27 | 202 | 22 |
| Short tweets | 664 | 331 | 719 |
| Tweets kept | 2556 | 2714 | 2509 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/snhk0760/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cokedupoptions-greg16676935420-parikpatelcfa's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/iresidwo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/iresidwo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cokedupoptions-greg16676935420-parikpatelcfa')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
dbmdz/flair-hipe-2022-ajmc-all-64k | 002b8dd69ffbc578fd53c53972fb2e9a511a58c0 | 2022-04-29T08:54:47.000Z | [
"pytorch",
"license:mit"
] | null | false | dbmdz | null | dbmdz/flair-hipe-2022-ajmc-all-64k | 0 | null | null | 37,154 | ---
license: mit
---
|
usama4512/out | 329a949efd217b02e972dffc2710e8033a414cce | 2022-04-29T09:37:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | usama4512 | null | usama4512/out | 0 | null | transformers | 37,155 | Entry not found |
oceanpty/mbert-squad | 66d9eef1737fcb8e2aa2f424087d18f2444eeb09 | 2022-04-29T13:27:37.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | oceanpty | null | oceanpty/mbert-squad | 0 | null | transformers | 37,156 | Entry not found |
hassnain/wav2vec2-base-timit-demo-colab | 5ee2e73e356cdbbd6fcb66b0c45097cb80666bf7 | 2022-04-30T20:20:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 37,157 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
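The linear scheduler with the 1000 warmup steps above ramps the learning rate from 0 up to 1e-4, then decays it linearly back to 0 over the remaining steps. A sketch of that shape follows (it mirrors `get_linear_schedule_with_warmup` from `transformers`; the total step count here is a hypothetical value, not taken from this run):

```python
def linear_schedule_lr(step, base_lr=1e-4, warmup_steps=1000, total_steps=10000):
    """Linear warmup to base_lr, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps          # warmup ramp
    remaining = (total_steps - step) / (total_steps - warmup_steps)
    return base_lr * max(0.0, remaining)              # linear decay, floored at 0
```

Halfway through warmup the learning rate is half of `base_lr`; it peaks exactly at `warmup_steps` and reaches 0 at `total_steps`.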
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
lsb/wav2vec2-large-pem123-960h-la | a856e0c1fad6205d6e1822906a4d82ec167b6a29 | 2022-05-01T16:12:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lsb | null | lsb/wav2vec2-large-pem123-960h-la | 0 | null | transformers | 37,158 | Entry not found |
sameearif88/wav2vec2-base-timit-demo-colab1 | a715c2e18c34346f4f9f210a195a72145d0b3443 | 2022-05-01T06:15:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sameearif88 | null | sameearif88/wav2vec2-base-timit-demo-colab1 | 0 | null | transformers | 37,159 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7411
- Wer: 0.5600
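For context, the WER above is the word-level Levenshtein (edit) distance between hypothesis and reference transcripts, divided by the reference length. A minimal sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution or match
        prev = curr
    return prev[-1] / len(ref)

print(wer("the quick brown fox", "the quick fox"))  # → 0.25
```

A WER of 0.56 therefore means roughly 56 word-level edits per 100 reference words.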
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0773 | 13.89 | 500 | 3.1073 | 1.0 |
| 1.2444 | 27.78 | 1000 | 0.7411 | 0.5600 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
inhee/m2m100_418M-finetuned-ko-to-en4 | 2036ea606111fda03c96c58319b31f27e4e5d4c5 | 2022-04-30T12:30:56.000Z | [
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | inhee | null | inhee/m2m100_418M-finetuned-ko-to-en4 | 0 | null | transformers | 37,160 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M-finetuned-ko-to-en4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-ko-to-en4
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4598
- Bleu: 85.3745
- Gen Len: 9.7522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
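Note that `total_train_batch_size` above is not set directly: it is the product of the per-device batch size and the gradient-accumulation steps. A minimal check:

```python
# The effective batch size is derived from the two settings above.
train_batch_size = 4
gradient_accumulation_steps = 256
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # -> 1024, matching the value listed above
```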
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 105 | 1.8667 | 24.5072 | 9.523 |
| No log | 2.0 | 210 | 0.8581 | 57.9973 | 9.2779 |
| No log | 3.0 | 315 | 0.6587 | 69.4588 | 9.7399 |
| No log | 4.0 | 420 | 0.5762 | 74.5636 | 9.6775 |
| 1.4539 | 5.0 | 525 | 0.5254 | 78.8897 | 9.6946 |
| 1.4539 | 6.0 | 630 | 0.4952 | 81.0054 | 9.7073 |
| 1.4539 | 7.0 | 735 | 0.4773 | 83.0792 | 9.7233 |
| 1.4539 | 8.0 | 840 | 0.4669 | 84.4309 | 9.7429 |
| 1.4539 | 9.0 | 945 | 0.4616 | 85.0965 | 9.749 |
| 0.144 | 10.0 | 1050 | 0.4598 | 85.3745 | 9.7522 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
moma1820/new_sen_xlmr | d5595e5dbfbfb9b2dc34ae9308c3f45154b31915 | 2022-04-29T16:43:02.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | moma1820 | null | moma1820/new_sen_xlmr | 0 | null | transformers | 37,161 | Entry not found |
mkarthik/distilbert-base-uncased-finetuned-product | a1f60af838df87314b5dc444e2728f90464db0dc | 2022-05-02T04:28:39.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mkarthik | null | mkarthik/distilbert-base-uncased-finetuned-product | 0 | null | transformers | 37,162 | Entry not found |
snowood1/ConfliBERT-scr-cased | ee6f9a95eddb30b375d79855bbe6a75262973a84 | 2022-05-11T16:53:30.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] | fill-mask | false | snowood1 | null | snowood1/ConfliBERT-scr-cased | 0 | null | transformers | 37,163 | ---
license: gpl-3.0
---
ConfliBERT is a pre-trained language model for political conflict and violence.
We provide four versions of ConfliBERT:
<ol>
<li>ConfliBERT-scr-uncased: Pretraining from scratch with our own uncased vocabulary (preferred)</li>
<li>ConfliBERT-scr-cased: Pretraining from scratch with our own cased vocabulary</li>
<li>ConfliBERT-cont-uncased: Continual pretraining with original BERT's uncased vocabulary</li>
<li>ConfliBERT-cont-cased: Continual pretraining with original BERT's cased vocabulary</li>
</ol>
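Since this checkpoint exposes a fill-mask head, it can be tried with the standard `transformers` pipeline. This is a sketch: the example sentence is purely illustrative, and the checkpoint is downloaded on first use.

```python
from transformers import pipeline

# Load this checkpoint as a masked-language-model pipeline.
fill = pipeline("fill-mask", model="snowood1/ConfliBERT-scr-cased")

# Illustrative prompt only; [MASK] is BERT's mask token.
for pred in fill("The rebels attacked the [MASK] on Friday."):
    print(pred["token_str"], round(pred["score"], 3))
```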
See more details in https://github.com/eventdata/ConfliBERT/ |
snowood1/ConfliBERT-cont-cased | 403596ab1f479c6d2a226015904dc1e65ce2df02 | 2022-05-11T16:52:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] | fill-mask | false | snowood1 | null | snowood1/ConfliBERT-cont-cased | 0 | null | transformers | 37,164 | ---
license: gpl-3.0
---
ConfliBERT is a pre-trained language model for political conflict and violence.
We provide four versions of ConfliBERT:
<ol>
<li>ConfliBERT-scr-uncased: Pretraining from scratch with our own uncased vocabulary (preferred)</li>
<li>ConfliBERT-scr-cased: Pretraining from scratch with our own cased vocabulary</li>
<li>ConfliBERT-cont-uncased: Continual pretraining with original BERT's uncased vocabulary</li>
<li>ConfliBERT-cont-cased: Continual pretraining with original BERT's cased vocabulary</li>
</ol>
See more details in https://github.com/eventdata/ConfliBERT/
|
tonydiana1/distilgpt2-finetuned-wikitext2 | f5dd58ce4073266f2c7fd4a05f5b2b01d5956f8f | 2022-04-30T01:00:42.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | tonydiana1 | null | tonydiana1/distilgpt2-finetuned-wikitext2 | 0 | null | transformers | 37,165 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.76 | 1.0 | 2334 | 3.6658 |
| 3.6526 | 2.0 | 4668 | 3.6468 |
| 3.6004 | 3.0 | 7002 | 3.6425 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mdroth/dummy-model_R91m | e217793081581bd6a21eb0737b7ea854be6084d4 | 2022-04-30T01:15:10.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mdroth | null | mdroth/dummy-model_R91m | 0 | null | transformers | 37,166 | Entry not found |
phosseini/atomic-roberta-large-full | b91263cab6c6c089e6a512e9ed297e135de2d07c | 2022-04-30T06:12:27.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | phosseini | null | phosseini/atomic-roberta-large-full | 0 | null | transformers | 37,167 | Entry not found |
tmabraham/selfie2anime_cyclegan | f8663c6e9639be70f3c8755856320e96ab94e2a5 | 2022-04-30T09:40:05.000Z | [
"pytorch"
] | null | false | tmabraham | null | tmabraham/selfie2anime_cyclegan | 0 | null | null | 37,168 | Entry not found |
rankarusu/AnonI | d6c64e6ce2b959f0d44dafb7e38d03ade2e600bb | 2022-05-15T11:21:58.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | rankarusu | null | rankarusu/AnonI | 0 | null | transformers | 37,169 | Entry not found |
moaiz237/wav2vec2-base-timit-moaiz_exp1 | 8bf322656202e7156424fbeccc3a2fd32ecb50d1 | 2022-04-30T15:13:12.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | moaiz237 | null | moaiz237/wav2vec2-base-timit-moaiz_exp1 | 0 | null | transformers | 37,170 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-moaiz_exp1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-moaiz_exp1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6910
- Wer: 0.5549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7261 | 13.89 | 500 | 2.4864 | 0.9942 |
| 1.0036 | 27.78 | 1000 | 0.6910 | 0.5549 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sameearif88/wav2vec2-base-timit-demo-colab0 | 8b764eeb1f9febe492c27303f3cb04ac86641020 | 2022-04-30T21:06:14.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sameearif88 | null | sameearif88/wav2vec2-base-timit-demo-colab0 | 0 | null | transformers | 37,171 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab0
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7798
- Wer: 0.5194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0731 | 13.89 | 500 | 3.1154 | 1.0 |
| 1.2294 | 27.78 | 1000 | 0.7017 | 0.5466 |
| 0.3404 | 41.67 | 1500 | 0.7798 | 0.5194 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
moaiz237/wav2vec2-base-timit-moaiz_exp2 | 62201f57b9d62065431bb8a03d3b6f95c24c62d1 | 2022-04-30T16:23:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | moaiz237 | null | moaiz237/wav2vec2-base-timit-moaiz_exp2 | 0 | null | transformers | 37,172 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-moaiz_exp2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-moaiz_exp2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1884
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.15 | 13.89 | 500 | 3.2020 | 1.0 |
| 3.1522 | 27.78 | 1000 | 3.1884 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
rickySaka/en-md | eab56ad0a875744d5218c346ab99a9a86f190161 | 2022-04-30T16:25:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | rickySaka | null | rickySaka/en-md | 0 | null | transformers | 37,173 | Entry not found |
hassnain/wav2vec2-base-timit-demo-colab0 | f55f537fd0300dbe84e1243e79f1a9d9cf4af32a | 2022-04-30T21:39:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab0 | 0 | null | transformers | 37,174 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab0
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1808
- Wer: 0.7734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8077 | 7.04 | 500 | 3.1554 | 1.0 |
| 2.8549 | 14.08 | 1000 | 2.0683 | 1.0846 |
| 1.3297 | 21.13 | 1500 | 1.2084 | 0.7984 |
| 0.6725 | 28.17 | 2000 | 1.1808 | 0.7734 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hli/distilbert-base-uncased-finetuned-imdb | cb2ab1ccf2f019be4b83d296dcfbfab742e76732 | 2022-05-01T04:59:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | hli | null | hli/distilbert-base-uncased-finetuned-imdb | 0 | null | transformers | 37,175 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
huggingtweets/chubbiverse | 9c92a3901a6494af66b4643eea43d4fed6293517 | 2022-05-01T05:19:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/chubbiverse | 0 | null | transformers | 37,176 | ---
language: en
thumbnail: http://www.huggingtweets.com/chubbiverse/1651382374986/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1479680767261229056/JH8LZA4w_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Chubbiverse</div>
<div style="text-align: center; font-size: 14px;">@chubbiverse</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Chubbiverse.
| Data | Chubbiverse |
| --- | --- |
| Tweets downloaded | 3220 |
| Retweets | 881 |
| Short tweets | 559 |
| Tweets kept | 1780 |
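The "Tweets kept" row is simply what remains after filtering, assuming the pipeline drops only retweets and short tweets:

```python
# Verify the tweet counts in the table above are internally consistent.
downloaded, retweets, short = 3220, 881, 559
kept = downloaded - retweets - short
print(kept)  # -> 1780, matching the table
```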
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ywslmnc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chubbiverse's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34yoo9j7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34yoo9j7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chubbiverse')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mriggs/tgb_old | 62b1f103bb08532a4ed83472d54386e64178d929 | 2022-05-01T06:19:46.000Z | [
"pytorch",
"flaubert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mriggs | null | mriggs/tgb_old | 0 | null | transformers | 37,177 | Entry not found |
hassnain/wav2vec2-base-timit-demo-colab7 | da41ad8f69e62b36e4e484c0338165bb2d315225 | 2022-05-01T09:02:18.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab7 | 0 | null | transformers | 37,178 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab7
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1687
- Wer: 0.6478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
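The `linear` scheduler with 1000 warmup steps ramps the learning rate up to its peak and then decays it linearly to zero. A minimal sketch of that shape (the total step count here is an estimate extrapolated from the results table, not stated on the card):

```python
def lr_at(step, peak_lr=1e-4, warmup_steps=1000, total_steps=4260):
    """Linear warmup to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(lr_at(500))   # halfway through warmup -> 5e-05
print(lr_at(1000))  # peak learning rate    -> 0.0001
```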
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8409 | 7.04 | 500 | 3.1487 | 1.0 |
| 2.6259 | 14.08 | 1000 | 1.5598 | 0.8730 |
| 1.083 | 21.13 | 1500 | 1.0600 | 0.7347 |
| 0.6061 | 28.17 | 2000 | 1.0697 | 0.7006 |
| 0.4022 | 35.21 | 2500 | 1.0617 | 0.6913 |
| 0.2884 | 42.25 | 3000 | 1.1962 | 0.6768 |
| 0.225 | 49.3 | 3500 | 1.1753 | 0.6567 |
| 0.1852 | 56.34 | 4000 | 1.1687 | 0.6478 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sameearif88/wav2vec2-base-timit-demo-colab6 | 44eda9b9cefd32bc3e7283c74298fd39ab3767ec | 2022-05-01T10:12:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sameearif88 | null | sameearif88/wav2vec2-base-timit-demo-colab6 | 0 | null | transformers | 37,179 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab6
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6532
- Wer: 0.5394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1200
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2874 | 13.89 | 500 | 3.1571 | 1.0 |
| 1.3896 | 27.78 | 1000 | 0.6532 | 0.5394 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sameearif88/wav2vec2-base-timit-demo-colab10 | 75ebd14dc09b4b37577760b048b3cc2201f841b8 | 2022-05-01T11:00:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sameearif88 | null | sameearif88/wav2vec2-base-timit-demo-colab10 | 0 | null | transformers | 37,180 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab10
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4460
- Wer: 0.3425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9891 | 3.52 | 500 | 3.1554 | 1.0 |
| 1.71 | 7.04 | 1000 | 0.7122 | 0.5811 |
| 0.6164 | 10.56 | 1500 | 0.5149 | 0.4880 |
| 0.4188 | 14.08 | 2000 | 0.4726 | 0.4344 |
| 0.3038 | 17.61 | 2500 | 0.4765 | 0.4092 |
| 0.2312 | 21.13 | 3000 | 0.4387 | 0.3765 |
| 0.1867 | 24.65 | 3500 | 0.4411 | 0.3583 |
| 0.1582 | 28.17 | 4000 | 0.4460 | 0.3425 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab9 | d7d9dafb6127a0b8cd68fb7797e7c963241e90e5 | 2022-05-01T15:58:30.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab9 | 0 | null | transformers | 37,181 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab9
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1922
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 5.0683 | 1.42 | 500 | 3.2471 | 1.0 |
| 3.1349 | 2.85 | 1000 | 3.2219 | 1.0 |
| 3.1317 | 4.27 | 1500 | 3.2090 | 1.0 |
| 3.1262 | 5.7 | 2000 | 3.2152 | 1.0 |
| 3.1307 | 7.12 | 2500 | 3.2147 | 1.0 |
| 3.1264 | 8.55 | 3000 | 3.2072 | 1.0 |
| 3.1279 | 9.97 | 3500 | 3.2158 | 1.0 |
| 3.1287 | 11.4 | 4000 | 3.2190 | 1.0 |
| 3.1256 | 12.82 | 4500 | 3.2069 | 1.0 |
| 3.1254 | 14.25 | 5000 | 3.2134 | 1.0 |
| 3.1259 | 15.67 | 5500 | 3.2231 | 1.0 |
| 3.1269 | 17.09 | 6000 | 3.2005 | 1.0 |
| 3.1279 | 18.52 | 6500 | 3.1988 | 1.0 |
| 3.1246 | 19.94 | 7000 | 3.1929 | 1.0 |
| 3.128 | 21.37 | 7500 | 3.1864 | 1.0 |
| 3.1245 | 22.79 | 8000 | 3.1868 | 1.0 |
| 3.1266 | 24.22 | 8500 | 3.1852 | 1.0 |
| 3.1239 | 25.64 | 9000 | 3.1855 | 1.0 |
| 3.125 | 27.07 | 9500 | 3.1917 | 1.0 |
| 3.1233 | 28.49 | 10000 | 3.1929 | 1.0 |
| 3.1229 | 29.91 | 10500 | 3.1922 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab11 | 340659b9221149af85daa6b844a274798ac978bf | 2022-05-01T10:54:00.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab11 | 0 | null | transformers | 37,182 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab11
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6269
- Wer: 0.7418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.6439 | 7.04 | 500 | 3.3083 | 1.0 |
| 2.3763 | 14.08 | 1000 | 1.5059 | 0.8146 |
| 1.0161 | 21.13 | 1500 | 1.5101 | 0.7488 |
| 0.6195 | 28.17 | 2000 | 1.6269 | 0.7418 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sameearif88/wav2vec2-base-timit-demo-colab7 | 65bc38291a6ee75c61c53d060580fbc31fa77239 | 2022-05-01T11:12:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sameearif88 | null | sameearif88/wav2vec2-base-timit-demo-colab7 | 0 | null | transformers | 37,183 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab7
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6917
- Wer: 0.5426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1400
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1854 | 13.89 | 500 | 3.1687 | 1.0 |
| 1.7033 | 27.78 | 1000 | 0.7289 | 0.5659 |
| 0.4208 | 41.67 | 1500 | 0.6917 | 0.5426 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab30 | 7b756680fd8555704f0100d08144f47eeaadcf68 | 2022-05-01T12:46:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab30 | 0 | null | transformers | 37,184 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab30
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8496
- Wer: 0.6534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
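The linear scheduler with warmup ramps the learning rate from 0 to the peak over the warmup steps, then decays it linearly to 0 by the final step. A sketch of that shape (assuming the shape of `transformers`' linear scheduler; the total step count of ~1020 is inferred from the table below, where 1000 steps correspond to epoch 29.41 of 30):

```python
def linear_warmup_lr(step: int, peak_lr: float, warmup_steps: int, total_steps: int) -> float:
    """Linear warmup to peak_lr, then linear decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / max(warmup_steps, 1)
    # decay phase: linear from peak_lr at warmup_steps down to 0 at total_steps
    remaining = max(total_steps - step, 0)
    return peak_lr * remaining / max(total_steps - warmup_steps, 1)

# With this card's settings (lr=1e-4, warmup_steps=1000) and ~1020 total steps,
# nearly the entire run is spent in warmup:
print(linear_warmup_lr(500, 1e-4, 1000, 1020))   # 5e-05
print(linear_warmup_lr(1000, 1e-4, 1000, 1020))  # 0.0001
```

This helps explain the loss trajectory: with warmup covering almost all of training, the effective learning rate is still climbing for most of the run.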
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2705 | 14.71 | 500 | 3.1073 | 1.0 |
| 1.3631 | 29.41 | 1000 | 0.8496 | 0.6534 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/sandspiel_feed | eb5d0954adc263abd6e08220170426bc94514f04 | 2022-05-01T11:28:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/sandspiel_feed | 0 | null | transformers | 37,185 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1073861926097117184/FB3bBgcN_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sandspiel</div>
<div style="text-align: center; font-size: 14px;">@sandspiel_feed</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sandspiel.
| Data | sandspiel |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 2 |
| Short tweets | 1506 |
| Tweets kept | 1692 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fvrcwe0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sandspiel_feed's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24l7h3az) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24l7h3az/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sandspiel_feed')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hassnain/wav2vec2-base-timit-demo-colab40 | 5867386a076822a5af398f7cefb7bd8f26c9b09b | 2022-05-01T12:54:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab40 | 0 | null | transformers | 37,186 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab40
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7341
- Wer: 0.5578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0438 | 13.89 | 500 | 3.0671 | 1.0 |
| 1.0734 | 27.78 | 1000 | 0.7341 | 0.5578 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab50 | 216b3f34494d11ecebfa6c05c786479e3c9a5042 | 2022-05-01T13:32:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab50 | 0 | null | transformers | 37,187 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab50
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2257
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.4568 | 7.04 | 500 | 3.3002 | 1.0 |
| 3.1795 | 14.08 | 1000 | 3.2170 | 1.0 |
| 3.1607 | 21.13 | 1500 | 3.2119 | 1.0 |
| 3.1537 | 28.17 | 2000 | 3.2257 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
nepp1d0/prot_bert-finetuned-smiles-bindingDB | 1f3c0ff7eb4b3b15f4a75da3225bb000f68e0a62 | 2022-05-05T23:43:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | nepp1d0 | null | nepp1d0/prot_bert-finetuned-smiles-bindingDB | 0 | null | transformers | 37,188 | ---
tags:
- generated_from_trainer
model-index:
- name: prot_bert-finetuned-smiles-bindingDB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prot_bert-finetuned-smiles-bindingDB
This model is a fine-tuned version of [Rostlab/prot_bert](https://huggingface.co/Rostlab/prot_bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6942 | 1.0 | 10000 | 1.4153 |
| 1.3261 | 2.0 | 20000 | 1.2679 |
| 1.2467 | 3.0 | 30000 | 1.2300 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
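ProtBert-style tokenizers expect protein sequences as space-separated uppercase residues, with rare amino acids (U, Z, O, B) mapped to X. A small preprocessing helper along those lines (an assumption about the preprocessing used for this fine-tune, based on the upstream Rostlab/prot_bert card):

```python
import re

def prepare_protein(seq: str) -> str:
    """Format a raw amino-acid string for a ProtBert-style tokenizer:
    uppercase, rare residues (U/Z/O/B) mapped to X, residues space-separated."""
    seq = re.sub(r"[UZOB]", "X", seq.upper())
    return " ".join(seq)

print(prepare_protein("MKTayU"))  # "M K T A Y X"
```

The spaced string can then be passed to the tokenizer as usual, with `[MASK]` inserted at positions to be predicted.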
|
sameearif88/wav2vec2-base-timit-demo-colab11 | c145e87a35b4c8ee86a1dfe9eda35ff538e1ff73 | 2022-05-01T11:54:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sameearif88 | null | sameearif88/wav2vec2-base-timit-demo-colab11 | 0 | null | transformers | 37,189 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab11
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4922
- Wer: 0.4348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2269 | 3.52 | 500 | 1.1191 | 0.7121 |
| 0.8297 | 7.04 | 1000 | 0.6064 | 0.5228 |
| 0.4988 | 10.56 | 1500 | 0.5057 | 0.4627 |
| 0.3635 | 14.08 | 2000 | 0.4922 | 0.4348 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/fana | 95873f928477691fd4c90d360d48e75d4fd28532 | 2022-05-01T11:23:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/fana | 0 | null | transformers | 37,190 | ---
language: en
thumbnail: http://www.huggingtweets.com/fana/1651404215785/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1498253613105299456/QOtx4xi-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Maria Confusรฃo</div>
<div style="text-align: center; font-size: 14px;">@fana</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Maria Confusรฃo.
| Data | Maria Confusรฃo |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 207 |
| Short tweets | 985 |
| Tweets kept | 2052 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jyz1j51/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fana's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/13zcy7x6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/13zcy7x6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fana')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hassnain/wav2vec2-base-timit-demo-colab51 | 0fd29e499242245ff069cdff0059c24b1827b364 | 2022-05-01T11:59:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab51 | 0 | null | transformers | 37,191 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab51
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8395
- Wer: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.481 | 7.04 | 500 | 3.2834 | 1.0 |
| 2.2521 | 14.08 | 1000 | 1.6333 | 0.8093 |
| 0.9467 | 21.13 | 1500 | 1.7458 | 0.7560 |
| 0.5888 | 28.17 | 2000 | 1.8395 | 0.7480 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab70 | b931fb1501357b79614b6c92abd38413417179ff | 2022-05-01T14:11:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab70 | 0 | null | transformers | 37,192 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab70
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab70
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7439
- Wer: 0.5149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8646 | 7.04 | 500 | 3.1467 | 1.0 |
| 1.678 | 14.08 | 1000 | 0.8738 | 0.6511 |
| 0.5083 | 21.13 | 1500 | 0.7404 | 0.5504 |
| 0.2923 | 28.17 | 2000 | 0.7439 | 0.5149 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab52 | 17bd5317a526b052922f7bf968d9f50234570270 | 2022-05-01T12:59:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab52 | 0 | null | transformers | 37,193 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab52
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab52
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7941
- Wer: 0.7501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3424 | 7.04 | 500 | 3.3225 | 1.0 |
| 2.518 | 14.08 | 1000 | 1.5884 | 0.8300 |
| 1.0217 | 21.13 | 1500 | 1.6643 | 0.7719 |
| 0.6074 | 28.17 | 2000 | 1.7941 | 0.7501 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sameearif88/wav2vec2-base-timit-demo-colab12 | 0a6c6468ea3f09e93dca1a3cbe80642df02fff76 | 2022-05-01T14:25:58.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sameearif88 | null | sameearif88/wav2vec2-base-timit-demo-colab12 | 0 | null | transformers | 37,194 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab12
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4831
- Wer: 0.3546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 420
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.1683 | 3.52 | 500 | 1.3684 | 0.7364 |
| 0.7614 | 7.04 | 1000 | 0.6008 | 0.5218 |
| 0.4721 | 10.56 | 1500 | 0.5319 | 0.4614 |
| 0.3376 | 14.08 | 2000 | 0.5234 | 0.4308 |
| 0.2508 | 17.61 | 2500 | 0.5109 | 0.3998 |
| 0.1978 | 21.13 | 3000 | 0.5037 | 0.3721 |
| 0.1645 | 24.65 | 3500 | 0.4918 | 0.3622 |
| 0.1449 | 28.17 | 4000 | 0.4831 | 0.3546 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab90 | f99889d4d7ecda6f0060c22ab55320903645ee32 | 2022-05-01T17:08:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab90 | 0 | null | transformers | 37,195 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab90
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab90
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6766
- Wer: 0.4479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0217 | 7.04 | 500 | 3.2571 | 1.0 |
| 1.271 | 14.08 | 1000 | 0.6501 | 0.5874 |
| 0.4143 | 21.13 | 1500 | 0.5943 | 0.5360 |
| 0.2446 | 28.17 | 2000 | 0.6285 | 0.5028 |
| 0.1653 | 35.21 | 2500 | 0.6553 | 0.4992 |
| 0.1295 | 42.25 | 3000 | 0.6735 | 0.4705 |
| 0.1033 | 49.3 | 3500 | 0.6792 | 0.4539 |
| 0.0886 | 56.34 | 4000 | 0.6766 | 0.4479 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
fahadtouseef/wav2vec2-base-timit-demo-colab_1 | 34a35012f2eb2a5e7ac36443c692ae2ebd693e3c | 2022-05-01T23:57:32.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | fahadtouseef | null | fahadtouseef/wav2vec2-base-timit-demo-colab_1 | 0 | null | transformers | 37,196 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab_1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3233
- Wer: 0.2574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0949 | 3.52 | 500 | 1.1140 | 0.7136 |
| 0.7584 | 7.04 | 1000 | 0.5312 | 0.5154 |
| 0.4254 | 10.56 | 1500 | 0.4489 | 0.4401 |
| 0.2708 | 14.08 | 2000 | 0.4108 | 0.3770 |
| 0.1855 | 17.61 | 2500 | 0.3881 | 0.3257 |
| 0.139 | 21.13 | 3000 | 0.3666 | 0.2958 |
| 0.1057 | 24.65 | 3500 | 0.3351 | 0.2748 |
| 0.0855 | 28.17 | 4000 | 0.3233 | 0.2574 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab53 | d201b3a7b9dfaa74529bf2025e37b9ba54c4cf83 | 2022-05-01T17:13:03.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab53 | 0 | null | transformers | 37,197 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab53
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2003
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.619 | 7.04 | 500 | 3.2338 | 1.0 |
| 3.1855 | 14.08 | 1000 | 3.1968 | 1.0 |
| 3.1669 | 21.13 | 1500 | 3.1796 | 1.0 |
| 3.1586 | 28.17 | 2000 | 3.2003 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
kompactss/JeBERT_je_ko | 244d7c3d647e10803f9dbec2b8bed1562e98c66b | 2022-05-16T06:11:10.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | kompactss | null | kompactss/JeBERT_je_ko | 0 | 0 | transformers | 37,198 | ---
license: afl-3.0
---
# 🍊 Jeju Dialect Translation Model 🍊
- Jeju dialect -> Standard Korean
- Made by Team 3 of the 3rd Goorm NLP course!!
- github link : https://github.com/Goormnlpteam3/JeBERT
## 1. Seq2Seq Transformer Model
- encoder : BertConfig
- decoder : BertConfig
- Tokenizer : WordPiece Tokenizer
## 2. Dataset
- Jit dataset
- AI HUB (plus data containing the archaic arae-a character)
## 3. Hyper Parameters
- Epochs: 10 (best at epoch 8)
- Random Seed : 42
- Learning Rate : 5e-5
- Warm up Ratio : 0.1
- Batch Size : 32
## 4. BLEU Score
- Jit + AI HUB (plus arae-a data) dataset: 79.0
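For reference, BLEU combines modified n-gram precisions with a brevity penalty. A minimal sentence-level sketch of the metric (the 79.0 above was presumably computed with a standard corpus-level tool such as sacrebleu, not this simplified version):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(reference: str, hypothesis: str, max_n: int = 4) -> float:
    """Sentence-level BLEU: geometric mean of n-gram precisions x brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    if not hyp:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ng, ref_ng = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ng & ref_ng).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(hyp_ng.values()), 1))
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))  # brevity penalty
    return 100 * bp * math.exp(log_avg)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # 100.0
```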
---
### CREDIT
- ์ฃผํ์ค : wngudwns2798@gmail.com
- ๊ฐ๊ฐ๋ : 1st9aram@gmail.com
- ๊ณ ๊ด์ฐ : rhfprl11@gmail.com
- ๊น์์ฐ : s01090445778@gmail.com
- ์ด์๊ฒฝ : hjtwin2@gmail.com
- ์กฐ์ฑ์ : eun102476@gmail.com |
jcai1/distilbert-base-uncased-finetuned-imdb | 9fadc53627db4e1fea7eb91588bf25d9808c0ad5 | 2022-05-01T15:16:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | jcai1 | null | jcai1/distilbert-base-uncased-finetuned-imdb | 0 | null | transformers | 37,199 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
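For a masked-LM card like this one, the eval loss can also be read as a perplexity (the exponential of the per-token cross-entropy); e.g. the final loss of 2.4721 corresponds to a perplexity of roughly 11.85:

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    """Perplexity is the exponential of the (per-token) cross-entropy loss."""
    return math.exp(cross_entropy_loss)

print(round(perplexity(2.4721), 2))  # 11.85
```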
|