modelId stringlengths 4 112 | sha stringlengths 40 40 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringclasses 29 values | private bool 1 class | author stringlengths 2 38 ⌀ | config null | id stringlengths 4 112 | downloads float64 0 36.8M ⌀ | likes float64 0 712 ⌀ | library_name stringclasses 17 values | readme stringlengths 0 186k | embedding list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingtweets/coffee__burger | ca8f5e0c262ae77d1b9198007589167dd5fcb932 | 2022-03-01T09:06:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/coffee__burger | 0 | null | transformers | ---
language: en
thumbnail: http://www.huggingtweets.com/coffee__burger/1646125569654/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/794725967948181506/Zn4x_F6i_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Coffee Burger</div>
<div style="text-align: center; font-size: 14px;">@coffee__burger</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Coffee Burger.
| Data | Coffee Burger |
| --- | --- |
| Tweets downloaded | 2471 |
| Retweets | 525 |
| Short tweets | 337 |
| Tweets kept | 1609 |
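The kept count follows from the filtering above: retweets and short tweets are dropped from the downloaded set. A quick sanity check on the table (assuming no other filters are applied):

```python
downloaded, retweets, short_tweets = 2471, 525, 337
kept = downloaded - retweets - short_tweets
print(kept)  # 1609, matching the table
```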
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ad82qis/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @coffee__burger's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kxzm2oz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kxzm2oz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/coffee__burger')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.0065192836336791515,
0.11932571232318878,
-0.019081642851233482,
0.057026270776987076,
0.17888934910297394,
-0.016840258613228798,
-0.04770481586456299,
0.029656870290637016,
0.08088652044534683,
-0.061612971127033234,
-0.006022762972861528,
0.06528326869010925,
0.014400646090507507,
-0... |
huggingtweets/berniesanders-cnn-dril | 575e2ad494733509ce6742c0d8e210c974e0ceca | 2022-03-01T09:43:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/berniesanders-cnn-dril | 0 | null | transformers | ---
language: en
thumbnail: http://www.huggingtweets.com/berniesanders-cnn-dril/1646127802129/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1278259160644227073/MfCyF7CG_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bernie Sanders & wint & CNN</div>
<div style="text-align: center; font-size: 14px;">@berniesanders-cnn-dril</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bernie Sanders & wint & CNN.
| Data | Bernie Sanders | wint | CNN |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3229 | 3250 |
| Retweets | 429 | 473 | 30 |
| Short tweets | 10 | 300 | 6 |
| Tweets kept | 2811 | 2456 | 3214 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yapgpjj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-cnn-dril's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hmm651a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hmm651a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/berniesanders-cnn-dril')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.02101590856909752,
0.12086694687604904,
0.007290153298527002,
0.04191555827856064,
0.17103758454322815,
-0.008005129173398018,
-0.0417025052011013,
0.0438830740749836,
0.0657825842499733,
-0.053007885813713074,
0.0012873061932623386,
0.07982650399208069,
0.023114673793315887,
-0.0414407... |
huggingtweets/berniesanders-dril | 33afaa0d841cd7a3b56fd8e491ec80a255ada2b0 | 2022-03-01T10:13:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/berniesanders-dril | 0 | null | transformers | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Bernie Sanders</div>
<div style="text-align: center; font-size: 14px;">@berniesanders-dril</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Bernie Sanders.
| Data | wint | Bernie Sanders |
| --- | --- | --- |
| Tweets downloaded | 3229 | 3250 |
| Retweets | 473 | 429 |
| Short tweets | 300 | 10 |
| Tweets kept | 2456 | 2811 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/yw6378l1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-dril's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pydufi9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pydufi9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/berniesanders-dril')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.01622924394905567,
0.11763330549001694,
-0.000580057967454195,
0.039915889501571655,
0.1507682353258133,
-0.013500853441655636,
-0.04196615517139435,
0.0451381541788578,
0.05423502251505852,
-0.05082166939973831,
0.007962827570736408,
0.0834508091211319,
0.022287700325250626,
-0.0227819... |
huggingtweets/janieclone | 1a6d8a7aa7fd819487b7d4d248791de48524737a | 2022-07-13T17:02:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/janieclone | 0 | null | transformers | ---
language: en
thumbnail: http://www.huggingtweets.com/janieclone/1657731718034/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1536389142287892481/N6kCwACw_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Columbine Janie</div>
<div style="text-align: center; font-size: 14px;">@janieclone</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Columbine Janie.
| Data | Columbine Janie |
| --- | --- |
| Tweets downloaded | 2409 |
| Retweets | 1025 |
| Short tweets | 332 |
| Tweets kept | 1052 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jcqf2hu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @janieclone's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/u7quubhw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/u7quubhw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/janieclone')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.004488136153668165,
0.1200566440820694,
-0.02241513878107071,
0.053904421627521515,
0.17615385353565216,
-0.020352138206362724,
-0.04191267490386963,
0.027850421145558357,
0.07459458708763123,
-0.06014028564095497,
-0.001779436250217259,
0.0694633349776268,
0.011668912135064602,
-0.0229... |
xdmason/pretrainedCas | d66136319fbd11c6544dad149765829297facd60 | 2022-03-02T00:58:13.000Z | [
"pytorch",
"gpt2",
"transformers",
"conversational"
] | conversational | false | xdmason | null | xdmason/pretrainedCas | 0 | null | transformers | ---
tags:
- conversational
---
# pretrained Cas Model | [
-0.05980618670582771,
-0.009064498357474804,
0.04027834162116051,
-0.006917286664247513,
-0.018317734822630882,
-0.031955134123563766,
0.011932210996747017,
0.048995669931173325,
0.057327695190906525,
-0.06097473204135895,
0.02904084511101246,
-0.019880186766386032,
0.02782277576625347,
0.... |
jiobiala24/wav2vec2-base-checkpoint-14 | 9031ee79209a12fa11467679412f99eefbfdd2af | 2022-03-02T15:13:04.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-14 | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-14
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-13](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-13) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2822
- Wer: 0.4068
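WER here is the standard word error rate: word-level edit distance between the model transcript and the reference, divided by the number of reference words. A minimal, self-contained sketch of the metric (pure Python, no ASR dependencies):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[-1][-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2 errors / 6 words ≈ 0.333
```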
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1996 | 1.59 | 1000 | 0.7181 | 0.4079 |
| 0.1543 | 3.17 | 2000 | 0.7735 | 0.4113 |
| 0.1171 | 4.76 | 3000 | 0.8152 | 0.4045 |
| 0.0969 | 6.35 | 4000 | 0.8575 | 0.4142 |
| 0.082 | 7.94 | 5000 | 0.9005 | 0.4124 |
| 0.074 | 9.52 | 6000 | 0.9232 | 0.4151 |
| 0.0653 | 11.11 | 7000 | 0.9680 | 0.4223 |
| 0.0587 | 12.7 | 8000 | 1.0633 | 0.4232 |
| 0.0551 | 14.29 | 9000 | 1.0875 | 0.4171 |
| 0.0498 | 15.87 | 10000 | 1.0281 | 0.4105 |
| 0.0443 | 17.46 | 11000 | 1.2164 | 0.4274 |
| 0.0421 | 19.05 | 12000 | 1.1868 | 0.4191 |
| 0.0366 | 20.63 | 13000 | 1.1678 | 0.4173 |
| 0.0366 | 22.22 | 14000 | 1.2444 | 0.4187 |
| 0.0346 | 23.81 | 15000 | 1.2042 | 0.4169 |
| 0.0316 | 25.4 | 16000 | 1.3019 | 0.4127 |
| 0.0296 | 26.98 | 17000 | 1.2001 | 0.4081 |
| 0.0281 | 28.57 | 18000 | 1.2822 | 0.4068 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| [
-0.09164606779813766,
-0.042407795786857605,
-0.02843100018799305,
-0.06328387558460236,
-0.02039538510143757,
0.03126981109380722,
-0.027118591591715813,
-0.05796762928366661,
-0.046810783445835114,
-0.07062751799821854,
0.03306102007627487,
-0.10538333654403687,
0.034461550414562225,
-0.... |
prk/roberta-base-squad2-finetuned-squad | 15b151de471fcc120a3fecf27c4d2891c0b01336 | 2022-03-03T10:26:14.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | prk | null | prk/roberta-base-squad2-finetuned-squad | 0 | null | transformers | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-squad2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-finetuned-squad
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on a custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 0.1894 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
| [
-0.09750828146934509,
-0.08162820339202881,
-0.04218582808971405,
0.05870451033115387,
0.0007039267220534384,
0.04670732095837593,
-0.030021673068404198,
-0.010697068646550179,
-0.06610985100269318,
-0.06567761301994324,
0.03136611357331276,
-0.0672873705625534,
-0.02263585291802883,
-0.03... |
nimrah/wav2vec2-large-xls-r-300m-turkish-colab | 0f3b3b889009da84a585add22e109e41053b2e46 | 2022-03-02T08:18:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/wav2vec2-large-xls-r-300m-turkish-colab | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2970
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
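The linear scheduler with warmup ramps the learning rate from 0 to the peak over the warmup steps, then decays it linearly back to 0. A sketch of the rule (mirroring the behavior of `get_linear_schedule_with_warmup` in `transformers`; the total step count is an assumption for illustration):

```python
def linear_warmup_lr(step: int, base_lr: float = 0.1,
                     warmup_steps: int = 500, total_steps: int = 5000) -> float:
    # Linear ramp up during warmup, then linear decay to zero.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(250))   # halfway through warmup: half the peak LR
print(linear_warmup_lr(500))   # peak LR
print(linear_warmup_lr(5000))  # end of training: 0
```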
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 6.1837 | 3.67 | 400 | 3.2970 | 1.0 |
| 0.0 | 7.34 | 800 | 3.2970 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
| [
-0.0768856629729271,
-0.03883035480976105,
-0.030690930783748627,
-0.039337944239377975,
-0.009685122407972813,
0.03474631533026695,
-0.083122119307518,
-0.04876821115612984,
-0.0007747256895527244,
-0.09544719755649567,
-0.001069720252417028,
-0.10634362697601318,
-0.015500417910516262,
-... |
facebook/maskformer-swin-tiny-ade | 80bb6d935ed12f2f2dfabbf44772a33821aac9f0 | 2022-04-04T16:02:00.000Z | [
"pytorch",
"maskformer",
"dataset:ade-20k",
"arxiv:2107.06278",
"transformers",
"vision",
  "image-segmentation",
"license:apache-2.0"
] | null | false | facebook | null | facebook/maskformer-swin-tiny-ade | 0 | null | transformers | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- ade-20k
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation. It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses semantic segmentation with a mask classification paradigm instead of per-pixel classification: it predicts a set of binary masks together with corresponding class labels.

## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-tiny-ade")
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-tiny-ade")
>>> outputs = model(**inputs)
>>> # model predicts class_queries_logits of shape `(batch_size, num_queries)`
>>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
>>> class_queries_logits = outputs.class_queries_logits
>>> masks_queries_logits = outputs.masks_queries_logits
>>> # you can pass them to feature_extractor for postprocessing
>>> output = feature_extractor.post_process_segmentation(outputs)
>>> output = feature_extractor.post_process_semantic_segmentation(outputs)
>>> output = feature_extractor.post_process_panoptic_segmentation(outputs)
```
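Conceptually, semantic post-processing combines the two outputs by weighting each query's sigmoid mask with its class probabilities (dropping the "no object" class), summing over queries, and taking an argmax over classes. A toy numpy sketch of that rule (random tensors stand in for real model outputs; shapes follow the comments in the example above):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, num_queries, num_classes, H, W = 1, 100, 150, 16, 16  # ADE20k has 150 classes

# Random stand-ins for class_queries_logits and masks_queries_logits
class_queries_logits = rng.normal(size=(batch, num_queries, num_classes + 1))  # +1 "no object" class
masks_queries_logits = rng.normal(size=(batch, num_queries, H, W))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class_probs = softmax(class_queries_logits)[..., :-1]      # drop the "no object" class
mask_probs = 1.0 / (1.0 + np.exp(-masks_queries_logits))   # per-query sigmoid masks
per_class_maps = np.einsum("bqc,bqhw->bchw", class_probs, mask_probs)
semantic_map = per_class_maps.argmax(axis=1)               # (batch, H, W) of class ids
print(semantic_map.shape)
```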
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
-0.012441972270607948,
-0.0027257937472313643,
0.01874072477221489,
-0.019840262830257416,
0.08273293077945709,
-0.12577751278877258,
-0.0017926556756719947,
-0.05151427537202835,
-0.07647693902254105,
-0.07776772975921631,
0.08002308011054993,
-0.11705184727907181,
-0.016448669135570526,
... |
nimrah/wav2vec2-large-xls-r-300m-turkish-colab-4 | d597872df47dad4f9b80e88d855689c1929a9f4f | 2022-03-02T15:54:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/wav2vec2-large-xls-r-300m-turkish-colab-4 | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab-4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
| [
-0.06431767344474792,
-0.04943537712097168,
-0.04662887752056122,
-0.041174303740262985,
-0.015185320749878883,
0.03387666493654251,
-0.0912044569849968,
-0.07033681124448776,
-0.026948140934109688,
-0.10110875964164734,
0.031730107963085175,
-0.09556472301483154,
-0.01818574033677578,
-0.... |
mcdzwil/distilbert-base-uncased-finetuned-ner | bb59e31745413ef43c63e8461b4a671649fa2e70 | 2022-03-02T16:35:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | mcdzwil | null | mcdzwil/distilbert-base-uncased-finetuned-ner | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1830
- Precision: 0.9171
- Recall: 0.7099
- F1: 0.8003
- Accuracy: 0.9316
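The reported F1 is the harmonic mean of precision and recall, and the listed numbers are internally consistent, as a quick check confirms:

```python
precision, recall = 0.9171, 0.7099
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8003, matching the reported F1
```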
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 48 | 0.2903 | 0.7952 | 0.7063 | 0.7481 | 0.9136 |
| No log | 2.0 | 96 | 0.2015 | 0.9154 | 0.7075 | 0.7981 | 0.9298 |
| No log | 3.0 | 144 | 0.1830 | 0.9171 | 0.7099 | 0.8003 | 0.9316 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
| [
-0.07559188455343246,
-0.0310110691934824,
-0.028990933671593666,
0.02665000967681408,
-0.004521005321294069,
-0.014971318654716015,
-0.0422944575548172,
0.035916347056627274,
-0.08100143820047379,
-0.15345892310142517,
0.05890941619873047,
-0.04925976321101189,
0.024766523391008377,
-0.04... |
repro-rights-amicus-briefs/legal-bert-base-uncased-finetuned-RRamicus | af97cbc05339b4c75862c20d8bb04f499c610741 | 2022-03-03T20:21:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | repro-rights-amicus-briefs | null | repro-rights-amicus-briefs/legal-bert-base-uncased-finetuned-RRamicus | 0 | null | transformers | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: legal-bert-base-uncased-finetuned-RRamicus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal-bert-base-uncased-finetuned-RRamicus
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1520
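Since the fill-mask objective is masked-token cross-entropy, the validation loss corresponds to a (pseudo-)perplexity of exp(loss):

```python
import math

loss = 1.1520
perplexity = math.exp(loss)
print(round(perplexity, 2))  # ~3.16
```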
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 928
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.021 | 1.0 | 1118 | 1.3393 |
| 1.2272 | 2.0 | 2236 | 1.2612 |
| 1.2467 | 3.0 | 3354 | 1.2403 |
| 1.2149 | 4.0 | 4472 | 1.2276 |
| 1.1855 | 5.0 | 5590 | 1.2101 |
| 1.1674 | 6.0 | 6708 | 1.2020 |
| 1.1508 | 7.0 | 7826 | 1.1893 |
| 1.1386 | 8.0 | 8944 | 1.1870 |
| 1.129 | 9.0 | 10062 | 1.1794 |
| 1.1193 | 10.0 | 11180 | 1.1759 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
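## How to use

The card gives no usage snippet; since this is a fill-mask model, a minimal example in the style of the other cards in this collection (the example sentence is illustrative):

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="repro-rights-amicus-briefs/legal-bert-base-uncased-finetuned-RRamicus",
)
# BERT-style models use the [MASK] token; the sentence below is only an example.
predictions = fill_mask("The court held that the statute was [MASK].")
print([p["token_str"] for p in predictions])
```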
| [
-0.08846750110387802,
-0.022420210763812065,
-0.020220769569277763,
0.04217033460736275,
0.0342339426279068,
0.08610548824071884,
-0.023148924112319946,
0.02854025550186634,
-0.03853131830692291,
-0.034483835101127625,
0.031080085784196854,
-0.07160300761461258,
0.019049737602472305,
-0.00... |
huggingtweets/xqc | 3b78597ad334ae43c3f557b9daef464464345613 | 2022-03-03T04:24:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/xqc | 0 | null | transformers | ---
language: en
thumbnail: http://www.huggingtweets.com/xqc/1646281436978/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1188911868863221772/fpcyKuIW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">xQc</div>
<div style="text-align: center; font-size: 14px;">@xqc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from xQc.
| Data | xQc |
| --- | --- |
| Tweets downloaded | 3203 |
| Retweets | 128 |
| Short tweets | 406 |
| Tweets kept | 2669 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w7gqt7r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @xqc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3j2p63io) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3j2p63io/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/xqc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.005985277704894543,
0.11712227016687393,
-0.017592592164874077,
0.05649298429489136,
0.17321161925792694,
-0.017237523570656776,
-0.04019381105899811,
0.027328090742230415,
0.0732344537973404,
-0.058410901576280594,
-0.0024917684495449066,
0.07407543808221817,
0.0189126655459404,
-0.030... |
mmaguero/gn-bert-base-cased | 9d03ff9190236e4b6732bb87d1b9e67f875a2f38 | 2022-03-06T08:05:18.000Z | [
"pytorch",
"bert",
"fill-mask",
"gn",
"dataset:wikipedia",
"dataset:wiktionary",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | mmaguero | null | mmaguero/gn-bert-base-cased | 0 | null | transformers | ---
language: gn
license: mit
datasets:
- wikipedia
- wiktionary
widget:
- text: "Paraguay ha'e peteĩ táva oĩva [MASK] retãme "
---
# BERT-i-base-cased (gnBERT-base-cased)
A pre-trained BERT model for **Guarani** (12 layers, cased). Trained on Wikipedia + Wiktionary (~800K tokens).
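A minimal fill-mask example using the widget sentence above (output quality will depend on the small ~800K-token training corpus):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mmaguero/gn-bert-base-cased")
# The widget example from the card, completed by the model.
predictions = fill_mask("Paraguay ha'e peteĩ táva oĩva [MASK] retãme")
for p in predictions:
    print(p["token_str"], p["score"])
```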
| [
-0.11329707503318787,
-0.08753903955221176,
0.025314999744296074,
-0.049573253840208054,
-0.019827265292406082,
0.04030081257224083,
0.06729986518621445,
0.07745622843503952,
0.02547452040016651,
0.01567932218313217,
0.059734221547842026,
0.009910968132317066,
0.02818039432168007,
0.055612... |
tiot07/wav2vec2-base-timit-demo-colab-large | b9b08abfe84a6bad1ed2d66445e05b24968caaf1 | 2022-03-04T09:34:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | tiot07 | null | tiot07/wav2vec2-base-timit-demo-colab-large | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
nimrah/wav2vec2-large-xls-r-300m-hindi_home-colab-11 | a918b00fa991213a5a23a5c20448c006a994fe27 | 2022-03-04T16:41:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/wav2vec2-large-xls-r-300m-hindi_home-colab-11 | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi_home-colab-11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi_home-colab-11
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7649
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.03
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.5971 | 44.43 | 400 | 3.7649 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| [
-0.07129580527544022,
-0.03565020114183426,
-0.04352912679314613,
-0.04366620257496834,
-0.0003337668895255774,
0.032729145139455795,
-0.0894002914428711,
-0.06938333809375763,
-0.0381583645939827,
-0.08103393763303757,
0.038958583027124405,
-0.11987114697694778,
-0.009419703856110573,
-0.... |
nimrah/wav2vec2-large-xls-r-300m-turkish-colab-9 | 8935c0128bfdaed4737e783700cfdd2d4db85325 | 2022-03-04T18:24:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/wav2vec2-large-xls-r-300m-turkish-colab-9 | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab-9
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.03
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
| [
-0.07183621823787689,
-0.05060435086488724,
-0.04547012597322464,
-0.03981895372271538,
-0.010916474275290966,
0.03634333610534668,
-0.09084241837263107,
-0.06543824076652527,
-0.03043351322412491,
-0.10486031323671341,
0.030908912420272827,
-0.0947641134262085,
-0.021565448492765427,
-0.0... |
petrichorRainbow/mrf-T5 | 403dc9990544b8fd803c2cbc0d4690c4bdd5c6f8 | 2022-03-07T18:59:39.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | petrichorRainbow | null | petrichorRainbow/mrf-T5 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
infinitylyj/DialogGPT-small-rick | a76452c69f5a4a0c6c1bf20e8dd235b3c6571895 | 2022-03-05T06:55:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | infinitylyj | null | infinitylyj/DialogGPT-small-rick | 0 | null | transformers | ---
tags:
- conversational
---
# Rick DialogGPT Model | [
-0.08014168590307236,
-0.04179958626627922,
0.027739550918340683,
-0.041317831724882126,
0.005204062908887863,
-0.02696971222758293,
0.09505808353424072,
0.04334753006696701,
0.05147028714418411,
-0.040089912712574005,
-0.029903901740908623,
-0.009975440800189972,
0.028712036088109016,
0.0... |
naam/xlm-roberta-base-finetuned-panx-de | 9674c14b9cfbb6f7c0c97de5b204e4994ca8342a | 2022-03-05T13:48:33.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | naam | null | naam/xlm-roberta-base-finetuned-panx-de | 0 | null | transformers | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8594910162670748
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1348
- F1: 0.8595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2556 | 1.0 | 525 | 0.1629 | 0.8218 |
| 0.1309 | 2.0 | 1050 | 0.1378 | 0.8522 |
| 0.0812 | 3.0 | 1575 | 0.1348 | 0.8595 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
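## How to use

The card omits a usage snippet; a minimal token-classification example (the German sentence is illustrative, chosen to match the PAN-X.de evaluation data):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="naam/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
entities = ner("Jeff Dean arbeitet bei Google in Berlin.")
print(entities)
```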
| [
-0.0533202700316906,
-0.034104205667972565,
-0.025541486218571663,
0.004075115546584129,
0.057174235582351685,
0.054545920342206955,
0.0037443810142576694,
0.0713094100356102,
-0.02375185675919056,
-0.0760650634765625,
-0.006913640070706606,
-0.10474429279565811,
0.03991062194108963,
-0.05... |
infinitylyj/DialogGPT-medium-general | a4c065d70fc00ceeca9265886b46876924b03975 | 2022-03-05T13:45:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | infinitylyj | null | infinitylyj/DialogGPT-medium-general | 0 | null | transformers | ---
tags:
- conversational
---
# General DialogGPT Model
| [
-0.044171370565891266,
-0.05378704518079758,
0.05439339950680733,
-0.02100096456706524,
-0.025954388082027435,
-0.06284632533788681,
0.09783926606178284,
0.05642659589648247,
0.06344834715127945,
-0.05301745980978012,
-0.04697169363498688,
-0.009544037282466888,
-0.01682170480489731,
0.045... |
nimrah/my-wav2vec2-base-timit-demo-colab-my | 6d864f73896c0afcd833cb6d1fb787c50ab66c6a | 2022-03-05T17:06:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/my-wav2vec2-base-timit-demo-colab-my | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my-wav2vec2-base-timit-demo-colab-my
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-wav2vec2-base-timit-demo-colab-my
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5569
- Wer: 0.3481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4083 | 4.0 | 500 | 1.0932 | 0.7510 |
| 0.5536 | 8.0 | 1000 | 0.4965 | 0.4819 |
| 0.2242 | 12.0 | 1500 | 0.4779 | 0.4077 |
| 0.1249 | 16.0 | 2000 | 0.4921 | 0.4006 |
| 0.0844 | 20.0 | 2500 | 0.4809 | 0.3753 |
| 0.0613 | 24.0 | 3000 | 0.5307 | 0.3680 |
| 0.0459 | 28.0 | 3500 | 0.5569 | 0.3481 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
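## How to use

A minimal transcription sketch; the helper and its argument are illustrative (the model expects 16 kHz mono audio, as is standard for wav2vec2-base):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="nimrah/my-wav2vec2-base-timit-demo-colab-my",
)

def transcribe(path: str) -> str:
    # `path` should point to a 16 kHz mono WAV file.
    return asr(path)["text"]
```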
| [
-0.051659852266311646,
-0.04170955717563629,
-0.019090836867690086,
0.027278456836938858,
0.023331495001912117,
-0.005203220061957836,
-0.051372379064559937,
-0.059704817831516266,
-0.04067826271057129,
-0.10312173515558243,
0.02441832423210144,
-0.13156098127365112,
0.014287657104432583,
... |
huggingtweets/ragnar_furup | de6725c9b840c44248a33362e3898e8a6f894ac2 | 2022-03-05T18:34:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ragnar_furup | 0 | null | transformers | ---
language: en
thumbnail: http://www.huggingtweets.com/ragnar_furup/1646505291174/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500138558765608969/Qgc4pMtC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">R4 G4.mp3🌻</div>
<div style="text-align: center; font-size: 14px;">@ragnar_furup</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from R4 G4.mp3🌻.
| Data | R4 G4.mp3🌻 |
| --- | --- |
| Tweets downloaded | 1695 |
| Retweets | 889 |
| Short tweets | 104 |
| Tweets kept | 702 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3eum19q4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ragnar_furup's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/30kqu5u4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/30kqu5u4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ragnar_furup')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.0174795463681221,
0.12170061469078064,
-0.014689738862216473,
0.04979099705815315,
0.1802307516336441,
-0.014293287880718708,
-0.03535056859254837,
0.028297554701566696,
0.07575476169586182,
-0.05979112908244133,
-0.0007900978671386838,
0.0673612579703331,
0.013895373791456223,
-0.02549... |
sunitha/CV_Merge_DS | a17c761d54f9a8c00f9732197cab9ff97a9f2113 | 2022-03-06T05:09:45.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sunitha | null | sunitha/CV_Merge_DS | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
lilitket/wav2vec2-large-xls-r-300m-hy-colab | 3a2b5dd220468147023c6a5ba666e2090e5e558d | 2022-03-06T10:17:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/wav2vec2-large-xls-r-300m-hy-colab | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
lilitket/wav2vec2-large-xls-r-300m-hypy-colab | 6182d8179eb267e89868912ee616001e1af834d1 | 2022-03-09T18:55:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/wav2vec2-large-xls-r-300m-hypy-colab | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
osanseviero/xlm-roberta-base-finetuned-panx-de-fr | 5910b67637bec88e50820f01988dbd4109895377 | 2022-03-06T21:30:10.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | osanseviero | null | osanseviero/xlm-roberta-base-finetuned-panx-de-fr | 0 | null | transformers | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1754
- F1: 0.8616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2815 | 1.0 | 1430 | 0.2079 | 0.8067 |
| 0.1521 | 2.0 | 2860 | 0.1759 | 0.8525 |
| 0.093 | 3.0 | 4290 | 0.1754 | 0.8616 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.18.0
- Tokenizers 0.10.3
| [
-0.05303028225898743,
-0.06420004367828369,
-0.06892268359661102,
0.022517532110214233,
0.022459547966718674,
0.07885328680276871,
-0.03802017495036125,
0.019493283703923225,
-0.025040678679943085,
-0.05998154357075691,
0.004737746901810169,
-0.0694461390376091,
0.03407544642686844,
-0.074... |
tau/fewsion_debug | 2f56b0dc9e7a8f777e016c69870eacb124be50b3 | 2022-03-07T10:56:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/fewsion_debug | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
voidful/speechmix_eed_fixed | f87da2b979118fe8d3a984f8c3cd72ffceddec4a | 2022-03-07T14:17:04.000Z | [
"pytorch"
] | null | false | voidful | null | voidful/speechmix_eed_fixed | 0 | null | null | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
vocab-transformers/msmarco-distilbert-custom_word2vec256k | 36e2bd2647762004a73e95f38f9aef9e03bfe696 | 2022-03-07T14:56:18.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | vocab-transformers | null | vocab-transformers/msmarco-distilbert-custom_word2vec256k | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
peggyhuang/finetune-bert-base-v3 | f4d4cda6123bb12e088e0192fc5830ea4a001262 | 2022-03-07T18:23:42.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | peggyhuang | null | peggyhuang/finetune-bert-base-v3 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
rockmiin/QMSum-dpr-query-encoder | a402f2c77483d5c7429729ea080c46c2293c2759 | 2022-03-08T02:00:39.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | rockmiin | null | rockmiin/QMSum-dpr-query-encoder | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
rockmiin/QMSum-dpr-passage-encoder | e35ac25b89869d432695fca742ef6c156b963aa4 | 2022-03-08T02:09:39.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | rockmiin | null | rockmiin/QMSum-dpr-passage-encoder | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
oskrmiguel/t5-small-finetuned-es-to-pt | a5fdfeb64e1e0fc900c6aba6b0215c3b99ee484a | 2022-03-08T03:15:16.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:tatoeba",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | oskrmiguel | null | oskrmiguel/t5-small-finetuned-es-to-pt | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tatoeba
metrics:
- bleu
model-index:
- name: t5-small-finetuned-es-to-pt
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: tatoeba
type: tatoeba
args: es-pt
metrics:
- name: Bleu
type: bleu
value: 15.0473
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-es-to-pt
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the tatoeba dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5557
- Bleu: 15.0473
- Gen Len: 15.8693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 2.2027 | 1.0 | 1907 | 1.7884 | 11.6192 | 15.8829 |
| 1.9296 | 2.0 | 3814 | 1.6034 | 14.201 | 15.8935 |
| 1.8364 | 3.0 | 5721 | 1.5557 | 15.0473 | 15.8693 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
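## How to use

The card gives no usage snippet; a minimal text2text-generation example (the Spanish input is illustrative — whether a task prefix is required depends on how the tatoeba fine-tuning was set up):

```python
from transformers import pipeline

translator = pipeline(
    "text2text-generation",
    model="oskrmiguel/t5-small-finetuned-es-to-pt",
)
out = translator("Hola, ¿cómo estás?")
print(out[0]["generated_text"])
```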
| [
-0.07892639935016632,
0.0040426068007946014,
0.008898040279746056,
0.012269804254174232,
-0.013400332070887089,
0.012455316260457039,
0.009018323384225368,
0.01822611689567566,
0.004980119876563549,
-0.08219308406114578,
0.054532479494810104,
-0.12715141475200653,
-0.0316767692565918,
-0.0... |
huggingtweets/fitdollar | 7e2d3f0f7735b472bcb1fc1dc8d60078fdfa8bac | 2022-03-08T05:18:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/fitdollar | 0 | null | transformers | ---
language: en
thumbnail: http://www.huggingtweets.com/fitdollar/1646716677087/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1421952831796350976/rFuw5k2v_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Fit$</div>
<div style="text-align: center; font-size: 14px;">@fitdollar</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Fit$.
| Data | Fit$ |
| --- | --- |
| Tweets downloaded | 1235 |
| Retweets | 139 |
| Short tweets | 219 |
| Tweets kept | 877 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nxpnpfh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fitdollar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3f78vjfv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3f78vjfv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fitdollar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.006233927793800831,
0.11391711235046387,
-0.018981343135237694,
0.057330090552568436,
0.17386572062969208,
-0.026751339435577393,
-0.04094920679926872,
0.01695488579571247,
0.07876879721879959,
-0.062048934400081635,
-0.005242371466010809,
0.06958536803722382,
0.00895320251584053,
-0.02... |
jiobiala24/wav2vec2-base-cv-10000 | ca850d61e9bd27a5d5042ab2b1bc431a266a2549 | 2022-03-08T13:08:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-cv-10000 | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-cv-10000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cv-10000
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-cv](https://huggingface.co/jiobiala24/wav2vec2-base-cv) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3393
- Wer: 0.3684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4243 | 1.6 | 1000 | 0.7742 | 0.4210 |
| 0.3636 | 3.2 | 2000 | 0.8621 | 0.4229 |
| 0.2638 | 4.8 | 3000 | 0.9328 | 0.4094 |
| 0.2273 | 6.4 | 4000 | 0.9556 | 0.4087 |
| 0.187 | 8.0 | 5000 | 0.9093 | 0.4019 |
| 0.1593 | 9.6 | 6000 | 0.9842 | 0.4029 |
| 0.1362 | 11.2 | 7000 | 1.0651 | 0.4077 |
| 0.1125 | 12.8 | 8000 | 1.0550 | 0.3959 |
| 0.103 | 14.4 | 9000 | 1.1919 | 0.4002 |
| 0.0948 | 16.0 | 10000 | 1.1901 | 0.3983 |
| 0.0791 | 17.6 | 11000 | 1.1091 | 0.3860 |
| 0.0703 | 19.2 | 12000 | 1.2823 | 0.3904 |
| 0.0641 | 20.8 | 13000 | 1.2625 | 0.3817 |
| 0.057 | 22.4 | 14000 | 1.2821 | 0.3776 |
| 0.0546 | 24.0 | 15000 | 1.2975 | 0.3770 |
| 0.0457 | 25.6 | 16000 | 1.2998 | 0.3714 |
| 0.0433 | 27.2 | 17000 | 1.3574 | 0.3721 |
| 0.0423 | 28.8 | 18000 | 1.3393 | 0.3684 |
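As a quick, self-contained sanity check of the table above (the step/WER pairs are copied verbatim from the table; this snippet is not part of the original training pipeline), the final checkpoint at step 18000 is also the one with the lowest validation WER:

```python
# (step, validation WER) pairs copied from the training-results table above
results = [
    (1000, 0.4210), (2000, 0.4229), (3000, 0.4094), (4000, 0.4087),
    (5000, 0.4019), (6000, 0.4029), (7000, 0.4077), (8000, 0.3959),
    (9000, 0.4002), (10000, 0.3983), (11000, 0.3860), (12000, 0.3904),
    (13000, 0.3817), (14000, 0.3776), (15000, 0.3770), (16000, 0.3714),
    (17000, 0.3721), (18000, 0.3684),
]

# Best checkpoint = the one with the lowest word error rate
best_step, best_wer = min(results, key=lambda r: r[1])
print(best_step, best_wer)  # -> 18000 0.3684
```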
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| [
-0.10150294750928879,
-0.03794481232762337,
-0.04525429755449295,
-0.0339229479432106,
-0.02443184331059456,
0.058174144476652145,
-0.061768949031829834,
-0.04579982906579971,
-0.05138268694281578,
-0.07621586322784424,
0.03453299403190613,
-0.13615252077579498,
0.026175355538725853,
-0.05... |
kevinjesse/roberta-MT4TS | 348c5b28ff4ffd206d59c22b1073a0b2d697830d | 2022-03-09T20:20:41.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kevinjesse | null | kevinjesse/roberta-MT4TS | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
kevinjesse/polygot-MT4TS | e89b517f46214d5b8869c2ac71591f63d18ee042 | 2022-03-09T19:31:30.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kevinjesse | null | kevinjesse/polygot-MT4TS | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
kevinjesse/graphpolygot-MT4TS | 9263bb0cc9133c14037baed784b2657af7288385 | 2022-03-09T18:44:52.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kevinjesse | null | kevinjesse/graphpolygot-MT4TS | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
huggingtweets/betonkoepfin-littlehorney-plusbibi1 | 3900535a143cbe4e05ce6dfb014b374fddc64f90 | 2022-03-08T07:46:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/betonkoepfin-littlehorney-plusbibi1 | 0 | null | transformers | ---
language: en
thumbnail: http://www.huggingtweets.com/betonkoepfin-littlehorney-plusbibi1/1646725560421/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1386970823681052680/oA_4HBKl_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1425205160578588673/LBMG1HOO_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500892464772751365/6uhqt-Jx_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bibi und Anna & Betty S. & Vanny_Bunny™</div>
<div style="text-align: center; font-size: 14px;">@betonkoepfin-littlehorney-plusbibi1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bibi und Anna & Betty S. & Vanny_Bunny™.
| Data | Bibi und Anna | Betty S. | Vanny_Bunny™ |
| --- | --- | --- | --- |
| Tweets downloaded | 1818 | 3243 | 3185 |
| Retweets | 9 | 213 | 494 |
| Short tweets | 341 | 552 | 339 |
| Tweets kept | 1468 | 2478 | 2352 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nxb6yoh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @betonkoepfin-littlehorney-plusbibi1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/365gy60z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/365gy60z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/betonkoepfin-littlehorney-plusbibi1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.02987140230834484,
0.12401729077100754,
-0.0010727151529863477,
0.04614847153425217,
0.17343340814113617,
-0.0025841151364147663,
-0.036615900695323944,
0.05242140591144562,
0.06587166339159012,
-0.043024931102991104,
0.007078375667333603,
0.07829609513282776,
0.030957823619246483,
-0.0... |
kamilali/distilbert-base-uncased-finetuned-custom | eecdf367580c719ace3227bdd6ee80f8c7ec8446 | 2022-03-08T08:57:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | kamilali | null | kamilali/distilbert-base-uncased-finetuned-custom | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-custom
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 368 | 1.1128 |
| 2.1622 | 2.0 | 736 | 0.8494 |
| 1.2688 | 3.0 | 1104 | 0.7808 |
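As a small sanity check on the results above (values copied from the table; this sketch is not part of the original training code), the validation loss falls monotonically across the three epochs, improving roughly 30% from epoch 1 to epoch 3:

```python
# Validation loss per epoch, copied from the training-results table above
val_loss = {1: 1.1128, 2: 0.8494, 3: 0.7808}

# Loss decreases monotonically across the three epochs
assert all(val_loss[e] > val_loss[e + 1] for e in (1, 2))

# Relative improvement from the first to the last epoch, in percent
improvement = (val_loss[1] - val_loss[3]) / val_loss[1]
print(round(improvement * 100, 1))  # -> 29.8
```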
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
-0.061067961156368256,
-0.01155890990048647,
0.034894999116659164,
0.057375404983758926,
0.03038952313363552,
0.027027996256947517,
-0.00726669654250145,
-0.006832229904830456,
-0.06323622167110443,
-0.06999505311250687,
0.0660117045044899,
-0.062220487743616104,
0.026968766003847122,
-0.0... |
openclimatefix/graph-weather-forecaster-0.25deg | 9343fc4999c12c6b335d77eb2ab41a652b22eb05 | 2022-03-09T16:19:40.000Z | [
"pytorch"
] | null | false | openclimatefix | null | openclimatefix/graph-weather-forecaster-0.25deg | 0 | null | null | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
openclimatefix/graph-weather-forecaster-0.5deg | e0c5813dfc61fe708b73927ad1a463a126fb75f1 | 2022-03-09T16:15:51.000Z | [
"pytorch"
] | null | false | openclimatefix | null | openclimatefix/graph-weather-forecaster-0.5deg | 0 | null | null | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
openclimatefix/graph-weather-forecaster-1.0deg | 524b072a6e8fc6f712596778e3d732130f695fee | 2022-07-04T06:24:35.000Z | [
"pytorch"
] | null | false | openclimatefix | null | openclimatefix/graph-weather-forecaster-1.0deg | 0 | null | null | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
gayanin/bart-med-term-mlm | 8cebf37973de5866357347a909f7bfc125c8d12a | 2022-03-08T15:46:48.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | gayanin | null | gayanin/bart-med-term-mlm | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-med-term-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-med-term-mlm
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2506
- Rouge2 Precision: 0.8338
- Rouge2 Recall: 0.6005
- Rouge2 Fmeasure: 0.6775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.3426 | 1.0 | 15827 | 0.3029 | 0.8184 | 0.5913 | 0.6664 |
| 0.2911 | 2.0 | 31654 | 0.2694 | 0.8278 | 0.5963 | 0.6727 |
| 0.2571 | 3.0 | 47481 | 0.2549 | 0.8318 | 0.5985 | 0.6753 |
| 0.2303 | 4.0 | 63308 | 0.2506 | 0.8338 | 0.6005 | 0.6775 |
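As a quick check on the table above (the numbers are copied from it; this standalone sketch is not part of the original training code), loss decreases and Rouge-2 F-measure increases every epoch, so the final epoch is also the best checkpoint:

```python
# (epoch, validation loss, Rouge-2 F-measure) copied from the table above
history = [
    (1, 0.3029, 0.6664),
    (2, 0.2694, 0.6727),
    (3, 0.2549, 0.6753),
    (4, 0.2506, 0.6775),
]

# Loss falls every epoch while the F-measure rises
losses = [row[1] for row in history]
assert losses == sorted(losses, reverse=True)

# Best epoch by Rouge-2 F-measure
best = max(history, key=lambda r: r[2])
print(best)  # -> (4, 0.2506, 0.6775)
```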
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
-0.010086137801408768,
-0.04698475822806358,
-0.06375687569379807,
0.043885648250579834,
0.033154767006635666,
0.004345668479800224,
-0.03173476457595825,
0.0016760083381086588,
-0.09146084636449814,
-0.10786262154579163,
0.06561479717493057,
-0.08193530142307281,
0.028206394985318184,
-0.... |
huggingtweets/feufillet-greatestquotes-hostagekiller | 64db1cdb4ca37b1625556d1f388b47ade20fec0b | 2022-03-08T13:28:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/feufillet-greatestquotes-hostagekiller | 0 | null | transformers | ---
language: en
thumbnail: http://www.huggingtweets.com/feufillet-greatestquotes-hostagekiller/1646746104400/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1197820815636672513/JSCZmPDf_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1473236995497500675/FtwXDZld_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/378800000520968918/d38fd96468e9ba14c1f9f022eb0c4e61_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sexy.funny.cute.pix & HUSSY2K. & Great Minds Quotes</div>
<div style="text-align: center; font-size: 14px;">@feufillet-greatestquotes-hostagekiller</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sexy.funny.cute.pix & HUSSY2K. & Great Minds Quotes.
| Data | sexy.funny.cute.pix | HUSSY2K. | Great Minds Quotes |
| --- | --- | --- | --- |
| Tweets downloaded | 3091 | 3191 | 3200 |
| Retweets | 149 | 865 | 0 |
| Short tweets | 576 | 374 | 2 |
| Tweets kept | 2366 | 1952 | 3198 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3afdee2s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @feufillet-greatestquotes-hostagekiller's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/25fcmxer) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/25fcmxer/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/feufillet-greatestquotes-hostagekiller')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.03143100440502167,
0.11846262216567993,
0.006445084232836962,
0.04277439042925835,
0.17017491161823273,
-0.0043327054008841515,
-0.036200862377882004,
0.05248529836535454,
0.06721346825361252,
-0.04284391179680824,
0.007892495952546597,
0.07998225092887878,
0.026664743199944496,
-0.0299... |
sh0416/clrcmd | 43df4478803e2c2763a42b7cd0907200dfe5ba57 | 2022-03-08T14:28:09.000Z | [
"pytorch",
"license:cc-by-nc-sa-4.0"
] | null | false | sh0416 | null | sh0416/clrcmd | 0 | null | null | ---
license: cc-by-nc-sa-4.0
---
| [
-0.04723339527845383,
0.025953227654099464,
-0.09948673844337463,
-0.03253987058997154,
0.05198120325803757,
0.035326000303030014,
0.03477821871638298,
-0.0377240888774395,
-0.015796229243278503,
0.04078708216547966,
0.004578083287924528,
-0.04642094671726227,
0.0043295398354530334,
0.0100... |
13hannes11/master_thesis_models | 3ed3f87ac04b13c8c2659df55943ca1625e4631b | 2022-06-28T21:14:01.000Z | [
"tensorboard",
"focus-prediction",
"microscopy",
"pytorch",
"license:mit"
] | null | false | 13hannes11 | null | 13hannes11/master_thesis_models | 0 | null | null | ---
name: "K-POP"
license: "mit"
metrics:
- MAE
- PLCC
- SRCC
- R2
tags:
- focus-prediction
- microscopy
- pytorch
---
# K-POP: Predicting Distance to Focal Plane for Kato-Katz Prepared Microscopy Slides Using Deep Learning
<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>
<a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a>
<a href="https://hydra.cc/"><img alt="Config: Hydra" src="https://img.shields.io/badge/Config-Hydra-89b8cd"></a>
## Description
This repository contains the models and training pipeline for my master thesis. The main repository is hosted on [GitHub](https://github.com/13hannes11/master_thesis_code).
The project structure is based on the template by [ashleve](https://github.com/ashleve/lightning-hydra-template).
The metadata is stored in `data/focus150/`. The relevant files are `test_metadata.csv`, `train_metadata.csv` and `validation_metadata.csv`. The image data (150 × 150 px images) is not published together with this repository, so training runs cannot be reproduced without it. The layout of the metadata files is as follows:
```csv
,image_path,scan_uuid,study_id,focus_height,original_filename,stack_id,obj_name
0,31/b0d4005e-57d0-4516-a239-abe02a8d0a67/I02413_X009_Y014_Z5107_750_300.jpg,b0d4005e-57d0-4516-a239-abe02a8d0a67,31,-0.013672000000000017,I02413_X009_Y014_Z5107.jpg,1811661,schistosoma
1,31/274d8969-aa7c-4ac0-be60-e753579393ad/I01981_X019_Y014_Z4931_450_0.jpg,274d8969-aa7c-4ac0-be60-e753579393ad,31,-0.029296999999999962,I01981_X019_Y014_Z4931.jpg,1661371,schistosoma
...
```
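For instance, the metadata can be parsed with Python's standard `csv` module. The sketch below (not part of the thesis code) reads the two sample rows shown above; note that the first, unnamed column is a row index:

```python
import csv
import io

# The two sample rows from the metadata snippet above, verbatim
sample = """,image_path,scan_uuid,study_id,focus_height,original_filename,stack_id,obj_name
0,31/b0d4005e-57d0-4516-a239-abe02a8d0a67/I02413_X009_Y014_Z5107_750_300.jpg,b0d4005e-57d0-4516-a239-abe02a8d0a67,31,-0.013672000000000017,I02413_X009_Y014_Z5107.jpg,1811661,schistosoma
1,31/274d8969-aa7c-4ac0-be60-e753579393ad/I01981_X019_Y014_Z4931_450_0.jpg,274d8969-aa7c-4ac0-be60-e753579393ad,31,-0.029296999999999962,I01981_X019_Y014_Z4931.jpg,1661371,schistosoma
"""

# DictReader keys rows by the header line; the unnamed index column gets key ""
rows = list(csv.DictReader(io.StringIO(sample)))
heights = [float(r["focus_height"]) for r in rows]
print(len(rows), rows[0]["obj_name"])  # -> 2 schistosoma
```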
## How to run
Train model with chosen experiment configuration from `configs/experiment/`
```bash
python train.py experiment=focusResNet_150
```
Train with hyperparameter search from `configs/hparams_search/`
```bash
python train.py -m hparams_search=focusResNetMSE_150
```
You can override any parameter from the command line, for example:
```bash
python train.py trainer.max_epochs=20 datamodule.batch_size=64
```
## Jupyter notebooks
Figures and other evaluation code were run in Jupyter notebooks. These are available at `notebooks/` | [
-0.10120882093906403,
-0.052288930863142014,
0.04425789788365364,
-0.017563095316290855,
0.06927339732646942,
-0.01354087982326746,
0.014350650832057,
-0.012528184801340103,
0.03277980163693428,
-0.04646649956703186,
0.03867501765489578,
-0.048484109342098236,
-0.053143471479415894,
0.0972... |
kevinjesse/codeberta-MT4TS | 69bcf0d6d1aeb11ba321f24d6c454edd593a3008 | 2022-03-09T18:18:24.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kevinjesse | null | kevinjesse/codeberta-MT4TS | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
kj141/distilbert-base-uncased-finetuned-squad | 66bbd31d99ca681235b2a5ca3ec1fd2ad610946a | 2022-03-23T19:48:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | kj141 | null | kj141/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| [
-0.0699382796883583,
-0.046042609959840775,
-0.02390577830374241,
0.039351560175418854,
0.03942247852683067,
0.006466332357376814,
-0.024306148290634155,
0.010742603801190853,
-0.11551156640052795,
-0.10443965345621109,
0.08442723751068115,
-0.03468851000070572,
-0.009167770855128765,
-0.0... |
huak95/mt-align-finetuned-LST-en-to-th | 6bba8d437958f2f7421c4052b2941832d8fd0de2 | 2022-03-09T20:41:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/mt-align-finetuned-LST-en-to-th | 0 | null | transformers | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mt-align-finetuned-LST-en-to-th
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt-align-finetuned-LST-en-to-th
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 77 | 1.6042 | 13.1732 | 26.144 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
-0.066687673330307,
-0.08658313751220703,
0.005011525470763445,
0.01497487723827362,
0.00593609968200326,
0.07817800343036652,
-0.05382339656352997,
-0.016062047332525253,
-0.03394651412963867,
-0.0705627053976059,
0.060298264026641846,
-0.06705527007579803,
-0.05159959942102432,
-0.070256... |
huggingtweets/aniraster_ | 4710a24284b1df2462ba6b6abc86087af26ec27b | 2022-03-09T09:03:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/aniraster_ | 0 | null | transformers | ---
language: en
thumbnail: http://www.huggingtweets.com/aniraster_/1646816595677/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1460097593015472141/Yt6YwEU1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Aniraster</div>
<div style="text-align: center; font-size: 14px;">@aniraster_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Aniraster.
| Data | Aniraster |
| --- | --- |
| Tweets downloaded | 2581 |
| Retweets | 169 |
| Short tweets | 660 |
| Tweets kept | 1752 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nr4gbjn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aniraster_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3g7h1bov) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3g7h1bov/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aniraster_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.009957809001207352,
0.11795016378164291,
-0.009289153851568699,
0.05536070093512535,
0.17710402607917786,
-0.01851068064570427,
-0.0356675423681736,
0.028706813231110573,
0.0724964514374733,
-0.05658600106835365,
-0.0015752617036923766,
0.07163742184638977,
0.018494125455617905,
-0.0270... |
l53513955/PAQ_256 | 9d609fb6fae14b5488c9d9e56d8acd57a60718c5 | 2022-03-09T09:09:48.000Z | [
"pytorch",
"albert",
"feature-extraction",
"transformers"
] | feature-extraction | false | l53513955 | null | l53513955/PAQ_256 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
paopow/t5_base | bd0edc2c21f093fb5bfdda5b5b19bc107d894929 | 2022-03-09T14:47:49.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | paopow | null | paopow/t5_base | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
petrichorRainbow/mrf-bert | 1d811b93ee4a1346bcdd5ee564725891c038e8d6 | 2022-03-09T17:12:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | petrichorRainbow | null | petrichorRainbow/mrf-bert | 0 | null | transformers | ---
license: apache-2.0
---
| [
0.04086383432149887,
0.04840587452054024,
-0.01111048087477684,
-0.0822305753827095,
0.03046034276485443,
-0.024620788171887398,
-0.00873124971985817,
-0.032080959528684616,
-0.009516960941255093,
0.014524046331644058,
0.06244279816746712,
-0.03306293115019798,
-0.057087719440460205,
-0.02... |
petrichorRainbow/mrf-covid-bert | 75848a3e0b2660c38cd16ed5cba68d7ff338da4c | 2022-03-09T17:24:51.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | petrichorRainbow | null | petrichorRainbow/mrf-covid-bert | 0 | null | transformers | ---
license: apache-2.0
---
| [
0.04086383432149887,
0.04840587452054024,
-0.01111048087477684,
-0.0822305753827095,
0.03046034276485443,
-0.024620788171887398,
-0.00873124971985817,
-0.032080959528684616,
-0.009516960941255093,
0.014524046331644058,
0.06244279816746712,
-0.03306293115019798,
-0.057087719440460205,
-0.02... |
pong/opus-mt-en-mul-finetuned-en-to-th | 982b3a991c31c9c1ced377cd888db23a882a8889 | 2022-03-09T18:01:13.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | pong | null | pong/opus-mt-en-mul-finetuned-en-to-th | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
huak95/mt-align-finetuned-SUM3-th-to-en | 73315f4d73c141692f30ab40ce0fcc26ddd44896 | 2022-03-09T20:51:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/mt-align-finetuned-SUM3-th-to-en | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
tiot07/0310 | b3bde3621555d53102a423ae2a788cf86870af05 | 2022-03-10T06:39:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | tiot07 | null | tiot07/0310 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
huak95/mt-align-LST_classic-th-to-en-pt2 | 9fc1605167b4ad23a52439c3061221a02c438617 | 2022-03-10T09:13:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/mt-align-LST_classic-th-to-en-pt2 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
huak95/LST_classic-th-to-en-pt2.1 | df12a09d1ed3811d7a41fe4c955559dac6979507 | 2022-03-10T09:19:24.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/LST_classic-th-to-en-pt2.1 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
spasis/distilbert-base-uncased-finetuned-imdb-accelerate | 8e82bdacadfe25ea0d87278fdecc3ccbe7445dce | 2022-03-10T12:04:06.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | spasis | null | spasis/distilbert-base-uncased-finetuned-imdb-accelerate | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
timkakhanovich/finetuned-asr | 73d64f6e2504c7b4eea8d8545cf9808e632d6dbc | 2022-03-10T10:53:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | timkakhanovich | null | timkakhanovich/finetuned-asr | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
huak95/TNANA-attacut-th-to-en | 87859e56b8929f990770230f2a41da535388bbe3 | 2022-03-10T15:40:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/TNANA-attacut-th-to-en | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
huggingtweets/atarifounders | ea560d60fa2eebbbbdaa2be2c3656ba64890f9ea | 2022-03-26T03:45:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/atarifounders | 0 | null | transformers | ---
language: en
thumbnail: http://www.huggingtweets.com/atarifounders/1648266306699/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1507523916981583875/6n7ng67H_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">koala/claw/soppy</div>
<div style="text-align: center; font-size: 14px;">@atarifounders</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from koala/claw/soppy.
| Data | koala/claw/soppy |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 129 |
| Short tweets | 883 |
| Tweets kept | 2227 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gsc0jwi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @atarifounders's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tl1eu60e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tl1eu60e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/atarifounders')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.009511476382613182,
0.11994477361440659,
-0.00439990870654583,
0.04655216634273529,
0.1713816374540329,
-0.018245479092001915,
-0.03809492290019989,
0.02026471495628357,
0.07363276183605194,
-0.0571078360080719,
-0.00226423516869545,
0.06764383614063263,
0.02308635227382183,
-0.02696725... |
lijingxin/xlm-roberta-base-finetuned-panx-fr | 75fe94e417bc22e5dd77d3a3fbf8d5b5d9b34916 | 2022-03-11T02:19:48.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | lijingxin | null | lijingxin/xlm-roberta-base-finetuned-panx-fr | 0 | null | transformers | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.838255033557047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2691
- F1: 0.8383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
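A minimal sketch of what `lr_scheduler_type: linear` does to the learning rate over training. This assumes zero warmup steps (none are listed above) and takes the total of 573 optimization steps (191 steps per epoch over 3 epochs) from the training results table:

```python
# Sketch of the linear LR decay implied by `lr_scheduler_type: linear`.
# Assumption: no warmup steps; 573 total steps per the training results.
def linear_lr(step, base_lr=5e-05, total_steps=573):
    """Learning rate decayed linearly from base_lr to 0 over total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))    # full learning rate at the first step
print(linear_lr(573))  # decayed to zero at the final step
```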
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5851 | 1.0 | 191 | 0.3202 | 0.8011 |
| 0.256 | 2.0 | 382 | 0.2862 | 0.8344 |
| 0.1725 | 3.0 | 573 | 0.2691 | 0.8383 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| [
-0.045022014528512955,
-0.035891689360141754,
-0.023598918691277504,
0.010802659206092358,
0.057332511991262436,
0.06362485885620117,
0.014041268266737461,
0.07621962577104568,
-0.012813928537070751,
-0.07827265560626984,
-0.019438346847891808,
-0.11145775020122528,
0.044217485934495926,
-... |
huak95/TNANA_V2-attacut-th-to-en-pt2 | 1d1c1359298e83bbbf90ccf0927a5b8e922983f9 | 2022-03-11T17:29:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/TNANA_V2-attacut-th-to-en-pt2 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
zuppif/maskformer-swin-small-coco | 81ccd61f1115c48ca4db493c3ec00cb3501f8f50 | 2022-03-11T14:23:35.000Z | [
"pytorch",
"maskformer",
"transformers"
] | null | false | zuppif | null | zuppif/maskformer-swin-small-coco | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
zuppif/maskformer-swin-large-ade | 038c928b990e04a7f3433324bb9ee783c9b33004 | 2022-03-11T14:28:26.000Z | [
"pytorch",
"maskformer",
"transformers"
] | null | false | zuppif | null | zuppif/maskformer-swin-large-ade | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
zuppif/maskformer-swin-tiny-ade | dc866fbdeafe659f6ed8879e75892f77e9a9e751 | 2022-03-11T15:01:00.000Z | [
"pytorch",
"maskformer",
"transformers"
] | null | false | zuppif | null | zuppif/maskformer-swin-tiny-ade | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
huggingtweets/thed3linquent_ | 948e6f9133e95f9cab3f4baeae17613a8ca63df8 | 2022-03-11T22:57:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/thed3linquent_ | 0 | null | transformers | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1502166273064517632/RdLwNuR6_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">rogue⛓🐕|| BIRFDAY BOY</div>
<div style="text-align: center; font-size: 14px;">@thed3linquent_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from rogue⛓🐕|| BIRFDAY BOY.
| Data | rogue⛓🐕\|\| BIRFDAY BOY |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 334 |
| Short tweets | 710 |
| Tweets kept | 2202 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tal3g38/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thed3linquent_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1aw76tml) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1aw76tml/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/thed3linquent_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.022128116339445114,
0.13631819188594818,
-0.000931762857362628,
0.035307835787534714,
0.15597471594810486,
-0.020769890397787094,
-0.03704472631216049,
0.03156740963459015,
0.07514894753694534,
-0.04795964062213898,
0.012098696082830429,
0.08254002779722214,
0.01925734058022499,
-0.0351... |
lilitket/wav2vec2-large-xls-r-300m-hyAM_batch2 | 1f08ccc5853ef5080f49f51a765bbd2cd8ec962f | 2022-03-12T14:52:57.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/wav2vec2-large-xls-r-300m-hyAM_batch2 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
lilitket/wav2vec2-large-xls-r-300m-hyAM_batch4_lr2 | 690aba7a14a0c95db306468cbd784d2bcc11fe03 | 2022-03-12T16:03:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/wav2vec2-large-xls-r-300m-hyAM_batch4_lr2 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
lilitket/wav2vec2-large-xls-r-300m-hyAM_batch4_lr8 | 16beeb0aefdd2bcc3e9e5cb780a1e27c49e01634 | 2022-03-12T20:58:57.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/wav2vec2-large-xls-r-300m-hyAM_batch4_lr8 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
lilitket/300m-hyAM_batch4_lr8_warmup4000 | 7c7525017d51f3e7476633a17ae1d06c440fc931 | 2022-03-17T18:50:33.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/300m-hyAM_batch4_lr8_warmup4000 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
zdepablo/xlm-roberta-base-finetuned-panx-de | eb5298cbd737fbcf33cf9f7678affd139691e912 | 2022-03-12T18:25:42.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | zdepablo | null | zdepablo/xlm-roberta-base-finetuned-panx-de | 0 | null | transformers | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8594910162670748
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1348
- F1: 0.8595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2556 | 1.0 | 525 | 0.1629 | 0.8218 |
| 0.1309 | 2.0 | 1050 | 0.1378 | 0.8522 |
| 0.0812 | 3.0 | 1575 | 0.1348 | 0.8595 |
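The rounded metrics in the summary above can be checked against the full-precision F1 recorded in the model-index metadata — a pure-arithmetic sketch:

```python
# Consistency check: the full-precision F1 from the model-index metadata
# rounds to the summary value reported above.
f1_full = 0.8594910162670748   # model-index metadata
f1_summary = 0.8595            # rounded value in the summary
print(round(f1_full, 4) == f1_summary)
```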
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| [
-0.0533202700316906,
-0.034104205667972565,
-0.025541486218571663,
0.004075115546584129,
0.057174235582351685,
0.054545920342206955,
0.0037443810142576694,
0.0713094100356102,
-0.02375185675919056,
-0.0760650634765625,
-0.006913640070706606,
-0.10474429279565811,
0.03991062194108963,
-0.05... |
zdepablo/xlm-roberta-base-finetuned-panx-de-fr | fbeb4772ce785f68908426f3b13ddd7df6b59191 | 2022-03-12T18:54:00.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | zdepablo | null | zdepablo/xlm-roberta-base-finetuned-panx-de-fr | 0 | null | transformers | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1664
- F1: 0.8556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2846 | 1.0 | 715 | 0.1837 | 0.8247 |
| 0.1446 | 2.0 | 1430 | 0.1617 | 0.8409 |
| 0.0923 | 3.0 | 2145 | 0.1664 | 0.8556 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| [
-0.053180158138275146,
-0.06449662148952484,
-0.0641133263707161,
0.02312505804002285,
0.019442640244960785,
0.07941672950983047,
-0.03394714742898941,
0.019437041133642197,
-0.026436680927872658,
-0.05929853767156601,
0.003736605169251561,
-0.07050970941781998,
0.03823040425777435,
-0.078... |
lilitket/xls-r-300m-hyAM_batch1_lr2e-05_warmup400 | db529f41916cf30ce2ceff9f1c9a6e1be7ccba74 | 2022-03-13T07:14:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/xls-r-300m-hyAM_batch1_lr2e-05_warmup400 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
lilitket/xls-r-300m-hyAM_batch1_lr1e-05_warmup400 | e685207d23c9448938072f973c5b467e896d9f39 | 2022-03-13T07:41:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/xls-r-300m-hyAM_batch1_lr1e-05_warmup400 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
holtin/distilbert-base-uncased-finetuned-squad | f0919e96377969142d6c032af9fa355ebb1496bd | 2022-04-07T06:18:52.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | holtin | null | holtin/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
lilitket/xls-r-300m-hyAM_batch1_lr6e-06_warmup400 | 50c1e94bfd9a4222e7d26ebe4ab59a80f6194f8a | 2022-03-20T20:17:52.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/xls-r-300m-hyAM_batch1_lr6e-06_warmup400 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
sanchit-gandhi/wav2vec2-2-roberta-no-adapter-long-run | d126f4a7fdf2bde7ba506959857bf654f02eb442 | 2022-03-14T11:01:26.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-roberta-no-adapter-long-run | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
huggingtweets/mikepompeo | 39ec8a5587a6779f92817b10fd3ef6b9ef84d119 | 2022-03-13T14:28:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mikepompeo | 0 | null | transformers | ---
language: en
thumbnail: http://www.huggingtweets.com/mikepompeo/1647181695747/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1498704685875744769/r3jThh-E_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mike Pompeo</div>
<div style="text-align: center; font-size: 14px;">@mikepompeo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mike Pompeo.
| Data | Mike Pompeo |
| --- | --- |
| Tweets downloaded | 1899 |
| Retweets | 68 |
| Short tweets | 60 |
| Tweets kept | 1771 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ll5re58/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mikepompeo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zi1wgzl5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zi1wgzl5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mikepompeo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.007180988322943449,
0.1158202514052391,
-0.01912488229572773,
0.055642545223236084,
0.17390792071819305,
-0.025345979258418083,
-0.04184368997812271,
0.021798977628350258,
0.07846799492835999,
-0.06179427728056908,
-0.0038595693185925484,
0.0687488466501236,
0.010394426994025707,
-0.025... |
newtonkwan/gpt2-ft-with-non-challenging | 6c1222d90d860aaeb135cce6b000dddd23348efa | 2022-03-13T21:31:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | newtonkwan | null | newtonkwan/gpt2-ft-with-non-challenging | 0 | null | transformers | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-ft-with-non-challenging
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-ft-with-non-challenging
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
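The reported `total_train_batch_size` follows from the per-device batch size and gradient accumulation; a minimal sketch of that relationship, with the values copied from the list above:

```python
# Sketch: how the reported total_train_batch_size is derived.
# Values are taken from the hyperparameter list above.
train_batch_size = 2              # per-device batch size
gradient_accumulation_steps = 32  # gradients are accumulated over 32 batches per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64, matching the reported value
```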
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 4.0984 |
| No log | 2.0 | 2 | 4.0802 |
| No log | 3.0 | 3 | 4.0443 |
| No log | 4.0 | 4 | 3.9906 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
-0.08644260466098785,
-0.053532183170318604,
-0.002576593542471528,
0.07699459046125412,
0.05530783161520958,
-0.04200327768921852,
0.004801612813025713,
0.0129501111805439,
-0.09411855041980743,
-0.1152707040309906,
-0.015150241553783417,
-0.04753340408205986,
-0.019848832860589027,
-0.04... |
lilitket/20220313-221906 | 4e8edea25bf164e0a8ed1f0b5ec22ee51d88be19 | 2022-03-14T04:27:43.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220313-221906 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
huggingtweets/ayurastro | b91d7fa463d6aacdf3de36d014a4fd562a6b630e | 2022-03-13T23:27:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ayurastro | 0 | null | transformers | ---
language: en
thumbnail: http://www.huggingtweets.com/ayurastro/1647214031676/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/493786234221641730/OFQm2K8M_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AyurAstro®</div>
<div style="text-align: center; font-size: 14px;">@ayurastro</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AyurAstro®.
| Data | AyurAstro® |
| --- | --- |
| Tweets downloaded | 1437 |
| Retweets | 112 |
| Short tweets | 65 |
| Tweets kept | 1260 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36zw53cv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ayurastro's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nhbmyyli) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nhbmyyli/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ayurastro')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
-0.017972638830542564,
0.11950712651014328,
-0.01189893763512373,
0.051581040024757385,
0.18536272644996643,
-0.012559955008327961,
-0.03650633245706558,
0.029033374041318893,
0.07261240482330322,
-0.06265134364366531,
0.0016788722714409232,
0.06753724068403244,
0.013128560967743397,
-0.02... |
tau/fewsion_1024_0.3_2100 | c82a58ef2aeb9b3372631dd1040feaae35f9bb05 | 2022-03-14T08:36:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/fewsion_1024_0.3_2100 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
tau/t5_1024_0.3_2400 | 4b3fb9e72af44a3c1f99415ec4949ddf28707576 | 2022-03-14T08:46:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/t5_1024_0.3_2400 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
lilitket/20220314-084929 | 76c5be10e2c9b620885461e93f6de52ea1c15da8 | 2022-03-14T13:26:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220314-084929 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
sanchit-gandhi/wav2vec2-2-bert-large-no-adapter | b11802c5a1eadd0abd0c3b9e3027a7caa819c225 | 2022-03-15T17:22:33.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-bert-large-no-adapter | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
peterhsu/codeparrot-ds | ea65cf18f515ffe2eda0a72ea58ed0d7f9f526ad | 2022-03-14T23:00:48.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | peterhsu | null | peterhsu/codeparrot-ds | 0 | null | transformers | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4939 | 0.93 | 5000 | 1.9729 |
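Assuming the reported losses are the usual mean token-level cross-entropy for a causal LM, the validation loss can be read as a perplexity via `exp(loss)` — a quick sketch:

```python
import math

# Perplexity implied by the final validation loss above,
# assuming the loss is mean token-level cross-entropy.
final_eval_loss = 1.9729
perplexity = math.exp(final_eval_loss)
print(f"{perplexity:.2f}")  # ~7.19
```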
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
-0.10604214668273926,
-0.04384544491767883,
-0.0043021212331950665,
0.05247730761766434,
0.05091359466314316,
-0.04545219987630844,
-0.01425615232437849,
0.0036055739037692547,
-0.09325546771287918,
-0.10692642629146576,
0.028798505663871765,
-0.0322023406624794,
-0.015269603580236435,
-0.... |
newtonkwan/gpt2-xl-ft-with-non-challenging-25k | 3d10551c6ecab21243f47a46f2e41545e616a560 | 2022-03-15T00:06:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | newtonkwan | null | newtonkwan/gpt2-xl-ft-with-non-challenging-25k | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
tau/t5_1024_0.3_7950 | 619e06eb26ab187968ed87b3dfde7d024465ea8f | 2022-03-15T07:29:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/t5_1024_0.3_7950 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
Norod78/ml-generated-muppets-rudalle | 43559f240be193f32836a24406d6e6736a42cad0 | 2022-03-15T10:02:58.000Z | [
"pytorch",
"license:mit"
] | null | false | Norod78 | null | Norod78/ml-generated-muppets-rudalle | 0 | null | null | ---
license: mit
---
Muppet image generator, based on ruDALL-E.
You can perform inference using this [Colab notebook](https://github.com/Norod/my-colab-experiments/blob/master/ruDALLE_muppets_norod78.ipynb)

| [
-0.0917273685336113,
-0.026910357177257538,
0.042478691786527634,
-0.016852473840117455,
0.04027630016207695,
-0.014194248244166374,
0.04393816366791725,
-0.015065728686749935,
-0.0003473139659035951,
-0.040208958089351654,
0.0012124924687668681,
-0.06383911520242691,
0.07187681645154953,
... |
zuppif/resnetd-18 | 0d36c4fbc31431b03072141da0e4ba0a55a7af0f | 2022-03-17T09:08:23.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnetd-18 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
zuppif/resnetd-101 | 232531b093321fe8f34fd4a28d5c7fc9564a8907 | 2022-03-17T09:13:10.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnetd-101 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
zuppif/resnetd-200 | 41253945cdbde0dce274d7413e99e97f64c4d424 | 2022-03-17T09:18:51.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnetd-200 | 0 | null | transformers | Entry not found | [
0.0461147278547287,
-0.038838207721710205,
-0.01049656979739666,
-0.03682169318199158,
0.011261860840022564,
0.013094935566186905,
0.0019101888174191117,
-0.013979103416204453,
0.027092741802334785,
-0.015212527476251125,
0.017284274101257324,
-0.08189476281404495,
0.03817418962717056,
-0.... |
spasis/marian-finetuned-kde4-en-to-fr | 40cbbd3582645298cb26de24efd54ae12e7ae605 | 2022-03-15T17:39:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"tanslation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | spasis | null | spasis/marian-finetuned-kde4-en-to-fr | 0 | null | transformers | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
| [
-0.0019952645525336266,
-0.025639906525611877,
0.023540252819657326,
-0.023186910897493362,
-0.011192424222826958,
0.09412544965744019,
-0.028538668528199196,
-0.04248844087123871,
-0.011472877115011215,
-0.05825106427073479,
0.01643148623406887,
-0.10434503853321075,
-0.043210554867982864,
... |
moralstories/gpt2_action_context-consequence | 284a29966aaa68ab47729808b3b22cbac493f06f | 2022-03-15T18:13:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:afl-3.0"
] | text-generation | false | moralstories | null | moralstories/gpt2_action_context-consequence | 0 | null | transformers | ---
license: afl-3.0
---
| [
0.027447504922747612,
0.0013088852865621448,
-0.10637658089399338,
-0.04375426098704338,
0.07398077100515366,
0.03330794721841812,
0.01909494958817959,
-0.07335776090621948,
0.016507230699062347,
0.030171729624271393,
-0.055575042963027954,
-0.05307106301188469,
-0.07680817693471909,
0.041... |
facebook/regnet-x-016 | 5f7992cd8a33f3be2417b0a7b91f349ca6ad2932 | 2022-06-30T10:14:50.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-x-016 | 0 | null | transformers | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). | [
-0.09716660529375076,
0.007682012394070625,
0.014121271669864655,
0.023276086896657944,
0.06860209256410599,
-0.05333618447184563,
-0.014848208986222744,
-0.01589207723736763,
-0.011465679854154587,
-0.030643023550510406,
0.06083591654896736,
-0.012407905422151089,
0.05800802633166313,
0.0... |