---
language: en
thumbnail: http://www.huggingtweets.com/fabrizioromano/1655681846804/predictions.png
tags:
- huggingtweets
widget:
- text: My dream is
---
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?

The model uses the following pipeline: tweets are downloaded, filtered and preprocessed, and then used to fine-tune a pre-trained GPT-2 model.

To understand how the model was developed, check the W&B report.
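As an illustration, here is a minimal Python sketch of the filtering and preprocessing stage. The rules (dropping retweets, stripping URLs, discarding very short tweets) mirror the categories in the training-data table below; the `preprocess` helper, the URL regex, and the `min_length` threshold are assumptions for the sake of a runnable example, not the project's actual code.

```python
import re

def preprocess(tweets, min_length=10):
    """Assumed filtering rules; the real logic lives in the huggingtweets repo."""
    kept = []
    for tweet in tweets:
        if tweet.startswith("RT"):  # drop retweets
            continue
        text = re.sub(r"https?://\S+", "", tweet).strip()  # strip URLs
        if len(text) < min_length:  # drop very short tweets
            continue
        kept.append(text)
    return kept

sample = [
    "RT @someone: breaking news",
    "Here we go! Official and confirmed. https://t.co/xyz",
    "Done.",
]
print(preprocess(sample))  # -> ['Here we go! Official and confirmed.']
```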
## Training data
The model was trained on tweets from Fabrizio Romano.
| Data | Fabrizio Romano |
|---|---|
| Tweets downloaded | 3250 |
| Retweets | 192 |
| Short tweets | 255 |
| Tweets kept | 2803 |
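The counts above are internally consistent; as a quick check:

```python
# Consistency check on the table: tweets kept = downloaded - retweets - short tweets
assert 3250 - 192 - 255 == 2803
```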
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure

The model is based on a pre-trained GPT-2, which is fine-tuned on @fabrizioromano's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
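The exact hyperparameters are recorded in the W&B run mentioned above; as a rough illustration of the general shape of such a fine-tune, here is a minimal sketch using the transformers `Trainer` API. The stand-in corpus, epoch count, and batch size are assumptions for the sake of a runnable example, not the project's actual training script.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Stand-in corpus; the real training data is the filtered tweets.
tweets = ["Here we go! Official announcement soon.", "Deal confirmed today."]
encodings = tokenizer(tweets, truncation=True, max_length=128)
dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=1, report_to="none"),
    train_dataset=dataset,
    # mlm=False gives standard causal language modeling, as used for GPT-2
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```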
## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/fabrizioromano')
generator("My dream is", num_return_sequences=5)
```
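Sampling is stochastic, so repeated calls return different completions. If you want reproducible outputs, transformers provides `set_seed`; continuing from the pipeline above (the seed value is arbitrary):

```python
from transformers import set_seed

set_seed(42)  # any fixed seed gives repeatable samples
generator("My dream is", num_return_sequences=5)
```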
## Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About

*Built by Boris Dayma*
For more details, visit the project repository.
