|
|
--- |
|
|
license: cc |
|
|
datasets: |
|
|
- HiTZ/euscrawl |
|
|
language: |
|
|
- eu |
|
|
metrics: |
|
|
- perplexity |
|
|
library_name: transformers |
|
|
pipeline_tag: text-generation |
|
|
--- |
|
|
# Model Card for GPT2 Eus Euscrawl |
|
|
|
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
|
|
|
|
GPT-2 small model (124M parameters) pretrained on Basque using a causal language modeling (CLM) objective. The English version of GPT-2 was introduced in
|
|
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) |
|
|
and first released at [this page](https://openai.com/blog/better-language-models/). The team releasing GPT-2 also wrote a |
|
|
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. |
|
|
|
|
|
# Model Details |
|
|
|
|
|
## Model Description |
|
|
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
|
|
|
|
GPT-2 is a transformers model pretrained on a large corpus of Basque data in a self-supervised fashion. This
means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data), using an automatic process to generate inputs and labels from those texts. In short,
it was trained to guess the next word in sentences.
|
|
|
|
|
More precisely, inputs are sequences of continuous text of a certain length, and the targets are the same sequences
shifted one token (word or piece of a word) to the right. The model internally uses a masking mechanism to ensure that
the prediction for token `i` only uses the inputs from `1` to `i` and not the future tokens.
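
As a minimal sketch of this objective (the `gpt2` checkpoint stands in for this model's Hub identifier), the labels passed to the model are simply the input ids themselves, and the library handles the one-token shift internally:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# 'gpt2' is a placeholder; substitute this model's Hub identifier.
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

inputs = tokenizer("Kaixo mundua", return_tensors='pt')
# For causal LM training the labels are the input ids themselves;
# the model shifts them one position to the right internally.
outputs = model(**inputs, labels=inputs['input_ids'])
print(outputs.loss)  # mean cross-entropy of next-token prediction
```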
|
|
|
|
|
This way, the model learns an inner representation of the Basque language that can then be used to extract features
useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating text from a
prompt.
|
|
|
|
|
This is the **smallest** version of GPT-2, with 124M parameters. |
|
|
|
|
|
- **Developed by:** [github.com/juletx](https://github.com/juletx) |
|
|
- **Model type:** GPT2 |
|
|
- **Language(s) (NLP):** Basque (eu) |
|
|
- **License:** cc |
|
|
|
|
|
## Model Sources [optional] |
|
|
|
|
|
<!-- Provide the basic links for the model. --> |
|
|
|
|
|
- **Repository:** [github.com/juletx/phd](https://github.com/juletx/phd) |
|
|
- **Paper [optional]:** [More Information Needed] |
|
|
- **Demo [optional]:** [More Information Needed] |
|
|
|
|
|
# Uses |
|
|
|
|
|
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> |
|
|
|
|
|
## Direct Use |
|
|
|
|
|
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> |
|
|
|
|
|
You can use this model directly with a pipeline for text generation. |
|
|
|
|
|
## Downstream Use [optional] |
|
|
|
|
|
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> |
|
|
|
|
|
You can also fine-tune it on a downstream task; a minimal sketch follows below. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
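
As an illustrative sketch only (not the author's documented procedure), causal-LM fine-tuning with the `transformers` `Trainer` might look as follows; the checkpoint name, data file, and hyperparameters here are all assumptions:

```python
from datasets import load_dataset
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# 'gpt2' is a placeholder; substitute this model's Hub identifier.
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained('gpt2')

# Any text dataset with a 'text' column works; this file name is hypothetical.
dataset = load_dataset('text', data_files={'train': 'my_basque_corpus.txt'})

def tokenize(batch):
    return tokenizer(batch['text'], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=['text'])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='gpt2-eus-finetuned',
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized['train'],
    # mlm=False selects the causal (next-token) objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```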
|
|
|
|
|
## Out-of-Scope Use |
|
|
|
|
|
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
# Bias, Risks, and Limitations |
|
|
|
|
|
<!-- This section is meant to convey both technical and sociotechnical limitations. --> |
|
|
|
|
|
The training data used for this model, EusCrawl, was scraped from Basque websites (see Training Data below). Although
ad-hoc scrapers were used to target high-quality content, text crawled from the internet is far from neutral. As the
OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
|
|
|
|
|
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases |
|
|
> that require the generated text to be true. |
|
|
> |
|
|
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do |
|
|
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
|
|
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, |
|
|
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar |
|
|
> levels of caution around use cases that are sensitive to biases around human attributes. |
|
|
|
|
|
Here's an example, taken from the original English GPT-2 model card, of how models in this family can produce biased predictions; similar issues should be expected in this Basque model:
|
|
|
|
|
```python |
|
|
>>> from transformers import pipeline, set_seed |
|
|
>>> generator = pipeline('text-generation', model='gpt2')  # English GPT-2; outputs below are from that model
|
|
>>> set_seed(42) |
|
|
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5) |
|
|
|
|
|
[{'generated_text': 'The White man worked as a mannequin for'}, |
|
|
{'generated_text': 'The White man worked as a maniser of the'}, |
|
|
{'generated_text': 'The White man worked as a bus conductor by day'}, |
|
|
{'generated_text': 'The White man worked as a plumber at the'}, |
|
|
{'generated_text': 'The White man worked as a journalist. He had'}] |
|
|
|
|
|
>>> set_seed(42) |
|
|
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5) |
|
|
|
|
|
[{'generated_text': 'The Black man worked as a man at a restaurant'}, |
|
|
{'generated_text': 'The Black man worked as a car salesman in a'}, |
|
|
{'generated_text': 'The Black man worked as a police sergeant at the'}, |
|
|
{'generated_text': 'The Black man worked as a man-eating monster'}, |
|
|
{'generated_text': 'The Black man worked as a slave, and was'}] |
|
|
``` |
|
|
|
|
|
This bias will also affect all fine-tuned versions of this model. |
|
|
|
|
|
## Recommendations |
|
|
|
|
|
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> |
|
|
|
|
|
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. |
|
|
|
|
|
## How to Get Started with the Model |
|
|
|
|
|
Use the code below to get started with the model. |
|
|
|
|
|
You can use this model directly with a pipeline for text generation. Since generation relies on some randomness, we
set a seed for reproducibility. Note that the snippet below, adapted from the English GPT-2 model card, loads the
`gpt2` checkpoint; substitute this model's Hub identifier to generate Basque text:
|
|
|
|
|
```python |
|
|
>>> from transformers import pipeline, set_seed |
|
|
>>> generator = pipeline('text-generation', model='gpt2')  # substitute this model's Hub identifier
|
|
>>> set_seed(42) |
|
|
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) |
|
|
|
|
|
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."}, |
|
|
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"}, |
|
|
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"}, |
|
|
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"}, |
|
|
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}] |
|
|
``` |
|
|
|
|
|
Here is how to use this model to get the features of a given text in PyTorch: |
|
|
|
|
|
```python |
|
|
from transformers import GPT2Tokenizer, GPT2Model

# 'gpt2' is a placeholder; substitute this model's Hub identifier.
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
|
|
text = "Replace me by any text you'd like." |
|
|
encoded_input = tokenizer(text, return_tensors='pt') |
|
|
output = model(**encoded_input) |
|
|
``` |
|
|
|
|
|
# Training Details |
|
|
|
|
|
## Training Data |
|
|
|
|
|
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> |
|
|
|
|
|
EusCrawl (http://www.ixa.eus/euscrawl/) is a high-quality corpus for Basque comprising 12.5 million documents
and 423 million tokens, totalling 2.1 GiB of uncompressed text. EusCrawl was built using ad-hoc scrapers to
extract text from 33 Basque websites with high-quality content, resulting in cleaner text compared to
general-purpose approaches. [Dataset Card](https://huggingface.co/datasets/HiTZ/euscrawl)
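
A minimal sketch for loading the corpus with the `datasets` library (the split name is an assumption; check the dataset card for the exact schema):

```python
from datasets import load_dataset

# Loads EusCrawl from the Hugging Face Hub.
euscrawl = load_dataset("HiTZ/euscrawl", split="train")
print(euscrawl)  # inspect the available columns and number of documents
```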
|
|
|
|
|
## Training Procedure |
|
|
|
|
|
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> |
|
|
|
|
|
### Preprocessing [optional] |
|
|
|
|
|
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) for Unicode characters, with a
vocabulary size of 50,304. The inputs are sequences of 1024 consecutive tokens.
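
A quick way to inspect the tokenizer (a sketch; `gpt2` stands in for this model's Hub identifier, whose Basque byte-level BPE vocabulary has 50,304 tokens):

```python
from transformers import GPT2TokenizerFast

# Placeholder checkpoint; substitute this model's Hub identifier.
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
tokens = tokenizer.tokenize("Euskara hizkuntza ederra da.")
print(tokens)
print(len(tokenizer))  # vocabulary size
```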
|
|
|
|
|
### Training Hyperparameters |
|
|
|
|
|
- **Training regime:** bf16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
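
As an illustration of this regime (not the author's exact setup), bf16 mixed precision can be enabled via `TrainingArguments`:

```python
from transformers import TrainingArguments

# bf16=True enables bfloat16 mixed-precision training
# (requires an Ampere-or-newer GPU or a suitable TPU).
args = TrainingArguments(output_dir="gpt2-eus", bf16=True)
```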
|
|
|
|
|
### Speeds, Sizes, Times [optional] |
|
|
|
|
|
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
# Evaluation |
|
|
|
|
|
<!-- This section describes the evaluation protocols and provides the results. --> |
|
|
|
|
|
## Testing Data, Factors & Metrics |
|
|
|
|
|
### Testing Data |
|
|
|
|
|
<!-- This should link to a Data Card if possible. --> |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
### Factors |
|
|
|
|
|
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
### Metrics |
|
|
|
|
|
<!-- These are the evaluation metrics being used, ideally with a description of why. --> |
|
|
|
|
|
[More Information Needed] |
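
The card metadata lists perplexity as the evaluation metric. A minimal sketch for computing it on held-out text (the checkpoint name is a placeholder):

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# 'gpt2' is a placeholder; substitute this model's Hub identifier.
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

text = "Testu hau perplexitatea kalkulatzeko adibide bat da."
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    loss = model(**inputs, labels=inputs['input_ids']).loss
print(math.exp(loss.item()))  # perplexity = exp(mean cross-entropy)
```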
|
|
|
|
|
## Results |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
### Summary |
|
|
|
|
|
|
|
|
|
|
|
# Model Examination [optional] |
|
|
|
|
|
<!-- Relevant interpretability work for the model goes here --> |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
# Environmental Impact |
|
|
|
|
|
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> |
|
|
|
|
|
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). |
|
|
|
|
|
- **Hardware Type:** [More Information Needed] |
|
|
- **Hours used:** [More Information Needed] |
|
|
- **Cloud Provider:** [More Information Needed] |
|
|
- **Compute Region:** [More Information Needed] |
|
|
- **Carbon Emitted:** [More Information Needed] |
|
|
|
|
|
# Technical Specifications [optional] |
|
|
|
|
|
## Model Architecture and Objective |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
## Compute Infrastructure |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
### Hardware |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
### Software |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
# Citation [optional] |
|
|
|
|
|
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> |
|
|
|
|
|
**BibTeX:** |
|
|
|
|
|
```bibtex |
|
|
@article{radford2019language, |
|
|
title={Language Models are Unsupervised Multitask Learners}, |
|
|
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, |
|
|
year={2019} |
|
|
} |
|
|
``` |
|
|
|
|
|
**APA:** |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
# Glossary [optional] |
|
|
|
|
|
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
# More Information [optional] |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
# Model Card Authors [optional] |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
# Model Card Contact |
|
|
|
|
|
[More Information Needed] |