---
language: en
license: mit
---

# bettercallbloom-560m

bloom-560m fine-tuned on the r/legal_advice subset of the Pile of Law dataset.

## Model description

This model starts from [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m), a 560M-parameter multilingual causal language model, and fine-tunes it on the r/legal_advice portion of the Pile of Law corpus. The result is a model that generates text in the style of questions and answers posted to the r/legaladvice subreddit.

## Intended uses & limitations

The model is intended for research and experimentation with generative language models in the legal domain, for example drafting or autocompleting forum-style legal questions and answers. It is a small model trained on Reddit posts: its output can be wrong, outdated, or biased, and it is not a substitute for advice from a qualified lawyer.

### How to use

Here is how to use this model to get the features of a given text in PyTorch. The bare model id below is an assumption; prefix it with the user or organization namespace under which the checkpoint is published on the Hub:

```python
from transformers import AutoTokenizer, AutoModel

# NOTE: replace with the full Hub id of this checkpoint
model_id = 'bettercallbloom-560m'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state holds the features
```

Note that BLOOM is implemented in `transformers` for PyTorch, so the TensorFlow classes used with GPT-2-style checkpoints do not apply to this model.
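
Since this is a causal language model, the `pipeline` API is the most direct way to generate text. A minimal sketch, again assuming the bare `bettercallbloom-560m` Hub id:

```python
from transformers import pipeline

# NOTE: replace with the full Hub id of this checkpoint
generator = pipeline('text-generation', model='bettercallbloom-560m')

prompt = "My landlord is refusing to return my security deposit. What can I do?"
print(generator(prompt, max_new_tokens=60, do_sample=True)[0]['generated_text'])
```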

### Limitations and bias

Both the BLOOM pretraining corpus and the Reddit threads used for fine-tuning contain large amounts of unfiltered content from the internet, which is far from neutral. The OpenAI team makes the same general point in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
| 64 |
+
Here's an example of how the model can have biased predictions:
|
| 65 |
+
|

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]

>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```

The same kinds of bias carry over into fine-tuned models, including this one.

## Training data

The base model, [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m), was pretrained by the BigScience collaboration on the multilingual ROOTS corpus. This checkpoint was then fine-tuned on the r/legal_advice subset of the [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law) dataset, which collects question-and-answer threads scraped from the r/legaladvice subreddit.
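
If you want to inspect the fine-tuning data yourself, it can be loaded with the `datasets` library. A minimal sketch, assuming the subset is published under the `r_legaladvice` config of `pile-of-law/pile-of-law`:

```python
from datasets import load_dataset

# Assumption: the r/legal_advice material is the "r_legaladvice"
# config of the Pile of Law dataset on the Hugging Face Hub.
dataset = load_dataset("pile-of-law/pile-of-law", "r_legaladvice")
print(dataset["train"][0]["text"][:500])  # peek at the first post
```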

## Training procedure

### Preprocessing

The texts are tokenized with BLOOM's byte-level Byte Pair Encoding (BPE) tokenizer, which has a vocabulary of roughly 250k tokens. The exact fine-tuning setup (sequence length, batch size, number of steps, hardware) has not been documented for this checkpoint.
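
As a quick sanity check on the tokenization, you can run the base model's tokenizer directly; this sketch assumes the fine-tune reuses the stock `bigscience/bloom-560m` tokenizer files:

```python
from transformers import AutoTokenizer

# BLOOM's byte-level BPE tokenizer from the base checkpoint
tokenizer = AutoTokenizer.from_pretrained('bigscience/bloom-560m')

ids = tokenizer("My landlord kept my deposit.")['input_ids']
print(len(tokenizer), ids)                   # vocabulary size and token ids
print(tokenizer.convert_ids_to_tokens(ids))  # the BPE pieces themselves
```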

## Evaluation results

No evaluation results have been published for this fine-tuned checkpoint. The zero-shot numbers below are those reported for GPT-2 and are kept only as a point of reference:

| Dataset  | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB   | enwiki8 | text8 | WikiText103 | 1BW   |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:-----:|:-------:|:-----:|:-----------:|:-----:|
| (metric) | (PPL)   | (ACC)   | (ACC)  | (ACC)  | (PPL)     | (PPL) | (BPB)   | (BPC) | (PPL)       | (PPL) |
|          | 35.13   | 45.99   | 87.65  | 83.4   | 29.41     | 65.85 | 1.16    | 1.17  | 37.50       | 75.20 |
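
If you want a number for this checkpoint, perplexity on your own legal text is straightforward to compute. A minimal sketch, with the Hub id assumed as above:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# NOTE: replace with the full Hub id of this checkpoint
model_id = 'bettercallbloom-560m'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

enc = tokenizer("Is a verbal lease agreement enforceable?", return_tensors='pt')
with torch.no_grad():
    # With labels equal to input_ids, the model returns the mean
    # cross-entropy over the sequence; exp(loss) is the perplexity.
    loss = model(**enc, labels=enc['input_ids']).loss
print(torch.exp(loss).item())
```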