---
language: gr
thumbnail: https://huggingface.co/macedonizer/gr-roberta-base/lets-talk-about-nlp-gr.jpg
license: Apache 2.0
datasets:
- wiki-gr
---

# gr-gpt2

Test the model's full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large

This is a model pretrained on the Greek language using a causal language modeling (CLM) objective. The GPT-2 approach was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).

## Model description

gr-gpt2 is a transformers model pretrained on a very large corpus of Greek data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.

Inputs are sequences of continuous text of a certain length, and the targets are the same sequences
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i`, and never the future tokens.

This way, the model learns an inner representation of the Greek language that can then be used to extract features
useful for downstream tasks. The model is nonetheless best at what it was pretrained for, which is generating text from a
prompt.

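To make the shift-by-one labelling concrete, here is a minimal, hypothetical sketch (not taken from the actual training code): the targets are simply the input tokens shifted by one position, so the prediction for token `i` can only depend on tokens `1` to `i`.

```python
# Hypothetical illustration (not the actual training code): causal language
# modeling pairs each prefix of the text with the next token as its label.
tokens = ["Η", "Αθήνα", "είναι", "η", "πρωτεύουσα"]

inputs = tokens[:-1]  # all tokens except the last
labels = tokens[1:]   # the same sequence shifted one token to the right

for i, target in enumerate(labels, start=1):
    context = tokens[:i]           # tokens 1..i, never future tokens
    print(context, "->", target)   # the model must predict `target` from `context`
```
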
### How to use

Here is how to use this model to generate text from a given prompt in PyTorch:

```python
import random

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('macedonizer/gr-gpt2')
model = AutoModelWithLMHead.from_pretrained('macedonizer/gr-gpt2')

input_text = 'Η Αθήνα είναι'

if len(input_text) == 0:
    # Empty prompt: generate unconditionally, starting from a random BOS token
    output = model.generate(
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )
else:
    # Tokenize the prompt and generate a continuation conditioned on it
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **encoded_input,
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )

# Decode the generated token ids back into plain text
decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))

print(decoded_output)
```
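
For quick experiments, the same generation can also be run through the `transformers` text-generation pipeline; the following is a hedged sketch, not part of the original card, and assumes the `macedonizer/gr-gpt2` checkpoint shown above:

```python
from transformers import pipeline

# Sketch of an alternative path (not from the original card): the
# text-generation pipeline bundles tokenization, generation and decoding.
generator = pipeline('text-generation', model='macedonizer/gr-gpt2')

samples = generator('Η Αθήνα είναι', max_length=50, do_sample=True, top_k=50, top_p=0.95)
print(samples[0]['generated_text'])
```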