NataliaH committed on
Commit 7360167 · 1 Parent(s): 4f80429

Updated model card

Files changed (1): README.md +36 -0
README.md ADDED
@@ -0,0 +1,36 @@
+
+ ---
+ tags:
+ - language-model
+ - transformer-decoder
+ - tiny-shakespeare
+ license: mit
+ datasets:
+ - tiny_shakespeare
+ model_description: |
+   This is a small autoregressive language model based on the Transformer architecture, trained on the Tiny Shakespeare dataset.
+
+ ## Model Description
+ The model is a custom implementation of a TransformerDecoderModel, which uses a decoder-only architecture similar to GPT-2.
+ It was trained on the Tiny Shakespeare dataset to generate text in the style of William Shakespeare.
+
+ ## Training Details
+ Training was tracked with [Weights & Biases](https://wandb.ai/honcharova-de-hannover/LanguageModel_Project?nw=nwuserhoncharovade).
+
+ ## How to Use
+ To generate text with this model, load it and the tokenizer as follows:
+
+ ```python
+ from transformers import AutoTokenizer, GPT2LMHeadModel
+
+ # Load the model and tokenizer from the Hub
+ model = GPT2LMHeadModel.from_pretrained('NataliaH/TransformerDecoderModel')
+ tokenizer = AutoTokenizer.from_pretrained('NataliaH/TransformerDecoderModel')
+
+ # Provide input text and generate output
+ input_text = 'To be or not to be'
+ inputs = tokenizer(input_text, return_tensors='pt')
+ outputs = model.generate(**inputs)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
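With no arguments, `generate` produces only a short greedy continuation; parameters such as `max_new_tokens`, `do_sample`, `temperature`, and `top_k` control output length and diversity. The sketch below demonstrates these options on a tiny, randomly initialized GPT-2 configuration (the sizes and token IDs here are illustrative assumptions, not the actual dimensions of this checkpoint), so it runs without downloading the model:

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

torch.manual_seed(0)

# Hypothetical tiny configuration, for illustration only; the real
# checkpoint on the Hub defines its own vocabulary and layer sizes.
config = GPT2Config(
    vocab_size=100,
    n_positions=64,
    n_embd=32,
    n_layer=2,
    n_head=2,
    bos_token_id=0,
    eos_token_id=0,
)
model = GPT2LMHeadModel(config)  # random weights, untrained
model.eval()

# A dummy prompt of 4 token IDs stands in for tokenizer output.
input_ids = torch.tensor([[1, 2, 3, 4]])

# Sample up to 20 new tokens with temperature and top-k filtering.
outputs = model.generate(
    input_ids,
    max_new_tokens=20,
    do_sample=True,
    temperature=0.8,
    top_k=50,
    pad_token_id=config.eos_token_id,
)
print(outputs.shape)  # (batch, prompt length + generated tokens)
```

The same keyword arguments can be passed to `model.generate(**inputs, ...)` in the snippet above to get longer or more varied Shakespeare-style completions.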