---
inference:
  parameters:
    temperature: 0.5
widget:
  text: "A courier received 50 packages yesterday and twice as many today.  All of these should be delivered tomorrow. How many packages should be delivered tomorrow?"
---

This model was created using GPT-2 as a base and fine-tuned on a dataset of elementary-school word problems that require logic and reasoning.
Requires PyTorch.
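The fine-tuning itself is ordinary causal-language-model training: the model is trained to predict each next token of a problem-plus-solution text with cross-entropy loss. As a minimal sketch of that objective (using a tiny stand-in network and random token ids rather than GPT-2 and the actual dataset, which are not reproduced here):

```python
import torch
import torch.nn as nn

# Toy stand-in for GPT-2: the objective is the same (next-token
# prediction with cross-entropy), only the model here is tiny.
vocab_size, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Hypothetical tokenized word problem (ids are arbitrary here)
batch = torch.randint(0, vocab_size, (1, 16))

for _ in range(3):  # a few gradient steps
    logits = model(batch[:, :-1])  # predict token t+1 from tokens <= t
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), batch[:, 1:].reshape(-1)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

With the real model, the same loop runs over the tokenized problem dataset instead of random ids.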

How to run inference:
```python
from transformers import AutoTokenizer
import torch

# The tokenizer is the unmodified gpt2-large tokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2-large")

# Load the fine-tuned model (saved with torch.save)
model_path = '../model.pt'
model = torch.load(model_path)

your_text = "A courier received 50 packages yesterday and twice as many today. All of these should be delivered tomorrow. How many packages should be delivered tomorrow?"
encoded_text = tokenizer.encode(your_text, return_tensors='pt')
outputs = model.generate(encoded_text, max_length=64, do_sample=True, temperature=0.5, top_p=1)
outputs = [tokenizer.decode(output) for output in outputs]
print(outputs[0])
```