Commit 1ec8ba9
Parent(s): cf9d94b

Update README.md
README.md CHANGED

@@ -31,6 +31,7 @@ Now, with Instruct GPT-J, you can ask things in natural language "like a human":
```text
Correct spelling and grammar from the following text.
I do not wan to go
+
```

Which returns the following:
@@ -51,7 +52,7 @@ import torch

generator = pipeline(model="nlpcloud/instruct-gpt-j", torch_dtype=torch.float16, device=0)

- prompt = "Correct spelling and grammar from the following text.\nI do not wan to go"
+ prompt = "Correct spelling and grammar from the following text.\nI do not wan to go\n"

print(generator(prompt))
```
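For context, the pipeline example as it reads after this change can be assembled into the minimal, self-contained sketch below; the max_new_tokens value is an illustrative assumption and is not set anywhere in the README.

```python
from transformers import pipeline
import torch

# Load the fp16 checkpoint on the first GPU, as in the README example.
generator = pipeline(model="nlpcloud/instruct-gpt-j", torch_dtype=torch.float16, device=0)

# The instruction now ends with "\n", which is exactly what this change adds.
prompt = "Correct spelling and grammar from the following text.\nI do not wan to go\n"

# max_new_tokens is an assumed value for illustration only.
print(generator(prompt, max_new_tokens=50))
```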
@@ -65,7 +66,7 @@ import torch
tokenizer = AutoTokenizer.from_pretrained('nlpcloud/instruct-gpt-j')
generator = AutoModelForCausalLM.from_pretrained("nlpcloud/instruct-gpt-j",torch_dtype=torch.float16).cuda()

- prompt = "Correct spelling and grammar from the following text.\nI do not wan to go"
+ prompt = "Correct spelling and grammar from the following text.\nI do not wan to go\n"

inputs = tokenizer(prompt, return_tensors='pt')
outputs = generator.generate(inputs.input_ids.cuda())
@@ -73,6 +74,22 @@ outputs = generator.generate(inputs.input_ids.cuda())
print(tokenizer.decode(outputs[0]))
```

+ ## Special Note About Input Format
+
+ Due to the way this model was fine-tuned, you should always use new lines at the end of your instructions.
+
+ For example the following instruction might not always work:
+
+ ```text
+ Correct spelling and grammar from the following text.\nI do not wan to go
+ ```
+
+ But this one would:
+
+ ```text
+ Correct spelling and grammar from the following text.\nI do not wan to go\n
+ ```
+
## Hardware Requirements

This model is an fp16 version of our fine-tuned model, which works very well on a GPU with 16GB of VRAM like an NVIDIA Tesla T4.
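Given the "Special Note About Input Format" added above, a small helper can make the trailing newline hard to forget. The `with_trailing_newline` function below is a hypothetical convenience sketch, not something the README provides.

```python
def with_trailing_newline(instruction: str) -> str:
    # The model was fine-tuned on newline-terminated instructions,
    # so normalize every prompt to end with exactly one "\n".
    return instruction.rstrip("\n") + "\n"

prompt = with_trailing_newline(
    "Correct spelling and grammar from the following text.\nI do not wan to go"
)
# prompt now ends with "\n", matching the format the model expects.
```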
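As a rough sanity check on the 16GB VRAM figure in "Hardware Requirements", one could load the fp16 weights and compare the allocated GPU memory against the card's capacity; the snippet below is only a sketch under that assumption.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the fp16 weights onto the GPU, as in the README example.
model = AutoModelForCausalLM.from_pretrained(
    "nlpcloud/instruct-gpt-j", torch_dtype=torch.float16
).cuda()

# Compare the memory taken by the weights with what the card offers.
used_gb = torch.cuda.memory_allocated() / 1024**3
total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
print(f"model weights occupy ~{used_gb:.1f} GB of {total_gb:.1f} GB VRAM")
```

In fp16, GPT-J's roughly 6 billion parameters come to about 12 GB of weights, which is why this export fits on a 16 GB card like the Tesla T4 while an fp32 copy would not.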