---
license: "mit"
---

This model takes text as input and returns the top five paraphrased versions of the input text. The T5 model is fine-tuned on persuasive ad transcripts.

Example usage:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("paragon-analytics/t5_para")
model = AutoModelForSeq2SeqLM.from_pretrained("paragon-analytics/t5_para").to("cuda")

sentence = "Your text here."  # replace with the text you want paraphrased
text = "paraphrase: " + sentence + " </s>"

encoding = tokenizer(text, padding="max_length", truncation=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")

outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=256,
    do_sample=True,            # sampling settings (illustrative) so that
    top_k=120,                 # five distinct paraphrases are returned
    top_p=0.95,
    num_return_sequences=5,
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```
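For repeated use, the steps above can be wrapped in a small helper. A minimal sketch, assuming the `tokenizer` and `model` objects loaded in the example; the `paraphrase` name and the sampling settings are illustrative, not part of this model card:

```python
def paraphrase(sentence, tokenizer, model, num_return_sequences=5, max_length=256):
    """Return a list of candidate paraphrases of `sentence` (illustrative helper).

    `tokenizer` and `model` are the objects loaded in the example above.
    """
    # Run on whatever device the model was placed on (GPU or CPU).
    device = next(model.parameters()).device
    text = "paraphrase: " + sentence + " </s>"
    encoding = tokenizer(text, padding="max_length", truncation=True, return_tensors="pt")
    outputs = model.generate(
        input_ids=encoding["input_ids"].to(device),
        attention_mask=encoding["attention_mask"].to(device),
        max_length=max_length,
        do_sample=True,  # sampling so multiple distinct sequences can be returned
        top_k=120,
        top_p=0.95,
        num_return_sequences=num_return_sequences,
    )
    return [
        tokenizer.decode(o, skip_special_tokens=True, clean_up_tokenization_spaces=True)
        for o in outputs
    ]
```

With the model loaded, `paraphrase("Buy now and save big!", tokenizer, model)` would return five candidate rewrites as a plain Python list.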