ImanAndrea committed on
Commit
fb7219d
·
verified ·
1 Parent(s): b3b4abf

Update annotations for Ed/paper_38.txt

Files changed (1)
  1. annotations/Ed/paper_38.txt.json +8 -0
annotations/Ed/paper_38.txt.json CHANGED
@@ -22,5 +22,13 @@
22       "label": "Lacks synthesis",
23       "user": "Ed",
24       "text": "The GPT-3 models (Brown et al., 2020;Schick and Schütze, 2020) find that with proper manual prompts, a pre-trained LM can successfully match the fine-tuning performance of BERT models. LM-BFF (Gao et al., 2020), EFL (Wang et al., 2021), and AutoPrompt (Shin et al., 2020) further this direction by insert prompts in the input embedding layer. However, these methods rely on grid-search for a natural language-based prompt from a large search space, resulting in difficulties to optimize."
25 +     },
26 +     {
27 +       "file": "paper_38.txt",
28 +       "start": 2461,
29 +       "end": 2602,
30 +       "label": "Unsupported claim",
31 +       "user": "Ed",
32 +       "text": "all existing prompt-tuning methods have thus far focused on task-specific prompts, making them incompatible with the traditional LM objective"
33       }
34     ]
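A minimal sketch of consuming a file shaped like this diff, assuming annotations/Ed/paper_38.txt.json is a JSON array of objects with "file", "start", "end", "label", "user", and "text" fields as shown above (the inline sample data and the grouping helper here are illustrative, not part of the repository):

```python
import json

# Illustrative sample mirroring the structure visible in the diff;
# in practice this would come from reading the annotation file itself.
raw = """
[
  {"file": "paper_38.txt", "start": 2461, "end": 2602,
   "label": "Unsupported claim", "user": "Ed",
   "text": "all existing prompt-tuning methods have thus far focused on task-specific prompts, making them incompatible with the traditional LM objective"}
]
"""

annotations = json.loads(raw)

# Group annotation texts by their label, e.g. to count label usage.
by_label = {}
for ann in annotations:
    by_label.setdefault(ann["label"], []).append(ann["text"])

print(sorted(by_label))  # → ['Unsupported claim']
```

The "start"/"end" offsets appear to be character positions into the annotated source file, so `source_text[ann["start"]:ann["end"]]` would recover the quoted span if the matching paper_38.txt were available.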