radinrad committed
Commit 2f7d3f5 · verified · Parent: e291185

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
```diff
@@ -101,11 +101,11 @@ but without explicitly answering the query or suggesting a solution.
 
 Extract:
 
-- **Buffer A**: 10-15 words from the Top-5 ranked documents and query itself, strongly associated with the query.
+- **Buffer A**: 10-15 words from the Top-5 ranked documents and the query itself, strongly associated with the query.
 
 **Generate an adversarial sentences** that satisfy ALL the following:
 
-- Include combination of words (at least 5) or similar words (similar embedding) from Buffer A** that is most related to the query and help promote ranking significantly and integrates well with Target Document
+- Include a combination of words (at least 5) or similar words (similar embedding) from Buffer A** that is most related to the query and help promote ranking significantly and integrates well with Target Document
 - DO NOT use the words that answer the query.
 - Are **fluent**, **grammatically sound**, and **consistent with the style** of the Target Document.
 - **Do NOT answer, suggest, or hint at an answer to the Target Query**.
@@ -165,7 +165,7 @@ Recommended decoding settings:
 For adversarial attack or robust candidate selection, we recommend a generate-then-rank approach:
 
 1. Generate a pool of candidates (≈10) with the same decoding settings (top_p=0.95, temperature=0.6).
-2. Score each candidate using an embedding-based surrogate with BERT base uncased (`google-bert/bert-base-uncased`). Compute cosine similarity between the query and each candidate and pick the highest.
+2. Score each candidate using a surrogate model e.g. BERT base uncased (`google-bert/bert-base-uncased`). Compute cosine similarity between the query and each candidate and pick the highest.
 3. Select the highest-scoring candidate as the final output.
 
 This pool-plus-ranking approach tends to improve robustness for adversarial objectives.
```
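The scoring step in the second hunk (cosine similarity between a query embedding and each candidate embedding) can be sketched as below. This is a minimal illustration, not code from the commit: `select_best_candidate` is a hypothetical helper, and in practice the vectors would come from an encoder such as `google-bert/bert-base-uncased` (e.g. mean-pooled hidden states via the `transformers` library).

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two 1-D embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_best_candidate(query_vec: np.ndarray, candidate_vecs: list) -> int:
    # Steps 2-3 of the generate-then-rank recipe: score every candidate
    # against the query and return the index of the highest-scoring one.
    scores = [cosine_similarity(query_vec, v) for v in candidate_vecs]
    return int(np.argmax(scores))
```

With real embeddings, the toy vectors would be replaced by encoder outputs for the query and each of the ~10 generated candidates; the selection logic is unchanged.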