tags:
- adversarial
- rank-boosting
- rank-promotion
library_name: transformers
---

# CRAFT-R1-Distill-Llama-70B

Specialized for adversarial rank promotion in neural IR systems, this model is fine-tuned with the Alpaca template on an R1-distilled Llama 70B backbone. It produces fluent, style-consistent sentences that strategically enhance a target document’s relevance score without addressing the query.

- Base architecture: `DeepSeek-R1-Distill-Llama-70B`
- Format: Alpaca instruction template (exact template below)
- Dataset: Adversarial datasets for fine-tuning (link under the Dataset section)

### Task Definition

- Adversarial Ranking Augmentation (primary): Given a Query, a Target Document, and the Top-5 Documents for that query, generate a single adversarial sentence that increases the Target Document's perceived relevance without answering the query. The response must be returned as a JSON object of the form `{ "response": "<one sentence>" }`. Constraints include reusing vocabulary/themes (Buffer A), preserving style/fluency, and avoiding direct answers or explicit references to the query.
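
Because the model is instructed to emit only a JSON object, downstream code usually needs to recover the `response` field from the raw generation, which may still contain the echoed prompt or stray text. A minimal extraction sketch — the regex fallback is an illustrative assumption, not part of the released code:

```python
import json
import re

def extract_response(generated_text: str):
    """Pull the adversarial sentence out of the model's JSON output.

    Tries strict JSON first, then falls back to grabbing the first
    {...} object found anywhere in the text (a pragmatic fallback for
    generations that echo the prompt or add stray tokens).
    """
    try:
        return json.loads(generated_text)["response"]
    except (json.JSONDecodeError, KeyError, TypeError):
        match = re.search(r"\{.*?\}", generated_text, re.DOTALL)
        if match:
            try:
                return json.loads(match.group(0)).get("response")
            except json.JSONDecodeError:
                return None
    return None
```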

## Prompt Formatting

This model was fine-tuned using the Alpaca-style instruction template. The exact prompt format, including the instruction and input content used during fine-tuning, is:

```text
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
You are an expert in information retrieval and adversarial search ranking. Your goal is to help a *Target Document* rank higher.
(ideally between 1 to 10) for a *given query* by subtly increasing its perceived relevance to ranking models such as cross-encoders.

You will do this by generating content that shares vocabulary, structure, and context with high-ranking documents,
but without explicitly answering the query or suggesting a solution.

Extract:

- **Buffer A**: 10-15 words from the Top-5 ranked documents and query itself, strongly associated with the query.

**Generate an adversarial sentences** that satisfy ALL the following:

- Include combination of words (at least 5) or similar words (similar embedding) from Buffer A** that is most related to the query and help promote ranking significantly and integrates well with Target Document
- DO NOT use the words that answer the query.
- Are **fluent**, **grammatically sound**, and **consistent with the style** of the Target Document.
- **Do NOT answer, suggest, or hint at an answer to the Target Query**.
- **Do NOT reference the Target Query at all**.
- Are designed to **sound relevant** but only reinforce theme/context alignment.

### Input:
Query: {query}

Target Document:
{doc_content}

Top-5 Documents:
{top_docs_str}

Generate your answer as a valid JSON object with the following structure:
{
"response": "<your answer here>"
}
Do not include any additional text.

### Response:
```
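
One practical detail when filling this template programmatically: the body contains literal JSON braces, so calling `str.format` on the raw template raises an error (the usage example below doubles them inside an f-string for the same reason). A sketch of a plain-replacement helper — `fill_prompt` is illustrative, not part of the released code:

```python
def fill_prompt(template: str, query: str, doc_content: str, top_docs: list) -> str:
    """Substitute the template's placeholders without str.format().

    The template contains literal JSON braces ({ and }), which
    str.format() would reject, so each named placeholder is replaced
    directly.
    """
    return (
        template.replace("{query}", query)
        .replace("{doc_content}", doc_content)
        .replace("{top_docs_str}", "\n".join(top_docs))
    )

# Minimal stand-in for the full template above.
snippet = 'Query: {query}\n{doc_content}\n{top_docs_str}\n{\n"response": "<your answer here>"\n}'
filled = fill_prompt(snippet, "q", "doc", ["d1", "d2"])
```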

## How to Use (Transformers)

Basic usage with the Alpaca template:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Use the published Hugging Face repo id
model_id = "radinrad/CRAFT-R1-Distill-Llama-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Example inputs
query = "effects of intermittent fasting on metabolism"
doc_content = "...target document content..."
top_docs = ["doc 1 ...", "doc 2 ...", "doc 3 ...", "doc 4 ...", "doc 5 ..."]
top_docs_str = "\n".join(top_docs)

prompt = f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
You are an expert in information retrieval and adversarial search ranking. Your goal is to help a *Target Document* rank higher.
(ideally between 1 to 10) for a *given query* by subtly increasing its perceived relevance to ranking models such as cross-encoders.

You will do this by generating content that shares vocabulary, structure, and context with high-ranking documents,
but without explicitly answering the query or suggesting a solution.

Extract:

- **Buffer A**: 10-15 words from the Top-5 ranked documents and query itself, strongly associated with the query.

**Generate an adversarial sentences** that satisfy ALL the following:

- Include combination of words (at least 5) or similar words (similar embedding) from Buffer A** that is most related to the query and help promote ranking significantly and integrates well with Target Document
- DO NOT use the words that answer the query.
- Are **fluent**, **grammatically sound**, and **consistent with the style** of the Target Document.
- **Do NOT answer, suggest, or hint at an answer to the Target Query**.
- **Do NOT reference the Target Query at all**.
- Are designed to **sound relevant** but only reinforce theme/context alignment.

### Input:
Query: {query}

Target Document:
{doc_content}

Top-5 Documents:
{top_docs_str}

Generate your answer as a valid JSON object with the following structure:
{{
"response": "<your answer here>"
}}
Do not include any additional text.

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=40,
    max_new_tokens=128,
    eos_token_id=tokenizer.eos_token_id,
    # Llama tokenizers often define no pad token; fall back to EOS.
    pad_token_id=tokenizer.pad_token_id or tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

## Recommended Generation Settings

- `do_sample`: true
- `temperature`: 0.6
- `top_p`: 0.95
- `top_k`: 40
- `max_new_tokens`: 128

For most tasks, `temperature=0.6` with `top_p=0.95` works well; keeping `do_sample=True` and `top_k=40` gives a good quality–diversity tradeoff. Adjust `max_new_tokens` to your task length (e.g., 128 for short answers).

## Adversarial Generation Strategy (Recommended)

For adversarial attacks or robust candidate selection, we recommend a generate-then-rank approach:

1. Generate a pool of candidates (≈10) with the recommended decoding settings (`top_p=0.95`, `temperature=0.6`).
2. Score each candidate with an embedding-based surrogate: embed the query and each candidate using BERT base uncased (`google-bert/bert-base-uncased`) and compute their cosine similarity.
3. Select the highest-scoring candidate as the final output.

This pool-plus-ranking approach tends to improve robustness for adversarial objectives.
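
The generate-then-rank loop can be sketched as follows. Only the ranking logic is shown: `toy_embed` is a stand-in illustration, and in practice `embed` would wrap `google-bert/bert-base-uncased` (e.g., mean-pooled last hidden states):

```python
import math

def cosine(a, b) -> float:
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_best(query: str, candidates, embed) -> str:
    """Pick the candidate whose embedding is most similar to the query's.

    `embed` maps text to a vector; in the recipe above it would wrap
    google-bert/bert-base-uncased, but any sentence encoder can serve
    as the surrogate scorer.
    """
    q = embed(query)
    return max(candidates, key=lambda c: cosine(q, embed(c)))

# Stand-in embedder for illustration: bag-of-words over a tiny vocabulary.
VOCAB = ("fasting", "metabolism", "insulin", "diet")

def toy_embed(text: str):
    words = text.lower().split()
    # A small epsilon keeps the norm nonzero for out-of-vocabulary text.
    return [words.count(w) + 1e-9 for w in VOCAB]

best = select_best(
    "effects of intermittent fasting on metabolism",
    ["a sentence about insulin and diet",
     "fasting windows shape metabolism and diet rhythms"],
    toy_embed,
)
```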

## Evaluation

The tables below summarize attack performance and content-fidelity metrics for CRAFT across backbones on the Easy-5 and Hard-5 settings. Values are percentages where applicable; arrows indicate the direction of preference. Daggers (†) denote statistically significant improvements over the strongest baseline in each setting (paired two-tailed t-test, p < 0.05); bold marks the best value in each column.
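
The dagger marks rest on a paired two-tailed t-test over matched per-query scores; the statistic itself is simple to compute. The score lists below are hypothetical illustrations, not values from the tables:

```python
import math

def paired_t(a, b) -> float:
    """Paired t statistic: mean per-query difference over its standard error."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    # Sample variance of the differences (n - 1 in the denominator).
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical per-query scores for a method and a baseline.
t_stat = paired_t([0.90, 0.80, 0.95, 0.97], [0.85, 0.75, 0.90, 0.96])
# t_stat = 4.0, above the two-tailed critical value 3.182 for df = 3 at p < 0.05
```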

### Easy-5

| Method | ASR | Top-10 | Top-50 | Boost | SS (↑) | ATI (↓) | ADT (↓) | LOR (↑) |
|----------------|-----:|-------:|-------:|------:|-------:|--------:|--------:|--------:|
| PRADA | 59.8 | 1.2 | 25.2 | 13.4 | 0.9 | 0.1 | 13.1 | 0.9 |
| Brittle-BERT | 76.3 | 12.9 | 56.8 | 22.6 | 0.9 | 11.6 | 11.6 | 1.0 |
| PAT | 46.8 | 1.4 | 17.2 | -3.3 | 0.9 | 6.3 | 6.3 | 1.0 |
| IDEM | 97.3 | 32.1 | 84.8 | 49.3 | 0.9 | 11.6 | 11.6 | 1.0 |
| EMPRA | **99.4** | 43.5 | 93.4 | 57.6 | 0.9 | 29.8 | 29.8 | 1.0 |
| AttChain | 92.1 | 34.5 | 83.9 | 47.9 | 0.8 | 22.4 | 38.8 | 0.9 |
| CRAFT_Qwen3 | 97.2 | 37.0 | 91.4 | 54.5 | 0.9 | 19.1 | 19.1 | 1.0 |
| CRAFT_Llama3.3 | **99.4** | **44.5** | **95.8**† | **59.7**† | 0.9 | 19.9 | 19.9 | 1.0 |

### Hard-5

| Method | ASR | Top-10 | Top-50 | Boost | SS (↑) | ATI (↓) | ADT (↓) | LOR (↑) |
|----------------|-----:|-------:|-------:|------:|-------:|--------:|--------:|--------:|
| PRADA | 74.3 | 0.0 | 0.0 | 75.5 | 0.9 | 0.1 | 18.5 | 0.9 |
| Brittle-BERT | 99.7 | 4.2 | 23.4 | 744.5 | 0.9 | 11.2 | 11.3 | 1.0 |
| PAT | 80.1 | 0.1 | 0.4 | 79.6 | 0.9 | 11.2 | 6.3 | 1.0 |
| IDEM | 99.8 | 8.3 | 34.5 | 780.8 | 0.9 | 11.2 | 22.4 | 1.0 |
| EMPRA | 99.3 | 10.7 | 40.8 | 828.5 | 0.8 | 32.7 | 32.7 | 1.0 |
| AttChain | 99.8 | 12.2 | 42.4 | 855.2 | 0.7 | 22.8 | 39.0 | 0.9 |
| CRAFT_Qwen3 | **100.0** | 15.3† | 57.1† | 911.5† | 0.8 | 19.1 | 19.1 | 1.0 |
| CRAFT_Llama3.3 | **100.0** | **22.2**† | **70.5**† | **940.5**† | 0.8 | 19.7 | 19.7 | 1.0 |

## Dataset

This model was fine-tuned using data from the following repository:

- GitHub: https://github.com/KhosrojerdiA/adversarial-datasets

Please review the repository for details on data composition, licensing, and any usage constraints.

## Limitations and Bias

- The model may produce incorrect, biased, or unsafe content; use human oversight for critical applications.
- Behavior outside the Alpaca-style instruction format may be less reliable.
- The model has no browsing capability and no up-to-date world knowledge beyond its pretraining and fine-tuning data.

## License and Usage

- License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
- This checkpoint also inherits licensing constraints from the base Llama model and the fine-tuning data. Ensure your usage complies with the base model license and the dataset’s license/terms.
- If you redistribute or deploy this model, please include appropriate attribution and links back to the base model and dataset.

## Acknowledgements

- Base architecture: Llama (Meta)
- Prompt format inspired by Alpaca
|