# Appendix A: LLM Benchmark Prompt Templates

This appendix documents all prompt templates used in the NegBioDB LLM benchmark (tasks L1--L4). Templates are reproduced verbatim from `src/negbiodb/llm_prompts.py` and `src/negbiodb/llm_eval.py`.

## A.1 System Prompt (Shared Across All Tasks)

```
You are a pharmaceutical scientist with expertise in drug-target interactions, assay development, and medicinal chemistry. Provide precise, evidence-based answers.
```

## A.2 L1: Activity Classification (Multiple Choice)

### A.2.1 Zero-Shot Template

```
{context}
```

### A.2.2 Few-Shot Template

```
Here are some examples of drug-target interaction classification:

{examples}

Now classify the following:

{context}
```

### A.2.3 Answer Format Instruction

```
Respond with ONLY the letter of the correct answer: A, B, C, or D.
```

The answer format instruction is appended after both zero-shot and few-shot templates.
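The assembly described above (template filled, then the answer-format instruction appended) can be sketched as follows. This is an illustrative reconstruction, not the code from `src/negbiodb/llm_prompts.py`; the function and constant names are hypothetical.

```python
# Hypothetical sketch of L1 prompt assembly; only the template strings
# themselves are taken verbatim from this appendix.
ANSWER_FORMAT = "Respond with ONLY the letter of the correct answer: A, B, C, or D."

FEW_SHOT_TEMPLATE = (
    "Here are some examples of drug-target interaction classification:\n\n"
    "{examples}\n\n"
    "Now classify the following:\n\n"
    "{context}"
)


def build_l1_prompt(context, examples=None):
    """Return the full L1 user prompt with the answer-format instruction appended."""
    if examples is None:
        body = context  # zero-shot: the context alone
    else:
        body = FEW_SHOT_TEMPLATE.format(examples=examples, context=context)
    return f"{body}\n\n{ANSWER_FORMAT}"
```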

## A.3 L2: Structured Extraction

### A.3.1 Zero-Shot Template

```
Extract all negative drug-target interaction results from the following abstract.

Abstract:
{abstract_text}

For each negative result found, extract:
- compound: compound/drug name
- target: target protein/gene name
- target_uniprot: UniProt accession (if determinable)
- activity_type: type of measurement (IC50, Ki, Kd, EC50, etc.)
- activity_value: reported value with units
- activity_relation: relation (=, >, <, ~)
- assay_format: biochemical, cell-based, or in vivo
- outcome: inactive, weak, or inconclusive

Also report:
- total_inactive_count: total number of inactive results mentioned
- positive_results_mentioned: true/false

Respond in JSON format.
```

### A.3.2 Few-Shot Template

```
Extract negative drug-target interaction results from abstracts.

{examples}

Now extract from this abstract:

Abstract:
{abstract_text}

Respond in JSON format.
```

Few-shot examples include the abstract text and the corresponding gold extraction in JSON format, separated by `---` delimiters.
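The delimiter scheme just described might be implemented along these lines. The helper name and exact JSON layout are assumptions for illustration; the benchmark's own formatting code in `llm_eval.py` may differ.

```python
import json


def format_l2_examples(pairs):
    """Join (abstract, gold_extraction) example pairs with --- delimiters.

    A plausible reconstruction of the few-shot block described in the text;
    illustrative only.
    """
    blocks = []
    for abstract, gold in pairs:
        blocks.append(
            f"Abstract:\n{abstract}\n\nExtraction:\n{json.dumps(gold, indent=2)}"
        )
    return "\n---\n".join(blocks)
```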

## A.4 L3: Scientific Reasoning

### A.4.1 Zero-Shot Template

```
{context}

Provide a detailed scientific explanation (3-5 paragraphs) covering:
1. Structural compatibility between compound and target binding site
2. Known selectivity profile and mechanism of action
3. Relevant SAR (structure-activity relationship) data
4. Pharmacological context and therapeutic implications
```

### A.4.2 Few-Shot Template

```
Here are examples of scientific reasoning about inactive drug-target interactions:

{examples}

Now explain the following:

{context}
```

### A.4.3 LLM-as-Judge Rubric

Responses are evaluated by a judge model (Gemini 2.5 Flash) using the following rubric:

```
Rate the following scientific explanation of why a compound is inactive against a target.

Compound: {compound_name}
Target: {target_gene} ({target_uniprot})

Explanation to evaluate:
{response}

Rate on these 4 dimensions (1-5 each):
1. Accuracy: Are the scientific claims factually correct?
2. Reasoning: Is the logical chain from structure to inactivity sound?
3. Completeness: Are all relevant factors considered (binding, selectivity, SAR)?
4. Specificity: Does the explanation use specific molecular details, not generalities?

Respond in JSON: {{"accuracy": X, "reasoning": X, "completeness": X, "specificity": X}}
```

The judge returns scores as JSON with four dimensions (accuracy, reasoning, completeness, specificity), each rated 1--5.
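Parsing such a response could look like the sketch below. This is a hedged illustration, not the benchmark's parser: it assumes the judge may wrap its JSON in a Markdown code fence (a common model behavior) and validates that each dimension is present and in the 1-5 range.

```python
import json
import re

DIMENSIONS = ("accuracy", "reasoning", "completeness", "specificity")


def parse_judge_scores(raw):
    """Extract the four 1-5 rubric scores from a judge response string.

    Illustrative only: strips an optional ```json fence before parsing,
    then checks every dimension is present and in range.
    """
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    scores = json.loads(text)
    for dim in DIMENSIONS:
        if not 1 <= scores[dim] <= 5:
            raise ValueError(f"{dim} out of range: {scores[dim]}")
    return {dim: scores[dim] for dim in DIMENSIONS}
```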

## A.5 L4: Tested vs Untested Discrimination

### A.5.1 Zero-Shot Template

```
{context}
```

### A.5.2 Few-Shot Template

```
Here are examples of tested/untested compound-target pair determination:

{examples}

Now determine:

{context}
```

### A.5.3 Answer Format Instruction

```
Respond with 'tested' or 'untested' on the first line. If tested, provide the evidence source (database, assay ID, or DOI) on the next line.
```

The answer format instruction is appended after both zero-shot and few-shot templates.
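A two-line response in this format might be parsed as follows. The parser below is a hypothetical sketch matching the answer-format instruction; the benchmark's own parsing logic may differ (e.g. in how it handles quoting or extra lines).

```python
def parse_l4_response(raw):
    """Parse an L4 response: verdict on line 1, optional evidence on line 2.

    Hypothetical parser for illustration only.
    """
    lines = [ln.strip() for ln in raw.strip().splitlines() if ln.strip()]
    if not lines:
        return None, None  # empty response
    verdict = lines[0].lower().strip("'\".")  # tolerate quoted verdicts
    if verdict not in ("tested", "untested"):
        return None, None  # unparseable verdict
    evidence = lines[1] if verdict == "tested" and len(lines) > 1 else None
    return verdict, evidence
```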

## A.6 Model Configuration

| Parameter | Value |
|-----------|-------|
| Temperature | 0.0 (deterministic) |
| Max output tokens | 1024 (L1/L4), 2048 (L2/L3) |
| Few-shot sets | 3 independent sets (fs0, fs1, fs2) |
| Retry policy | Exponential backoff, max 8 retries |
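The retry policy is a standard exponential-backoff-with-jitter loop; a minimal sketch is shown below. The delay constants and the exceptions caught are assumptions, only the cap of 8 retries comes from the configuration above.

```python
import random
import time


def call_with_retries(fn, max_retries=8, base_delay=1.0):
    """Call fn, retrying on failure with exponential backoff and jitter.

    Generic sketch; the benchmark's actual wrapper may differ in delay
    constants and in which exceptions it treats as retryable.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # exhausted retries; surface the final error
            # 1x, 2x, 4x, ... base_delay, plus jitter to avoid thundering herd
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```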

### Models

| Model | Provider | Inference |
|-------|----------|-----------|
| Claude Haiku-4.5 | Anthropic API | Cloud |
| Gemini 2.5 Flash | Google Gemini API | Cloud |
| GPT-4o-mini | OpenAI API | Cloud |
| Qwen2.5-7B-Instruct | vLLM | Local (A100 GPU) |
| Llama-3.1-8B-Instruct | vLLM | Local (A100 GPU) |

Gemini 2.5 Flash uses `thinkingConfig: {thinkingBudget: 0}` to disable internal reasoning tokens and ensure the full output budget is available for the response.
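For reference, a Gemini REST request body combining this setting with the decoding parameters from the table above might look like the fragment below (field placement follows the public Gemini API; the prompt text is a placeholder):

```json
{
  "contents": [{"role": "user", "parts": [{"text": "..."}]}],
  "generationConfig": {
    "temperature": 0.0,
    "maxOutputTokens": 2048,
    "thinkingConfig": {"thinkingBudget": 0}
  }
}
```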