pagand committed
Commit b02db59 · verified · 1 Parent(s): fbdb46f

parity with GitHub commit 7311250

Files changed (1): prompts.yaml (+63 -0)
prompts.yaml ADDED
@@ -0,0 +1,63 @@
+ system_msg: |
+   <|im_start|>system
+   You are a rigorous financial auditor.<|im_end|>
+
+ user_prefix: |
+   <|im_start|>user
+   ### TASK:
+   Verify whether the CLAIMED_ANSWER to the QUERY, given the AGENT_TRACE, is [Found, Fake, General] based on the EVIDENCE.
+
+   ### VERIFICATION TARGET:
+   **Query:**
+   {query}
+   **AGENT TRACE (Methodology):**
+   {trace}
+   **Claimed Answer:**
+   {statement}
+
+   ### EVIDENCE (Source Document):
+
+ user_suffix:
+   full: |-
+
+
+     ### VERIFICATION TARGET Recap:
+     **Query:**
+     {query}
+     **AGENT TRACE (Methodology):**
+     {trace}
+     **Claimed Answer:**
+     {statement}
+
+     ### TASK:
+     Is the CLAIMED_ANSWER supported by the EVIDENCE? Answer with one of: Found, Fake, General.
+     ### AUDIT ALGORITHM (CRITICAL):
+     Do NOT recalculate the math. Assume the arithmetic in the trace evaluates to the Claimed Answer. You must verify the EXTRACTION and LOGIC:
+     1. EXTRACTION CHECK: Are ALL specific numbers, dates, and entities used in the AGENT TRACE explicitly present in the EVIDENCE? If any number is fabricated or pulled from the wrong row/column -> Fake
+     2. LOGIC CHECK: Does the AGENT TRACE use the correct operation and the correct metrics to answer the QUERY? If it answers the wrong question or uses the wrong year -> Fake
+     3. AXIOM CHECK: If the EVIDENCE is irrelevant, but the Claimed Answer is a widely known universal fact -> General
+     4. If Extraction and Logic are both supported by the EVIDENCE -> Found
+
+     Output your label (Found, Fake, or General) first, followed by your analysis.<|im_end|>
+     <|im_start|>assistant
+     Label:
+   noinstruct: |-
+
+
+     ### VERIFICATION TARGET Recap:
+     **Query:**
+     {query}
+     **AGENT TRACE (Methodology):**
+     {trace}
+     **Claimed Answer:**
+     {statement}
+
+     ### TASK:
+     Is the CLAIMED_ANSWER supported by the EVIDENCE? Answer with one of: Found, Fake, General.
+     ### INSTRUCTIONS
+     1.1. Claim is supported and accurate -> Found
+     1.2. Claim violates the evidence or is wrong -> Fake
+     1.3. Claim is not verifiable from the EVIDENCE but is widely known world knowledge -> General
+     2. Output label (Found, Fake, or General) first, followed by your analysis.<|im_end|>
+     <|im_start|>assistant
+     Label:
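
For context, a minimal sketch of how a template file like this might be consumed. Only the file name `prompts.yaml` and the key names come from the diff above; the loader, the `build_prompt` helper, the `variant` parameter, and the placement of the EVIDENCE text between prefix and suffix are assumptions for illustration, not part of this commit (the evidence placement is inferred from `user_prefix` ending with the "### EVIDENCE (Source Document):" header):

```python
import yaml  # PyYAML

# Load the templates committed above (path from the diff; loading code assumed).
with open("prompts.yaml", encoding="utf-8") as f:
    prompts = yaml.safe_load(f)

def build_prompt(query: str, trace: str, statement: str, evidence: str,
                 variant: str = "full") -> str:
    """Assemble a ChatML-style audit prompt (hypothetical helper).

    `variant` picks between the 'full' suffix (with the audit algorithm)
    and the 'noinstruct' suffix (bare label instructions).
    """
    # system_msg and user_prefix are YAML `|` block scalars, so they keep
    # their trailing newlines; both suffixes use `|-`, so they end exactly
    # at "Label:" with no trailing newline.
    prefix = prompts["user_prefix"].format(
        query=query, trace=trace, statement=statement)
    suffix = prompts["user_suffix"][variant].format(
        query=query, trace=trace, statement=statement)
    # Assumption: the EVIDENCE document goes between prefix and suffix.
    return prompts["system_msg"] + prefix + evidence + suffix
```

Because both suffixes end with `Label:` inside the assistant turn, the model is primed to emit its verdict (Found, Fake, or General) as the first token of the completion, which keeps downstream parsing trivial.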