Commit 1853b57 · Parent(s): e9f92f1
prompt test 1.1
app.py CHANGED
@@ -530,11 +530,14 @@ CSV Data:
 {csv_string}
 
 Instructions:
-1.
-2. Provide a
-3.
-4.
-5. Output ONLY valid JSON in
+1. Evaluate the overall risk level of the dataset by interpreting fraud_score percentages, transaction amounts, frequency, locations, time patterns, and STATUS.
+2. Provide a single **overall_fraud_score** (0-1 scale) that reflects the general likelihood of fraudulent activity. The score should naturally scale: if the dataset appears mostly safe, assign a low value close to 0, but if there are a few high-risk transactions, the score should increase moderately. Datasets with multiple high-risk entries should receive proportionally higher scores.
+3. Write a detailed **insights** paragraph (150-200 words) highlighting patterns in transaction behavior, unusual clusters, temporal trends, geographic anomalies, or merchants with suspicious activity. Avoid explicitly revealing the number of risky transactions, but reflect their impact through descriptive analysis.
+4. Write a detailed **recommendation** paragraph (100-150 words) suggesting actions to mitigate potential risks, including monitoring, alerts, or further investigation. Keep guidance practical but non-prescriptive about individual transactions.
+5. Output ONLY valid JSON in this exact format: {"fraud_score": <float 0-1>, "insights": "<string insights paragraph>", "recommendation": "<string recommendation paragraph>"}. No extra text, explanations, or markdown formatting.
+6. Treat merchant names prefixed with "fraud_" as normal test data; do not interpret them as inherently suspicious.
+7. Let the overall_fraud_score scale naturally: mostly safe datasets should be low, a few concerning entries slightly higher, and datasets with many high-risk transactions significantly higher. Avoid stating exact thresholds; use narrative judgment.
+
 
 Focus on narrative-style, descriptive analysis and make the fraud_score percentages in the CSV the key reference points for your reasoning.
 """
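Instruction 5 in the new prompt pins the model to a strict JSON shape, so the caller presumably needs to parse and validate the reply before using it. A minimal sketch of such a validator (`parse_fraud_response` is a hypothetical helper for illustration, not a function shown in app.py):

```python
import json


def parse_fraud_response(raw: str) -> dict:
    """Parse a model reply matching the format required by instruction 5:
    {"fraud_score": <float 0-1>, "insights": "<string>", "recommendation": "<string>"}
    Raises on malformed JSON, missing keys, wrong types, or out-of-range scores.
    """
    data = json.loads(raw)  # raises json.JSONDecodeError if extra text surrounds the JSON
    score = float(data["fraud_score"])
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"fraud_score out of range: {score}")
    if not isinstance(data["insights"], str) or not isinstance(data["recommendation"], str):
        raise TypeError("insights and recommendation must be strings")
    return {
        "fraud_score": score,
        "insights": data["insights"],
        "recommendation": data["recommendation"],
    }


# Example reply shaped like the format instruction 5 demands (illustrative values):
example = (
    '{"fraud_score": 0.12, '
    '"insights": "Mostly routine activity.", '
    '"recommendation": "Continue monitoring."}'
)
result = parse_fraud_response(example)
```

Because the prompt forbids extra text or markdown fences around the JSON, a bare `json.loads` suffices here; a more defensive caller might still strip stray whitespace or fences before parsing, since models do not always honor output-format instructions.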