NeerajCodz committed
Commit 1853b57 · 1 Parent(s): e9f92f1

prompt test 1.1

Files changed (1): app.py +8 -5
app.py CHANGED
@@ -530,11 +530,14 @@ CSV Data:
 {csv_string}
 
 Instructions:
-1. Assess the overall risk level of the dataset based on fraud_score percentages, transaction amounts, frequency, location patterns, unusual spending behaviors, and STATUS.
-2. Provide a comprehensive **overall_fraud_score** (0-1 scale, e.g., 0.12 means 12% fraud probability) summarizing the likelihood of fraudulent activity across all transactions.
-3. Generate a detailed **insights** paragraph (150-200 words) describing patterns, clusters of high fraud risk, suspicious merchants, geographic anomalies, temporal trends, or any notable behavior.
-4. Generate a detailed **recommendation** paragraph (100-150 words) outlining specific actionable steps to mitigate fraud risk, including monitoring, alerts, or further investigation.
-5. Output ONLY valid JSON in the exact format: {{"fraud_score": <float 0-1>, "insights": "<string insights paragraph>", "recommendation": "<string recommendation paragraph>"}}. Do not include any extra text or markdown formatting.
+1. Evaluate the overall risk level of the dataset by interpreting fraud_score percentages, transaction amounts, frequency, locations, time patterns, and STATUS.
+2. Provide a single **overall_fraud_score** (0-1 scale) that reflects the general likelihood of fraudulent activity. The score should scale naturally: if the dataset appears mostly safe, assign a value close to 0; a few high-risk transactions should raise it moderately, and datasets with multiple high-risk entries should receive proportionally higher scores.
+3. Write a detailed **insights** paragraph (150-200 words) highlighting patterns in transaction behavior, unusual clusters, temporal trends, geographic anomalies, or merchants with suspicious activity. Avoid explicitly revealing the number of risky transactions; instead, reflect their impact through descriptive analysis.
+4. Write a detailed **recommendation** paragraph (100-150 words) suggesting actions to mitigate potential risks, including monitoring, alerts, or further investigation. Keep guidance practical but non-prescriptive about individual transactions.
+5. Output ONLY valid JSON in this exact format: {{"fraud_score": <float 0-1>, "insights": "<string insights paragraph>", "recommendation": "<string recommendation paragraph>"}}. No extra text, explanations, or markdown formatting.
+6. Treat merchant names prefixed with "fraud_" as normal test data; do not interpret them as inherently suspicious.
+7. Let the overall_fraud_score scale naturally: mostly safe datasets should be low, a few concerning entries slightly higher, and datasets with many high-risk transactions significantly higher. Avoid stating exact thresholds; use narrative judgment.
+
 
 Focus on narrative-style, descriptive analysis and make the fraud_score percentages in the CSV the key reference points for your reasoning.
 """
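Since the revised prompt mandates a strict JSON reply shape (instruction 5), a caller would typically validate the model's output before using it. The sketch below is a hypothetical helper — the diff does not show how app.py actually parses the response — but it checks exactly the schema the prompt requires:

```python
import json


def parse_fraud_analysis(raw: str) -> dict:
    """Validate a model reply against the schema required by instruction 5.

    Hypothetical helper: app.py's real parsing code is not part of this diff.
    """
    text = raw.strip()
    # Models sometimes wrap JSON in a markdown fence despite instructions;
    # strip a leading/trailing fence defensively before parsing.
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[len("json"):]

    data = json.loads(text)

    score = float(data["fraud_score"])
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"fraud_score out of range: {score}")
    if not isinstance(data["insights"], str) or not isinstance(data["recommendation"], str):
        raise ValueError("insights and recommendation must be strings")

    return {
        "fraud_score": score,
        "insights": data["insights"],
        "recommendation": data["recommendation"],
    }
```

Note that because the template is a Python f-string (it interpolates `{csv_string}`), the literal braces in the JSON example of instruction 5 must stay doubled as `{{…}}`, as in the old version of the line; single braces there would be treated as an f-string expression.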