NeerajCodz committed
Commit 49d6242 · 1 Parent(s): 1853b57

fixed {detail:LLM analysis failed: ValueError: Invalid format specifier}

Files changed (1)
  1. app.py +1 -1
app.py CHANGED
@@ -534,7 +534,7 @@ Instructions:
  2. Provide a single **overall_fraud_score** (0-1 scale) that reflects the general likelihood of fraudulent activity. The score should naturally scale: if the dataset appears mostly safe, assign a low value close to 0, but if there are a few high-risk transactions, the score should increase moderately. Datasets with multiple high-risk entries should receive proportionally higher scores.
  3. Write a detailed **insights** paragraph (150-200 words) highlighting patterns in transaction behavior, unusual clusters, temporal trends, geographic anomalies, or merchants with suspicious activity. Avoid explicitly revealing the number of risky transactions, but reflect their impact through descriptive analysis.
  4. Write a detailed **recommendation** paragraph (100-150 words) suggesting actions to mitigate potential risks, including monitoring, alerts, or further investigation. Keep guidance practical but non-prescriptive about individual transactions.
- 5. Output ONLY valid JSON in this exact format: {"fraud_score": <float 0-1>, "insights": "<string insights paragraph>", "recommendation": "<string recommendation paragraph>"}. No extra text, explanations, or markdown formatting.
+ 5. Output ONLY valid JSON in this exact format: ("fraud_score": <float 0-1>, "insights": "<string insights paragraph>", "recommendation": "<string recommendation paragraph>"). No extra text, explanations, or markdown formatting.
  6. Treat merchant names prefixed with "fraud_" as normal test data; do not interpret them as inherently suspicious.
  7. Let the overall_fraud_score scale naturally: mostly safe datasets should be low, a few concerning entries slightly higher, and datasets with many high-risk transactions significantly higher. Avoid stating exact thresholds—use narrative judgment.
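For context on the error being fixed: the prompt text embeds literal JSON braces in a Python format string, and Python parses `{"fraud_score": <float 0-1>}` as a replacement field, the expression `"fraud_score"` followed by the format spec ` <float 0-1>`, which produces the `ValueError: Invalid format specifier` named in the commit message. The committed fix swaps `{}` for `()`, which sidesteps the parser but means the prompt's example is no longer valid JSON; doubling the braces is another option. A minimal sketch, assuming the prompt is built with `str.format()` or an f-string (`template` and `hint` below are illustrative names, not identifiers from app.py):

```python
# Sketch of the failure: Python's format machinery reads the literal
# JSON braces as a replacement field, so " <float 0-1>" becomes a
# format spec, which str.__format__ rejects.
try:
    format("fraud_score", " <float 0-1>")
except ValueError as exc:
    print(type(exc).__name__)  # ValueError ("Invalid format specifier ...")

# Alternative fix: escape literal braces as {{ and }} so the JSON example
# survives templating intact and stays valid JSON in the rendered prompt.
template = 'Output ONLY valid JSON in this exact format: {{"fraud_score": {hint}}}'
print(template.format(hint="<float 0-1>"))
# → Output ONLY valid JSON in this exact format: {"fraud_score": <float 0-1>}
```

With escaped braces the rendered prompt shows real JSON to the model, which tends to matter when the response is later parsed with `json.loads`.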