from flask import Flask, render_template, request, jsonify
import joblib
import pandas as pd
import numpy as np
import os

app = Flask(__name__)

# Load model and scalers
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
MODEL_PATH = os.path.join(BASE_DIR, 'fraud_model.joblib')
SCALER_AMOUNT_PATH = os.path.join(BASE_DIR, 'scaler_amount.joblib')
SCALER_TIME_PATH = os.path.join(BASE_DIR, 'scaler_time.joblib')
DATA_PATH = os.path.join(BASE_DIR, 'creditcard.csv')

model = joblib.load(MODEL_PATH)
scaler_amount = joblib.load(SCALER_AMOUNT_PATH)
scaler_time = joblib.load(SCALER_TIME_PATH)

# Cache a few sample transactions for the frontend
# (fixed random_state so the same examples are served across restarts)
df_all = pd.read_csv(DATA_PATH)
fraud_samples = df_all[df_all['Class'] == 1].sample(10, random_state=42).to_dict('records')
normal_samples = df_all[df_all['Class'] == 0].sample(10, random_state=42).to_dict('records')

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/get_samples', methods=['GET'])
def get_samples():
    return jsonify({
        "fraud": fraud_samples,
        "normal": normal_samples
    })

@app.route('/predict', methods=['POST'])
def predict():
    try:
        data = request.get_json()
        
        # Prepare feature vector (V1-V28, scaled_amount, scaled_time)
        v_features = [float(data.get(f'V{i}', 0)) for i in range(1, 29)]
        
        amount = float(data.get('Amount', 0))
        time = float(data.get('Time', 0))
        
        scaled_amount = scaler_amount.transform([[amount]])[0][0]
        scaled_time = scaler_time.transform([[time]])[0][0]
        
        # Combine all features
        # Feature order must match the training script: X = df.drop('Class', axis=1),
        # where df's columns were V1...V28, scaled_amount, scaled_time
        # (the raw Amount/Time columns were dropped after scaling).
        feature_vector = np.array(v_features + [scaled_amount, scaled_time]).reshape(1, -1)
        
        prediction = int(model.predict(feature_vector)[0])
        probability = model.predict_proba(feature_vector)[0].tolist()
        
        return jsonify({
            "is_fraud": prediction == 1,
            "confidence": max(probability) * 100,
            "class": "Fraudulent" if prediction == 1 else "Legitimate"
        })

    except Exception as e:
        return jsonify({"error": str(e)}), 400
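
# --- Example payload for manually exercising /predict ----------------------
# A sketch for local testing, not used by the app: build_example_payload is a
# hypothetical helper; the keys mirror what predict() reads above, and the
# V1-V28 values are placeholders, not real PCA components.
def build_example_payload(amount=120.50, time_s=40000.0):
    """Return a dict shaped like the JSON body that /predict expects."""
    payload = {f"V{i}": 0.0 for i in range(1, 29)}  # V1..V28 placeholders
    payload["Amount"] = amount
    payload["Time"] = time_s
    return payload

# Usage (assumes the server is running and `requests` is installed):
#   import requests
#   resp = requests.post("http://localhost:7860/predict",
#                        json=build_example_payload())
#   print(resp.json())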

if __name__ == '__main__':
    # Hugging Face Spaces expects the app to listen on port 7860
    port = int(os.environ.get("PORT", 7860))
    # Keep debug mode off in deployment (the debugger allows code execution)
    app.run(host='0.0.0.0', port=port, debug=False)