import pandas as pd
import joblib
from flask import Flask, jsonify, request

# Initialize flask app

# Flask's first argument is the import name; __name__ is the conventional value
sales_prediction_api = Flask(__name__)

# load the model

model = joblib.load('sales_prediction_model_v1_0.joblib')

# create home endpoint

@sales_prediction_api.get('/')
def home():
  return "Welcome to the Superkart product sales forecast API"

# create health check endpoint

@sales_prediction_api.get('/health')
def health_check():
  return jsonify({"status": "ok"}), 200

# create endpoint for single row data processing

@sales_prediction_api.post('/v1/data')
def predict_data():
  data = request.get_json()

  user_input = {
    'Product_Weight': data['Product_Weight'],
    'Product_Sugar_Content': data['Product_Sugar_Content'],
    'Product_Allocated_Area': data['Product_Allocated_Area'],
    'Product_Type': data['Product_Type'],
    'Product_MRP': data['Product_MRP'],
    'Store_Id': data['Store_Id'],
    'Store_Establishment_Year': data['Store_Establishment_Year'],
    'Store_Size': data['Store_Size'],
    'Store_Location_City_Type': data['Store_Location_City_Type'],
    'Store_Type': data['Store_Type']
  }

  df = pd.DataFrame([user_input])

  prediction = model.predict(df).tolist()[0]

  return jsonify({'prediction': prediction})
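
# For reference, a hedged client-side sketch of calling the single-row endpoint.
# The key names below are the ones predict_data reads from the JSON body; the
# values and the local URL are illustrative assumptions, not real data.

```python
import json

# Illustrative payload for POST /v1/data -- key names match the endpoint's
# expected fields; the values here are made up for demonstration only.
sample_row = {
    'Product_Weight': 12.5,
    'Product_Sugar_Content': 'Low Sugar',
    'Product_Allocated_Area': 0.05,
    'Product_Type': 'Frozen Foods',
    'Product_MRP': 120.0,
    'Store_Id': 'OUT004',
    'Store_Establishment_Year': 2009,
    'Store_Size': 'Medium',
    'Store_Location_City_Type': 'Tier 2',
    'Store_Type': 'Supermarket Type2',
}

payload = json.dumps(sample_row)

# With the server running (e.g. locally on Flask's default port), this could
# be sent as:
#   import requests
#   resp = requests.post('http://127.0.0.1:5000/v1/data', json=sample_row)
#   print(resp.json()['prediction'])
```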


# create endpoint for batch processing

@sales_prediction_api.post('/v1/databatch')
def predict_data_batch():
  # Read the uploaded CSV from the multipart form field named 'file'
  file1 = request.files['file']

  df_input = pd.read_csv(file1)

  # Predict on the feature columns; Product_Id is kept aside as the row key
  predictions = model.predict(df_input.drop(['Product_Id'], axis=1)).tolist()
  ids = df_input['Product_Id'].tolist()

  # Map each Product_Id to its predicted sales value
  return jsonify(dict(zip(ids, predictions)))
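
# A sketch of the file shape /v1/databatch expects: a CSV with a Product_Id
# column plus the ten feature columns. The row below is illustrative made-up
# data; only the column names come from the endpoints above.

```python
import io
import pandas as pd

# One made-up row: Product_Id plus the ten feature columns the model expects.
batch = pd.DataFrame([
    {'Product_Id': 'FD001', 'Product_Weight': 12.5,
     'Product_Sugar_Content': 'Low Sugar', 'Product_Allocated_Area': 0.05,
     'Product_Type': 'Frozen Foods', 'Product_MRP': 120.0,
     'Store_Id': 'OUT004', 'Store_Establishment_Year': 2009,
     'Store_Size': 'Medium', 'Store_Location_City_Type': 'Tier 2',
     'Store_Type': 'Supermarket Type2'},
])

buf = io.StringIO()
batch.to_csv(buf, index=False)
csv_bytes = buf.getvalue().encode()

# csv_bytes could then be uploaded as the 'file' field of a multipart POST,
# e.g. requests.post(url, files={'file': ('batch.csv', csv_bytes)})
```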

if __name__ == '__main__':
  # The app object is named sales_prediction_api, not app
  sales_prediction_api.run(debug=True)