Kavya-Jain committed
Commit 6abfac2 · verified · 1 Parent(s): 8d8bdf6

Upload 12 files

Electricity Cost Prediction API

Project Overview ->
This Space hosts a machine-learning API for predicting electricity costs, built with FastAPI and deployed with Docker.
It exposes a robust API endpoint that estimates total electricity cost from facility and operational parameters, using a trained machine-learning model to help understand and predict energy expenditure.

Key Features ->

Predictive API: Offers a `/predict` endpoint that receives input data and returns an estimated electricity cost
Data Preprocessing: Handles data cleaning (imputation), categorical encoding (Label Encoding for `structure_type`), and numerical scaling (StandardScaler) automatically before prediction
Dockerized Deployment: Packaged as a Docker container for consistent, reproducible deployment across environments
FastAPI Framework: Built on FastAPI for high performance, straightforward API development, and automatic interactive documentation (Swagger UI)

Input Features ->

The API expects a JSON payload with the following parameters:

* `site_area` (float): Area of the site in square units.
* `structure_type` (string): Type of structure (e.g., "residential", "commercial").
* `water_consumption` (float): Daily/monthly water consumption.
* `recycling_rate` (float): Percentage of waste recycled.
* `utilisation_rate` (float): Rate of facility utilization.
* `air_qality_index` (float): Air quality index (field name spelled as in the training dataset).
* `issue_reolution_time` (float): Time taken to resolve issues, e.g. in hours (field name spelled as in the training dataset).
* `resident_count` (integer): Number of residents/occupants.

How to Use the API ->
You can interact with the API directly from the automatically generated Swagger UI.

1. Navigate to your Space's URL (e.g., `https://huggingface.co/spaces/<your-username>/<your-space-name>`).
2. Append `/docs` to the URL to access the interactive API documentation:
`https://huggingface.co/spaces/<your-username>/<your-space-name>/docs`
3. Expand the `/predict` endpoint.
4. Click "Try it out".
5. Enter a JSON payload in the "Request body" field with sample data:
```json
{
  "site_area": 1850.0,
  "structure_type": "residential",
  "water_consumption": 15300.0,
  "recycling_rate": 52.3,
  "utilisation_rate": 81.6,
  "air_qality_index": 39.0,
  "issue_reolution_time": 2.7,
  "resident_count": 320
}
```
6. Click "Execute" to get a prediction.
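Outside the Swagger UI, the same request can be assembled programmatically. A minimal sketch using only the Python standard library; the Space URL is a placeholder for your own deployment (Docker Spaces are typically reachable at a `<user>-<space>.hf.space` host, but verify yours):

```python
import json
from urllib import request

# Placeholder URL -- replace with your own Space's endpoint
API_URL = "https://<your-username>-<your-space-name>.hf.space/predict"

# The sample payload from the README above
payload = {
    "site_area": 1850.0,
    "structure_type": "residential",
    "water_consumption": 15300.0,
    "recycling_rate": 52.3,
    "utilisation_rate": 81.6,
    "air_qality_index": 39.0,
    "issue_reolution_time": 2.7,
    "resident_count": 320,
}

def build_request(url: str, body: dict) -> request.Request:
    # POST the JSON-encoded body with the content type FastAPI expects
    return request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(API_URL, payload)
# request.urlopen(req) against a live Space returns {"predicted_electricity_cost": ...}
```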

Files in this Space ->

`Dockerfile`: Defines the Docker image for the FastAPI application
`main.py`: The core FastAPI application logic
`Preprocessing.py`: Contains the data preprocessing functions and loads the fitted transformers
`train_and_save_model.py`: Script used to train the ML model and save all necessary preprocessing transformers
`requirements.txt`: Lists all Python dependencies
`.dockerignore`: Specifies files to exclude from the Docker build
`model.pkl`: The saved trained machine learning model
`numerical_imputer.pkl`: Saved `SimpleImputer` for numerical features
`categorical_imputer.pkl`: Saved `SimpleImputer` for categorical features
`label_encoder_structure_type.pkl`: Saved `LabelEncoder` for `structure_type`
`scaler.pkl`: Saved `StandardScaler` for numerical features
`electricity_cost_dataset.csv.xlsx`: The Kaggle dataset used for training

.dockerignore ADDED
@@ -0,0 +1,7 @@
+ .git
+ .venv
+ __pycache__
+ *.pyc
+ *.ipynb
+ .DS_Store
+ *.log
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ electricity_cost_dataset.csv.xlsx filter=lfs diff=lfs merge=lfs -text
Dockerfile ADDED
@@ -0,0 +1,7 @@
+ FROM python:3.9-slim-buster
+ WORKDIR /app
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+ COPY . .
+ EXPOSE 8000
+ CMD ["gunicorn", "main:app", "--workers", "4", "--worker-class", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:8000"]
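To exercise this image locally, a typical build-and-run sequence looks like the following (the image tag `electricity-cost-api` is arbitrary, and a local Docker installation is assumed):

```shell
# Build the image from the Dockerfile in the repo root
docker build -t electricity-cost-api .

# Run it, mapping the container's port 8000 to the host
docker run --rm -p 8000:8000 electricity-cost-api

# Swagger UI is then served at http://localhost:8000/docs
```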
Preprocessing.py ADDED
@@ -0,0 +1,144 @@
+ # Preprocessing the input data before prediction
+ import pandas as pd
+ import joblib
+ import os
+ from sklearn.impute import SimpleImputer
+ from sklearn.preprocessing import LabelEncoder, StandardScaler
+
+ NUM_IMPUTER_PATH = "numerical_imputer.pkl"
+ CAT_IMPUTER_PATH = "categorical_imputer.pkl"
+ LE_STRUCTURE_TYPE_PATH = "label_encoder_structure_type.pkl"
+ SCALER_PATH = "scaler.pkl"
+
+ numerical_imputer = None
+ categorical_imputer = None
+ le_structure_type = None
+ scaler = None
+ # Module-level placeholders so these names always exist, even if a load below fails
+
+ try:
+     numerical_imputer = joblib.load(NUM_IMPUTER_PATH)
+     print(f"Loaded {NUM_IMPUTER_PATH}. Expected features: {getattr(numerical_imputer, 'feature_names_in_', 'N/A')}")
+ except FileNotFoundError:
+     print(f"Warning: {NUM_IMPUTER_PATH} not found")
+ except Exception as e:
+     print(f"Error loading {NUM_IMPUTER_PATH}: {e}")
+
+ try:
+     categorical_imputer = joblib.load(CAT_IMPUTER_PATH)
+     print(f"Loaded {CAT_IMPUTER_PATH}. Expected features: {getattr(categorical_imputer, 'feature_names_in_', 'N/A')}")
+ except FileNotFoundError:
+     print(f"Warning: {CAT_IMPUTER_PATH} not found")
+ except Exception as e:
+     print(f"Error loading {CAT_IMPUTER_PATH}: {e}")
+
+ try:
+     le_structure_type = joblib.load(LE_STRUCTURE_TYPE_PATH)
+     print(f"Loaded {LE_STRUCTURE_TYPE_PATH}")
+ except FileNotFoundError:
+     print(f"Warning: {LE_STRUCTURE_TYPE_PATH} not found")
+ except Exception as e:
+     print(f"Error loading {LE_STRUCTURE_TYPE_PATH}: {e}")
+
+ try:
+     scaler = joblib.load(SCALER_PATH)
+     print(f"Loaded {SCALER_PATH}. Expected features: {getattr(scaler, 'feature_names_in_', 'N/A')}")
+ except FileNotFoundError:
+     print(f"Warning: {SCALER_PATH} not found")
+ except Exception as e:
+     print(f"Error loading {SCALER_PATH}: {e}")
+
+ # Each transformer is loaded in its own try/except block so any failure is reported explicitly
+
+ NUMERICAL_FEATURES = [
+     'site_area', 'water_consumption', 'recycling_rate', 'utilisation_rate',
+     'air_qality_index', 'issue_reolution_time', 'resident_count'
+ ]
+ CATEGORICAL_FEATURES = ['structure_type']
+
+ FINAL_MODEL_EXPECTED_FEATURES = [
+     'site_area', 'structure_type', 'water_consumption', 'recycling_rate',
+     'utilisation_rate', 'air_qality_index', 'issue_reolution_time', 'resident_count'
+ ]
+ # The exact column set and order the trained model expects
+
+ # The API delivers input as a dictionary, but the model needs a pandas DataFrame,
+ # so this function wraps the dict in a one-row DataFrame and applies every fitted
+ # transformer in the same order used during training:
+
+ def preprocess_input(input_data: dict) -> pd.DataFrame:
+     df_processed = pd.DataFrame([input_data])
+     print(f"DataFrame after initial creation (df_processed):\n{df_processed}")
+
+     if 'structure_type' in df_processed.columns:
+         df_processed['structure_type'] = df_processed['structure_type'].astype(str).str.lower().str.strip()
+         print(f"'structure_type' standardized to: '{df_processed['structure_type'].iloc[0]}'")
+
+     if numerical_imputer is not None and NUMERICAL_FEATURES:
+         missing_input = [col for col in NUMERICAL_FEATURES if col not in df_processed.columns]
+         if missing_input:
+             raise ValueError(f"Numerical features {missing_input} are missing from input DataFrame")
+         # Verification step: report any numerical columns missing before imputation
+         try:
+             df_processed[NUMERICAL_FEATURES] = numerical_imputer.transform(df_processed[NUMERICAL_FEATURES])
+         except Exception as e:
+             raise RuntimeError(f"Error during numerical imputation: {e}")
+         # Raising stops the function as soon as the error occurs
+
+     if categorical_imputer is not None and CATEGORICAL_FEATURES:
+         missing_input = [col for col in CATEGORICAL_FEATURES if col not in df_processed.columns]
+         if missing_input:
+             raise ValueError(f"Categorical features {missing_input} are missing from input DataFrame")
+         try:
+             df_processed[CATEGORICAL_FEATURES] = categorical_imputer.transform(df_processed[CATEGORICAL_FEATURES])
+         except Exception as e:
+             raise RuntimeError(f"Error during categorical imputation: {e}")
+
+     if le_structure_type is not None and 'structure_type' in df_processed.columns:
+         try:
+             df_processed['structure_type'] = le_structure_type.transform(df_processed['structure_type'])
+         except ValueError as e:
+             raise ValueError(
+                 f"Unknown category '{df_processed['structure_type'].iloc[0]}' in column 'structure_type': {e}"
+             )
+         except Exception as e:
+             raise RuntimeError(f"Error during label encoding of 'structure_type': {e}")
+
+     if scaler is not None and NUMERICAL_FEATURES:
+         missing_input = [col for col in NUMERICAL_FEATURES if col not in df_processed.columns]
+         if missing_input:
+             raise ValueError(f"Numerical features {missing_input} are missing from input DataFrame")
+         try:
+             df_processed[NUMERICAL_FEATURES] = scaler.transform(df_processed[NUMERICAL_FEATURES])
+         except Exception as e:
+             raise RuntimeError(f"Error during scaling: {e}")
+
+     print(f"Current df_processed columns before final reorder: {df_processed.columns.tolist()}")
+     # Checkpoint
+
+     for col in FINAL_MODEL_EXPECTED_FEATURES:
+         if col not in df_processed.columns:
+             print(f"Adding missing column '{col}' with value 0")
+             df_processed[col] = 0
+
+     df_final = df_processed[FINAL_MODEL_EXPECTED_FEATURES]
+     print(f"Final DataFrame for prediction:\n{df_final}")
+
+     return df_final
+
+ # preprocess_input applies numerical and categorical imputation, label encoding, and
+ # scaling: all the preprocessing the fitted transformers learned during training.
+ # The verbose error messages and checkpoints above made debugging much faster.
+ # With the data pipeline done, the next step is deployment: the FastAPI app and its Docker image.
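The transform-in-training-order contract above can be sanity-checked in isolation. A minimal sketch with toy data, not the real fitted `.pkl` files; all values below are illustrative:

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import LabelEncoder, StandardScaler

# Toy training frame with the same column roles as the real dataset
train = pd.DataFrame({
    "site_area": [1000.0, 2000.0, 3000.0],
    "structure_type": ["residential", "commercial", "residential"],
})

# Fit the transformers the same way train_and_save_model.py does
imputer = SimpleImputer(strategy="mean").fit(train[["site_area"]])
encoder = LabelEncoder().fit(train["structure_type"])
scaler = StandardScaler().fit(train[["site_area"]])

# Apply them to a one-row "request" frame in the same order as preprocess_input
row = pd.DataFrame([{"site_area": 2000.0, "structure_type": "commercial"}])
row[["site_area"]] = imputer.transform(row[["site_area"]])
row["structure_type"] = encoder.transform(row["structure_type"])
row[["site_area"]] = scaler.transform(row[["site_area"]])

# site_area equals the training mean, so it scales to 0.0;
# LabelEncoder sorts classes, so 'commercial' encodes to 0
print(row)
```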
categorical_imputer.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad3f6b2271f26471c7eb5abc2e80538142161a45503c29d0258c2c79021c0f5f
+ size 874
electricity_cost_dataset.csv.xlsx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:36ed22e74eed1f3efa0e8633b8a84ff898f05581078f1f4d9079c3dc728c518f
+ size 539799
label_encoder_structure_type.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51543e74e42d53fb0c16cd39c1abd249c678b30409387c217660c3f47c110cac
+ size 525
main.py ADDED
@@ -0,0 +1,64 @@
+ from fastapi import FastAPI, HTTPException
+ from pydantic import BaseModel, Field
+ import joblib
+ import pandas as pd
+ import os
+
+ from Preprocessing import preprocess_input
+
+ app = FastAPI(
+     title="Electricity Cost Prediction API",
+     description="Predicts electricity cost based on facility and operational parameters"
+ )
+
+ MODEL_PATH = "model.pkl"
+
+ if not os.path.exists(MODEL_PATH):
+     raise FileNotFoundError("Model file not found")
+ try:
+     model = joblib.load(MODEL_PATH)
+ except Exception as e:
+     raise RuntimeError(f"Error loading model from {MODEL_PATH}: {e}")
+
+ class ElectricityInput(BaseModel):
+     site_area: float = Field(..., description="Area of the site in square units")
+     structure_type: str = Field(..., description="Type of structure (e.g., 'residential', 'commercial')")
+     water_consumption: float = Field(..., description="Daily/monthly water consumption")
+     recycling_rate: float = Field(..., description="Percentage of waste recycled")
+     utilisation_rate: float = Field(..., description="Rate of facility utilization")
+     air_qality_index: float = Field(..., description="Air quality index")
+     issue_reolution_time: float = Field(..., description="Time taken to resolve issues (e.g., in hours)")
+     resident_count: int = Field(..., description="Number of residents/occupants")
+
+ # Pydantic validates and coerces the raw JSON into a typed object before prediction
+
+ @app.post("/predict")
+ async def predict_electricity_cost(data: ElectricityInput):
+     """Predicts the total electricity cost based on the provided input features"""
+     try:
+         input_data_dict = data.model_dump()
+         processed_df = preprocess_input(input_data_dict)
+         prediction = model.predict(processed_df)[0]
+         predicted_cost = round(float(prediction), 2)
+         return {"predicted_electricity_cost": predicted_cost}
+     except Exception as e:
+         print(f"An unexpected error occurred during prediction: {e}")
+         raise HTTPException(
+             status_code=500,  # Internal server error, without leaking internals to the client
+             detail=f"An internal server error occurred during prediction: {e}"
+         )
+
+ # Overall data flow in this file:
+ # user JSON -> Pydantic object -> dictionary -> DataFrame -> model.predict()
+
+ @app.get("/health")
+ async def health_check():
+     # async so the event loop can serve other requests while this one is handled
+     return {"status": "ok", "message": "Electricity Cost Prediction API is running!"}
+
+ # That is the whole API implementation. The Dockerfile and .dockerignore in this folder
+ # package the app into a container for deployment, and the explicit error handling above
+ # is there to surface problems quickly.
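The data flow noted above (user JSON -> dict -> one-row DataFrame) can be illustrated without a running server; the field values are the sample payload from the README:

```python
import json
import pandas as pd

# Raw request body, as a client would send it
body = """{"site_area": 1850.0, "structure_type": "residential",
           "water_consumption": 15300.0, "recycling_rate": 52.3,
           "utilisation_rate": 81.6, "air_qality_index": 39.0,
           "issue_reolution_time": 2.7, "resident_count": 320}"""

# FastAPI/Pydantic parse and validate the JSON; data.model_dump() yields a plain dict
input_data_dict = json.loads(body)

# preprocess_input then wraps the dict in a one-row DataFrame for model.predict()
df = pd.DataFrame([input_data_dict])
print(df.shape)  # (1, 8): one row, eight feature columns
```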
model.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:490f135e4a7667db37a9cc7aa351f49fc47c78dc19b71652081d314723d3bc76
+ size 1081
numerical_imputer.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cbfc3ab70a663e882e5365dee33ebce4df6bc5024e55fc5beaffd0773134387b
+ size 911
requirements.txt ADDED
@@ -0,0 +1,8 @@
+ fastapi
+ uvicorn
+ pandas
+ scikit-learn
+ joblib
+ pydantic
+ gunicorn
+ openpyxl
scaler.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:002fa9d924b015f36be4d9d1b34d407183f47641afd48623be69ee5134122a1d
+ size 1151
train_and_save_model.py ADDED
@@ -0,0 +1,128 @@
+ # Training the model
+ # This file trains the model and saves it along with every fitted preprocessing transformer
+ # EDA is already done; the next task is to fit, then persist, what inference will need
+
+ import pandas as pd
+ from sklearn.impute import SimpleImputer
+ from sklearn.preprocessing import LabelEncoder, StandardScaler
+ from sklearn.model_selection import train_test_split
+ from sklearn.linear_model import LinearRegression
+ import joblib
+ import os
+ import re
+
+ DATASET_PATH = "C:/Users/kavya/Documents/GDG_Files_Kavya/electricity_predictor_API/electricity_cost_dataset.csv.xlsx"
+ MODEL_OUTPUT_DIR = "."
+
+ os.makedirs(MODEL_OUTPUT_DIR, exist_ok=True)
+
+ def RenamingColumns(Column_Name):
+     Column_Name = re.sub(r'\s+', '_', Column_Name)
+     Column_Name = re.sub(r'[^\w_]', '', Column_Name)
+     return Column_Name.lower()
+
+ try:
+     df = pd.read_excel(DATASET_PATH)
+     print("Original columns ->\n")
+     print(df.columns.tolist())
+
+     new_columns = []
+
+     # Rename every column: if the column names sent to the FastAPI app do not
+     # match the column names in the dataset, prediction fails at request time
+     for col in df.columns:
+         new_col = RenamingColumns(col)
+         new_columns.append(new_col)
+
+     df.columns = new_columns
+
+     print("Renamed columns ->\n")
+     print(df.columns.tolist())
+
+ except FileNotFoundError:
+     print("Error: Dataset not found! Please ensure the file is in the same directory")
+     exit()
+ except Exception as e:
+     print(f"Error: {e}")
+     exit()
+
+ # try/except blocks handle errors while loading the dataset
+ # From here on, the renamed (lowercase, underscore-separated) column names are used
+
+ TARGET_COL = 'electricity_cost'
+
+ if TARGET_COL not in df.columns:
+     print(f"Error: Target column '{TARGET_COL}' not found!")
+     exit()
+
+ features_df = df.drop(columns=[TARGET_COL])
+ # .drop removes the target so only predictor columns remain
+ y = df[TARGET_COL]
+
+ NUMERICAL_FEATURES = [
+     'site_area', 'water_consumption', 'recycling_rate', 'utilisation_rate',
+     'air_qality_index', 'issue_reolution_time', 'resident_count'
+ ]
+ CATEGORICAL_FEATURES = ['structure_type']
+
+ all_expected_features = NUMERICAL_FEATURES + CATEGORICAL_FEATURES
+ missing_features = [col for col in all_expected_features if col not in features_df.columns]
+
+ if missing_features:
+     print(f"Error: The following expected features are missing after renaming: {missing_features}")
+     exit()
+ # Safety checkpoint: recheck that no expected feature went missing during renaming
+
+ numerical_imputer = SimpleImputer(strategy='mean')
+ if NUMERICAL_FEATURES:
+     features_df[NUMERICAL_FEATURES] = numerical_imputer.fit_transform(features_df[NUMERICAL_FEATURES])
+     joblib.dump(numerical_imputer, os.path.join(MODEL_OUTPUT_DIR, 'numerical_imputer.pkl'))
+     print("Numerical imputer fitted and saved")
+ else:
+     print("No numerical columns to impute")
+
+ categorical_imputer = SimpleImputer(strategy='most_frequent')
+ if CATEGORICAL_FEATURES:
+     features_df[CATEGORICAL_FEATURES] = categorical_imputer.fit_transform(features_df[CATEGORICAL_FEATURES])
+     joblib.dump(categorical_imputer, os.path.join(MODEL_OUTPUT_DIR, 'categorical_imputer.pkl'))
+     print("Categorical imputer fitted and saved")
+ else:
+     print("No categorical columns to impute")
+ # joblib persists each fitted transformer so it can be reloaded later with joblib.load()
+
+ if 'structure_type' in features_df.columns:
+     features_df['structure_type'] = features_df['structure_type'].astype(str).str.lower().str.strip()
+     le_structure_type = LabelEncoder()
+     features_df['structure_type'] = le_structure_type.fit_transform(features_df['structure_type'])
+     joblib.dump(le_structure_type, os.path.join(MODEL_OUTPUT_DIR, 'label_encoder_structure_type.pkl'))
+     print("LabelEncoder for 'structure_type' fitted and saved.")
+ else:
+     print("structure_type column not found, skipping LabelEncoder.")
+
+ if NUMERICAL_FEATURES:
+     scaler = StandardScaler()
+     features_df[NUMERICAL_FEATURES] = scaler.fit_transform(features_df[NUMERICAL_FEATURES])
+     joblib.dump(scaler, os.path.join(MODEL_OUTPUT_DIR, 'scaler.pkl'))
+     print("StandardScaler fitted and saved.")
+ else:
+     print("No numerical columns to scale.")
+
+ # Each fitted imputer, encoder, and scaler is written out with joblib.dump
+
+ X = features_df
+ y = df[TARGET_COL]
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
+
+ model = LinearRegression()
+ model.fit(X_train, y_train)
+ joblib.dump(model, os.path.join(MODEL_OUTPUT_DIR, 'model.pkl'))
+
+ FINAL_MODEL_EXPECTED_FEATURES = X_train.columns.tolist()
+ print("All expected features from the final model ->\n")
+ print(FINAL_MODEL_EXPECTED_FEATURES)
+
+ # All necessary .pkl files are now created and saved in the current directory
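The `RenamingColumns` helper above normalizes raw spreadsheet headers into the snake_case names the API expects. Reproduced here for illustration, with hypothetical header strings:

```python
import re

def RenamingColumns(Column_Name):
    # Replace runs of whitespace with underscores, drop punctuation, lowercase
    Column_Name = re.sub(r'\s+', '_', Column_Name)
    Column_Name = re.sub(r'[^\w_]', '', Column_Name)
    return Column_Name.lower()

print(RenamingColumns("Site Area"))           # site_area
print(RenamingColumns("Water Consumption"))   # water_consumption
print(RenamingColumns("Electricity Cost"))    # electricity_cost
```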