From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:

- **Survived**: Outcome of survival (0 = No; 1 = Yes)
- **Pclass**: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- **Name**: Name of passenger
- **Sex**: Sex of the passenger
- **Age**: Age of the passenger (some entries contain `NaN`)
- **SibSp**: Number of siblings and spouses of the passenger aboard
- **Parch**: Number of parents and children of the passenger aboard
- **Ticket**: Ticket number of the passenger
- **Fare**: Fare paid by the passenger
- **Cabin**: Cabin number of the passenger (some entries contain `NaN`)
- **Embarked**: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)

Since we're interested in the outcome of survival for each passenger, we can remove the **Survived** feature from this dataset and store it as its own separate variable `outcomes`. We will use these outcomes as our prediction targets. Run the code cell below to remove **Survived** as a feature of the dataset and store it in `outcomes`.
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis=1)

# Show the new dataset with 'Survived' removed
display(data.head())
_____no_output_____
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
The very same sample of the RMS Titanic data now shows the **Survived** feature removed from the DataFrame. Note that `data` (the passenger data) and `outcomes` (the outcomes of survival) are now *paired*. That means for any passenger `data.loc[i]`, they have the survival outcome `outcomes[i]`.

To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how *accurate* our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our `accuracy_score` function and test a prediction on the first five passengers.

**Think:** *Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?*
def accuracy_score(truth, pred):
    """ Returns accuracy score for input truth and predictions. """

    # Ensure that the number of predictions matches number of outcomes
    if len(truth) == len(pred):
        # Calculate and return the accuracy as a percent
        return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean() * 100)
    else:
        return "Number of predictions does not match number of outcomes!"

# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype=int))
print(accuracy_score(outcomes[:5], predictions))
Predictions have an accuracy of 60.00%.
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
> **Tip:** If you save an IPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.

Making Predictions

If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking. The `predictions_0` function below will always predict that a passenger did not survive.
def predictions_0(data):
    """ Model with no features. Always predicts a passenger did not survive. """

    predictions = []
    for _, passenger in data.iterrows():
        # Predict the survival of 'passenger'
        predictions.append(0)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_0(data)
_____no_output_____
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
Question 1

*Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?*

**Hint:** Run the code cell below to see the accuracy of this prediction.
print(accuracy_score(outcomes, predictions))
Predictions have an accuracy of 61.62%.
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
**Answer:** 61.62%

***

Let's take a look at whether the feature **Sex** has any indication of survival rates among passengers using the `survival_stats` function. This function is defined in the `visuals.py` Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across. Run the code cell below to plot the survival outcomes of passengers based on their sex.
vs.survival_stats(data, outcomes, 'Sex')
_____no_output_____
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females *did* survive the ship sinking. Let's build on our previous prediction: if a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive. Fill in the missing code below so that the function will make this prediction.

**Hint:** You can access the values of each feature for a passenger like a dictionary. For example, `passenger['Sex']` is the sex of the passenger.
def predictions_1(data):
    """ Model with one feature:
            - Predict a passenger survived if they are female. """

    predictions = []
    for _, passenger in data.iterrows():
        if passenger['Sex'] == 'female':
            predictions.append(1)
        else:
            predictions.append(0)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_1(data)
_____no_output_____
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
Question 2

*How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?*

**Hint:** Run the code cell below to see the accuracy of this prediction.
print(accuracy_score(outcomes, predictions))
Predictions have an accuracy of 78.68%.
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
**Answer**: 78.68%

***

Using just the **Sex** feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the **Age** of each male, by again using the `survival_stats` function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the **Sex** 'male' will be included. Run the code cell below to plot the survival outcomes of male passengers based on their age.
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
_____no_output_____
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older *did not survive* the ship sinking. Let's continue to build on our previous prediction: if a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive. Fill in the missing code below so that the function will make this prediction.

**Hint:** You can start your implementation of this function using the prediction code you wrote earlier from `predictions_1`.
def predictions_2(data):
    """ Model with two features:
            - Predict a passenger survived if they are female.
            - Predict a passenger survived if they are male and younger than 10. """

    predictions = []
    for _, passenger in data.iterrows():
        if passenger['Sex'] == 'female':
            predictions.append(1)
        elif passenger['Sex'] == 'male' and passenger['Age'] < 10:
            predictions.append(1)
        else:
            predictions.append(0)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_2(data)
_____no_output_____
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
Question 3

*How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?*

**Hint:** Run the code cell below to see the accuracy of this prediction.
print(accuracy_score(outcomes, predictions))
Predictions have an accuracy of 79.35%.
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
**Answer**: 79.35%

***

Adding the feature **Age** as a condition in conjunction with **Sex** improves the accuracy by a small margin over simply using the feature **Sex** alone. Now it's your turn: find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. **Pclass**, **Sex**, **Age**, **SibSp**, and **Parch** are some suggested features to try.

Use the `survival_stats` function below to examine various survival statistics.

**Hint:** To use multiple filter conditions, put each condition in the list passed as the last argument. Example: `["Sex == 'male'", "Age < 18"]`
vs.survival_stats(data, outcomes, 'SibSp', ["Sex == 'female'"])
_____no_output_____
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
Adding the number of siblings and spouses aboard (**SibSp**) for female passengers increased accuracy. Females with fewer than 3 siblings and spouses had a higher chance of survival, those with 3 or 4 had a reduced chance, and none of the females with more than 4 siblings and spouses survived.
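These survival rates can also be checked numerically rather than read off the plot. The sketch below uses a hypothetical helper `survival_rate_by_group` (it is not part of `visuals.py`) and assumes `data` and `outcomes` are defined as earlier in the notebook:

```python
import pandas as pd

def survival_rate_by_group(data, outcomes, feature, filters=None):
    """Mean survival rate for each value of `feature`, optionally
    restricted by pandas query strings such as "Sex == 'female'"."""
    df = data.copy()
    df['Survived'] = outcomes.values
    if filters:
        for condition in filters:
            df = df.query(condition)
    return df.groupby(feature)['Survived'].mean()

# Example: survival rate of female passengers by number of siblings/spouses
# survival_rate_by_group(data, outcomes, 'SibSp', ["Sex == 'female'"])
```

The returned Series is indexed by the feature's values, so the rate for, say, `SibSp == 4` can be read directly with `rates[4]`.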
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Pclass == 1"])
_____no_output_____
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
Males in first class between the ages of 30 and 40 had a high chance of survival.

After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction. Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.

**Hint:** You can start your implementation of this function using the prediction code you wrote earlier from `predictions_2`.
def predictions_3(data):
    """ Model with multiple features. Makes a prediction with an accuracy of at least 80%. """

    predictions = []
    for _, passenger in data.iterrows():
        if passenger['Sex'] == 'female' and passenger['SibSp'] < 3:
            predictions.append(1)
        elif passenger['Sex'] == 'female':
            # Females with 3 or more siblings/spouses aboard are predicted
            # not to survive. These conditions increased the accuracy to 80.36%.
            predictions.append(0)
        elif passenger['Sex'] == 'male' and passenger['Age'] < 10:
            predictions.append(1)
        # vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Pclass == 1"])
        # The elif statement below increases accuracy to 80.58%.
        elif passenger['Sex'] == 'male' and 30 < passenger['Age'] < 40 and passenger['Pclass'] == 1:
            predictions.append(1)
        else:
            predictions.append(0)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_3(data)
_____no_output_____
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
Question 4

*Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?*

**Hint:** Run the code cell below to see the accuracy of your predictions.
print(accuracy_score(outcomes, predictions))
Predictions have an accuracy of 80.58%.
MIT
titanic_survival_exploration.ipynb
numanyilmaz/titanic_survival_exploration
Examples of how to use this package:
from circuit import Circuit
_____no_output_____
MIT
quantum_circuits/examples.ipynb
JinLi711/quantum_circuits
Circuit Creation
circ = Circuit(3, 5)
circ.X(2)
circ.H()
circ.barrier()
_____no_output_____
MIT
quantum_circuits/examples.ipynb
JinLi711/quantum_circuits
Getting the Current State
circ.get_state()
circ.get_state(output='tensor')
_____no_output_____
MIT
quantum_circuits/examples.ipynb
JinLi711/quantum_circuits
Execute Simulation
circ.execute(num_instances=800)
_____no_output_____
MIT
quantum_circuits/examples.ipynb
JinLi711/quantum_circuits
Code To OpenQASM
print(circ.compile())
OPENQASM 2.0;
include "qelib1.inc";
qreg q[3]
creg c[5]
x q[2];
h q[None];
barrier q;
MIT
quantum_circuits/examples.ipynb
JinLi711/quantum_circuits
Measurement
circ = Circuit(2, 3)
circ.X(1)

# measure qubit one and write the result to classical bit 1
circ.measure(0, 1)
circ.bits[1]
_____no_output_____
MIT
quantum_circuits/examples.ipynb
JinLi711/quantum_circuits
Make Corner Plots of Posterior Distributions

This file allows me to quickly and repeatedly make the corner plot to examine the results of the MCMC analysis.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import pandas as pd
from astropy.table import Table
import corner
# import seaborn

matplotlib.rcParams.update({'font.size': 11})
_____no_output_____
MIT
figures/Results-mcmc-corner-plot.ipynb
benjaminrose/SNIa-Local-Environments
This is the general function that is called repeatedly throughout the file. One benefit of this system is that I only need to update to higher-quality labels in one place.
def corner_plot(file_, saved_file, truths=None, third=0):
    data = Table.read(file_, format='ascii.commented_header', delimiter='\t')
    if third != 0:
        size = len(data)
        data = data[(third - 1) * size // 3:third * size // 3]
    data = data.to_pandas()
    data.dropna(inplace=True)

    # Look at corner.hist2d(levels) to not have too many contours on a plot
    # http://corner.readthedocs.io/en/latest/api.html
    fig = corner.corner(data, show_titles=True, use_math_text=True, bins=25,
                        quantiles=[0.16, 0.84], smooth=1,
                        plot_datapoints=False,
                        labels=[r"$\log(z/z_{\odot})$", r"$\tau_2$", r"$\tau$",
                                r"$t_{0}$", r"$t_{i}$", r'$\phi$',
                                r'$\delta$', 'age'],
                        truths=truths,
                        range=[0.99] * 8)
    fig.savefig(saved_file)
_____no_output_____
MIT
figures/Results-mcmc-corner-plot.ipynb
benjaminrose/SNIa-Local-Environments
One Object
# Run one object
SN = 16185
file_ = f'../resources/SN{SN}_campbell_chain.tsv'
saved_file = f'SN{SN}-mcmc-2018-12-21.pdf'

corner_plot(file_, saved_file)
_____no_output_____
MIT
figures/Results-mcmc-corner-plot.ipynb
benjaminrose/SNIa-Local-Environments
Messier Objects
# Run all Messier objects
for id in [63, 82, 87, 89, 91, 101, 105, 108]:
    file_ = f'../resources/SN{id}_messier_chain.tsv'
    saved_file = f'messierTests/12-29-M{id}.pdf'
    print(f'\nMaking {saved_file}')
    corner_plot(file_, saved_file)

# One Messier object
ID = 63
file_ = f'../resources/SN{ID}_messier_chain.tsv'
saved_file = f'messierTests/12-22-M{ID}.pdf'
print(f'\nMaking {saved_file}')
corner_plot(file_, saved_file)
Making messierTests/12-22-M63.pdf
MIT
figures/Results-mcmc-corner-plot.ipynb
benjaminrose/SNIa-Local-Environments
Circle Test -- old
# Run on circle test
for id in [1, 2, 3, 4, 5, 6, 7]:
    file_ = f'../resources/SN{id}_chain.tsv'
    saved_file = f'circleTests/12-19-C{id}.pdf'
    print(f'\nMaking {saved_file}')
    corner_plot(file_, saved_file)

# Run on circle test 3 with truths
file_ = f'../resources/SN3_chain.tsv'
saved_file = f'circleTests/07-31-C3-truths.pdf'

data = Table.read(file_, format='ascii.commented_header', delimiter='\t')
data = data.to_pandas()
data.dropna(inplace=True)

fig = corner.corner(data, show_titles=True, use_math_text=True,
                    quantiles=[0.16, 0.5, 0.84], smooth=0.5,
                    plot_datapoints=False,
                    labels=["$logZ_{sol}$", "$dust_2$", r"$\tau$",
                            "$t_{start}$", "$t_{trans}$", 'sf slope',
                            'c', 'Age'],
                    truths=[-0.5, 0.1, 7.0, 3.0, 10, 15.0, -25, None])
fig.savefig(saved_file)

# Run on circle test 1 with truths
file_ = f'../resources/SN1_chain_2017-09-11.tsv'
saved_file = f'circleTests/09-11-C1-truths.pdf'
truths = [-0.5, 0.1, 0.5, 1.5, 9.0, -1.0, -25, None]
corner_plot(file_, saved_file, truths)

# data = Table.read(file_, format='ascii.commented_header', delimiter='\t')
# data = data.to_pandas()
# data.dropna(inplace=True)
#
# fig = corner.corner(data, show_titles=True, use_math_text=True,
#                     quantiles=[0.16, 0.5, 0.84], smooth=0.5,
#                     plot_datapoints=False,
#                     labels=["$logZ_{sol}$", "$dust_2$", r"$\tau$",
#                             "$t_{start}$", "$t_{trans}$", 'sf slope',
#                             'c', 'Age'],
#                     truths=[-0.5, 0.1, 0.5, 1.5, 9.0, -1.0, -25, None])
# fig.savefig(saved_file)
WARNING:root:Too few points to create valid contours
MIT
figures/Results-mcmc-corner-plot.ipynb
benjaminrose/SNIa-Local-Environments
Test all Circle Tests
# For slope
# truths = {
#     1: [-0.5, 0.1, 0.5, 1.5, 9.0, -1.0, -25, 10.68],
#     2: [-0.5, 0.1, 0.5, 1.5, 9.0, 15.0, -25, 1.41],
#     3: [-0.5, 0.1, 7.0, 3.0, 10, 15.0, -25, 1.75],
#     4: [-0.5, 0.1, 7.0, 3.0, 13.0, 0.0, -25, 4.28],
#     5: [-1.5, 0.1, 0.5, 1.5, 9.0, -1.0, -25, 10.68],
#     6: [-0.5, 0.8, 7.0, 3.0, 10.0, 15.0, -25, 1.75],
#     7: [-0.5, 0.1, 0.5, 1.5, 6.0, 15.0, -25, ]
# }

# For phi
truths = {
    1: [-0.5, 0.1, 0.5, 1.5, 9.0, -0.785, -25, 10.68],
    2: [-0.5, 0.1, 0.5, 1.5, 9.0, 1.504, -25, 1.41],
    3: [-0.5, 0.1, 7.0, 3.0, 10, 1.504, -25, 1.75],
    4: [-0.5, 0.1, 7.0, 3.0, 13.0, 0.0, -25, 4.28],
    5: [-1.5, 0.1, 0.5, 1.5, 9.0, -0.785, -25, 10.68],
    6: [-0.5, 0.8, 7.0, 3.0, 10.0, 1.504, -25, 1.75],
    7: [-0.5, 0.1, 0.5, 1.5, 6.0, 1.504, -25, 2.40],
    8: [-0.5, 0.1, 0.1, 8.0, 12.0, 1.52, -25, 0.437]
}

for id_ in np.arange(8) + 1:
    file_ = f'../resources/SN{id_}_circle_chain.tsv'
    saved_file = f'circleTests/C{id_}-truths-0717.pdf'
    print(f'\nMaking {saved_file}')
    corner_plot(file_, saved_file, truths[id_])

# Just one circle test
id_ = 8
file_ = f'../resources/SN{id_}_circle_chain.tsv'
saved_file = f'circleTests/C{id_}-truths-0717_1.pdf'
corner_plot(file_, saved_file, truths[id_])
/usr/local/lib/python3.6/site-packages/matplotlib/pyplot.py:524: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`). max_open_warning, RuntimeWarning)
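The warning above appears because `corner_plot` opens a new figure on every call in the loop without ever closing one. A minimal fix, assuming nothing else needs the figure after it is saved, is to close each figure once it has been written to disk (`save_and_close` is a hypothetical helper, not part of the original notebook):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; an assumption for batch use
import matplotlib.pyplot as plt

def save_and_close(fig, saved_file):
    """Save a figure to disk and release its memory so that repeated
    calls in a loop do not accumulate open figures."""
    fig.savefig(saved_file)
    plt.close(fig)
```

Equivalently, adding `plt.close(fig)` after `fig.savefig(saved_file)` inside `corner_plot` would silence the `max_open_warning`.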
MIT
figures/Results-mcmc-corner-plot.ipynb
benjaminrose/SNIa-Local-Environments
Check sections of chain
file_ = f'../resources/SN2_chain.tsv'
saved_file = f'circleTests/C2-3.pdf'
print(f'\nMaking {saved_file}')
corner_plot(file_, saved_file, truths[2], third=3)
Making circleTests/C2-3.pdf
MIT
figures/Results-mcmc-corner-plot.ipynb
benjaminrose/SNIa-Local-Environments
Question 1: Which team scored the most points when playing at home?
cur = conn.cursor()
cur.execute("""
    SELECT sum(a.home_team_goal) AS sum_goals, b.team_api_id, b.team_long_name
    FROM match a, team b
    WHERE a.home_team_api_id = b.team_api_id
    GROUP BY b.team_api_id
    ORDER BY sum_goals DESC
    LIMIT 1
""")
rows = cur.fetchall()
for row in rows:
    print(row)
(505, 8633, 'Real Madrid CF')
MIT
Soccer_SQLite.ipynb
gequitz/Issuing-Credit-Cards-and-SQL-Assignment
Question 2: Did this team also score the most points when playing away?
cur = conn.cursor()
cur.execute("""
    SELECT sum(a.away_team_goal) AS sum_goals, b.team_api_id, b.team_long_name
    FROM match a, team b
    WHERE a.away_team_api_id = b.team_api_id
    GROUP BY b.team_api_id
    ORDER BY sum_goals DESC
    LIMIT 1
""")
rows = cur.fetchall()
for row in rows:
    print(row)
(354, 8634, 'FC Barcelona')
MIT
Soccer_SQLite.ipynb
gequitz/Issuing-Credit-Cards-and-SQL-Assignment
Question 3: How many matches resulted in a tie?
cur = conn.cursor()
cur.execute("SELECT count(match_api_id) FROM match WHERE home_team_goal = away_team_goal")
rows = cur.fetchall()
for row in rows:
    print(row)
(6596,)
MIT
Soccer_SQLite.ipynb
gequitz/Issuing-Credit-Cards-and-SQL-Assignment
Question 4: How many players have Smith for their last name? How many have 'smith' anywhere in their name?
cur = conn.cursor()
cur.execute("SELECT COUNT(player_name) FROM Player WHERE player_name LIKE '% smith'")
rows = cur.fetchall()
for row in rows:
    print(row)

cur = conn.cursor()
cur.execute("SELECT COUNT(player_name) FROM Player WHERE player_name LIKE '%smith%'")
rows = cur.fetchall()
for row in rows:
    print(row)
(18,)
MIT
Soccer_SQLite.ipynb
gequitz/Issuing-Credit-Cards-and-SQL-Assignment
Question 5: What was the median tie score? Use the value determined in the previous question for the number of tie games. Hint: PostgreSQL does not have a median function. Instead, think about the steps required to calculate a median and use the WITH command to store stepwise results as a table and then operate on these results.
cur = conn.cursor()

# First attempt hard-coded the number of ties found in the previous question:
# cur.execute("WITH goal_list AS (SELECT home_team_goal FROM match WHERE home_team_goal = away_team_goal "
#             "ORDER BY home_team_goal DESC) SELECT home_team_goal FROM goal_list LIMIT 1 OFFSET 6596/2")

cur.execute("""
    WITH goal_list AS (
        SELECT home_team_goal
        FROM match
        WHERE home_team_goal = away_team_goal
        ORDER BY home_team_goal DESC
    )
    SELECT home_team_goal
    FROM goal_list
    LIMIT 1 OFFSET (SELECT count(*) FROM goal_list) / 2
""")
rows = cur.fetchall()
for row in rows[:20]:
    print(row)
(1,)
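The same sort-and-offset logic the query uses can be sanity-checked in plain Python. This is a sketch with a hypothetical helper, not part of the notebook, and it assumes the list of tie scores has already been fetched:

```python
def median_by_offset(values):
    """Median via the sort-and-offset approach used in the SQL above:
    order the values, then take the row at offset count // 2."""
    ordered = sorted(values, reverse=True)
    return ordered[len(ordered) // 2]

# e.g. for tie scores [2, 1, 1, 0] the row at offset 2 is 1
```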
MIT
Soccer_SQLite.ipynb
gequitz/Issuing-Credit-Cards-and-SQL-Assignment
Question 6: What percentage of players prefer their left or right foot? Hint: Calculate either the right or left foot, whichever is easier based on how you setup the problem.
cur = conn.cursor()
cur.execute("""
    SELECT (COUNT(DISTINCT(player_api_id)) * 100.0 /
            (SELECT COUNT(DISTINCT(player_api_id)) FROM Player_Attributes))
    FROM Player_Attributes
    WHERE preferred_foot LIKE '%right%'
""")
rows = cur.fetchall()
for row in rows[:20]:
    print(row)

# The equivalent query for the left foot:
# SELECT (COUNT(DISTINCT(player_api_id)) * 100.0 /
#         (SELECT COUNT(DISTINCT(player_api_id)) FROM Player_Attributes)) AS percentage
# FROM Player_Attributes
# WHERE preferred_foot LIKE '%left%'
(81.18444846292948,)
MIT
Soccer_SQLite.ipynb
gequitz/Issuing-Credit-Cards-and-SQL-Assignment
Importing the libraries
%pip install tensorflow

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import seaborn as sns
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Importing The Dataset
dataset = pd.read_csv("../input/framingham-heart-study-dataset/framingham.csv")
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Analysing The Data
dataset.shape
dataset.dtypes
dataset.info()
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Visualizing the data
fig = plt.figure(figsize=(8, 8))
ax = fig.gca()
dataset.hist(ax=ax)
plt.show()

fig, ax = plt.subplots()
ax.hist(dataset["TenYearCHD"], color="yellow")
ax.set_title('To predict heart disease')
ax.set_xlabel('TenYearCHD')
ax.set_ylabel('Frequency')

data = np.random.random([100, 4])
sns.violinplot(data=data, palette=['r', 'g', 'b', 'm'])
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Separating the dependent and independent variables
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values

np.isnan(X).sum()
np.isnan(y).sum()
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Taking Care of Missing Values
from sklearn.impute import SimpleImputer

si = SimpleImputer(missing_values=np.nan, strategy='mean')
X = si.fit_transform(X)

y.shape
np.isnan(X).sum()
np.isnan(y).sum()
dataset.isna().sum()
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Splitting into Training and test Data
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Normalising The data
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

X_train
y_train
np.isnan(X_train).sum()
np.isnan(y_train).sum()
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Preparing ANN Model with two layers
ann = tf.keras.models.Sequential()
ann.add(tf.keras.layers.Dense(units=6, activation='relu'))
ann.add(tf.keras.layers.Dense(units=6, activation='relu'))
ann.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))

ann.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model = ann.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size=32, epochs=100)

y_pred = ann.predict(X_test)
y_pred = (y_pred > 0.5)
print(np.concatenate((y_pred.reshape(len(y_pred), 1), y_test.reshape(len(y_test), 1)), 1))

from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Model Accuracy Visualisation
plt.plot(model.history['accuracy'])
plt.plot(model.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='lower right')
plt.show()
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Model Loss Visualisation
plt.plot(model.history['loss'])
plt.plot(model.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper right')
plt.show()
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Calculating Different Metrics
print(classification_report(y_test, y_pred))
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Using MLP Classifier for Prediction
from sklearn.neural_network import MLPClassifier

classifier = MLPClassifier(hidden_layer_sizes=(150, 100, 50), max_iter=300,
                           activation='relu', solver='adam', random_state=1)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
print(np.concatenate((y_pred.reshape(len(y_pred), 1), y_test.reshape(len(y_test), 1)), 1))

from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
print(classification_report(y_test, y_pred))
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Visualising The MLP Model After Applying the PCA method
from sklearn.decomposition import PCA

pca = PCA(n_components=2)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
classifier.fit(X_train, y_train)

def visualization_train(model):
    sns.set_context(context='notebook', font_scale=2)
    plt.figure(figsize=(16, 9))
    from matplotlib.colors import ListedColormap
    X_set, y_set = X_train, y_train
    X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                         np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
    plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
                 alpha=0.6, cmap=ListedColormap(('red', 'green')))
    plt.xlim(X1.min(), X1.max())
    plt.ylim(X2.min(), X2.max())
    for i, j in enumerate(np.unique(y_set)):
        plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                    c=ListedColormap(('red', 'green'))(i), label=j)
    plt.title("%s Model on training data" % model)
    plt.xlabel('PC 1')
    plt.ylabel('PC 2')
    plt.legend()

def visualization_test(model):
    sns.set_context(context='notebook', font_scale=2)
    plt.figure(figsize=(16, 9))
    from matplotlib.colors import ListedColormap
    X_set, y_set = X_test, y_test
    X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                         np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
    plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
                 alpha=0.6, cmap=ListedColormap(('red', 'green')))
    plt.xlim(X1.min(), X1.max())
    plt.ylim(X2.min(), X2.max())
    for i, j in enumerate(np.unique(y_set)):
        plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                    c=ListedColormap(('red', 'green'))(i), label=j)
    plt.title("%s Test Set" % model)
    plt.xlabel('PC 1')
    plt.ylabel('PC 2')
    plt.legend()

visualization_train(model='MLP')
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Saving a machine learning Model
# import joblib
# joblib.dump(ann, 'ann_model.pkl')
# joblib.dump(sc, 'sc_model.pkl')
# knn_from_joblib = joblib.load('mlp_model.pkl')
# sc_model = joblib.load('sc_model.pkl')
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Saving a tensorflow model
# !pip install h5py
# ann.save('ann_model.h5')
# model = tf.keras.models.load_model('ann_model.h5')
_____no_output_____
MIT
Heart Disease Risk Prediction/Heart Disease Risk Prediction.ipynb
somyasriv16/AN319_H4CK3RS_SIH2020
Reinforcement Learningpage 441For details, see- https://github.com/ageron/handson-ml/blob/master/16_reinforcement_learning.ipynb,- https://gym.openai.com,- http://www0.cs.ucl.ac.uk/staff/d.silver/web/Teaching.html,- https://www.jstor.org/stable/24900506?seq=1metadata_info_tab_contents,- http://book.pythontips.com/en/latest/enumerate.html, and- https://docs.python.org/2.3/whatsnew/section-slices.html (on slices).Reinforcement Learning is one of the most excititng fields of Machine Learning today. It has been around since the 1950s but flying below the radar most of the time. In 2013, *DeepMind* (at that time still an independent start-up in London) produced a reinforcement learning algorithm that could achieve superhuman performance of playing about any Atari game without having prior domain knowledge. Notably, *AlphaGo*, also a reinforcement learning algorithm by Deepmind, beat world champion *Lee Sedol* in the board game *Go* in March 2016. This has become possible by applying the power of Deep Learning to the field of Reinforcement Learning.Two very important concepts for Reinforcement Learning are **policy gradients** and **deep Q-networks** (DQN). Learning to Optimize Rewardspage 442Reinforcement learning utilizes a software **agent** that makes **observations** and takes **actions** in an **environment** by which it can earn **rewards**. Its objective is to learn to act in a way that is expected to maximise long-term rewards. This can be linked to human behaviour as humans seem to try to minimize pain (negative reward / penalty) and maximise pleasure (positive reward). The setting of Reinforcement Learning is quite broad. The following list shows some typical applications.- The agent can be the program controlling a walking robot. In this case, the environment is the real world, the agent obersrves the environment through a set of *sensors* such as cameras and touch sensors, and its actions consist of sending signals to activate motors. 
It may be programmed to get positive rewards whenever it approaches the target destination, and negative rewards whenever it wastes time, goes in the wrong direction, or falls down.
- The agent can be the program controlling Ms. Pac-Man. In this case, the environment is a simulation of the Atari game, the actions are the nine possible joystick positions (upper left, down, center, and so on), the observations are screenshots, and the rewards are just the game points.
- Similarly, the agent can be the program playing a board game such as the game of *Go*.
- The agent does not have to control a physically (or virtually) moving thing. For example, it can be a smart thermostat, getting rewards whenever it is close to the target temperature and saves energy, and negative rewards when humans need to tweak the temperature, so the agent must learn to anticipate human needs.
- The agent can observe stock market prices and decide how much to buy or sell every second. Rewards are obviously the monetary gains and losses.

Note that it is not necessary to have both positive and negative rewards. For example, using only negative rewards can be useful when a task shall be finished as soon as possible. There are many more applications of Reinforcement Learning, e.g., self-driving cars or placing ads on a webpage (maximizing clicks and/or buys).

## Policy Search (page 444)

The algorithm that the software agent uses to determine its actions is called its **policy**. For example, the policy could be a neural network taking inputs (observations) and producing outputs (actions). The policy can be pretty much any algorithm. If that algorithm is not deterministic, it is called a *stochastic policy*.

A simple, stochastic policy for a vacuum cleaner could be to go straight for a specified time and then turn with probability $p$ by a random angle $r$. In that example, the algorithm's *policy parameters* are $p$ and $r$.
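Sticking with the vacuum-cleaner example, the parameters $p$ and $r$ can in principle be tuned by evaluating a grid of candidate values and keeping the best pair. A minimal sketch, where `evaluate_policy` is a made-up stand-in (our assumption, not from the book) for actually running the robot and measuring dust picked up per time:

```python
import itertools

def evaluate_policy(p, r):
    # Hypothetical stand-in for a real simulation measuring dust picked up per time;
    # the quadratic form and its optimum (p=0.3, r=1.0) are made up for illustration.
    return -((p - 0.3) ** 2 + (r - 1.0) ** 2)

p_grid = [i / 10 for i in range(11)]   # candidate turn probabilities p in [0, 1]
r_grid = [i / 4 for i in range(13)]    # candidate turn angles r in [0, 3] (radians)
best = max(itertools.product(p_grid, r_grid), key=lambda pr: evaluate_policy(*pr))
print(best)   # the grid point with the highest (made-up) reward: (0.3, 1.0)
```

With a real simulator in place of `evaluate_policy`, this is exactly the brute-force parameter search discussed next, and it scales poorly as the number of policy parameters grows.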
One example of *policy search* would be to try out all possible parameter values and then settle for those values that perform best (e.g., most dust picked up per time). However, if the *policy space* is too large, as is often the case, a near-optimal policy will likely be missed by such a brute-force approach.

*Genetic algorithms* are another way to explore the policy space. For example, one may randomly create 100 policies, rank their performance on the task, and then filter out the bad performers. This could be done by strictly removing the 80 lowest performers, or by filtering out each policy with a probability that is high if its performance is low (this often works better). To restore the population of 100 policies, each of the 20 survivors may produce 4 offspring, i.e., copies of the survivor[s] (parent algorithm[s]) with slight random modifications of its [their] policy parameters. With a single parent [two or more parents], this is referred to as asexual [sexual] reproduction. The surviving policies together with their offspring constitute the next generation. This scheme can be continued until a satisfactory policy is found.

Yet another approach is to use optimization techniques, e.g., *gradient ascent* (similar to gradient descent) to maximise the reward (similar to minimising the cost).

## Introduction to OpenAI Gym (page 445)

In order to train an agent, that agent needs an *environment* to operate in. The environment evolves subject to the agent's actions and the agent observes the environment. For example, an Atari game simulator is the environment for an agent that tries to learn to play an Atari game. If the agent tries to learn how to walk a humanoid robot in the real world, then the real world is the environment. Yet, the real world has limitations when compared to software environments: if the real-world robot gets damaged, just clicking "undo" or "restart" will not suffice.
It is also not possible to speed up training by having the time run faster (which would be the equivalent of increasing the clock rate or using a faster processor). And having thousands or millions of real-world robots train in parallel will be much more expensive than running simulations in parallel.

In short, one generally wants to use a *simulated environment*, at least for getting started. Here, we use the *OpenAI gym* (link at the top). A minimal installation only requires running "pip3 install --upgrade gym" in a terminal. We shall try it out!
import gym                       # import gym from OpenAI
env = gym.make("CartPole-v0")    # make CartPole environment
print(env)                       # print the cartpole environment
obs = env.reset()                # reset observations (in case this is necessary)
print(obs)                       # print the observations
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. <TimeLimit<CartPoleEnv<CartPole-v0>>> [ 0.01911763 -0.04981526 0.02235611 0.01423974]
Apache-2.0
16. Reinforcement Learning.ipynb
MRD-Git/Hands-On-Machine-Learning
The "make()" function creates an environment (here, the CartPole environment). After creation, the environment must be reset via the "reset()" method. This returns the first observation: a NumPy array of four floats, namely (i) the horizontal position of the cart (0 = center), (ii) its velocity, (iii) the angle of the pole (0 = vertical), and (iv) its angular velocity.

So far so good. Some environments, including the CartPole environment, demand access to the screen to visualize the environment. As explained under the Github link above, this may be problematic when Jupyter is used. However, one can work around this issue by using the following function (taken from the Github link) that will render the CartPole environment within a Jupyter notebook.
# further imports and plot specifications (taken from Github link above)
import numpy as np
%matplotlib nbagg
import matplotlib
import matplotlib.animation as animation
import matplotlib.pyplot as plt
plt.rcParams["axes.labelsize"] = 14
plt.rcParams["xtick.labelsize"] = 12
plt.rcParams["ytick.labelsize"] = 12

# functions for rendering of the CartPole environment within the Jupyter notebook (taken from Github link)
from PIL import Image, ImageDraw

try:
    from pyglet.gl import gl_info
    openai_cart_pole_rendering = True
except Exception:
    openai_cart_pole_rendering = False

def render_cart_pole(env, obs):
    if openai_cart_pole_rendering:
        return env.render(mode="rgb_array")
    else:
        img_w = 600
        img_h = 400
        cart_w = img_w // 12
        cart_h = img_h // 15
        pole_len = img_h // 3.5
        pole_w = img_w // 80 + 1
        x_width = 2
        max_ang = 0.2
        bg_col = (255, 255, 255)
        cart_col = 0x000000
        pole_col = 0x669acc
        pos, vel, ang, ang_vel = obs
        img = Image.new("RGB", (img_w, img_h), bg_col)
        draw = ImageDraw.Draw(img)
        cart_x = pos * img_w // x_width + img_w // x_width
        cart_y = img_h * 95 // 100
        top_pole_x = cart_x + pole_len * np.sin(ang)
        top_pole_y = cart_y - cart_h // 2 - pole_len * np.cos(ang)
        draw.line((0, cart_y, img_w, cart_y), fill=0)
        draw.rectangle((cart_x - cart_w // 2, cart_y - cart_h // 2,
                        cart_x + cart_w // 2, cart_y + cart_h // 2), fill=cart_col)
        draw.line((cart_x, cart_y - cart_h // 2, top_pole_x, top_pole_y), fill=pole_col, width=pole_w)
        return np.array(img)

def plot_cart_pole(env, obs):
    plt.close()
    img = render_cart_pole(env, obs)
    plt.imshow(img)
    plt.axis("off")
    plt.show()

# now, employ the above code
openai_cart_pole_rendering = False    # here, we do not even try to display the CartPole environment outside Jupyter
plot_cart_pole(env, obs)
Nice. So this is a visualization of OpenAI's CartPole environment. After the visualization is finished (it may be a movie), it is best to close it by tapping the off-button at its top-right corner.

**Suggestion or Tip**

Unfortunately, the CartPole (and a few other environments) renders the image to the screen even if you set the mode to "rgb_array". The only way to avoid this is to use a fake X server such as Xvfb or Xdummy. For example, you can install Xvfb and start Python using the following [shell] command: 'xvfb-run -s "-screen 0 1400x900x24" python'. Or use the xvfb wrapper package (https://goo.gl/wR1oJl). [With the code from the Github link, we do not have to bother about all this.]

To continue, we ought to know what actions are possible. This can be checked via the environment's "action_space" attribute.
env.action_space # check what actions are possible
There are two possible actions: accelerating left (represented by the integer 0) or right (integer 1). Other environments are going to have other action spaces. We accelerate the cart in the direction the pole is leaning.
obs = env.reset()    # reset again to get different first observations when rerunning this cell
print(obs)
angle = obs[2]       # get the angle
print(angle)
action = int((1 + np.sign(angle)) / 2)        # compute the action
print(action)
obs, reward, done, info = env.step(action)    # submit the action to the environment so the environment can evolve
print(obs, reward, done, info)
[-0.01435942 0.0420023 0.00417418 0.0271142 ] 0.004174176652464501 1 [-0.01351937 0.23706414 0.00471646 -0.26424881] 1.0 False {}
The "step()" method executes the chosen action and returns 4 values:

1. "obs": the next observation.
2. "reward": this environment gives a reward of 1.0 for every time step (not more and not less). The only goal is to keep the pole balanced as long as possible.
3. "done": this Boolean is "False" while the episode is running and "True" when the episode is finished. It is finished when the cart moves out of its boundaries or the pole tilts too much.
4. "info": this dictionary may provide extra useful information (but not in the CartPole environment). This information can be useful for understanding the problem but shall not be used for adapting the policy. That would be cheating.

We now hardcode a simple policy that always accelerates the cart in the direction the pole is leaning.
# define a basic policy
def basic_policy(obs):
    angle = obs[2]                  # angle
    return 0 if angle < 0 else 1    # return action

totals = []    # container for total rewards of different episodes

# run 500 episodes of up to 1000 steps each
for episode in range(500):          # run 500 episodes
    episode_rewards = 0             # for each episode, start with 0 rewards ...
    obs = env.reset()               # ... and initialize the environment
    for step in range(1000):        # attempt 1000 steps for each episode (the policy might fail way before)
        action = basic_policy(obs)  # use our basic policy to infer the action from the observation
        obs, reward, done, info = env.step(action)    # apply the action and make a step in the environment
        episode_rewards += reward   # add the earned reward
        if done:                    # if the environment says that it has finished ...
            break                   # ... this shall be the final step within the current episode
    totals.append(episode_rewards)  # put the final episode rewards into the container for all the total rewards

# print statistics on the earned rewards
print("mean:", np.mean(totals), ", std:", np.std(totals), ", min:", np.min(totals), ", max:", np.max(totals))
mean: 41.938 , std: 8.855854334845397 , min: 24.0 , max: 68.0
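The basic policy above reacts only to the current angle. As a side experiment (our own variation, not from the book), a hardcoded policy that also weighs in the pole's angular velocity can react before the pole swings past vertical; the weighting factor is an arbitrary guess:

```python
def smarter_policy(obs, k=0.5):
    # obs = [position, velocity, angle, angular velocity]
    # k weights the angular velocity; the value 0.5 is an arbitrary guess
    angle, ang_vel = obs[2], obs[3]
    return 0 if angle + k * ang_vel < 0 else 1    # 0 = accelerate left, 1 = accelerate right

# The policy reacts before the pole has actually crossed the vertical:
print(smarter_policy([0.0, 0.0, -0.01, 0.5]))    # tilted left but swinging right -> push right (1)
print(smarter_policy([0.0, 0.0, 0.01, -0.5]))    # tilted right but swinging left -> push left (0)
```

Plugging `smarter_policy` into the episode loop above in place of `basic_policy` would be expected to dampen the oscillations that doom the angle-only policy.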
Even over 500 episodes, this policy never managed to keep the pole up for even 100 steps, let alone 1000. As can be seen from the additional code shown under the above Github link, the reason for this is that the pole quickly begins to oscillate strongly back and forth. Clearly, the policy is just too simple: also taking the angular velocity of the pole into account should help avoid wild angular oscillations. A more sophisticated policy might employ a neural network.

## Neural Network Policies (page 448)

Let's not just talk the talk but also walk the walk by actually implementing a neural network policy for the CartPole environment. This environment accepts only 2 possible actions: 0 (left) and 1 (right). So a single output neuron will be sufficient: it returns a value $p$ between 0 and 1. We will accelerate right with probability $p$ and, obviously, left with probability $1-p$.

Why do we not simply go right with probability ${\rm heaviside}(p-0.5)$? This would keep the algorithm from exploring further, i.e., it would slow down (or even stop) the learning progress, as can be illustrated with the following analogy. Consider going to a foreign restaurant for the first time. All items on the menu seem equally OK, so you choose one dish absolutely at random. If you happen to like it, you might from now on order this dish every time you visit the restaurant. Fair enough. But there might be even better dishes that you will never find out about if you stick to that policy. By choosing this dish only with a probability in the interval $(0.5,1)$, you will continue to explore the menu and develop a better understanding of which dish is the best.

Note that the CartPole environment returns with every observation the complete necessary information: (i) it contains the velocities (position and angle), so previous observations are not necessary to deduce them, and (ii) it is noise free, so previous observations are not necessary to deduce the actual values from an average.
In that sense, the CartPole environment is really as simple as can be. The environment returns 4 observations, so we will employ 4 input neurons. We have already mentioned that one single output neuron suffices. To keep things simple, this shall be it: 4 neurons in the input layer, 4 in the only hidden layer, and 1 output neuron in the output layer. Now, let's cast this in code!
# code partially taken from Github link above
import tensorflow as tf

def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)

reset_graph()

# 1. specify the neural network architecture
n_inputs = 4
n_hidden = 4
n_outputs = 1
initializer = tf.variance_scaling_initializer()

# 2. build the neural network
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)
logits = tf.layers.dense(hidden, n_outputs, kernel_initializer=initializer)
outputs = tf.nn.sigmoid(logits)

# 3. select a random action based on the estimated probabilities
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)

init = tf.global_variables_initializer()

# maximal number of steps and container for video frames
n_max_steps = 1000
frames = []

# start a session, run the graph, and close the environment (see Github link)
with tf.Session() as sess:
    init.run()
    obs = env.reset()
    for step in range(n_max_steps):
        img = render_cart_pole(env, obs)
        frames.append(img)
        action_val = action.eval(feed_dict={X: obs.reshape(1, n_inputs)})
        obs, reward, done, info = env.step(action_val[0][0])
        if done:
            break
env.close()

# use the below functions and commands to show an animation of the frames
def update_scene(num, frames, patch):
    patch.set_data(frames[num])
    return patch,

def plot_animation(frames, repeat=False, interval=40):
    plt.close()
    fig = plt.figure()
    patch = plt.imshow(frames[0])
    plt.axis("off")
    return animation.FuncAnimation(fig, update_scene, fargs=(frames, patch),
                                   frames=len(frames), repeat=repeat, interval=interval)

video = plot_animation(frames)
plt.show()
/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:493: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:494: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:495: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:496: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:497: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:502: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)])
Above, the numbered comments indicate what is happening in the subsequent code. Here are some details.

1. Definition of the neural network architecture (cf. Figure 16-5 on page 449 of the book).
2. Building the neural network. Obviously, it is a vanilla (most basic) version. Via the sigmoid activation function, the output lies in the interval [0,1]. If more than two actions were possible, this scheme should use one output neuron per action and a softmax activation function instead of a sigmoid (refer to Chapter 4).
3. Use a multinomial probability distribution to map the probability to one single action. See https://en.wikipedia.org/wiki/Multinomial_distribution and the book for details.

## Evaluating Actions: The Credit Assignment Problem (page 451)

If we knew the optimal (target) probability for taking a certain action, we could just do regular machine learning by trying to minimize the cross entropy between that target probability and the estimated probability that we obtain from our policy (see "Neural Network Policies" above). However, we do not know the optimal probability for taking a certain action in a certain situation. We have to guess it based on subsequent rewards and penalties (negative rewards). However, rewards are often rare and usually delayed, so it is not clear which previous actions have contributed to earning a subsequent reward. This is the *Credit Assignment Problem*.

One way to tackle the Credit Assignment Problem is to introduce a *discount rate* $r\in[0,1]$. A reward $x$ that occurs $n$ steps after an action (with $n=0$ if the reward is earned directly after the action) adds a score of $r^nx$ to the rewards assigned to that action. For example, if $r=0.8$ and an action is immediately followed by a reward of $+10$, then $0$ after the next action, and $-50$ after the second next action, then that initial action is assigned a score of

$$+10+0-50\times 0.8^2=10-50\times0.64=10-32=-22\,.$$

Typical discount rates are 0.95 and 0.99.
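The arithmetic of the worked example can be checked directly:

```python
r = 0.8
rewards = [10, 0, -50]    # rewards 0, 1, and 2 steps after the action
score = sum(x * r ** n for n, x in enumerate(rewards))
print(score)              # -22 up to floating-point rounding
```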
When actions have rather immediate consequences, $r=0.95$ is expected to be more suitable than $r=0.99$.

Of course it can happen that in one specific run of the program, a good action happens to be followed by an observation with negative reward. However, on average, a good action will be followed by positive rewards (otherwise it is not a good action). So we need to make many runs to smooth out the effect of untypical evolutions. Then the scores need to be normalized. In the CartPole environment, for example, one could discretize the continuous observations into a finite number of intervals, such that different observations may lie within the same (or different) margins. Then, the scores within one interval should be added up and divided by the number of times the corresponding action was taken, thus giving a representative score. Such scores shall be calculated for all possible actions given that observation. The mean score over all actions needs to be subtracted from each action's score. The resulting number should probably be normalized by the standard deviation and then mapped to probabilities assigned to taking these actions.

## Policy Gradients (page 452)

The training procedure that has been outlined above shall now be implemented using TensorFlow.

- First, let the neural network policy play the game several times and **at each step compute the gradients** that would make the chosen action even more likely, but don't apply these gradients yet. Note that **the gradients involve the observation / state** of the system!
- Once you have **run several episodes (to reduce noise from odd evolutions)**, compute each action's score (using the method described in the previous paragraph).
- If an action's score is positive, it means that the action was good and you want to apply the gradients computed earlier to make the action even more likely to be chosen in the future.
However, if the score is negative, it means the action was bad and you want to apply the opposite gradients to make this action slightly *less* likely in the future. The solution is simply to **multiply each gradient vector by the corresponding action's score**.
- Finally, compute the mean of all the resulting gradient vectors, and use it to perform a **Gradient Descent** step.

We start with two functions. The first function receives a list of rewards, one per consecutive step, and returns a list of accumulated and discounted rewards. The second function receives a list of such lists as well as a discount rate and returns the centered (subtraction of the total mean) and normalized (division by the total standard deviation) lists of rewards. Both functions are tested with simple inputs.
# build a function that calculates the accumulated and discounted rewards for all steps
def discount_rewards(rewards, discount_rate):
    discounted_rewards = np.empty(len(rewards))
    cumulative_rewards = 0
    for step in reversed(range(len(rewards))):    # go from end to start
        cumulative_rewards = rewards[step] + cumulative_rewards * discount_rate    # the current reward has no discount
        discounted_rewards[step] = cumulative_rewards
    return discounted_rewards

# after rewards from 10 episodes, center and normalize the rewards in order to calculate the gradient (below)
def discount_and_normalize_rewards(all_rewards, discount_rate):
    all_discounted_rewards = [discount_rewards(rewards, discount_rate)    # make a list of the lists with ...
                              for rewards in all_rewards]                 # ... the rewards (i.e., a matrix)
    flat_rewards = np.concatenate(all_discounted_rewards)    # flatten this list to calculate ...
    reward_mean = flat_rewards.mean()                        # ... the mean and ...
    reward_std = flat_rewards.std()                          # ... the standard deviation
    return [(discounted_rewards - reward_mean) / reward_std        # return a list of centered and normalized ...
            for discounted_rewards in all_discounted_rewards]      # ... lists of rewards

# check whether these functions work
discount_rewards([10, 0, -50], 0.8)
discount_and_normalize_rewards([[10, 0, -50], [10, 20]], 0.8)
The following cell combines code from the book and from the Github link above. It implements the entire algorithm, so it is a bit involved; comments shall assist in understanding the code. The defaults are "n_iterations" = 250, "n_max_steps" = 1000, and "discount_rate" = 0.95; they can be increased for better performance.
# see also "Visualizing the Graph and Training Curves Using TensorBoard" in chapter / notebook 9 to review ...
# ... how to save a graph and display it in tensorboard; important ingredients: ...
# ... (i) reset_graph(), (ii) file_writer, and (iii) file_writer.close(); possibly the "saver" is also important

# create a directory with a time stamp
from datetime import datetime                                # import datetime
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")             # get current UTC time as a string with specified format
root_logdir = "./tf_logs/16_RL/Cartpole"                     # directory in the same folder as this .ipynb notebook
logdir = "{}/policy_gradients-{}/".format(root_logdir, now)  # folder with timestamp (inside the folder "tf_logs")
print(logdir)                                                # print the total path relative to this notebook

# reset the graph and setup the environment (cartpole)
reset_graph()
env = gym.make("CartPole-v0")

# 1. specify the neural network architecture
n_inputs = 4
n_hidden = 4
n_outputs = 1
initializer = tf.contrib.layers.variance_scaling_initializer()
learning_rate = 0.01

# 2. build the neural network
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)
logits = tf.layers.dense(hidden, n_outputs, kernel_initializer=initializer)
outputs = tf.nn.sigmoid(logits)
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)
y = 1. - tf.to_float(action)

# 3. cost function (cross entropy), optimizer, and calculation of gradients
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cross_entropy)

# 4. rearrange gradients and variables
gradients = [grad for grad, variable in grads_and_vars]
gradient_placeholders = []
grads_and_vars_feed = []
for grad, variable in grads_and_vars:
    gradient_placeholder = tf.placeholder(tf.float32, shape=grad.get_shape())
    gradient_placeholders.append(gradient_placeholder)
    grads_and_vars_feed.append((gradient_placeholder, variable))
training_op = optimizer.apply_gradients(grads_and_vars_feed)

# 5. file writer, global initializer, and saver
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())    # save the graph under "logdir"
init = tf.global_variables_initializer()
saver = tf.train.Saver()

# 6. details for the iterations that will be run below
n_iterations = 250      # default is 250
n_max_steps = 1000      # default is 1000
n_games_per_update = 10
save_iterations = 10
discount_rate = 0.95    # default is 0.95

# 7. start a session and run it
with tf.Session() as sess:
    init.run()
    # 7.1. loop through iterations
    for iteration in range(n_iterations):
        all_rewards = []      # container for rewards lists
        all_gradients = []    # container for gradients lists
        # 7.2. loop through games
        for game in range(n_games_per_update):
            current_rewards = []      # container for unnormalized rewards from current episode
            current_gradients = []    # container for gradients from current episode
            obs = env.reset()         # get first observation
            # 7.3. make steps in the current game
            for step in range(n_max_steps):
                action_val, gradients_val = sess.run(
                    [action, gradients],                      # feed the observation to the model and ...
                    feed_dict={X: obs.reshape(1, n_inputs)})  # ... receive the action and the gradient (see 4. above)
                obs, reward, done, info = env.step(action_val[0][0])    # feed action and receive next observation
                current_rewards.append(reward)            # store reward in container
                current_gradients.append(gradients_val)   # store gradients in container
                if done:    # stop when done
                    break
            all_rewards.append(current_rewards)        # update list of rewards
            all_gradients.append(current_gradients)    # update list of gradients
        # 7.4. we have now run the current policy for "n_games_per_update" times and it shall be updated
        all_rewards = discount_and_normalize_rewards(all_rewards, discount_rate)    # use the function(s) defined above
        # 7.5. fill placeholders with actual values
        feed_dict = {}
        for var_index, grad_placeholder in enumerate(gradient_placeholders):    # loop through placeholders
            mean_gradients = np.mean(                                           # calculate the mean gradients ...
                [reward * all_gradients[game_index][step][var_index]
                 for game_index, rewards in enumerate(all_rewards)
                 for step, reward in enumerate(rewards)],
                axis=0)
            feed_dict[grad_placeholder] = mean_gradients                        # ... and feed them
        # 7.6. run the training operation
        sess.run(training_op, feed_dict=feed_dict)
        # 7.7. save every now and then
        if iteration % save_iterations == 0:
            saver.save(sess, "./tf_logs/16_RL/Cartpole/my_policy_net_pg.ckpt")

# 8. close the file writer and the environment (cartpole)
file_writer.close()
env.close()
./tf_logs/16_RL/Cartpole/policy_gradients-20190919190805/ WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Using another function from the Github link, we apply the trained model and visualize the performance of the agent in its environment. The performance differs from run to run, presumably due to the random initialization of the environment.
# use code from the Github link above to apply the trained model to the cartpole environment and show the results
model = 1

def render_policy_net(model_path, action, X, n_max_steps=1000):
    frames = []
    env = gym.make("CartPole-v0")
    obs = env.reset()
    with tf.Session() as sess:
        saver.restore(sess, model_path)
        for step in range(n_max_steps):
            img = render_cart_pole(env, obs)
            frames.append(img)
            action_val = action.eval(feed_dict={X: obs.reshape(1, n_inputs)})
            obs, reward, done, info = env.step(action_val[0][0])
            if done:
                break
    env.close()
    return frames

if model == 1:
    frames = render_policy_net("./tf_logs/16_RL/Cartpole/my_policy_net_pg.ckpt", action, X, n_max_steps=1000)
else:
    frames = render_policy_net("./tf_logs/16_RL/Cartpole/my_good_policy_net_pg.ckpt", action, X, n_max_steps=1000)

video = plot_animation(frames)
plt.show()
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. INFO:tensorflow:Restoring parameters from ./tf_logs/16_RL/Cartpole/my_policy_net_pg.ckpt
This looks much better than without updating the gradients. Almost skillful, at least with "n_iterations = 1000", "n_max_steps = 1000", and "discount_rate = 0.999". The corresponding model is saved under "./tf_logs/Cartpole/my_1000_1000_0999_policy_net_pg.ckpt". In fact, **this algorithm is really powerful**. It can be used for much harder problems; **AlphaGo** was also based on it (and on *Monte Carlo Tree Search*, which is beyond the scope of this book).

**Suggestion or Tip**

Researchers try to find algorithms that work well even when the agent initially knows nothing about the environment. However, unless you are writing a paper, you should inject as much prior knowledge as possible into the agent, as it will speed up training dramatically. For example, you could add negative rewards proportional to the distance from the center of the screen, and to the pole's angle. Also, if you already have a reasonably good policy (e.g., hardcoded), you may want to train the neural network to imitate it before using policy gradients to improve it.

But next, we go a different path: the agent shall estimate what future rewards (possibly discounted) it expects after an action in a certain situation, and choose its action based on that expectation. To this end, some more theory is necessary.

## Markov Decision Processes (page 457)

A *Markov chain* is a stochastic process where a system changes between a finite number of states and the probability to transition from state $s$ to state $s'$ is determined only by the pair $(s,s')$, not by previous states (the system has no memory). Andrey Markov studied these processes in the early 20th century. If the transition $(s,s)$ has probability $1$, then the state $s$ is a *terminal state*: the system cannot leave it (see also Figure 16-7 on page 458 of the book). There can be any integer number $n\geq0$ of terminal states.
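Such a chain is easy to simulate. A small sketch with a made-up 3-state transition matrix (our own toy numbers, not the chain from Figure 16-7), in which state 2 is terminal:

```python
import numpy as np

# Made-up transition matrix: row s holds the probabilities of the next state s'
P = np.array([
    [0.7, 0.2, 0.1],   # from state 0
    [0.0, 0.5, 0.5],   # from state 1
    [0.0, 0.0, 1.0]])  # state 2 is terminal: it returns to itself with probability 1

np.random.seed(42)    # reproducible run
state = 0
path = [state]
for _ in range(50):
    state = np.random.choice(3, p=P[state])    # the next state depends only on the current state
    path.append(state)
    if state == 2:                             # terminal state reached: the chain cannot leave it
        break
print(path)
```

Each row of `P` sums to 1, and the memorylessness of the chain shows up in the code: the sampling at every step uses only `P[state]`, nothing from the earlier history.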
Markov chains are heavily used in thermodynamics, chemistry, statistics, and much more.

Markov decision processes were first described by Richard Bellman in the 1950s (see the JSTOR link at the top). They are Markov chains where at each state $s$ there are one or more actions $a$ that an *agent* can choose from, and the transition probability between states $s$ and $s'$ depends on the action $a$. Moreover, some transitions $(s,a,s')$ return a reward (positive or negative). The agent's goal is to find a policy that maximizes rewards over time (see also Figure 16-8 on page 459).

If an agent acts optimally, then the *Bellman Optimality Equation* applies. This recursive equation states that the optimal value of the current state is equal to the reward it will get on average after taking one optimal action, plus the expected optimal value of all possible next states that this action can lead to:

$$V^*(s)={\rm max}_a\sum_{s'}T(s,a,s')[R(s,a,s')+\gamma\,V^*(s')]\quad\text{for all $s$.}$$

Here,
- $V^*(s)$ is the *optimal state value* of state $s$,
- $T(s,a,s')$ is the transition probability from state $s$ to state $s'$, given that the agent chose action $a$,
- $R(s,a,s')$ is the reward that the agent gets when it goes from state $s$ to state $s'$, given that the agent chose action $a$, and
- $\gamma$ is the discount rate.

The **Value Iteration** algorithm has the values $V(s)$, initialized to 0, converge towards their optima $V^*(s)$ by iteratively updating them:

$$V_{k+1}(s)\leftarrow{\rm max}_a\sum_{s'}T(s,a,s')[R(s,a,s')+\gamma V_k(s')]\quad\text{for all $s$.}$$

Here, $V_k(s)$ is the estimated value of state $s$ in the $k$-th iteration and $\lim_{k\to\infty}V_k(s)=V^*(s)$.

**General note**

This algorithm is an example of *Dynamic Programming*, which breaks down a complex problem (in this case estimating a potentially infinite sum of discounted future rewards) into tractable sub-problems that can be tackled iteratively (in this case finding the action that maximizes the average reward plus
the discounted next state value).

Yet even when $V^*(s)$ is known, it is still unclear what the agent shall do. Bellman thus extended the concept of state values $V(s)$ to *state-action values*, called *Q-values*. The optimal Q-value of the state-action pair $(s,a)$, noted $Q^*(s,a)$, is the sum of discounted future rewards the agent will earn on average when it applies the correct policy, i.e., chooses the right action $a$ when in state $s$. The corresponding **Q-Value Iteration** algorithm is

$$Q_{k+1}(s,a)\leftarrow\sum_{s'}T(s,a,s')[R(s,a,s')+\gamma\,{\rm max}_{a'}Q_k(s',a')]\quad\text{for all $(s,a)$.}$$

This is very similar to the Value Iteration algorithm. Now the optimal policy $\pi^*(s)$ is clear: when in state $s$, apply the action $a$ for which the Q-value is highest: $\pi^*(s)={\rm argmax}_aQ^*(s,a)$. We will practice this on the Markov decision process shown in Figure 16-8 on page 459 of the book.
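Before turning to the book's three-state MDP, the Value Iteration update can be tried on a toy two-state MDP. All transition probabilities and rewards below are invented for illustration only:

```python
import numpy as np

# toy two-state, two-action MDP; all probabilities and rewards are invented
T = np.array([[[0.9, 0.1], [0.2, 0.8]],    # T[s, a, s']
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[[1.0, 0.0], [0.0, 2.0]],    # R[s, a, s']
              [[0.0, 0.0], [0.0, -1.0]]])
gamma = 0.9  # discount rate

V = np.zeros(2)  # V(s) initialized to 0
for _ in range(100):
    Q_sa = np.sum(T * (R + gamma * V), axis=2)  # sum over s' for every pair (s, a)
    V = Q_sa.max(axis=1)                        # greedy max over actions
print(V)  # converged estimates of the optimal state values V*(s)
```

Since the update is a contraction with factor $\gamma=0.9$, 100 iterations shrink the initial error by roughly $0.9^{100}\approx3\cdot10^{-5}$, so the printed values are essentially the fixed point of the Bellman Optimality Equation.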
nan = np.nan # for transitions that do not exist (and thus have probability 0 but shall not be updated) # transition probabilities T = np.array([ [[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]], # at state s_0: actions a_0, a_1, a_2 to states s_0, s_1, s_2 [[0.0, 1.0, 0.0], [nan, nan, nan], [0.0, 0.0, 1.0]], # at state s_1 [[nan, nan, nan], [0.8, 0.1, 0.1], [nan, nan, nan]] # at state s_2 ]) # rewards associated with the transitions above R = np.array([ [[10., 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]], [[10., 0.0, 0.0], [nan, nan, nan], [0.0, 0.0, -50]], [[nan, nan, nan], [40., 0.0, 0.0], [nan, nan, nan]] ]) # possible actions possible_actions = [ [0, 1, 2], # in state s0 [0, 2], # in state s1 [1]] # in state s2 # Q value initialization Q = np.full((3, 3), -np.inf) # -infinity for impossible actions for state, actions in enumerate(possible_actions): print(state, actions) Q[state, actions] = 0.0 # 0.0 for possible actions print(Q)
0 [0, 1, 2] 1 [0, 2] 2 [1] [[ 0. 0. 0.] [ 0. -inf 0.] [-inf 0. -inf]]
Apache-2.0
16. Reinforcement Learning.ipynb
MRD-Git/Hands-On-Machine-Learning
Above, we have specified the model (lists of transition probabilities T, rewards R, and possible actions) and initialized the Q-values. Using $-\infty$ for impossible actions ensures that those actions will never be chosen by the policy. The `enumerate` command can be quite helpful and is therefore briefly introduced below.
# see the python tips link above for counter, value in enumerate(["apple", "banana", "grapes", "pear"]): print(counter, value) print() for counter, value in enumerate(["apple", "banana", "grapes", "pear"], 1): print(counter, value) print() counter_list = list(enumerate(["apple", "banana", "grapes", "pear"], 1)) print(counter_list)
0 apple 1 banana 2 grapes 3 pear 1 apple 2 banana 3 grapes 4 pear [(1, 'apple'), (2, 'banana'), (3, 'grapes'), (4, 'pear')]
Now, define the remaining parameters and run the Q-Value Iteration algorithm. The results are very similar to the ones in the book.
# learning parameters learning_rate = 0.01 discount_rate = 0.95 # try 0.95 and 0.9 (they give different results) n_iterations = 100 # Q-value iteration for iteration in range(n_iterations): # loop over iterations Q_prev = Q.copy() # previous Q for s in range(3): # loop over states s for a in possible_actions[s]: # loop over available actions a # update Q[s, a] Q[s, a] = np.sum([T[s, a, sp] * (R[s, a, sp] + # transition probability to sp times ... discount_rate*np.max(Q_prev[sp])) # ... ( reward + max Q(sp) ) for sp in range(3)]) # sum over sp (sp = s prime) print(Q) # print final Q np.argmax(Q, axis=1) # best action for a given state (i.e., in a given row)
[[21.88646117 20.79149867 16.854807 ] [ 1.10804034 -inf 1.16703135] [ -inf 53.8607061 -inf]]
The Q-Value Iteration algorithm gives different results for different discount rates $\gamma$. For $\gamma=0.95$, the optimal policy is to choose action 0 (2) [1] in state 0 (1) [2]. But this changes to actions 0 (0) [1] if $\gamma=0.9$. And it makes sense: when someone appreciates the present more than possible future rewards, there is less motivation to go through the fire.

### Temporal Difference Learning and Q-Learning
page 461

Reinforcement Learning problems can often be modelled with Markov Decision Processes, yet initially there is no knowledge of the transition probabilities $T(s,a,s')$ and rewards $R(s,a,s')$. The algorithm must experience every connection at least once to obtain the rewards and multiple times to get a glimpse of the probabilities.

**Temporal Difference Learning** (TD Learning) is similar to *Value Iteration* (see above) but adapted to the fact that the agent has only partial knowledge of the MDP. In fact, it often only knows the possible states and actions and thus must *explore* the MDP via an *exploration policy* in order to recursively update the estimates for the state values based on observed transitions and rewards. This is achieved via the **Temporal Difference Learning algorithm** (Equation 16-4 in the book),

$$V_{k+1}(s)\leftarrow(1-\alpha)V_k(s)+\alpha(r+\gamma V_k(s'))\,,$$

where $\alpha$ is the learning rate (e.g., 0.01).

For each state $s$, this algorithm simply keeps track of a running average of the next reward (for the transition to the state it will end up at next) and future rewards (via the state value of the next state, since the agent is assumed to act optimally).
This algorithm is stochastic: the next state $s'$ and the reward gained by going to it are not known at state $s$; each transition only happens with a certain probability and thus will, in general, differ from time to time.

**Suggestion or Tip**

TD Learning has many similarities with Stochastic Gradient Descent, in particular the fact that it handles one sample at a time. Just like SGD, it can only truly converge if you gradually reduce the learning rate (otherwise it will keep bouncing around the optimum).

The **Q-Learning algorithm** resembles the *Temporal Difference Learning algorithm* just like the *Q-Value Iteration algorithm* resembles the *Value Iteration algorithm* (see Equation 16-5 in the book):

$$Q_{k+1}(s,a)\leftarrow(1-\alpha)Q_k(s,a)+\alpha(r+\gamma\,{\rm max}_{a'}Q_k(s',a'))\,.$$

In comparison with the *TD Learning algorithm* above, there is another degree of freedom: the action $a$. It is assumed that the agent will act optimally; as a consequence, the maximum (w.r.t. actions $a'$) Q-value is chosen for the next state-action pair. Now to the implementation!
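For comparison with the Q-Learning implementation that follows, here is a minimal sketch of plain TD Learning (state values only) on the same three-state MDP. The `T`, `R`, and `possible_actions` arrays repeat the earlier definitions so the snippet is self-contained; the learning rate, seed, and step count are illustrative choices of mine, not taken from the book:

```python
import numpy as np

nan = np.nan
T = np.array([  # transition probabilities T[s, a, s'], as defined earlier
    [[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]],
    [[0.0, 1.0, 0.0], [nan, nan, nan], [0.0, 0.0, 1.0]],
    [[nan, nan, nan], [0.8, 0.1, 0.1], [nan, nan, nan]]])
R = np.array([  # rewards R[s, a, s'], as defined earlier
    [[10., 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]],
    [[10., 0.0, 0.0], [nan, nan, nan], [0.0, 0.0, -50]],
    [[nan, nan, nan], [40., 0.0, 0.0], [nan, nan, nan]]])
possible_actions = [[0, 1, 2], [0, 2], [1]]

alpha, gamma = 0.05, 0.95       # illustrative learning and discount rates
rng = np.random.default_rng(42)
V = np.zeros(3)                 # state values, initialized to 0
s = 0
for step in range(20000):
    a = rng.choice(possible_actions[s])   # random exploration policy
    sp = rng.choice(3, p=T[s, a])         # the environment samples the next state
    r = R[s, a, sp]
    V[s] = (1 - alpha) * V[s] + alpha * (r + gamma * V[sp])  # TD update (Eq. 16-4)
    s = sp
print(V)  # running-average state-value estimates under the random policy
```

Note that under a purely random behavior policy these values estimate that policy, not the optimal one; Q-Learning below fixes this by taking the max over actions in the target.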
n_states = 3 # number of MDP states n_actions = 3 # number of actions n_steps = 20000 # number of steps (=iterations) alpha = 0.01 # learning rate gamma = 0.99 # discount rate # class for a Markov Decision Process (taken from the github link above) class MDPEnvironment(object): def __init__(self, start_state=0): # necessary __init__ self.start_state=start_state self.reset() def reset(self): # reset the environment self.total_rewards = 0 self.state = self.start_state def step(self, action): # make a step next_state =np.random.choice(range(3), # use the transition probabilities T above and the ... p=T[self.state][action]) # ... state and action to infer the next state reward = R[self.state][action][next_state] # in this fashion, also calculate the obtained reward self.state = next_state # update the state ... self.total_rewards += reward # ... and the total rewards return self.state, reward # function that returns a random action that is in line the given state (taken from github link above) def policy_random(state): return np.random.choice(possible_actions[state]) # see definition of "possible_actions" above exploration_policy = policy_random # use that function as an exploration policy # reinitialize the Q values (they had been updated above) Q = np.full((3, 3), -np.inf) # -infinity for impossible actions for state, actions in enumerate(possible_actions): # see (unchanged) defintion of possible_actions above Q[state, actions] = 0.0 # 0.0 for possible actions print(Q) # use the class defined above env = MDPEnvironment() # loop through all steps (=iterations) for step in range(n_steps): action = exploration_policy(env.state) # give our "exploration_policy = policy_random" the ... # ... 
current state and obtain an action in return state = env.state # obtain the state from the environment next_state, reward = env.step(action) # apply the action to obtain the next state and reward next_value = np.max(Q[next_state]) # assuming optimal behavior, use the highest Q-value ... # ... that is in line with the next state (maximise ... # ... w.r.t. the actions) Q[state, action] = (1-alpha)*Q[state, action] + alpha*(reward + gamma * next_value) # update Q value Q # return the final Q values
[[ 0. 0. 0.] [ 0. -inf 0.] [-inf 0. -inf]]
The above implementation is different from the book (where prior knowledge of the transition probabilities is used). Accordingly, the resulting Q-values are different from those in the book and closer to the ones listed in the github link above.

With enough iterations, this algorithm will converge towards the optimal Q-values. It is called an **off-policy** algorithm since the policy that is used for training is different from the one that will be applied. Surprisingly, this algorithm converges towards the optimal policy by stumbling around: it's a stochastic process. So although this works, there should be a more promising approach. This shall be studied next.

### Exploration Policies
page 463

Above, the transition probabilities $T(s,a,s')$ are used to explore the MDP and obtain a policy. However, this might take a long time. An alternative approach is to use an **$\epsilon$-greedy policy**: in each state, it chooses the action with the highest Q-value $Q(s,a)$ with probability $1-\epsilon$, or a random action with probability $\epsilon$. This way, the policy will explore the interesting parts of the network more intensely while still getting in touch (eventually) with all other parts. It is common to start with $\epsilon=1$ (totally random) and transition to $\epsilon=0.05$ (choose the highest Q-value most of the time).

Another way to incite the agent to explore the entire network is to artificially increase the value of state-action pairs that have not been visited frequently. This can be done like so (Equation 16-6 in the book):

$$Q(s,a)\leftarrow(1-\alpha)Q(s,a)+\alpha\left(r+\gamma\,{\rm max}_{a'}f(Q(s',a'),N(s',a'))\right)\,,$$

where $\alpha$ is the learning rate, $\gamma$ is the discount rate, and $f(q,n)$ is an exploration function of the Q-value $q$ and the visit count $n$, e.g. $f(q,n)=q+K/(1+n)$, where $K$ is a **curiosity hyperparameter** and $n=N(s',a')$ counts how often action $a'$ has been chosen in state $s'$.
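The exploration function $f(q,n)=q+K/(1+n)$ from Equation 16-6 can be sketched in isolation. The value of $K$ and the visit counts below are invented for illustration, and only the bonus computation is shown, not a full Q-Learning loop:

```python
import numpy as np

K = 20.0  # curiosity hyperparameter (illustrative value)

def exploration_bonus(q_values, visit_counts):
    """f(q, n) = q + K / (1 + n): boosts rarely tried actions."""
    return q_values + K / (1.0 + visit_counts)

q = np.array([10.0, 12.0, 11.0])   # current Q-value estimates for 3 actions
n = np.array([100, 100, 0])        # action 2 has never been tried
best = np.argmax(exploration_bonus(q, n))
print(best)  # -> 2: the untried action wins despite its lower Q-value
```

With these numbers the bonus for action 2 is $20/(1+0)=20$, lifting it to 31, while the well-visited actions gain only about 0.2, so the agent is steered towards the unexplored action.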
### Approximate Q-Learning
page 464

Unfortunately, Q-Learning as described above does not scale well. For example, in Ms. Pac-Man there exist more than 250 pellets that can each be already eaten or not. This alone gives rise to $2^{250}\approx10^{75}$ states, i.e., more than the number of atoms in the universe. And this is just the pellets! There is no way a policy could completely explore the corresponding MDP.

The solution is **Approximate Q-Learning**, where estimates for the Q-values are inferred from a manageable number of parameters. For a long time, the way to go had been to hand-craft features, e.g., the distances between Ms. Pac-Man and the ghosts. But DeepMind showed that deep neural networks can perform much better. On top of that, they do not require feature engineering. A DNN that is used to estimate Q-values is called a **Deep Q-Network (DQN)**, and using a DQN for Approximate Q-Learning (and subsequently inferring a policy from the Q-values) is called **Deep Q-Learning**.

The rest of this chapter is about using Deep Q-Learning to train a computer on Ms. Pac-Man, much like DeepMind did in 2013. This code is quite versatile. It works for many Atari games but tends to have problems with games with long-running storylines.

### Learning to Play Ms. Pac-Man Using Deep Q-Learning
page 464

Some installations are needed:
- Homebrew (see http://brew.sh/). For macOS, run `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"` in the terminal, and for Ubuntu, use `apt-get install -y python3-numpy python3-dev cmake zlib1g-dev libjpeg-dev xvfb libav-tools xorg-dev python3-opengl libboost-all-dev libsdl2-dev swig`.
- Extra python modules for the gym of OpenAI, via `pip3 install --upgrade 'gym[all]'`.

With these, we should manage to create a Ms. Pac-Man environment.
env = gym.make("MsPacman-v0") obs = env.reset() print(obs.shape) # [height, width, channels] print(env.action_space) # possible actions
(210, 160, 3) Discrete(9)
So the agent observes a screenshot 210 pixels high and 160 pixels wide, with each pixel having three color values: the intensities of red, green, and blue. The agent may react by returning an action to the environment. There are nine discrete actions, corresponding to the nine joystick positions ([up, middle, down] x [left, middle, right]). The screenshots are a bit too large, and grayscale should suffice. So we preprocess the data by cropping the image to the relevant part and converting the color image to grayscale. This will speed up training.
# see https://docs.python.org/2.3/whatsnew/section-slices.html for details on slices trial_list = [[0.0,0.1,0.2,0.3,0.4], [0,1,2,3,4], [0,10,20,30,40]] print(trial_list[:2]) # print list elements until before element 2, i.e., 0 and 1 print(trial_list[::2]) # print all list elements that are reached by step size 2, i.e., 0 and 2, not 1 print(trial_list[:3]) # print list elements until before element 3, i.e., 0, 1, and 2 (this means all, here) print(trial_list[::3]) # print all list elements that are reached by step size 3, i.e., only 0, not 1 and 2 # back to the book mspacman_color = np.array([210, 164, 74]).mean() def preprocess_observation(obs): img = obs[1:176:2, ::2] # use only every second line from line 1 to line 176 (top towards bottom) and in ... # ... each line, use only every second row img = img.mean(axis=2) # go to grayscale by replacing the 3 rgb values by their average img[img==mspacman_color] = 0 # draw Ms. Pac-Man in black img = (img-128)/128 - 1 # normalize grayscale return img.reshape(88, 80, 1) # reshape to 88 rows and 80 columns; each scalar entry is the color in grayscale ... # ... and thus is a bit different from the book #print(preprocess_observation(obs)) print(preprocess_observation(obs).shape) # slightly modified from the github link above plt.figure(figsize=(9, 5)) plt.subplot(121) plt.title("Original observation (160×210 RGB)") plt.imshow(obs) plt.axis("off") plt.subplot(122) plt.title("Preprocessed observation (88×80 greyscale)") # the output of "preprocess_observation()" is reshaped as to be in line with the placeholder "X_state", to which ... # ... it will be fed; below, another reshape operation is appropriate to conform the (possbily updated w.r.t. the ... # ... book) "plt.imshow()" command, https://matplotlib.org/api/_as_gen/matplotlib.pyplot.imshow.html plt.imshow(preprocess_observation(obs).reshape(88,80), interpolation="nearest", cmap="gray") plt.axis("off") plt.show()
[[0.0, 0.1, 0.2, 0.3, 0.4], [0, 1, 2, 3, 4]] [[0.0, 0.1, 0.2, 0.3, 0.4], [0, 10, 20, 30, 40]] [[0.0, 0.1, 0.2, 0.3, 0.4], [0, 1, 2, 3, 4], [0, 10, 20, 30, 40]] [[0.0, 0.1, 0.2, 0.3, 0.4]] (88, 80, 1)
The next task will be to build the DQN (*Deep Q-Network*). In principle, one could just feed it a state-action pair (s, a) and have it estimate the Q-value Q(s, a). But since only 9 discrete actions are possible, one can also feed the DQN the state s and have it estimate Q(s, a) for all 9 possible actions. The DQN shall have 3 convolutional layers followed by 2 fully connected layers, the last of which is the output layer. An actor-critic approach is used: both networks share the same architecture but have different tasks. The actor plays, and the critic watches the play and tries to identify and fix shortcomings. Every few iterations, the critic network is copied to the actor network. So while they share the same architecture, their parameters are different. Next, we define a function to build these DQNs.
### construction phase # resetting the graph seems to always be a good idea reset_graph() # DQN architecture / hyperparameters input_height = 88 input_width = 80 input_channels = 1 conv_n_maps = [32, 64, 64] conv_kernel_sizes = [(8,8), (4,4), (3,3)] conv_strides = [4, 2, 1] conv_paddings = ["SAME"] * 3 conv_activation =[tf.nn.relu] * 3 n_hidden_in = 64 * 11 * 10 # conv3 has 64 maps of 11x10 each n_hidden = 512 hidden_activation = tf.nn.relu # corrected from "[tf.nn.relu]*3", according to the code on github (see link above) n_outputs = env.action_space.n # 9 discrete actions are available initializer = tf.contrib.layers.variance_scaling_initializer() # function to build networks with the above architecture; the function receives a state (observation) and a name def q_network(X_state, name): prev_layer = X_state # input from the input layer conv_layers = [] # container for all the layers of the convolutional network (input layer is not part of that) with tf.variable_scope(name) as scope: # loop through tuples, see "https://docs.python.org/3.3/library/functions.html#zip" for details; # in the first (second) [next] step take the first (second) [next] elements of the lists conv_n_maps, ... # ... 
conv_kernel_sizes, etc.; this continues until one of these lists has arrived at the end; # here, all these lists have length 3 (see above) for n_maps, kernel_size, stride, padding, activation in zip(conv_n_maps, conv_kernel_sizes, conv_strides, conv_paddings, conv_activation): # in the next step (of the for-loop), this layer will be the "previous" one # ["stride" -> "strides", see https://www.tensorflow.org/api_docs/python/tf/layers/conv2d] prev_layer = tf.layers.conv2d(prev_layer, filters=n_maps, kernel_size=kernel_size, strides=stride, padding=padding, activation=activation, kernel_initializer=initializer) # put the current layer into the container for all the layers of the convolutional neural network conv_layers.append(prev_layer) # make the last output a vector so it it can be passed easily to the upcoming fully connected layer last_conv_layer_flat = tf.reshape(prev_layer, shape=[-1, n_hidden_in]) # first hidden layer hidden = tf.layers.dense(last_conv_layer_flat, n_hidden, activation=hidden_activation, kernel_initializer=initializer) # second hidden layer = output layer (these are the results!) outputs = tf.layers.dense(hidden, n_outputs, kernel_initializer=initializer) # let tensorflow figure out what variables shall be trained ... trainable_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope.name) # ... 
and give these variables names so they can be distinguished in the graph trainable_vars_by_name = {var.name[len(scope.name):]: var for var in trainable_vars} return outputs, trainable_vars_by_name # return the outputs and the dictionary of trainable variables # input, actor, critic, and model copying X_state = tf.placeholder(tf.float32, shape=[None, input_height, input_width, input_channels]) # input placeholder actor_q_values, actor_vars = q_network(X_state, name="q_networks/actor") # actor model #q_values = actor_q_values.eval(feed_dict={X_state: [state]}) critic_q_values, critic_vars = q_network(X_state, name="q_networks/critic") # critic model copy_ops = [actor_var.assign(critic_vars[var_name]) for var_name, actor_var in actor_vars.items()] # copy critic ... copy_critic_to_actor = tf.group(*copy_ops) # ... to actor # some more theory is due print("next, some text...")
next, some text...
Starting from the initialized Q-values, the actor network plays the game (initially more or less at random), updates the encountered Q-values (via the usual discounted rewards), and leaves the remaining Q-values unchanged. Every now and then, the critic network is tasked with predicting all Q-values by fitting its weights and biases. The Q-values resulting from the fitted weights and biases will be slightly different. This is supervised learning. The cost function that the critic network shall minimize is

$$J(\theta_{\rm critic})=\frac{1}{m}\sum_{i=1}^m\left(y^{(i)}-Q(s^{(i)},a^{(i)},\theta_{\rm critic})\right)^2,\quad\text{with}\quad y^{(i)}=r^{(i)}+\gamma\,{\rm max}_{a'}Q\left(s'^{(i)},a',\theta_{\rm actor}\right)\,,$$

where
- $s^{(i)},\,a^{(i)},\,r^{(i)},\,s'^{(i)}$ are respectively the state, action, reward, and the next state of the $i^{\rm th}$ memory sampled from the replay memory,
- $m$ is the size of the memory batch,
- $\theta_{\rm critic}$ and $\theta_{\rm actor}$ are the critic's and the actor's parameters,
- $Q(s^{(i)},a^{(i)},\theta_{\rm critic})$ is the critic DQN's prediction of the $i^{\rm th}$ memorized state-action's Q-value,
- $Q(s'^{(i)},a',\theta_{\rm actor})$ is the actor DQN's prediction of the Q-value it can expect from the next state $s'^{(i)}$ if it chooses action $a'$,
- $y^{(i)}$ is the target Q-value for the $i^{\rm th}$ memory. Note that it is equal to the reward actually observed by the actor, plus the actor's *prediction* of what future rewards it should expect if it were to play optimally (as far as it knows), and
- $J(\theta_{\rm critic})$ is the cost function used to train the critic DQN. As you can see, it is just the mean squared error between the target Q-values $y^{(i)}$ as estimated by the actor DQN and the critic DQN's prediction of these Q-values.

**General note**

The replay memory [see code and/or text in the book] is optional but highly recommended.
Without it, you would train the critic DQN using consecutive experiences that may be very correlated. This would introduce a lot of bias and slow down the training algorithm's convergence. By using a replay memory, we ensure that the memories fed to the training algorithm can be fairly uncorrelated.
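As a plain-NumPy illustration of the target $y^{(i)}$ and the cost $J(\theta_{\rm critic})$ defined above, here is a toy batch of $m=3$ memories with invented numbers; the `continues` factor (0 where an episode ended) zeroes out the discounted future rewards for terminal transitions:

```python
import numpy as np

gamma = 0.95
# toy replay-memory batch (m = 3); all numbers are invented for illustration
rewards = np.array([[1.0], [0.0], [10.0]])
continues = np.array([[1.0], [1.0], [0.0]])                    # 0: episode ended
next_q_actor = np.array([[2.0, 5.0], [1.0, 0.5], [3.0, 4.0]])  # actor's Q(s', a')
q_critic = np.array([[5.0], [1.0], [9.0]])                     # critic's Q(s, a) for the chosen a

# y^(i) = r^(i) + gamma * max_a' Q(s'^(i), a', theta_actor), cut off at episode ends
y = rewards + continues * gamma * next_q_actor.max(axis=1, keepdims=True)
cost = np.mean((y - q_critic) ** 2)  # mean squared error J(theta_critic)
print(y.ravel(), cost)
```

The training cell that follows does the same thing with TensorFlow ops, so that the error can be backpropagated through the critic network.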
# the actor DQN computes 9 Q-values: 1 for each possible action; use one-hot encoding to select only the one that ... # ... is actually chosen (https://www.tensorflow.org/api_docs/python/tf/one_hot) X_action = tf.placeholder(tf.int32, shape=[None]) q_value = tf.reduce_sum(critic_q_values * tf.one_hot(X_action, n_outputs), axis=1, keep_dims=True) # feed the Q-values for the critic network through a placeholder y and do all the rest for training operations y = tf.placeholder(tf.float32, shape=[None, 1]) cost = tf.reduce_mean(tf.square(y - q_value)) global_step = tf.Variable(0, trainable=False, name="global_step") optimizer = tf.train.AdamOptimizer(learning_rate) training_op = optimizer.minimize(cost, global_step=global_step) # initializer and saver nodes init = tf.global_variables_initializer() saver = tf.train.Saver()
WARNING:tensorflow:From <ipython-input-17-483165648def>:4: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version. Instructions for updating: keep_dims is deprecated, use keepdims instead
For the execution phase, we will need the following tools.
# time keeping; for details, see the contribution of user "daviewales" on ... # ... https://codereview.stackexchange.com/questions/26534/is-there-a-better-way-to-count-seconds-in-python import time time_start = time.time() # another import, see https://docs.python.org/3/library/collections.html#collections.deque from collections import deque # deque or "double-ended queue" is similar to a python list but more performant replay_memory_size = 10000 # name says it replay_memory = deque([], maxlen=replay_memory_size) # deque container for the replay memory # function that samples random memories def sample_memories(batch_size): # use "np.rand" instead of "rnd", see ... # ... "https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.permutation.html" indices = np.random.permutation(len(replay_memory))[:batch_size] # take "batch_size" random indices cols = [[], [], [], [], []] # state, action, reward, next_state, continue # loop over indices for idx in indices: memory = replay_memory[idx] # specifice memory for col, value in zip(cols, memory): # fancy way of moving the values in the ... col.append(value) # ... 
memory into col cols = [np.array(col) for col in cols] # make entries numpy arrays return (cols[0], cols[1], cols[2].reshape(-1,1), cols[3], cols[4].reshape(-1,1)) # use an epsilon-greedy policy eps_min = 0.05 # next action is random with probability of 5% eps_max = 1.0 # next action will be random for sure eps_decay_steps = 50000 # decay schedule for epsilon # receive the Q values for the current state and the current step def epsilon_greedy(q_values, step): epsilon = max(eps_min, eps_max - (eps_max-eps_min) * step/eps_decay_steps) # calculate the current epsilon if np.random.rand() < epsilon: # choice of random or optimal action return np.random.randint(n_outputs) # random action else: return np.argmax(q_values) # optimal action ### execution phase n_steps = 10 # total number of training steps (default: 100000) training_start = 1000 # start training after 1000 game iterations training_interval = 3 # run a training step every few game iterations (default: 3) save_steps = 50 # save the model every 50 training steps copy_steps = 25 # copy the critic to the actor every few training steps (default: 25) discount_rate = 0.95 # discount rate (default: 0.95) skip_start = 90 # skip the start of every game (it's just waiting time) batch_size = 50 # batch size (default: 50) iteration = 0 # initialize the game iterations counter checkpoint_path = "./tf_logs/16_RL/MsPacMan/my_dqn.ckpt" # checkpoint path done = True # env needs to be reset # remaining import import os # start the session with tf.Session() as sess: # restore a session if one has been stored if os.path.isfile(checkpoint_path): saver.restore(sess, checkpoint_path) # otherwise start from scratch else: init.run() while True: # continue training ... step = global_step.eval() # ... until step ... if step >= n_steps: # ... has reached n_steps ... break # ... (stop in that latter case) iteration += 1 # count the iterations if done: # game over: start again ... # restart 1/3 obs = env.reset() # ... 
and reset the environment # restart 2/3 for skip in range(skip_start): # skip the start of each game obs, reward, done, info = env.step(0) # get data from environment (Ms. Pacman) # restart 3/3 state = preprocess_observation(obs) # preprocess the state of the game (screenshot) # actor evaluates what to do q_values = actor_q_values.eval(feed_dict={X_state: [state]}) # get available Q-values for the current state action = epsilon_greedy(q_values, step) # apply the epsilon-greedy policy # actor plays obs, reward, done, info = env.step(action) # apply the action and get new data from env next_state = preprocess_observation(obs) # preprocess the next state (screenshot) # let's memorize what just happened replay_memory.append((state, action, reward, next_state, 1.0 - done)) state = next_state # go to the next iteration of the while loop (i) before training starts and (ii) if learning is not scheduled if iteration < training_start or iteration % training_interval != 0: continue # if learning is scheduled for the current iteration, get samples from the memory, update the Q-values ... # ... that the actor learned as well as the rewards, and run a training operation X_state_val, X_action_val, rewards, X_next_state_val, continues = (sample_memories(batch_size)) next_q_values = actor_q_values.eval(feed_dict={X_state: X_next_state_val}) max_next_q_values = np.max(next_q_values, axis=1, keepdims=True) y_val = rewards + continues * discount_rate * max_next_q_values training_op.run(feed_dict={X_state: X_state_val, X_action: X_action_val, y: y_val}) # regularly copy critic to actor and ... if step % copy_steps == 0: copy_critic_to_actor.run() # ... save the session so it could be restored if step % save_steps == 0: saver.save(sess, checkpoint_path) # the output of this time keeping (see top of this cell) will be in seconds time_finish = time.time() print(time_finish - time_start)
14.940124988555908
With a function from the github link above, we can visualize the play of the trained model.
frames = [] n_max_steps = 10000 with tf.Session() as sess: saver.restore(sess, checkpoint_path) obs = env.reset() for step in range(n_max_steps): state = preprocess_observation(obs) q_values = actor_q_values.eval(feed_dict={X_state: [state]}) # from "online_q_values" to "actor_q_values" action = np.argmax(q_values) obs, reward, done, info = env.step(action) img = env.render(mode="rgb_array") frames.append(img) if done: break plot_animation(frames)
INFO:tensorflow:Restoring parameters from ./tf_logs/16_RL/MsPacMan/my_dqn.ckpt
**Suggestion or Tip**

Unfortunately, training is very slow: if you use your laptop for training, it will take days before Ms. Pac-Man gets any good, and if you look at the learning curve, measuring the average rewards per episode, you will notice that it is extremely noisy. At some points there may be no apparent progress for a very long time until suddenly the agent learns to survive a reasonable amount of time. As mentioned earlier, one solution is to inject as much prior knowledge as possible into the model (e.g., through preprocessing, rewards, and so on), and you can also try to bootstrap the model by first training it to imitate a basic strategy. In any case, RL still requires quite a lot of patience and tweaking, but the end result is very exciting.

### Exercises
page 473

**1.-7.**

Solutions are shown in Appendix A of the book and in the separate notebook *ExercisesWithoutCode*.

**8.**

Use Deep Q-Learning to tackle OpenAI gym's "BipedalWalker-v2". The Q-networks do not need to be very deep for this task.
# everything below is mainly from the github link above # the rendering command (see github link) does not work here; the according code is adapted; frames are not ... # ... displayed, here env = gym.make("BipedalWalker-v2") print(env) obs = env.reset() print(obs) print(env.action_space) print(env.action_space.low) print(env.action_space.high)
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. <TimeLimit<BipedalWalker<BipedalWalker-v2>>> [ 2.74743512e-03 -1.15991896e-05 9.02299415e-04 -1.59999228e-02 9.19910073e-02 -1.19072467e-03 8.60251635e-01 2.28293116e-03 1.00000000e+00 3.23961042e-02 -1.19064655e-03 8.53801474e-01 8.42094344e-04 1.00000000e+00 4.40814018e-01 4.45820123e-01 4.61422771e-01 4.89550203e-01 5.34102798e-01 6.02461040e-01 7.09148884e-01 8.85931849e-01 1.00000000e+00 1.00000000e+00] Box(4,) [-1. -1. -1. -1.] [1. 1. 1. 1.]
Build all possible combinations of actions (cf. above).
# see https://docs.python.org/3.7/library/itertools.html#itertools.product from itertools import product possible_torques = np.array([-1.0, 0.0, 1.0]) possible_actions = np.array(list(product(possible_torques, repeat=4))) possible_actions.shape print(possible_actions)
[[-1. -1. -1. -1.] [-1. -1. -1. 0.] [-1. -1. -1. 1.] [-1. -1. 0. -1.] [-1. -1. 0. 0.] [-1. -1. 0. 1.] [-1. -1. 1. -1.] [-1. -1. 1. 0.] [-1. -1. 1. 1.] [-1. 0. -1. -1.] [-1. 0. -1. 0.] [-1. 0. -1. 1.] [-1. 0. 0. -1.] [-1. 0. 0. 0.] [-1. 0. 0. 1.] [-1. 0. 1. -1.] [-1. 0. 1. 0.] [-1. 0. 1. 1.] [-1. 1. -1. -1.] [-1. 1. -1. 0.] [-1. 1. -1. 1.] [-1. 1. 0. -1.] [-1. 1. 0. 0.] [-1. 1. 0. 1.] [-1. 1. 1. -1.] [-1. 1. 1. 0.] [-1. 1. 1. 1.] [ 0. -1. -1. -1.] [ 0. -1. -1. 0.] [ 0. -1. -1. 1.] [ 0. -1. 0. -1.] [ 0. -1. 0. 0.] [ 0. -1. 0. 1.] [ 0. -1. 1. -1.] [ 0. -1. 1. 0.] [ 0. -1. 1. 1.] [ 0. 0. -1. -1.] [ 0. 0. -1. 0.] [ 0. 0. -1. 1.] [ 0. 0. 0. -1.] [ 0. 0. 0. 0.] [ 0. 0. 0. 1.] [ 0. 0. 1. -1.] [ 0. 0. 1. 0.] [ 0. 0. 1. 1.] [ 0. 1. -1. -1.] [ 0. 1. -1. 0.] [ 0. 1. -1. 1.] [ 0. 1. 0. -1.] [ 0. 1. 0. 0.] [ 0. 1. 0. 1.] [ 0. 1. 1. -1.] [ 0. 1. 1. 0.] [ 0. 1. 1. 1.] [ 1. -1. -1. -1.] [ 1. -1. -1. 0.] [ 1. -1. -1. 1.] [ 1. -1. 0. -1.] [ 1. -1. 0. 0.] [ 1. -1. 0. 1.] [ 1. -1. 1. -1.] [ 1. -1. 1. 0.] [ 1. -1. 1. 1.] [ 1. 0. -1. -1.] [ 1. 0. -1. 0.] [ 1. 0. -1. 1.] [ 1. 0. 0. -1.] [ 1. 0. 0. 0.] [ 1. 0. 0. 1.] [ 1. 0. 1. -1.] [ 1. 0. 1. 0.] [ 1. 0. 1. 1.] [ 1. 1. -1. -1.] [ 1. 1. -1. 0.] [ 1. 1. -1. 1.] [ 1. 1. 0. -1.] [ 1. 1. 0. 0.] [ 1. 1. 0. 1.] [ 1. 1. 1. -1.] [ 1. 1. 1. 0.] [ 1. 1. 1. 1.]]
Apache-2.0
16. Reinforcement Learning.ipynb
MRD-Git/Hands-On-Machine-Learning
Build a network and see how it performs.
tf.reset_default_graph() # 1. Specify the network architecture n_inputs = env.observation_space.shape[0] # == 24 n_hidden = 10 n_outputs = len(possible_actions) # == 3**4 == 81 initializer = tf.variance_scaling_initializer() # 2. Build the neural network X = tf.placeholder(tf.float32, shape=[None, n_inputs]) hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.selu, kernel_initializer=initializer) logits = tf.layers.dense(hidden, n_outputs, kernel_initializer=initializer) outputs = tf.nn.softmax(logits) # 3. Select a random action based on the estimated probabilities action_index = tf.squeeze(tf.multinomial(logits, num_samples=1), axis=-1) # 4. Training learning_rate = 0.01 y = tf.one_hot(action_index, depth=len(possible_actions)) cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits) optimizer = tf.train.AdamOptimizer(learning_rate) grads_and_vars = optimizer.compute_gradients(cross_entropy) gradients = [grad for grad, variable in grads_and_vars] gradient_placeholders = [] grads_and_vars_feed = [] for grad, variable in grads_and_vars: gradient_placeholder = tf.placeholder(tf.float32, shape=grad.get_shape()) gradient_placeholders.append(gradient_placeholder) grads_and_vars_feed.append((gradient_placeholder, variable)) training_op = optimizer.apply_gradients(grads_and_vars_feed) init = tf.global_variables_initializer() saver = tf.train.Saver() # 5. Execute it, count the rewards, and show them def run_bipedal_walker(model_path=None, n_max_steps=1000): # function from github but adapted to counting rewards and showing them instead of showing rendered frames env = gym.make("BipedalWalker-v2") with tf.Session() as sess: if model_path is None: init.run() else: saver.restore(sess, model_path) obs = env.reset() rewards = 0 for step in range(n_max_steps): action_index_val = action_index.eval(feed_dict={X: obs.reshape(1, n_inputs)}) action = possible_actions[action_index_val] obs, reward, done, info = env.step(action[0]) rewards += reward if done: break env.close() return rewards run_bipedal_walker()
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Apache-2.0
16. Reinforcement Learning.ipynb
MRD-Git/Hands-On-Machine-Learning
The model does not perform well because it has not been trained yet. Time to train it!
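The training loop below calls `discount_and_normalize_rewards`, which is defined earlier in the notebook. For reference, here is a minimal NumPy sketch of what that helper does, following the book's version (the implementation shown here is an assumption; only the name and call signature are taken from the cell below):

```python
import numpy as np

def discount_rewards(rewards, discount_rate):
    # Walk backwards through the episode, accumulating a discounted running sum.
    discounted = np.empty(len(rewards))
    cumulative = 0.0
    for step in reversed(range(len(rewards))):
        cumulative = rewards[step] + cumulative * discount_rate
        discounted[step] = cumulative
    return discounted

def discount_and_normalize_rewards(all_rewards, discount_rate):
    # Discount each episode, then normalize across all episodes together,
    # so that better-than-average actions get positive scores.
    all_discounted = [discount_rewards(r, discount_rate) for r in all_rewards]
    flat = np.concatenate(all_discounted)
    mean, std = flat.mean(), flat.std()
    return [(d - mean) / std for d in all_discounted]
```

For example, `discount_rewards([10, 0, -50], 0.8)` gives `[-22., -40., -50.]`: the final penalty of -50 is propagated backwards with a factor of 0.8 per step.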
n_games_per_update = 10 # default is 10 n_max_steps = 10 # default is 1000 n_iterations = 100 # default is 1000 save_iterations = 10 # default is 10 discount_rate = 0.95 # default is 0.95 with tf.Session() as sess: init.run() for iteration in range(n_iterations): print("\rIteration: {}/{}".format(iteration + 1, n_iterations), end="") all_rewards = [] all_gradients = [] for game in range(n_games_per_update): current_rewards = [] current_gradients = [] obs = env.reset() for step in range(n_max_steps): action_index_val, gradients_val = sess.run([action_index, gradients], feed_dict={X: obs.reshape(1, n_inputs)}) action = possible_actions[action_index_val] obs, reward, done, info = env.step(action[0]) current_rewards.append(reward) current_gradients.append(gradients_val) if done: break all_rewards.append(current_rewards) all_gradients.append(current_gradients) all_rewards = discount_and_normalize_rewards(all_rewards, discount_rate=discount_rate) feed_dict = {} for var_index, gradient_placeholder in enumerate(gradient_placeholders): mean_gradients = np.mean([reward * all_gradients[game_index][step][var_index] for game_index, rewards in enumerate(all_rewards) for step, reward in enumerate(rewards)], axis=0) feed_dict[gradient_placeholder] = mean_gradients sess.run(training_op, feed_dict=feed_dict) if iteration % save_iterations == 0: saver.save(sess, "./tf_logs/16_RL/BiPedal/my_bipedal_walker_pg.ckpt") run_bipedal_walker("./tf_logs/16_RL/BiPedal/my_bipedal_walker_pg.ckpt")
Iteration: 100/100WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. INFO:tensorflow:Restoring parameters from ./tf_logs/16_RL/BiPedal/my_bipedal_walker_pg.ckpt
Apache-2.0
16. Reinforcement Learning.ipynb
MRD-Git/Hands-On-Machine-Learning
Dynamic Recurrent Neural Network. TensorFlow implementation of a Recurrent Neural Network (LSTM) that performs dynamic computation over sequences with variable length. This example uses a toy dataset to classify linear sequences; the generated sequences have variable length. RNN Overview. References: - [Long Short Term Memory](http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf), Sepp Hochreiter & Jurgen Schmidhuber, Neural Computation 9(8): 1735-1780, 1997.
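The key idea in the generator below — pad every sequence to a fixed `max_seq_len` but record its true length separately so the RNN can stop at the right step — can be sketched in a few lines (a hypothetical helper mirroring what `ToySequenceData` does):

```python
import random

def make_padded_sequence(max_seq_len=20, min_seq_len=3):
    # Draw a random true length, then pad with zero vectors up to max_seq_len.
    seq_len = random.randint(min_seq_len, max_seq_len)
    seq = [[random.random()] for _ in range(seq_len)]      # real time steps
    seq += [[0.0] for _ in range(max_seq_len - seq_len)]   # zero padding
    return seq, seq_len

seq, seq_len = make_padded_sequence()
```

Every sample then has the same shape (`max_seq_len` x 1), which is what TensorFlow requires, while `seq_len` feeds the `sequence_length` argument of the RNN.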
from __future__ import print_function import tensorflow as tf import random # ==================== # TOY DATA GENERATOR # ==================== class ToySequenceData(object): """ Generate sequence of data with dynamic length. This class generates samples for training: - Class 0: linear sequences (i.e. [0, 1, 2, 3,...]) - Class 1: random sequences (i.e. [1, 3, 10, 7,...]) NOTICE: We have to pad each sequence to reach 'max_seq_len' for TensorFlow consistency (we cannot feed a numpy array with inconsistent dimensions). The dynamic calculation will then be performed thanks to the 'seqlen' attribute that records every actual sequence length. """ def __init__(self, n_samples=1000, max_seq_len=20, min_seq_len=3, max_value=1000): self.data = [] self.labels = [] self.seqlen = [] for i in range(n_samples): # Random sequence length (avoid shadowing the built-in len) seq_len = random.randint(min_seq_len, max_seq_len) # Monitor sequence length for TensorFlow dynamic calculation self.seqlen.append(seq_len) # Add a random or linear int sequence (50% prob) if random.random() < .5: # Generate a linear sequence rand_start = random.randint(0, max_value - seq_len) s = [[float(i)/max_value] for i in range(rand_start, rand_start + seq_len)] # Pad sequence for dimension consistency s += [[0.] for i in range(max_seq_len - seq_len)] self.data.append(s) self.labels.append([1., 0.]) else: # Generate a random sequence s = [[float(random.randint(0, max_value))/max_value] for i in range(seq_len)] # Pad sequence for dimension consistency s += [[0.] for i in range(max_seq_len - seq_len)] self.data.append(s) self.labels.append([0., 1.]) self.batch_id = 0 def next(self, batch_size): """ Return a batch of data. When dataset end is reached, start over.
""" if self.batch_id == len(self.data): self.batch_id = 0 batch_data = (self.data[self.batch_id:min(self.batch_id + batch_size, len(self.data))]) batch_labels = (self.labels[self.batch_id:min(self.batch_id + batch_size, len(self.data))]) batch_seqlen = (self.seqlen[self.batch_id:min(self.batch_id + batch_size, len(self.data))]) self.batch_id = min(self.batch_id + batch_size, len(self.data)) return batch_data, batch_labels, batch_seqlen # ========== # MODEL # ========== # Parameters learning_rate = 0.01 training_steps = 10000 batch_size = 128 display_step = 200 # Network Parameters seq_max_len = 20 # Sequence max length n_hidden = 64 # hidden layer num of features n_classes = 2 # linear sequence or not trainset = ToySequenceData(n_samples=1000, max_seq_len=seq_max_len) testset = ToySequenceData(n_samples=500, max_seq_len=seq_max_len) # tf Graph input x = tf.placeholder("float", [None, seq_max_len, 1]) y = tf.placeholder("float", [None, n_classes]) # A placeholder for indicating each sequence length seqlen = tf.placeholder(tf.int32, [None]) # Define weights weights = { 'out': tf.Variable(tf.random_normal([n_hidden, n_classes])) } biases = { 'out': tf.Variable(tf.random_normal([n_classes])) } def dynamicRNN(x, seqlen, weights, biases): # Prepare data shape to match `rnn` function requirements # Current data input shape: (batch_size, n_steps, n_input) # Required shape: 'n_steps' tensors list of shape (batch_size, n_input) # Unstack to get a list of 'n_steps' tensors of shape (batch_size, n_input) x = tf.unstack(x, seq_max_len, 1) # Define a lstm cell with tensorflow lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden) # Get lstm cell output, providing 'sequence_length' will perform dynamic # calculation. outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, x, dtype=tf.float32, sequence_length=seqlen) # When performing dynamic calculation, we must retrieve the last # dynamically computed output, i.e., if a sequence length is 10, we need # to retrieve the 10th output. 
# However TensorFlow doesn't support advanced indexing yet, so we build # a custom op that for each sample in batch size, get its length and # get the corresponding relevant output. # 'outputs' is a list of output at every timestep, we pack them in a Tensor # and change back dimension to [batch_size, n_step, n_input] outputs = tf.stack(outputs) outputs = tf.transpose(outputs, [1, 0, 2]) # Hack to build the indexing and retrieve the right output. batch_size = tf.shape(outputs)[0] # Start indices for each sample index = tf.range(0, batch_size) * seq_max_len + (seqlen - 1) # Indexing outputs = tf.gather(tf.reshape(outputs, [-1, n_hidden]), index) # Linear activation, using outputs computed above return tf.matmul(outputs, weights['out']) + biases['out'] pred = dynamicRNN(x, seqlen, weights, biases) # Define loss and optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y)) optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost) # Evaluate model correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) # Initialize the variables (i.e. 
assign their default value) init = tf.global_variables_initializer() # Start training with tf.Session() as sess: # Run the initializer sess.run(init) for step in range(1, training_steps+1): batch_x, batch_y, batch_seqlen = trainset.next(batch_size) # Run optimization op (backprop) sess.run(optimizer, feed_dict={x: batch_x, y: batch_y, seqlen: batch_seqlen}) if step % display_step == 0 or step == 1: # Calculate batch accuracy & loss acc, loss = sess.run([accuracy, cost], feed_dict={x: batch_x, y: batch_y, seqlen: batch_seqlen}) print("Step " + str(step) + ", Minibatch Loss= " + \ "{:.6f}".format(loss) + ", Training Accuracy= " + \ "{:.5f}".format(acc)) print("Optimization Finished!") # Calculate accuracy test_data = testset.data test_label = testset.labels test_seqlen = testset.seqlen print("Testing Accuracy:", \ sess.run(accuracy, feed_dict={x: test_data, y: test_label, seqlen: test_seqlen}))
Step 1, Minibatch Loss= 0.864517, Training Accuracy= 0.42188 Step 200, Minibatch Loss= 0.686012, Training Accuracy= 0.43269 Step 400, Minibatch Loss= 0.682970, Training Accuracy= 0.48077 Step 600, Minibatch Loss= 0.679640, Training Accuracy= 0.50962 Step 800, Minibatch Loss= 0.675208, Training Accuracy= 0.53846 Step 1000, Minibatch Loss= 0.668636, Training Accuracy= 0.56731 Step 1200, Minibatch Loss= 0.657525, Training Accuracy= 0.62500 Step 1400, Minibatch Loss= 0.635423, Training Accuracy= 0.67308 Step 1600, Minibatch Loss= 0.580433, Training Accuracy= 0.75962 Step 1800, Minibatch Loss= 0.475599, Training Accuracy= 0.81731 Step 2000, Minibatch Loss= 0.434865, Training Accuracy= 0.83654 Step 2200, Minibatch Loss= 0.423690, Training Accuracy= 0.85577 Step 2400, Minibatch Loss= 0.417472, Training Accuracy= 0.85577 Step 2600, Minibatch Loss= 0.412906, Training Accuracy= 0.85577 Step 2800, Minibatch Loss= 0.409193, Training Accuracy= 0.85577 Step 3000, Minibatch Loss= 0.406035, Training Accuracy= 0.86538 Step 3200, Minibatch Loss= 0.403287, Training Accuracy= 0.87500 Step 3400, Minibatch Loss= 0.400862, Training Accuracy= 0.87500 Step 3600, Minibatch Loss= 0.398704, Training Accuracy= 0.86538 Step 3800, Minibatch Loss= 0.396768, Training Accuracy= 0.86538 Step 4000, Minibatch Loss= 0.395017, Training Accuracy= 0.86538 Step 4200, Minibatch Loss= 0.393422, Training Accuracy= 0.86538 Step 4400, Minibatch Loss= 0.391957, Training Accuracy= 0.85577 Step 4600, Minibatch Loss= 0.390600, Training Accuracy= 0.85577 Step 4800, Minibatch Loss= 0.389334, Training Accuracy= 0.86538 Step 5000, Minibatch Loss= 0.388143, Training Accuracy= 0.86538 Step 5200, Minibatch Loss= 0.387015, Training Accuracy= 0.86538 Step 5400, Minibatch Loss= 0.385940, Training Accuracy= 0.86538 Step 5600, Minibatch Loss= 0.384907, Training Accuracy= 0.86538 Step 5800, Minibatch Loss= 0.383904, Training Accuracy= 0.85577 Step 6000, Minibatch Loss= 0.382921, Training Accuracy= 0.86538 Step 6200, Minibatch 
Loss= 0.381941, Training Accuracy= 0.86538 Step 6400, Minibatch Loss= 0.380947, Training Accuracy= 0.86538 Step 6600, Minibatch Loss= 0.379912, Training Accuracy= 0.86538 Step 6800, Minibatch Loss= 0.378796, Training Accuracy= 0.86538 Step 7000, Minibatch Loss= 0.377540, Training Accuracy= 0.86538 Step 7200, Minibatch Loss= 0.376041, Training Accuracy= 0.86538 Step 7400, Minibatch Loss= 0.374130, Training Accuracy= 0.85577 Step 7600, Minibatch Loss= 0.371514, Training Accuracy= 0.85577 Step 7800, Minibatch Loss= 0.367723, Training Accuracy= 0.85577 Step 8000, Minibatch Loss= 0.362049, Training Accuracy= 0.85577 Step 8200, Minibatch Loss= 0.353558, Training Accuracy= 0.85577 Step 8400, Minibatch Loss= 0.341072, Training Accuracy= 0.86538 Step 8600, Minibatch Loss= 0.323062, Training Accuracy= 0.87500 Step 8800, Minibatch Loss= 0.299278, Training Accuracy= 0.89423 Step 9000, Minibatch Loss= 0.273857, Training Accuracy= 0.90385 Step 9200, Minibatch Loss= 0.248392, Training Accuracy= 0.91346 Step 9400, Minibatch Loss= 0.221348, Training Accuracy= 0.92308 Step 9600, Minibatch Loss= 0.191947, Training Accuracy= 0.92308 Step 9800, Minibatch Loss= 0.159308, Training Accuracy= 0.93269 Step 10000, Minibatch Loss= 0.136938, Training Accuracy= 0.96154 Optimization Finished! Testing Accuracy: 0.952
MIT
dynamic_rnn.ipynb
abhihirekhan/-TensorFlow-basic-models-and-neural-networks-Tableau
**Exploratory Analysis** First of all, let's import some useful libraries that will be used in the analysis.
import numpy as np import pandas as pd import matplotlib.pyplot as plt
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
Now, the dataset stored in Drive needs to be retrieved. I am using Google Colab for this exploration with a TPU hardware accelerator for faster computation. To get the data from Drive, the drive needs to be mounted first.
from google.colab import drive drive.mount('/content/drive/')
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code Enter your authorization code: ·········· Mounted at /content/drive/
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
Once the drive is successfully mounted, I fetched the data and stored it in a pandas dataframe.
dataset = pd.read_csv("/content/drive/My Drive/Colab Notebooks/DOB_Permit_Issuance.csv")
/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py:2718: DtypeWarning: Columns (1,8,9,10,15,25,31,33,34,35,36,51,52) have mixed types. Specify dtype option on import or set low_memory=False. interactivity=interactivity, compiler=compiler, result=result)
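The `DtypeWarning` above means pandas read the file in low-memory chunks and inferred mixed types for some columns. Passing explicit dtypes (or `low_memory=False`) makes the result predictable; a toy illustration with a hypothetical two-column CSV:

```python
import io
import pandas as pd

# Column 'a' mixes integers and a string, which would trigger mixed-type
# inference; forcing str gives one consistent dtype.
csv = io.StringIO("a,b\n1,x\n2,y\nz,3\n")
df = pd.read_csv(csv, dtype={"a": str})
```

Here every value in `a` comes back as a string, so downstream code never has to handle both `1` and `"1"`.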
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
To get the gist of the dataset, I used pandas' describe function, which gives a very broad understanding of the data.
pd.set_option('display.max_colwidth', 1000) pd.set_option('display.max_rows', 1000) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 1000) dataset.describe()
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
From the describe function, we now know that the dataset has almost 3.5M rows. Let's take a look at the dataset now.
dataset.head()
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
I can see there are lots of NaNs in many columns. To better analyse the data, the NaNs need to be removed or dealt with. But first, let's see how many NaNs there are in each column.
dataset.isna().sum()
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
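The `isna().sum()` call above simply counts missing entries per column; a toy check of that behaviour (hypothetical frame):

```python
import pandas as pd

# One NaN in 'a', two in 'b' — isna().sum() reports the per-column totals.
df = pd.DataFrame({"a": [1, None, 3], "b": [None, None, "x"]})
na_counts = df.isna().sum()
```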
---

The information above is very useful for feature selection. Observe the columns with a very high number of NaNs:

Column | NaNs
-------|-----
Special District 1 | 3121182
Special District 2 | 3439516
Permittee's Other Title | 3236862
HIC License | 3477843
Site Safety Mgr's Last Name | 3481861
Site Safety Mgr's First Name | 3481885
Site Safety Mgr Business Name | 3490529
Residential | 2139591
Superintendent First & Last Name | 1814931
Superintendent Business Name | 1847714
Self_Cert | 1274022
Permit Subtype | 1393411
Oil Gas | 3470104

---

From the column_info sheet of the file 'DD_DOB_Permit_Issuance_2018_11_02', I know that for some columns a blank has a meaning. For example, the Residential column contains either 'Yes' or blanks, so it's safe to assume that the blanks correspond to 'No'. Similarly, to fill the blanks based on the relevant information from column_info, I am using the mappings below for some columns:

* Residential : No
* Site Fill : None
* Oil Gas : None
* Self_Cert : N
* Act as Superintendent : N
* Non-Profit : N
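`fillna` with a per-column dict, as in the next cell, touches only the listed columns and leaves everything else alone. A toy illustration (column names borrowed from the dataset, values hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"Residential": [None, "YES", None],
                   "Self_Cert": ["Y", None, "N"]})
# Only the NaNs are replaced; existing values are untouched.
df = df.fillna(value={"Residential": "No", "Self_Cert": "N"})
```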
values = {'Residential': 'No','Site Fill':'NONE', 'Oil Gas':'None', 'Self_Cert':'N', 'Act as Superintendent':'N', 'Non-Profit':'N' } dataset = dataset.fillna(value= values)
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
Since there are many columns with blanks that we cannot fill with appropriate information, it's better to drop these columns, as they do not add value to the analysis. I will drop the following columns:

* Special District 1
* Special District 2
* Work Type
* Permit Subtype
* Permittee's First Name
* Permittee's Last Name
* Permittee's Business Name
* Permittee's Phone #
* Permittee's Other Title
* HIC License
* Site Safety Mgr's First Name
* Site Safety Mgr's Last Name
* Site Safety Mgr Business Name
* Superintendent First & Last Name
* Superintendent Business Name
* Owner's Business Name
* Owner's First Name
* Owner's Last Name
* Owner's House #
* Owner's House Street Name
* Owner's Phone #
* DOBRunDate
# 'Work Type' is dropped because it gives the same information as 'Permit Type' columns_to_drop = ["Special District 1", "Special District 2", "Work Type", "Permit Subtype", "Permittee's First Name", "Permittee's Last Name", "Permittee's Business Name", "Permittee's Phone #", "Permittee's Other Title", "HIC License", "Site Safety Mgr's First Name", "Site Safety Mgr's Last Name", "Site Safety Mgr Business Name", "Superintendent First & Last Name", "Superintendent Business Name", "Owner's Business Name", "Owner's First Name", "Owner's Last Name", "Owner's House #", "Owner's House Street Name", "Owner's Phone #", "DOBRunDate"] dataset.drop(columns=columns_to_drop, inplace=True)
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
Let's take a look at the remaining columns and their number of blanks again.
dataset.isna().sum()
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
We still have blanks left in a few columns. One way to deal with them is to replace them with the mean of the column or its most frequent entry. The mean can only be applied to numerical columns, and even for numerical columns such as Longitude and Latitude, it would not be right to replace all the missing values with a single longitude or latitude: this would skew the column and would not result in a fair analysis of the data. Similarly, if we use the most frequent entry to replace all the blanks in a column, the result would either skew the data or simply not make sense. For example, a particular location has a state, city, zip code and street name associated with it; if we replace a missing zip code with the most frequent entry, we might end up with a row whose state and city belong to one location and whose zip code belongs to another. Therefore, to clean the data I will drop the rows with NaNs. We will still have enough data for exploration.
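The skew argument above is easy to see on toy data: imputing the mean of a code-like column invents a value that matches no real location, while `dropna` keeps only genuine entries (zip codes below are hypothetical):

```python
import pandas as pd

zips = pd.Series([10001.0, 10001.0, 10451.0, None])
# Mean imputation produces 10151.0 — a zip code that belongs to nowhere.
imputed = zips.fillna(zips.mean())
# Dropping the missing row keeps only real values.
clean = zips.dropna()
```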
dataset.dropna(inplace=True) dataset.isna().sum()
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
---

Now the dataset looks clean and we can proceed with the analysis. I will try to find the correlation between columns using a correlation matrix.
# Correlation matrix def plotCorrelationMatrix(df, graphWidth): filename = 'DOB_Permit_Issuance.csv' df = df[[col for col in df if df[col].nunique() > 1]] # keep columns where there are more than 1 unique values corr = df.corr() plt.figure(num=None, figsize=(graphWidth, graphWidth), dpi=80, facecolor='w', edgecolor='k') corrMat = plt.matshow(corr, fignum = 1) plt.xticks(range(len(corr.columns)), corr.columns, rotation=90) plt.yticks(range(len(corr.columns)), corr.columns) plt.gca().xaxis.tick_bottom() plt.colorbar(corrMat) plt.title(f'Correlation Matrix for {filename}', fontsize=15) plt.show() plotCorrelationMatrix(dataset, 15)
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
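Under the hood, `df.corr()` computes pairwise Pearson coefficients over the numeric columns, so +1 means a perfect positive linear relationship and -1 a perfect negative one. A toy check of those sign conventions (hypothetical data):

```python
import pandas as pd

# y doubles x exactly; z decreases as x increases.
df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [2, 4, 6, 8], "z": [4, 3, 2, 1]})
corr = df.corr()
```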
We can see that there is strong positive relationship between :* Zip Code and Job * Job and Council_District* Zip Code and Council_District ---To get more insight of the data and its column-wise data distribution, I will plot the columns using bar graphs. For displaying purposes, I will pick columns that have between 1 and 50 unique values.
def plotColumns(dataframe, nGraphShown, nGraphPerRow): nunique = dataframe.nunique() dataframe = dataframe[[col for col in dataframe if nunique[col] > 1 and nunique[col] < 50]] nRow, nCol = dataframe.shape columnNames = list(dataframe) nGraphRow = (nCol + nGraphPerRow - 1) // nGraphPerRow # integer division: plt.subplot needs an int row count plt.figure(num = None, figsize = (6 * nGraphPerRow, 8 * nGraphRow), dpi = 80, facecolor = 'w', edgecolor = 'k') for i in range(min(nCol, nGraphShown)): plt.subplot(nGraphRow, nGraphPerRow, i + 1) columndataframe = dataframe.iloc[:, i] if (not np.issubdtype(type(columndataframe.iloc[0]), np.number)): valueCounts = columndataframe.value_counts() valueCounts.plot.bar() else: columndataframe.hist() plt.ylabel('counts') plt.xticks(rotation = 90) plt.title(f'{columnNames[i]} (column {i})') plt.tight_layout(pad = 1.0, w_pad = 1.0, h_pad = 1.0) plt.show() plotColumns(dataset, 38, 3)
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
From the Borough graph, it's evident that Manhattan has the highest number of permit filings. The second most popular borough after Manhattan is Brooklyn, by a huge margin, followed by Queens with almost the same number of permits as Brooklyn. We see another plunge in permit numbers with the Bronx and Staten Island.---Job Document number is the number of documents that were added with the file during the application. Almost all filings required a single document; some permits had two documents, and higher document counts are negligible.---There is a pattern in Job Type as well. We can see that the most popular Job Type or Work Type is A2 with more than 1.75M permits. The second most popular work type is NB (new building) with around 400,000 permits. The number of permits decreases with A3, A1, DM and SG, where DM and SG are significantly less common than the other types.---Self_Cert indicates whether or not the application was submitted as Professionally Certified. A Professional Engineer (PE) or Registered Architect (RA) can certify compliance with applicable laws and codes on applications filed by him/her as applicant. The plot shows that most applications were not filed by a Professional Engineer or Registered Architect.---Bldg Type indicates the legal occupancy classification. The most popular building type is '2' with more than 2M permits.---Most of the buildings are non-residential, and only about 1.3M buildings were residential.---Permit Status indicates the current status of the permit application. The corresponding plot suggests that most of the permits are 'Issued' and a very small number were 'Reissued'; 'In-Progress' and 'Revoked' are negligible.---Filing Status indicates whether this is the first time the permit is being applied for or the permit is being renewed. A large number of permits were in 'Initial' status, and less than half of that were in 'Renewal' status.---Permit Type is the specific type of work covered by the permit. This is a two-character code indicating the type of work. There are 7 types of permits, of which EW has the highest number; the count decreases through PL, EQ, AL, NB, SG, and FO.---A sequential number is assigned to each issuance of the particular permit, from initial issuance to each subsequent renewal. Every initial permit should have a 01 sequence number, and every additional renewal receives a number that increases by 1 (ex: 02, 03, 04). Most of the permits have a sequence number below 5.---If the permit is for work on fuel burning equipment, this column indicates whether it burns oil or gas. Most of the permits are for neither oil nor gas; a very small fraction is for oil, and the number of permits for gas is negligible.---Site Fill indicates the source of any fill dirt that will be used on the construction site. When over 300 cubic yards of fill is being used, the Department is required to inform Sanitation of where the fill is coming from and the amount. About 1.1M entries didn't mention any Site Fill type, indicating that less than 300 cubic yards of fill is being used. Most of the remaining permits are marked Not Applicable; about 300,000 permits were for on-site fill and less than 100,000 for off-site.---The professional license type of the person the permit was issued to: in most cases the person held a GC license. The numbers then show a decreasing trend through MP, FS, OB, SI, NW, OW, PE, RA, and HI, where NW, OW, PE, RA, and HI are negligible in number.---Act as Superintendent indicates if the permittee acts as the Construction Superintendent for the work site. Only about 1.1M people responded 'Yes' to this, and the majority responded 'No'.---Owner's Business Type indicates the type of entity that owns the building where the work will be performed. Mostly the entities owning the buildings were 'Corporations'; with slightly fewer, the 'Individual' type stands in second position and 'Partnership' holds the third position. Other business types are far less significant in number.---Non-Profit indicates if the building is owned by a non-profit. Less than 250,000 buildings were owned by a non-profit and more than 2.75M were not.

Borough-wise analysis. Let's now dive deeper into the data and take a closer look. Is the trend we see above the same across all boroughs? Does every borough have the same type of building? Is 'EW' the most popular permit type across all boroughs? What about the Owner's Business Type and Job Type? I will try to find the answers to all these by exploring the pattern in each borough.

Bldg Type in each borough
manhattan_bldg_type = dataset[dataset.BOROUGH == 'MANHATTAN'][['Bldg Type']] manhattan_bldg_type.reset_index(drop=True, inplace=True) brooklyn_bldg_type = dataset[dataset.BOROUGH == 'BROOKLYN'][['Bldg Type']] brooklyn_bldg_type.reset_index(drop=True, inplace=True) bronx_bldg_type = dataset[dataset.BOROUGH == 'BRONX'][['Bldg Type']] bronx_bldg_type.reset_index(drop=True, inplace=True) queens_bldg_type = dataset[dataset.BOROUGH == 'QUEENS'][['Bldg Type']] queens_bldg_type.reset_index(drop=True, inplace=True) staten_island_bldg_type = dataset[dataset.BOROUGH == 'STATEN ISLAND'][['Bldg Type']] staten_island_bldg_type.reset_index(drop=True, inplace=True) building_type = pd.DataFrame() building_type['manhattan_bldg_type'] = manhattan_bldg_type #brooklyn_bldg_type building_type['brooklyn_bldg_type'] = brooklyn_bldg_type building_type['bronx_bldg_type'] = bronx_bldg_type building_type['queens_bldg_type'] = queens_bldg_type building_type['staten_island_bldg_type'] = staten_island_bldg_type plotColumns(building_type, 5, 3)
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
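The cell above builds one column per borough by slicing the frame five times; the same per-borough counts can also be read off a single groupby. A toy illustration (hypothetical rows, column names from the dataset):

```python
import pandas as pd

df = pd.DataFrame({"BOROUGH": ["MANHATTAN", "MANHATTAN", "BROOKLYN"],
                   "Bldg Type": ["2", "1", "1"]})
# One pass over the frame: counts per (borough, building type) pair.
counts = df.groupby("BOROUGH")["Bldg Type"].value_counts()
```

The result is a Series with a (BOROUGH, Bldg Type) MultiIndex, which plots directly with `counts.unstack().plot.bar()`.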
**Analysis**

The building type is either '1' or '2', and we earlier discovered that type '2' was significantly more popular than type '1'. However, this is not true for all the boroughs. For Manhattan, the trend still holds; but in the other locations, the number of type '1' buildings is comparable with that of type '2'. More interestingly, in the case of Staten Island, type '1' is significantly more popular, beating type '2' by a good margin.

Permit Type in Each Borough
manhattan_permit_type = dataset[dataset.BOROUGH == 'MANHATTAN'][['Permit Type']] manhattan_permit_type.reset_index(drop=True, inplace=True) brooklyn_permit_type = dataset[dataset.BOROUGH == 'BROOKLYN'][['Permit Type']] brooklyn_permit_type.reset_index(drop=True, inplace=True) bronx_permit_type = dataset[dataset.BOROUGH == 'BRONX'][['Permit Type']] bronx_permit_type.reset_index(drop=True, inplace=True) queens_permit_type = dataset[dataset.BOROUGH == 'QUEENS'][['Permit Type']] queens_permit_type.reset_index(drop=True, inplace=True) staten_island_permit_type = dataset[dataset.BOROUGH == 'STATEN ISLAND'][['Permit Type']] staten_island_permit_type.reset_index(drop=True, inplace=True) permit_type = pd.DataFrame() permit_type['manhattan_permit_type'] = manhattan_permit_type #brooklyn_permit_type permit_type['brooklyn_permit_type'] = brooklyn_permit_type permit_type['bronx_permit_type'] = bronx_permit_type permit_type['queens_permit_type'] = queens_permit_type permit_type['staten_island_permit_type'] = staten_island_permit_type plotColumns(permit_type, 5, 3)
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
**Analysis**

In Permit Type, we earlier discovered that type 'EW' was the most popular, significantly higher in number than the other types. This holds for most of the boroughs, but not for Staten Island, and the ranking of the other permit types is shuffled in each borough. Below are the permit types in decreasing order of frequency for each borough.

Manhattan | Brooklyn | Queens | Bronx | Staten Island
----------|----------|--------|-------|--------------
EW | EW | EW | EW | PL
PL | PL | PL | PL | NB
EQ | EQ | EQ | EQ | EW
AL | AL | NB | AL | EQ
SQ | NB | AL | SQ | AL
NB | FO | SG | FO | DM
FO | DM | FO | DM | SG
DM | SG | DM | SG | FO

Owner's Business Type in Each Borough
manhattan_owners_business_type = dataset[dataset.BOROUGH == 'MANHATTAN'][['Owner\'s Business Type']] manhattan_owners_business_type.reset_index(drop=True, inplace=True) brooklyn_owners_business_type = dataset[dataset.BOROUGH == 'BROOKLYN'][['Owner\'s Business Type']] brooklyn_owners_business_type.reset_index(drop=True, inplace=True) bronx_owners_business_type = dataset[dataset.BOROUGH == 'BRONX'][['Owner\'s Business Type']] bronx_owners_business_type.reset_index(drop=True, inplace=True) queens_owners_business_type = dataset[dataset.BOROUGH == 'QUEENS'][['Owner\'s Business Type']] queens_owners_business_type.reset_index(drop=True, inplace=True) staten_island_owners_business_type = dataset[dataset.BOROUGH == 'STATEN ISLAND'][['Owner\'s Business Type']] staten_island_owners_business_type.reset_index(drop=True, inplace=True) owners_business_type = pd.DataFrame() owners_business_type['manhattan_owners_business_type'] = manhattan_owners_business_type #brooklyn_owners_business_type owners_business_type['brooklyn_owners_business_type'] = brooklyn_owners_business_type owners_business_type['bronx_owners_business_type'] = bronx_owners_business_type owners_business_type['queens_owners_business_type'] = queens_owners_business_type owners_business_type['staten_island_owners_business_type'] = staten_island_owners_business_type plotColumns(owners_business_type, 5, 3)
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
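The repeated per-borough filtering in the cell above can also be expressed in one step with `groupby`. A minimal sketch on a small synthetic frame, since the notebook's full `dataset` is not reproduced here; the real frame has the same 'BOROUGH' and "Owner's Business Type" columns:

```python
import pandas as pd

# Synthetic stand-in for the permits dataset used in this notebook
df = pd.DataFrame({
    'BOROUGH': ['MANHATTAN', 'MANHATTAN', 'MANHATTAN', 'QUEENS', 'QUEENS'],
    "Owner's Business Type": ['CORPORATION', 'CORPORATION', 'PARTNERSHIP',
                              'INDIVIDUAL', 'CORPORATION'],
})

# One call replaces the five filter/reset_index blocks: counts per borough,
# with the most frequent type first within each group
counts = df.groupby('BOROUGH')["Owner's Business Type"].value_counts()
print(counts)
```

The same pattern, with a different column name, yields the permit-type and job-type rankings tabulated in this notebook.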
**Analysis**

We earlier discovered that 'Corporation' was the most popular Owner's Business Type, with 'Individual' close behind. A closer look at each borough reveals that the trend varies considerably. In Manhattan, 'Corporation' is still first, but 'Individual' is displaced by 'Partnership' in second place. In Brooklyn, 'Individual' holds the top spot, with 'Corporation' and 'Partnership' in second and third place respectively. In the Bronx, 'Corporation' and 'Individual' compete closely, with 'Corporation' slightly ahead and 'Partnership' third. In Queens and Staten Island, 'Individual' is first, followed by 'Corporation' and 'Partnership', consistent with the trend observed in Brooklyn.

Job Type in Each Borough
manhattan_job_type = dataset[dataset.BOROUGH == 'MANHATTAN'][['Job Type']] manhattan_job_type.reset_index(drop=True, inplace=True) brooklyn_job_type = dataset[dataset.BOROUGH == 'BROOKLYN'][['Job Type']] brooklyn_job_type.reset_index(drop=True, inplace=True) bronx_job_type = dataset[dataset.BOROUGH == 'BRONX'][['Job Type']] bronx_job_type.reset_index(drop=True, inplace=True) queens_job_type = dataset[dataset.BOROUGH == 'QUEENS'][['Job Type']] queens_job_type.reset_index(drop=True, inplace=True) staten_island_job_type = dataset[dataset.BOROUGH == 'STATEN ISLAND'][['Job Type']] staten_island_job_type.reset_index(drop=True, inplace=True) job_type = pd.DataFrame() job_type['manhattan_job_type'] = manhattan_job_type #brooklyn_job_type job_type['brooklyn_job_type'] = brooklyn_job_type job_type['bronx_job_type'] = bronx_job_type job_type['queens_job_type'] = queens_job_type job_type['staten_island_job_type'] = staten_island_job_type plotColumns(job_type, 5, 3)
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
**Analysis**

We earlier discovered that 'A2' was the most popular Job Type, with counts decreasing through 'NB', 'A3', 'A1', 'DM', and 'SG'. Taking a closer look at each borough reveals a slightly different trend. For example, in Manhattan 'A2' is still the highest but 'NB' is pushed below 'A1' into fourth place, while in Staten Island 'NB' holds first place. Below are the job types in decreasing order of frequency for each borough.

Overall | Manhattan | Brooklyn | Queens | Bronx | Staten Island
--------|-----------|----------|--------|-------|--------------
A2 | A2 | A2 | A2 | A2 | NB
NB | A3 | NB | NB | NB | A2
A3 | A1 | A1 | A3 | A1 | A1
A1 | NB | A3 | A1 | A3 | A3
DM | SG | DM | SG | DM | DM
SG | DM | SG | DM | SG | SG

Permits Per Year

Is there a trend in the number of permits issued each year? Let's find out! First, the 'Issuance Date' column needs to be converted to the Python datetime format, and then only the year extracted from each date.
dataset['Issuance Date'] = pd.to_datetime(dataset['Issuance Date']) dataset['Issuance Date'] =dataset['Issuance Date'].dt.year
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
Once the dates are replaced by the corresponding years, we can plot the graph.
timeline = dataset['Issuance Date'].value_counts(ascending=True) timeline = timeline.sort_index() timeline.to_frame() timeline.plot.bar()
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
**Analysis**

We can observe that the number of permits has generally increased each year. The count was stagnant from 1993 to 1996, then grew rapidly from 1997 until 2007. Applications decreased for a couple of years and then rose sharply again from 2010 to 2017.

Borough-Wise Analysis of Timeline

Is the trend consistent across all boroughs? Was there a time when construction slowed down in one borough while surging in another?
manhattan_permits_issued = dataset[dataset.BOROUGH == 'MANHATTAN'][['Issuance Date']] manhattan_permits_issued.reset_index(drop=True, inplace=True) brooklyn_permits_issued = dataset[dataset.BOROUGH == 'BROOKLYN'][['Issuance Date']] brooklyn_permits_issued.reset_index(drop=True, inplace=True) bronx_permits_issued = dataset[dataset.BOROUGH == 'BRONX'][['Issuance Date']] bronx_permits_issued.reset_index(drop=True, inplace=True) queens_permits_issued = dataset[dataset.BOROUGH == 'QUEENS'][['Issuance Date']] queens_permits_issued.reset_index(drop=True, inplace=True) staten_island_permits_issued = dataset[dataset.BOROUGH == 'STATEN ISLAND'][['Issuance Date']] staten_island_permits_issued.reset_index(drop=True, inplace=True) permits_issued = pd.DataFrame() permits_issued['manhattan_permits_issued'] = manhattan_permits_issued #brooklyn_permits_issued permits_issued['brooklyn_permits_issued'] = brooklyn_permits_issued permits_issued['bronx_permits_issued'] = bronx_permits_issued permits_issued['queens_permits_issued'] = queens_permits_issued permits_issued['staten_island_permits_issued'] = staten_island_permits_issued plotColumns(permits_issued, 5, 3)
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
In Manhattan and Brooklyn, the most applications were filed during the 2010 to 2015 period. The Bronx and Queens saw the highest number of filings more recently, from 2015 to 2019. Staten Island had most of its applications between 2000 and 2015, with another surge around 2015, but its volume is much lower than that of the other boroughs.

Permits Per Month

Is there a trend in the number of permits filed each month? Let's explore! First, the 'Filing Date' column needs to be converted to the Python datetime format, and then only the month extracted from each date.
dataset['Filing Date'] = pd.to_datetime(dataset['Filing Date']) dataset['Filing Date'] =dataset['Filing Date'].dt.month months = dataset['Filing Date'].value_counts() months.to_frame() months = months.sort_index() plotColumns(months, 1, 1)
_____no_output_____
MIT
NY Permits.ipynb
parasgulati8/Data-Analysis
Lab 7
import numpy as np import pandas as pd import matplotlib.pyplot as plt import altair as alt from sklearn import datasets, linear_model from sklearn.metrics import mean_squared_error, r2_score alt.themes.enable('opaque') %matplotlib inline
_____no_output_____
MIT
labs/lab07.ipynb
Dhannai/MAT281_portfolio
In this lab we will use the same diabetes data seen in class.
diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True, as_frame=True) diabetes = pd.concat([diabetes_X, diabetes_y], axis=1) diabetes.head()
_____no_output_____
MIT
labs/lab07.ipynb
Dhannai/MAT281_portfolio
Question 1 (1 pt)

* Why does the sex column have those values?
* Which column is the prediction target?
* Do you think it is necessary to scale or transform the data before starting the modeling?

__Answer:__

* Because the values are mean-centered and scaled by the standard deviation times the square root of the number of samples, so each column has unit norm.
* The `target` column.
* Yes, so that the algorithm can handle the features more easily and the user can compare them more simply, since the raw data come in different scales and magnitudes.

Question 2 (1 pt)

Run two linear regressions with all the _features_, the first including an intercept and the second without one. Then obtain the predictions and compute the mean squared error and coefficient of determination for each.
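The first answer's claim about the scaling can be checked numerically: centering a column and dividing by its standard deviation times the square root of the sample count leaves it with unit norm. A minimal sketch of that transformation in plain NumPy, on synthetic data rather than the sklearn loader:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(50, 10, size=442)   # raw feature, e.g. age in years
n = len(x)

# Center, then divide by std * sqrt(n): the column ends up with unit norm,
# matching the preprocessing of the sklearn diabetes features
x_scaled = (x - x.mean()) / (x.std() * np.sqrt(n))

print(np.linalg.norm(x_scaled))  # norm is 1 up to floating-point error
```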
regr_with_intercept = linear_model.LinearRegression(fit_intercept=True) regr_with_intercept.fit(diabetes_X, diabetes_y) diabetes_y_pred_with_intercept = regr_with_intercept.predict(diabetes_X) # Coefficients print(f"Coefficients: \n{regr_with_intercept.coef_}\n") # Intercept print(f"Intercept: \n{regr_with_intercept.intercept_}\n") # Mean squared error print(f"Mean squared error: {mean_squared_error(diabetes_y, diabetes_y_pred_with_intercept):.2f}\n") # Coefficient of determination print(f"Coefficient of determination: {r2_score(diabetes_y, diabetes_y_pred_with_intercept):.2f}") regr_without_intercept = linear_model.LinearRegression(fit_intercept=False) regr_without_intercept.fit(diabetes_X, diabetes_y) diabetes_y_pred_without_intercept = regr_without_intercept.predict(diabetes_X) # Coefficients print(f"Coefficients: \n{regr_without_intercept.coef_}\n") # Mean squared error print(f"Mean squared error: {mean_squared_error(diabetes_y, diabetes_y_pred_without_intercept):.2f}\n") # Coefficient of determination print(f"Coefficient of determination: {r2_score(diabetes_y, diabetes_y_pred_without_intercept):.2f}")
Coefficients: [ -10.01219782 -239.81908937 519.83978679 324.39042769 -792.18416163 476.74583782 101.04457032 177.06417623 751.27932109 67.62538639] Mean squared error: 26004.29 Coefficient of determination: -3.39
MIT
labs/lab07.ipynb
Dhannai/MAT281_portfolio
**Question: How good was the model fit?**

__Answer:__

Not very good: the coefficient of determination is 0.52 for the case with an intercept, while a good fit should be close to 1. For the case without an intercept, R2 is negative, so simply predicting the mean of the data would give a better fit; its error is also much larger than in the intercept case, so it is not a good fit either.

Question 3 (1 pt)

Perform multiple linear regressions using a single _feature_ at a time. In each iteration:

- Create an array `X` with only one feature by filtering `X`.
- Create a linear regression model with an intercept.
- Fit the model.
- Generate a prediction with the model.
- Compute and print the metrics from the previous question.
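The point above about a negative R2 can be made concrete: since R² = 1 − SS_res/SS_tot, any model whose squared error exceeds that of the constant mean predictor scores below zero. A minimal sketch with made-up numbers:

```python
import numpy as np

y_true = np.array([100.0, 150.0, 200.0])
y_mean = np.full_like(y_true, y_true.mean())   # baseline: always predict the mean
y_bad  = np.array([250.0, 20.0, 90.0])         # a model worse than the baseline

def r2(y, y_pred):
    # Coefficient of determination: 1 - residual sum of squares / total sum of squares
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(r2(y_true, y_mean))  # 0.0: the mean predictor scores exactly zero
print(r2(y_true, y_bad))   # negative: worse than predicting the mean
```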
for col in diabetes_X.columns: X_i = diabetes_X[col].to_frame() regr_i = linear_model.LinearRegression(fit_intercept=True) regr_i.fit(X_i,diabetes_y) diabetes_y_pred_i = regr_i.predict(X_i) print(f"Feature: {col}") print(f"\tCoefficients: {regr_i.coef_}") print(f"\tIntercept: {regr_i.intercept_}") print(f"\tMean squared error: {mean_squared_error(diabetes_y, diabetes_y_pred_i):.2f}") print(f"\tCoefficient of determination: {r2_score(diabetes_y, diabetes_y_pred_i):.2f}\n")
Feature: age Coefficients: [304.18307453] Intercept: 152.13348416289605 Mean squared error: 5720.55 Coefficient of determination: 0.04 Feature: sex Coefficients: [69.71535568] Intercept: 152.13348416289594 Mean squared error: 5918.89 Coefficient of determination: 0.00 Feature: bmi Coefficients: [949.43526038] Intercept: 152.1334841628967 Mean squared error: 3890.46 Coefficient of determination: 0.34 Feature: bp Coefficients: [714.7416437] Intercept: 152.13348416289585 Mean squared error: 4774.10 Coefficient of determination: 0.19 Feature: s1 Coefficients: [343.25445189] Intercept: 152.13348416289597 Mean squared error: 5663.32 Coefficient of determination: 0.04 Feature: s2 Coefficients: [281.78459335] Intercept: 152.1334841628959 Mean squared error: 5750.24 Coefficient of determination: 0.03 Feature: s3 Coefficients: [-639.14527932] Intercept: 152.13348416289566 Mean squared error: 5005.66 Coefficient of determination: 0.16 Feature: s4 Coefficients: [696.88303009] Intercept: 152.13348416289568 Mean squared error: 4831.14 Coefficient of determination: 0.19 Feature: s5 Coefficients: [916.13872282] Intercept: 152.13348416289628 Mean squared error: 4030.99 Coefficient of determination: 0.32 Feature: s6 Coefficients: [619.22282068] Intercept: 152.13348416289614 Mean squared error: 5062.38 Coefficient of determination: 0.15
MIT
labs/lab07.ipynb
Dhannai/MAT281_portfolio
**Question: If you had to choose a single _feature_, which one would it be? Why?**

**Answer:** I would choose the `bmi` feature, because it has the lowest error and the coefficient of determination closest to 1.

Exercise 4 (1 pt)

With the feature chosen in exercise 3, produce the following plot:

- Scatter plot.
- X axis: values of the chosen feature.
- Y axis: values of the target column (`target`).
- In red, draw the line corresponding to the linear regression (using `intercept_` and `coef_`).
- Add an appropriate title, axis labels, etc.

You may use `matplotlib` or `altair`, whichever you prefer.
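The feature choice above can also be made programmatically by taking the argmax over the per-feature scores. A minimal sketch, using a hypothetical dict of the R2 values printed by the per-feature loop above:

```python
# Hypothetical per-feature R2 scores, as printed by the loop over features
r2_scores = {'age': 0.04, 'sex': 0.00, 'bmi': 0.34, 'bp': 0.19, 's5': 0.32}

# Pick the feature whose score is largest
best_feature = max(r2_scores, key=r2_scores.get)
print(best_feature)  # -> bmi
```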
regr = linear_model.LinearRegression(fit_intercept=True).fit(diabetes_X['bmi'].to_frame(), diabetes_y) diabetes_y_pred_bmi = regr.predict(diabetes_X['bmi'].to_frame()) import matplotlib.pyplot as plt %matplotlib inline from matplotlib import rcParams rcParams['font.family'] = 'serif' rcParams['font.size'] = 16 x = diabetes_X['bmi'].values y = diabetes_y x_reg = np.arange(-0.1, 0.2, 0.01) y_reg = regr.intercept_ + regr.coef_ * x_reg fig = plt.figure(figsize=(20,8)) fig.suptitle('Linear regression of body mass index (bmi) versus disease progression (target)') ax = fig.add_subplot() ax.scatter(x,y,c='k'); ax.plot(x_reg,y_reg,c='r'); ax.legend(["Data", "Regression"]); ax.set_xlabel('Body Mass Index') ax.set_ylabel('target');
_____no_output_____
MIT
labs/lab07.ipynb
Dhannai/MAT281_portfolio
Design 1
!tar -xzvf ../Data/Design1/entk.session-design1-54875/entk.session-design1-54875.tar.gz -C ../Data/Design1/entk.session-design1-54875/ !tar -xzvf ../Data/Design1/entk.session-design1-54875/entk_workflow.tar.gz -C ../Data/Design1/entk.session-design1-54875/ des1DF = pd.DataFrame(columns=['TTX','AgentOverhead','ClientOverhead','EnTKOverhead']) work_file = open('../Data/Design1/entk.session-design1-54875/entk_workflow.json') work_json = json.load(work_file) work_file.close() workflow = work_json['workflows'][1] unit_ids = list() for pipe in workflow['pipes']: unit_path = pipe['stages'][1]['tasks'][0]['path'] unit_id = unit_path.split('/')[-2] if unit_id != 'unit.000000': unit_ids.append(unit_id) sids=['entk.session-design1-54875'] for sid in sids: re_session = ra.Session(stype='radical.entk',src='../Data/Design1',sid=sid) rp_session = ra.Session(stype='radical.pilot',src='../Data/Design1/'+sid) units = rp_session.filter(etype='unit', inplace=False, uid=unit_ids) pilot = rp_session.filter(etype='pilot', inplace=False) units_duration = units.duration(event=[{ru.EVENT: 'exec_start'},{ru.EVENT: 'exec_stop'}]) units_agent = units.duration(event=[{ru.EVENT: 'state',ru.STATE: rp.AGENT_STAGING_INPUT},{ru.EVENT: 'staging_uprof_stop'}]) all_units = rp_session.filter(etype='unit', inplace=False) disc_unit = rp_session.filter(etype='unit', inplace=False, uid='unit.000000') disc_time = disc_unit.duration([rp.NEW, rp.DONE]) units_client = units.duration([rp.NEW, rp.DONE]) appmanager = re_session.filter(etype='appmanager',inplace=False) t_p2 = pilot.duration(event=[{ru.EVENT: 'bootstrap_0_start'}, {ru.EVENT: 'cmd'}]) resource_manager = re_session.filter(etype='resource_manager',inplace=False) app_duration = appmanager.duration(event=[{ru.EVENT:"amgr run started"},{ru.EVENT:"termination done"}]) res_duration = resource_manager.duration(event=[{ru.EVENT:"rreq submitted"},{ru.EVENT:"resource active"}]) ttx = units_duration agent_overhead = abs(units_agent - units_duration) client_overhead = units_client - units_agent entk_overhead = app_duration - units_client - res_duration - all_units.duration(event=[{ru.EVENT: 'exec_start'},{ru.EVENT: 'exec_stop'}]) + ttx des1DF.loc[len(des1DF)] = [ttx, agent_overhead, client_overhead, entk_overhead] print(des1DF)
TTX AgentOverhead ClientOverhead EnTKOverhead 0 11717.529445 978.254925 45306.359832 12857.508548
MIT
Geolocation/Notebooks/Des1Des2OverheadComp.ipynb
radical-experiments/iceberg_escience
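The Design 1 overhead decomposition above is interval arithmetic over nested spans: TTX is the execution span, the agent overhead is the agent-side span minus execution, and the client overhead is the client-side span minus the agent-side span. A minimal sketch of that arithmetic with made-up durations in seconds, not the session's real timestamps:

```python
# Hypothetical durations, in seconds (illustrative values only)
units_duration = 11700.0   # exec_start -> exec_stop (TTX)
units_agent    = 12680.0   # agent staging input -> staging stop
units_client   = 58000.0   # NEW -> DONE on the client side

ttx             = units_duration
agent_overhead  = abs(units_agent - units_duration)   # agent time beyond execution
client_overhead = units_client - units_agent          # client time beyond the agent

print(ttx, agent_overhead, client_overhead)
```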
Design 2
des2DF = pd.DataFrame(columns=['TTX','SetupOverhead','SetupOverhead2','AgentOverhead','ClientOverhead']) sids = ['design2_11K_run5'] for sid in sids: Node1 = pd.DataFrame(columns=['Start','End','Type']) Node1Tilling = pd.read_csv('../Data/Design2/'+sid+'/pilot.0000/unit.000002/geolocate1.csv') for index,row in Node1Tilling.iterrows(): Node1.loc[len(Node1)] = [row['Start'],row['End'],'Geo1'] Node1Tilling = pd.read_csv('../Data/Design2/'+sid+'/pilot.0000/unit.000002/geolocate2.csv') for index,row in Node1Tilling.iterrows(): Node1.loc[len(Node1)] = [row['Start'],row['End'],'Geo2'] Node1Tilling = pd.read_csv('../Data/Design2/'+sid+'/pilot.0000/unit.000002/ransac1.csv') for index,row in Node1Tilling.iterrows(): Node1.loc[len(Node1)] = [row['Start'],row['End'],'Ransac1'] Node2 = pd.DataFrame(columns=['Start','End','Type']) Node2Tilling = pd.read_csv('../Data/Design2/'+sid+'/pilot.0000/unit.000003/geolocate3.csv') for index,row in Node2Tilling.iterrows(): Node2.loc[len(Node2)] = [row['Start'],row['End'],'Geo3'] Node2Tilling = pd.read_csv('../Data/Design2/'+sid+'/pilot.0000/unit.000003/geolocate4.csv') for index,row in Node2Tilling.iterrows(): Node2.loc[len(Node2)] = [row['Start'],row['End'],'Geo4'] Node2Tilling = pd.read_csv('../Data/Design2/'+sid+'/pilot.0000/unit.000003/ransac2.csv') for index,row in Node2Tilling.iterrows(): Node2.loc[len(Node2)] = [row['Start'],row['End'],'Ransac2'] Node3 = pd.DataFrame(columns=['Start','End','Type']) Node3Tilling = pd.read_csv('../Data/Design2/'+sid+'/pilot.0000/unit.000004/geolocate5.csv') for index,row in Node3Tilling.iterrows(): Node3.loc[len(Node3)] = [row['Start'],row['End'],'Geo5'] Node3Tilling = pd.read_csv('../Data/Design2/'+sid+'/pilot.0000/unit.000004/geolocate6.csv') for index,row in Node3Tilling.iterrows(): Node3.loc[len(Node3)] = [row['Start'],row['End'],'Geo6'] Node3Tilling = pd.read_csv('../Data/Design2/'+sid+'/pilot.0000/unit.000004/ransac3.csv') for index,row in Node3Tilling.iterrows(): Node3.loc[len(Node3)] = [row['Start'],row['End'],'Ransac3'] Node4 = pd.DataFrame(columns=['Start','End','Type']) Node4Tilling = pd.read_csv('../Data/Design2/'+sid+'/pilot.0000/unit.000005/geolocate7.csv') for index,row in Node4Tilling.iterrows(): Node4.loc[len(Node4)] = [row['Start'],row['End'],'Geo7'] Node4Tilling = pd.read_csv('../Data/Design2/'+sid+'/pilot.0000/unit.000005/geolocate8.csv') for index,row in Node4Tilling.iterrows(): Node4.loc[len(Node4)] = [row['Start'],row['End'],'Geo8'] Node4Tilling = pd.read_csv('../Data/Design2/'+sid+'/pilot.0000/unit.000005/ransac4.csv') for index,row in Node4Tilling.iterrows(): Node4.loc[len(Node4)] = [row['Start'],row['End'],'Ransac4'] AllNodes = pd.DataFrame(columns=['Start','End','Type']) AllNodes = AllNodes.append(Node1) AllNodes = AllNodes.append(Node2) AllNodes = AllNodes.append(Node3) AllNodes = AllNodes.append(Node4) AllNodes.reset_index(inplace=True,drop='index') rp_sessionDes2 = ra.Session(stype='radical.pilot',src='../Data/Design2/'+sid) unitsDes2 = rp_sessionDes2.filter(etype='unit', inplace=False) execUnits = unitsDes2.filter(uid=['unit.000002','unit.000003','unit.000004','unit.000005'],inplace=False) exec_units_setup_des2 = execUnits.duration(event=[{ru.EVENT: 'exec_start'},{ru.EVENT: 'exec_stop'}]) exec_units_agent_des2 = execUnits.duration([rp.AGENT_STAGING_INPUT, rp.UMGR_STAGING_OUTPUT_PENDING]) exec_units_clientDes2 = execUnits.duration([rp.NEW, rp.DONE]) SetupUnit = unitsDes2.filter(uid=['unit.000000'],inplace=False) setup_units_clientDes2 = SetupUnit.duration(event=[{ru.STATE: rp.NEW},{ru.EVENT: 'exec_start'}]) pilotDes2 = rp_sessionDes2.filter(etype='pilot', inplace=False) pilot_duration = pilotDes2.duration([rp.PMGR_ACTIVE,rp.FINAL]) des2_duration = AllNodes['End'].max() - AllNodes['Start'].min() setupDes2_overhead = exec_units_setup_des2 - des2_duration agentDes2_overhead = exec_units_agent_des2 - exec_units_setup_des2 clientDes2_overhead = exec_units_clientDes2 - exec_units_agent_des2 des2DF.loc[len(des2DF)] = [des2_duration, setup_units_clientDes2, setupDes2_overhead, agentDes2_overhead, clientDes2_overhead] print(des2DF)
TTX SetupOverhead SetupOverhead2 AgentOverhead ClientOverhead 0 6703.852195 81.679263 7.051244 0.18611 22.274512
MIT
Geolocation/Notebooks/Des1Des2OverheadComp.ipynb
radical-experiments/iceberg_escience
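Design 2's TTX above is a makespan: the latest `End` minus the earliest `Start` over all recorded tasks, exactly as computed from `AllNodes` in the cell above. A minimal sketch on a synthetic frame with the same columns:

```python
import pandas as pd

# Synthetic stand-in for the per-node task timings assembled above
all_nodes = pd.DataFrame({
    'Start': [10.0, 12.0, 50.0],
    'End':   [40.0, 90.0, 95.0],
    'Type':  ['Geo1', 'Geo2', 'Ransac1'],
})

# Makespan: span from the first task start to the last task end,
# regardless of which node or task type the extremes belong to
makespan = all_nodes['End'].max() - all_nodes['Start'].min()
print(makespan)  # -> 85.0
```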