markdown stringlengths 0 1.02M | code stringlengths 0 832k | output stringlengths 0 1.02M | license stringlengths 3 36 | path stringlengths 6 265 | repo_name stringlengths 6 127 |
|---|---|---|---|---|---|
Plotting the Data * Use proper labeling of the plots using plot titles (including date of analysis) and axis labels. * Save the plotted figures as .pngs. Latitude vs. Temperature Plot | # Make plot
temp = city_df["Max Temp"]
lat = city_df["Lat"]
plt.scatter(lat, temp, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
# Set y lim based on max and min temp
plt.ylim(min(temp) - 5, max(temp) + 5)
# Set the x lim based on max and min lat
plt.xlim(min(lat) - 5, max(lat) + 5)
# Create a title, x label, and y label for our chart
plt.title("Latitude v Temperature Plot")
plt.xlabel("Latitude")
plt.ylabel("Temperature (Fahrenheit)")
plt.savefig("Latitude_Temperature.png")
# Prints the scatter plot to the screen
plt.show()
#The plot shows that the farther a location is from the equator, the lower the max temperature
#The more extreme cold temperatures (< 0 degrees F) are all in the Northern Hemisphere. | _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
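The visual trend noted above can also be quantified with `scipy.stats.linregress` (the same function used in the regression cells later in this notebook). A minimal sketch, using synthetic stand-in data since `city_df` comes from live API calls:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical stand-in for city_df: temperature falls with distance from the equator
rng = np.random.default_rng(0)
lat = rng.uniform(-60, 70, 200)
temp = 90 - 0.6 * np.abs(lat) + rng.normal(0, 5, 200)

# Correlate max temp against absolute latitude (distance from the equator)
slope, intercept, rvalue, pvalue, stderr = linregress(np.abs(lat), temp)
print(round(slope, 2), round(rvalue, 2))  # strongly negative slope and r-value
```

A strongly negative r-value here confirms the "farther from the equator, colder" reading of the scatter plot.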
Latitude vs. Humidity Plot | #Make plot
humid = city_df["Humidity"]
lat = city_df["Lat"]
plt.scatter(lat, humid, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
# Set y lim based on max and min humidity
plt.ylim(min(humid) - 5, max(humid) + 5)
# Set the x lim based on max and min lat
plt.xlim(min(lat) - 5, max(lat) + 5)
# Create a title, x label, and y label for our chart
plt.title("Latitude v Humidity Plot")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.savefig("Latitude_Humidity.png")
# Prints the scatter plot to the screen
plt.show()
#There isn't much of a trend in this scatter plot
#The cities with the highest humidity percentages are near the equator and around a latitude of 50 | _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
Latitude vs. Cloudiness Plot | #Make plot
cloud = city_df["Cloudiness"]
lat = city_df["Lat"]
plt.scatter(lat, cloud, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
# Set y lim based on max and min cloudiness
plt.ylim(min(cloud) - 5, max(cloud) + 5)
# Set the x lim based on max and min lat
plt.xlim(min(lat) - 5, max(lat) + 5)
# Create a title, x label, and y label for our chart
plt.title("Latitude v Cloudiness Plot")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.savefig("Latitude_Cloudiness.png")
# Prints the scatter plot to the screen
plt.show()
#Once again, not much of a general trend
#But most of the data shows either high cloudiness or no cloudiness (with some in between) | _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
Latitude vs. Wind Speed Plot | #Make plot
wind = city_df["Wind Speed"]
lat = city_df["Lat"]
plt.scatter(lat, wind, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
# Set y lim from just below zero to the max wind speed
plt.ylim(-0.75, max(wind) + 5)
# Set the x lim based on max and min lat
plt.xlim(min(lat) - 5, max(lat) + 5)
# Create a title, x label, and y label for our chart
plt.title("Latitude v Wind Speed Plot")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.savefig("Latitude_WindSpeed.png")
# Prints the scatter plot to the screen
plt.show()
#Most of the data has wind speeds between 0 and 15 mph
#Latitude does not seem to have an effect on wind speed | _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
Linear Regression | #Separate our data frame into different hemispheres
city_df.head()
nor_hem_df = city_df.loc[city_df["Lat"] >= 0]
so_hem_df = city_df.loc[city_df["Lat"] <= 0] | _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
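The hemisphere split above uses boolean masking with `.loc`. One subtlety: because the conditions are `>= 0` and `<= 0`, a city sitting exactly on the equator appears in both frames. A tiny illustration with hypothetical stand-in data:

```python
import pandas as pd

# Toy stand-in for city_df; only the "Lat" column matters for the split
df = pd.DataFrame({"City": ["a", "b", "c"], "Lat": [34.0, -23.5, 0.0]})

north = df.loc[df["Lat"] >= 0]
south = df.loc[df["Lat"] <= 0]

# With >= and <=, a city exactly on the equator (Lat == 0) lands in both frames
print(len(north), len(south))  # 2 2
```

Using `< 0` for the southern frame would make the split disjoint, if that matters for the analysis.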
Northern Hemisphere - Max Temp vs. Latitude Linear Regression | #Make plot
temp = nor_hem_df["Max Temp"]
lat = nor_hem_df["Lat"]
plt.scatter(lat, temp, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
# Set y lim based on max and min temp
plt.ylim(min(temp) - 5, max(temp) + 5)
# Set the x lim based on max and min lat
plt.xlim(min(lat) - 5, max(lat) + 5)
# Create a title, x label, and y label for our chart
plt.title("Northern Hemisphere - Latitude v Max Temperature Plot")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (Fahrenheit)")
# Add linear regression Line
(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, temp)
regress_values = lat * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(lat,regress_values,"r-")
plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.savefig("NH_Latitude_Temp.png")
# Prints the scatter plot to the screen
plt.show()
#This plot shows the relationship between the latitude of cities in the Northern Hemisphere and their max temperature
#The farther a city is from the equator (x > 0), the lower the max temp | _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
Southern Hemisphere - Max Temp vs. Latitude Linear Regression | #Make plot
temp = so_hem_df["Max Temp"]
lat = so_hem_df["Lat"]
plt.scatter(lat, temp, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
# Set y lim based on max and min temp
plt.ylim(min(temp) - 5, max(temp) + 5)
# Set the x lim based on max and min lat
plt.xlim(min(lat) - 2, max(lat) + 2)
# Create a title, x label, and y label for our chart
plt.title("Southern Hemisphere - Latitude v Max Temperature Plot")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (Fahrenheit)")
# Add linear regression Line
(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, temp)
regress_values = lat * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(lat,regress_values,"r-")
plt.annotate(line_eq,(0, 80),fontsize=15,color="red")
plt.savefig("SH_Latitude_Temp.png")
# Prints the scatter plot to the screen
plt.show()
#This plot shows the relationship between the latitude of cities in the Southern Hemisphere and their max temperature
#The farther a city is from the equator (x < 0), the lower the max temp | _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
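The eight regression cells in this section repeat the same recipe: scatter, `linregress`, overlay the fitted line, annotate the equation. They could be collapsed into one helper; this is a sketch (not from the original notebook — the function name and signature are assumptions), rendered off-screen so it runs headless:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; safe to omit inside a live notebook
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import linregress

def plot_regression(x, y, title, xlabel, ylabel, eq_xy, fname=None):
    """Scatter x vs. y, overlay the least-squares line, and annotate its equation."""
    slope, intercept, rvalue, pvalue, stderr = linregress(x, y)
    plt.scatter(x, y, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
    plt.plot(x, slope * x + intercept, "r-")
    plt.annotate("y = %.2fx + %.2f" % (slope, intercept), eq_xy, fontsize=15, color="red")
    plt.title(title)
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
    if fname is not None:
        plt.savefig(fname)
    return slope, intercept, rvalue

# Demo on a perfect line: slope 2, intercept 1, r = 1
s, b, r = plot_regression(np.array([0., 1., 2., 3.]), np.array([1., 3., 5., 7.]),
                          "demo", "x", "y", (0, 1))
print(round(s, 2), round(b, 2), round(r, 2))  # 2.0 1.0 1.0
```

Each hemisphere/variable plot then becomes a one-line call, which also eliminates the label copy-paste errors seen below.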
Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression | #Make plot
humid = nor_hem_df["Humidity"]
lat = nor_hem_df["Lat"]
plt.scatter(lat, humid, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
# Set y lim based on max and min humidity
plt.ylim(min(humid) - 5, max(humid) + 5)
# Set the x lim based on max and min lat
plt.xlim(min(lat) - 5, max(lat) + 5)
# Create a title, x label, and y label for our chart
plt.title("Northern Hemisphere - Latitude v Humidity Plot")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
# Add linear regression Line
(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, humid)
regress_values = lat * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(lat,regress_values,"r-")
plt.annotate(line_eq,(48, 40),fontsize=15,color="red")
plt.savefig("NH_Latitude_Humidity.png")
# Prints the scatter plot to the screen
plt.show()
#This plot shows the relationship between the latitude of cities in the Northern Hemisphere and their humidity percentage
#The farther a city is from the equator (x > 0), the higher the humidity percentage | _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression | #Make plot
humid = so_hem_df["Humidity"]
lat = so_hem_df["Lat"]
plt.scatter(lat, humid, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
# Set y lim based on max and min humidity
plt.ylim(min(humid) - 5, max(humid) + 5)
# Set the x lim based on max and min lat
plt.xlim(min(lat) - 5, max(lat) + 5)
# Create a title, x label, and y label for our chart
plt.title("Southern Hemisphere - Latitude v Humidity Plot")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
# Add linear regression Line
(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, humid)
regress_values = lat * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(lat,regress_values,"r-")
plt.annotate(line_eq,(5,85),fontsize=15,color="red")
plt.savefig("SH_Latitude_Humidity.png")
# Prints the scatter plot to the screen
plt.show()
#This plot shows the relationship between the latitude of cities in the Southern Hemisphere and their humidity percentage
#The farther a city is from the equator (x < 0), the lower the humidity percentage
| _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression | #Make plot
cloud = nor_hem_df["Cloudiness"]
lat = nor_hem_df["Lat"]
plt.scatter(lat, cloud, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
# Set y lim based on max and min cloudiness
plt.ylim(min(cloud) - 5, max(cloud) + 5)
# Set the x lim based on max and min lat
plt.xlim(min(lat) - 5, max(lat) + 5)
# Create a title, x label, and y label for our chart
plt.title("Northern Hemisphere - Latitude v Cloudiness Plot")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
# Add linear regression Line
(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, cloud)
regress_values = lat * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(lat,regress_values,"r-")
plt.annotate(line_eq,(60,50),fontsize=15,color="red")
plt.savefig("NH_Latitude_Cloudiness.png")
# Prints the scatter plot to the screen
plt.show()
#This plot shows the relationship between the latitude of cities in the Northern Hemisphere and their cloudiness percentage
#In general, the farther a city is from the equator (x > 0), the higher the cloudiness percentage | _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression | #Make plot
cloud = so_hem_df["Cloudiness"]
lat = so_hem_df["Lat"]
plt.scatter(lat, cloud, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
# Set y lim based on max and min cloudiness
plt.ylim(min(cloud) - 5, max(cloud) + 5)
# Set the x lim based on max and min lat
plt.xlim(min(lat) - 5, max(lat) + 5)
# Create a title, x label, and y label for our chart
plt.title("Southern Hemisphere - Latitude v Cloudiness Plot")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
# Add linear regression Line
(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, cloud)
regress_values = lat * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(lat,regress_values,"r-")
plt.annotate(line_eq,(0,60),fontsize=15,color="red")
plt.savefig("SH_Latitude_Cloudiness.png")
# Prints the scatter plot to the screen
plt.show()
#This plot shows the relationship between the latitude of cities in the Southern Hemisphere and their cloudiness percentage
#In general, the farther a city is from the equator (x < 0), the lower the cloudiness percentage | _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression | #Make plot
wind = nor_hem_df["Wind Speed"]
lat = nor_hem_df["Lat"]
plt.scatter(lat, wind, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
# Set y lim based on max and min wind speed
plt.ylim(min(wind) - 5, max(wind) + 5)
# Set the x lim based on max and min lat
plt.xlim(min(lat) - 5, max(lat) + 5)
# Create a title, x label, and y label for our chart
plt.title("Northern Hemisphere - Latitude v Wind Speed Plot")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
# Add linear regression Line
(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, wind)
regress_values = lat * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(lat,regress_values,"r-")
plt.annotate(line_eq,(6,30),fontsize=15,color="red")
plt.savefig("NH_Latitude_Windiness.png")
# Prints the scatter plot to the screen
plt.show()
#This plot shows the relationship between the latitude of cities in the Northern Hemisphere and their wind speed
#The slope here is very small; there is not a significant change in wind speed the farther a city is from the equator | _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression | #Make plot
wind = so_hem_df["Wind Speed"]
lat = so_hem_df["Lat"]
plt.scatter(lat, wind, marker="o", facecolors="blue", edgecolors="black", alpha=0.75)
# Set y lim based on max and min wind speed
plt.ylim(min(wind) - 5, max(wind) + 5)
# Set the x lim based on max and min lat
plt.xlim(min(lat) - 5, max(lat) + 5)
# Create a title, x label, and y label for our chart
plt.title("Southern Hemisphere - Latitude v Wind Speed Plot")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
# Add linear regression Line
(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, wind)
regress_values = lat * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(lat,regress_values,"r-")
plt.annotate(line_eq,(-50,23),fontsize=15,color="red")
plt.savefig("SH_Latitude_Windiness.png")
# Prints the scatter plot to the screen
plt.show()
#This plot shows the relationship between the latitude of cities in the Southern Hemisphere and their wind speed
#The slope here is also small, but there is a slight change in wind speed the farther a city is from the equator | _____no_output_____ | MIT | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge |
Exact Equation | x, p = np.cos(t - np.pi), -np.sin(t - np.pi)
fig = plt.figure(figsize=(5, 5))
for i in range(0, len(t), 1):
plt.plot(x[i:i+2], p[i:i+2], color='black', markersize=0) | _____no_output_____ | MIT | hmc/example4-leapfrog.ipynb | bjlkeng/sandbox |
Euler's Method Equation | # x' = p/m = p   (unit mass)
# p' = -kx = -x   (unit spring constant)
# x update: x = x + \eps * x' = x + \eps*(p)
# p update: p = p + \eps * p' = p - \eps*(x)
fig = plt.figure(figsize=(5, 5))
plt.title("Euler's Method (eps=0.1)")
plt.xlabel("position (q)")
plt.ylabel("momentum (p)")
for i in range(0, len(t), 1):
plt.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)
x_prev = 0
p_prev = 1
eps = 0.1
steps = 100
for i in range(0, steps, 1):
x_next = x_prev + eps * p_prev
p_next = p_prev - eps * x_prev
plt.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)
x_prev, p_prev = x_next, p_next | _____no_output_____ | MIT | hmc/example4-leapfrog.ipynb | bjlkeng/sandbox |
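Forward Euler's outward spiral is exactly quantifiable for this system: the update is a linear map whose Gram matrix is (1 + eps^2) times the identity, so each step multiplies the oscillator energy H = (x^2 + p^2)/2 by exactly (1 + eps^2). A quick check under the same unit-mass, unit-spring assumptions as the cell above:

```python
eps, steps = 0.1, 100
x, p = 0.0, 1.0
E0 = 0.5 * (x**2 + p**2)  # H = p^2/2 + x^2/2 for the unit oscillator
for _ in range(steps):
    # forward Euler: both updates use the OLD state
    x, p = x + eps * p, p - eps * x
E1 = 0.5 * (x**2 + p**2)
print(E1 / E0)  # grows like (1 + eps**2)**steps
```

After 100 steps at eps = 0.1 the energy has grown by a factor of about e, which is exactly the drift visible in the plot.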
Modified Euler's Method | # x' = p/m = p   (unit mass)
# p' = -kx = -x   (unit spring constant)
# x update: x = x + \eps * x' = x + \eps*(p)
# p update: p = p + \eps * p' = p - \eps*(x)
fig = plt.figure(figsize=(5, 5))
plt.title("Modified Euler's Method (eps=0.2)")
plt.xlabel("position (q)")
plt.ylabel("momentum (p)")
for i in range(0, len(t), 1):
plt.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)
x_prev = 0
p_prev = 1
eps = 0.2
steps = int(2*np.pi / eps)
for i in range(0, steps, 1):
p_next = p_prev - eps * x_prev
x_next = x_prev + eps * p_next
plt.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)
x_prev, p_prev = x_next, p_next
# x' = p/m = p   (unit mass)
# p' = -kx = -x   (unit spring constant)
# x update: x = x + \eps * x' = x + \eps*(p)
# p update: p = p + \eps * p' = p - \eps*(x)
fig = plt.figure(figsize=(5, 5))
plt.title("Modified Euler's Method (eps=1.32)")
plt.xlabel("position (q)")
plt.ylabel("momentum (p)")
for i in range(0, len(t), 1):
plt.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)
x_prev = 0.1
p_prev = 1
eps = 1.31827847281
#eps = 1.31827847281
steps = 50 #int(2*np.pi / eps)
for i in range(0, steps, 1):
p_next = p_prev - eps * x_prev
x_next = x_prev + eps * p_next
plt.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)
x_prev, p_prev = x_next, p_next | _____no_output_____ | MIT | hmc/example4-leapfrog.ipynb | bjlkeng/sandbox |
Leapfrog Method | # x' = p/m = p   (unit mass)
# p' = -kx = -x   (unit spring constant)
# x update: x = x + \eps * x' = x + \eps*(p)
# p update: p = p + \eps * p' = p - \eps*(x)
fig = plt.figure(figsize=(5, 5))
plt.title("Leapfrog Method (eps=0.2)")
plt.xlabel("position (q)")
plt.ylabel("momentum (p)")
for i in range(0, len(t), 1):
plt.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)
x_prev = 0
p_prev = 1
eps = 0.2
steps = int(2*np.pi / eps)
for i in range(0, steps, 1):
p_half = p_prev - eps/2 * x_prev
x_next = x_prev + eps * p_half
p_next = p_half - eps/2 * x_next
plt.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)
x_prev, p_prev = x_next, p_next
# x' = p/m = p   (unit mass)
# p' = -kx = -x   (unit spring constant)
# x update: x = x + \eps * x' = x + \eps*(p)
# p update: p = p + \eps * p' = p - \eps*(x)
fig = plt.figure(figsize=(5, 5))
plt.title("Leapfrog Method (eps=0.9)")
plt.xlabel("position (q)")
plt.ylabel("momentum (p)")
for i in range(0, len(t), 1):
plt.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)
x_prev = 0
p_prev = 1
eps = 0.9
steps = 3 * int(2*np.pi / eps + 0.1)
for i in range(0, steps, 1):
p_half = p_prev - eps/2 * x_prev
x_next = x_prev + eps * p_half
p_next = p_half - eps/2 * x_next
plt.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)
x_prev, p_prev = x_next, p_next | _____no_output_____ | MIT | hmc/example4-leapfrog.ipynb | bjlkeng/sandbox |
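In contrast to forward Euler, leapfrog's energy error stays bounded: for the unit oscillator the kick-drift-kick map exactly conserves the shadow quantity p^2 + (1 - eps^2/4) x^2, so the measured energy only oscillates within an O(eps^2) band. A quick numerical check of that boundedness, using the same updates as the cells above:

```python
eps, steps = 0.2, 1000
x, p = 0.0, 1.0
energies = []
for _ in range(steps):
    p_half = p - eps / 2 * x   # half kick
    x = x + eps * p_half       # full drift
    p = p_half - eps / 2 * x   # half kick
    energies.append(0.5 * (x**2 + p**2))
drift = max(energies) / min(energies)
print(drift)  # bounded oscillation, not secular growth
```

Even after ~30 periods the max/min energy ratio stays within a percent or two of 1, which is why the leapfrog trajectory in the plot stays on (a slightly distorted copy of) the exact circle.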
Combined Figure | fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15,15))
# subplot1
ax1.set_title("Euler's Method (eps=0.1)")
ax1.set_xlabel("position (q)")
ax1.set_ylabel("momentum (p)")
for i in range(0, len(t), 1):
ax1.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)
x_prev = 0
p_prev = 1
eps = 0.1
steps = 100
for i in range(0, steps, 1):
x_next = x_prev + eps * p_prev
p_next = p_prev - eps * x_prev
ax1.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)
x_prev, p_prev = x_next, p_next
# subplot2
ax2.set_title("Modified Euler's Method (eps=0.2)")
ax2.set_xlabel("position (q)")
ax2.set_ylabel("momentum (p)")
for i in range(0, len(t), 1):
ax2.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)
x_prev = 0
p_prev = 1
eps = 0.2
steps = int(2*np.pi / eps)
for i in range(0, steps, 1):
p_next = p_prev - eps * x_prev
x_next = x_prev + eps * p_next
ax2.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)
x_prev, p_prev = x_next, p_next
# subplot3
ax3.set_title("Leapfrog Method (eps=0.2)")
ax3.set_xlabel("position (q)")
ax3.set_ylabel("momentum (p)")
for i in range(0, len(t), 1):
ax3.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)
x_prev = 0
p_prev = 1
eps = 0.2
steps = int(2*np.pi / eps)
for i in range(0, steps, 1):
p_half = p_prev - eps/2 * x_prev
x_next = x_prev + eps * p_half
p_next = p_half - eps/2 * x_next
ax3.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)
x_prev, p_prev = x_next, p_next
# subplot4
ax4.set_title("Leapfrog Method (eps=0.9)")
ax4.set_xlabel("position (q)")
ax4.set_ylabel("momentum (p)")
for i in range(0, len(t), 1):
ax4.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)
x_prev = 0
p_prev = 1
eps = 0.9
steps = 3 * int(2*np.pi / eps + 0.1)
for i in range(0, steps, 1):
p_half = p_prev - eps/2 * x_prev
x_next = x_prev + eps * p_half
p_next = p_half - eps/2 * x_next
ax4.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)
x_prev, p_prev = x_next, p_next | _____no_output_____ | MIT | hmc/example4-leapfrog.ipynb | bjlkeng/sandbox |
Combined Figure - Square | fig, ((ax1, ax2)) = plt.subplots(1, 2, figsize=(15, 7.5))
# subplot1
ax1.set_title("Euler's Method (eps=0.2)")
ax1.set_xlabel("position (q)")
ax1.set_ylabel("momentum (p)")
for i in range(0, len(t), 1):
ax1.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)
def draw_square(ax, x, p, **args):
assert len(x) == len(p) == 4
x = list(x) + [x[0]]
p = list(p) + [p[0]]
ax.plot(x, p, **args)
def euler_update(x, p, eps):
assert len(x) == len(p) == 4
x_next = [0.]* 4
p_next = [0.]* 4
for i in range(4):
x_next[i] = x[i] + eps * p[i]
p_next[i] = p[i] - eps * x[i]
return x_next, p_next
def mod_euler_update(x, p, eps):
assert len(x) == len(p) == 4
x_next = [0.]* 4
p_next = [0.]* 4
for i in range(4):
x_next[i] = x[i] + eps * p[i]
p_next[i] = p[i] - eps * x_next[i]
return x_next, p_next
delta = 0.1
eps = 0.2
x_prev = np.array([0.0, 0.0, delta, delta]) + 0.0
p_prev = np.array([0.0, delta, delta, 0.0]) + 1.0
steps = int(2*np.pi / eps)
for i in range(0, steps, 1):
draw_square(ax1, x_prev, p_prev, marker='o', color='blue', markersize=5)
x_next, p_next = euler_update(x_prev, p_prev, eps)
x_prev, p_prev = x_next, p_next
# subplot2
ax2.set_title("Modified Euler's Method (eps=0.2)")
ax2.set_xlabel("position (q)")
ax2.set_ylabel("momentum (p)")
for i in range(0, len(t), 1):
ax2.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)
x_prev = np.array([0.0, 0.0, delta, delta]) + 0.0
p_prev = np.array([0.0, delta, delta, 0.0]) + 1.0
for i in range(0, steps, 1):
draw_square(ax2, x_prev, p_prev, marker='o', color='blue', markersize=5)
x_next, p_next = mod_euler_update(x_prev, p_prev, eps)
x_prev, p_prev = x_next, p_next
| _____no_output_____ | MIT | hmc/example4-leapfrog.ipynb | bjlkeng/sandbox |
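The evolving squares above illustrate phase-space area. This can be checked directly with the shoelace formula: modified (symplectic) Euler is a linear map with determinant exactly 1, so the square's area never changes, while forward Euler's determinant is 1 + eps^2 and the square grows. A sketch under the same updates as the cell above (the `shoelace_area` helper is an assumption, not from the notebook):

```python
import numpy as np

def shoelace_area(x, p):
    """Signed polygon area via the shoelace formula."""
    x, p = np.asarray(x, float), np.asarray(p, float)
    return 0.5 * np.sum(x * np.roll(p, -1) - np.roll(x, -1) * p)

eps, delta = 0.2, 0.1
x = np.array([0.0, 0.0, delta, delta])
p = np.array([0.0, delta, delta, 0.0]) + 1.0
a0 = abs(shoelace_area(x, p))
for _ in range(int(2 * np.pi / eps)):
    x = x + eps * p   # modified Euler: position first ...
    p = p - eps * x   # ... then momentum with the UPDATED position
a1 = abs(shoelace_area(x, p))
print(a0, a1)  # area preserved: both equal delta**2
```

Replacing the loop body with the simultaneous forward-Euler update would instead grow the area by (1 + eps^2) per step.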
Import and settings | import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
import matplotlib.patches as patches
from snaptools import manipulate as man
from snaptools import snapio
from snaptools import plot_tools
from snaptools import utils
from scipy.stats import binned_statistic
from mpl_toolkits.axes_grid1 import Grid
from snaptools import simulation
from snaptools import snapshot
from snaptools import measure
from pathos.multiprocessing import ProcessingPool as Pool
from mpl_toolkits.axes_grid1 import make_axes_locatable
import h5py
import pandas as PD
from scipy.interpolate import interp2d
colors = ['#332288', '#CC6677', '#6699CC', '#117733']
import matplotlib
matplotlib.rc('xtick', labelsize=10)
matplotlib.rc('ytick', labelsize=10)
matplotlib.rc('lines', linewidth=3)
%matplotlib inline
| _____no_output_____ | MIT | Measure_velocities.ipynb | stephenpardy/Offsets_Notebooks |
Snapshot | settings = plot_tools.make_defaults(first_only=True, com=True, xlen=20, ylen=20, in_min=0)
snap = snapio.load_snap('/usr/users/spardy/coors2/hpc_backup/working/Gas/Dehnen_LMC/collision/output_Dehnen_smc_45deg/snap_007.hdf5')
velfield = snap.to_velfield(parttype='gas', write=False, first_only=True, com=True)
centDict = snap.find_centers(settings)
com1, com2, gal1id, gal2id = snap.center_of_mass('stars')
velx = snap.vel['stars'][gal1id, 0]
vely = snap.vel['stars'][gal1id, 1]
velz = snap.vel['stars'][gal1id, 2]
posx = snap.pos['stars'][gal1id, 0]
posy = snap.pos['stars'][gal1id, 1]
posz = snap.pos['stars'][gal1id, 2]
posx -= com1[0]
posy -= com1[1]
posz -= com1[2]
x_axis = np.linspace(-15, 15, 512)
y_axis = x_axis
X, Y = np.meshgrid(x_axis, y_axis)
angle = np.arctan2(X, Y)
R = np.sqrt(X**2 + Y**2)*(-1)**(angle < 0)
# Use arctan to make all R values negative on other side of Y axis
#sparse_vfield = snap.to_velfield(lengthX=10, lengthY=10, BINS=128, write=False, first_only=True, com=True)
settings = plot_tools.make_defaults(first_only=True, com=True, xlen=10, ylen=10, in_min=0, BINS=128)
Z2 = snap.to_cube(theta=45, write=False, first_only=True, com=True, BINS=128, lengthX=10, lengthY=10)
mom1 = np.zeros((128, 128))
velocities = np.linspace(-200, 200, 100)
for i in xrange(Z2.shape[2]):
mom1 += Z2[:,:,i]*velocities[i]
mom1 /= np.sum(Z2, axis=2)
sparse_vfield = mom1
sparse_vfield[sparse_vfield != sparse_vfield] = 0
sparse_X, sparse_Y = np.meshgrid(np.linspace(-10, 10, 128), np.linspace(-10, 10, 128))
with open('./vels_i45deg.txt', 'w') as velfile:
velfile.write(' X Y VEL EVEL\n')
velfile.write(' asec asec km/s km/s\n')
velfile.write('-----------------------------------------\n')
for xi, yi, vi in zip(sparse_X.flatten(), sparse_Y.flatten(), sparse_vfield.flatten()):
velfile.write('%3.2f %3.2f %3.2f 0.001\n' % (xi, yi, vi))
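The signed-radius construction used above — flipping the sign of R wherever `arctan2(X, Y) < 0` so points on opposite sides of the y-axis get opposite radii — can be sanity-checked in isolation:

```python
import numpy as np

# Two mirror-image points: (3, 4) and (-3, -4)-side of the y-axis
X = np.array([3.0, -3.0])
Y = np.array([4.0, 4.0])
angle = np.arctan2(X, Y)  # note the (X, Y) argument order, as in the notebook
R = np.sqrt(X**2 + Y**2) * (-1) ** (angle < 0)
print(R)  # [ 5. -5.]
```

The `(-1) ** (angle < 0)` factor works because a boolean array exponent yields +1 where the condition is False and -1 where it is True.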
com1, com2, gal1id, gal2id = snap.center_of_mass('stars')
v1 = snap.vel['stars'][gal1id, :].mean(axis=0)
v2 = snap.vel['stars'][gal2id, :].mean(axis=0)
print(np.sqrt(np.sum((v1-v2)**2))) | 255.248
| MIT | Measure_velocities.ipynb | stephenpardy/Offsets_Notebooks |
Measure Velocities from Velfield | # Now try with the velfield
settings = plot_tools.make_defaults(first_only=True, com=True, xlen=20, ylen=20, in_min=0)
binDict = snap.bin_snap(settings)
Z2 = binDict['Z2']
measurements = man.fit_contours(Z2, settings, plot=True)
#measurementsV2 = man.fit_contours(~np.isnan(velfield), settingsV, plot=True, numcontours=1)
length = 10
thick = 0.1
codes = [Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
fig, axes = plt.subplots(2, 2, figsize=(10, 10))
axes = axes.flatten()
if len(np.where(measurements['eccs'] > 0.5)[0]) > 0:
bar_ind = np.max(np.where(measurements['eccs'] > 0.5)[0])
theta = measurements['angles'][bar_ind]
else:
theta = measurements['angles'][measurements['angles'] == measurements['angles']][-1]
#print(theta)
r = 0
im = axes[1].imshow(velfield, origin='lower', extent=[-20, 20, -20, 20], cmap='gnuplot')
im = axes[3].imshow(velfield, origin='lower', extent=[-20, 20, -20, 20], cmap='gnuplot')
#fig.colorbar(im)
#axes[1].add_artist(measurementsV['ellipses'][0])
#axes[3].add_artist(measurementsV2['ellipses'][0])
for i, t in enumerate(np.radians([theta, theta+90])):
x = r*np.sin(t)
y = r*np.cos(t)
verts = [
[length*np.cos(t)-thick*np.sin(t)-x,
length*np.sin(t)+thick*np.cos(t)+y],
[length*np.cos(t)+thick*np.sin(t)-x,
length*np.sin(t)-thick*np.cos(t)+y],
[-length*np.cos(t)+thick*np.sin(t)-x,
-length*np.sin(t)-thick*np.cos(t)+y],
[-length*np.cos(t)-thick*np.sin(t)-x,
-length*np.sin(t)+thick*np.cos(t)+y],
[0, 0]]
path = Path(verts, codes)
within_box = path.contains_points(np.array([X.flatten(), Y.flatten()]).T)
s = R.flatten()[within_box].argsort()
dist = R.flatten()[within_box][s]
vel = velfield.flatten()[within_box][s]
vel, binEdges, binNum = binned_statistic(dist, vel, bins=50)
rcoord = binEdges[np.nanargmin(np.abs(vel))]
print(np.abs(vel))
print(rcoord)
axes[i*2].set_title(str(i))
divider = make_axes_locatable(axes[i*2])
axOff = divider.append_axes("bottom", size=1.5, pad=0.1)
axes[i*2].set_xticks([])
axOff.set_xlabel('R [kpc]')
axOff.set_ylabel('Velocity [km s$^{-1}$]')
axOff.axvline(x=rcoord)
axOff.plot(binEdges[:-1], vel, 'b.')
axes[i*2].plot(binEdges[:-2], np.diff(vel), 'b.')
#diffs = np.abs(np.diff(vel))
#if np.any(diffs == diffs):
#rcoord = binEdges[np.nanargmax(np.abs(np.diff(vel)))]
xcoord = np.cos(t)*rcoord
ycoord = np.sin(t)*rcoord
patch = patches.PathPatch(path, facecolor='none', lw=2, alpha=0.5)
axes[1+2*i].add_patch(patch)
#axes[1+2*i].text((1+length)*np.cos(t)-thick*np.sin(t)-x,
# (1+length)*np.sin(t)+thick*np.cos(t)+y,
# str(i), fontsize=15, color='black')
axes[1+2*i].plot(xcoord, ycoord, 'k+', markersize=15, markeredgewidth=2)
axes[1+2*i].plot(centDict['barCenter'][0], centDict['barCenter'][1], 'g^', markersize=15, markeredgewidth=1, markerfacecolor=None)
axes[1+2*i].plot(centDict['haloCenter'][0], centDict['haloCenter'][1], 'bx', markersize=15, markeredgewidth=2)
axes[1+2*i].plot(centDict['diskCenters'][0], centDict['diskCenters'][1], 'c*', markersize=15, markeredgewidth=2)
centDict
#plt.tight_layout()
plt.show()
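The slit extraction above reduces to: sort points by distance along the slit, average velocities in distance bins with `binned_statistic`, then take the bin whose mean velocity is closest to zero as the kinematic center. A self-contained sketch with hypothetical synthetic data (a noisy linear rotation curve), since the notebook's `velfield` comes from simulation files:

```python
import numpy as np
from scipy.stats import binned_statistic

# Hypothetical slit sample: line-of-sight velocity rising linearly with distance
rng = np.random.default_rng(1)
dist = rng.uniform(-10, 10, 500)
vel = 12.0 * dist + rng.normal(0.0, 3.0, 500)

mean_vel, bin_edges, bin_num = binned_statistic(dist, vel, statistic="mean", bins=50)
# Kinematic-center estimate: left edge of the bin whose mean velocity is nearest zero
center = bin_edges[np.nanargmin(np.abs(mean_vel))]
print(center)  # close to the true zero-crossing at dist = 0
```

Note that `bin_edges` has `bins + 1` entries, so indexing it with the statistic's argmin returns the left edge of the selected bin, just as in the cells above.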
print(np.nanargmin(np.abs(vel)))
print(binEdges[np.nanargmin(np.abs(vel))])
#fig, axes = plt.subplots(1, 2, figsize=(20, 10))
fig, axis = plt.subplots(1, figsize=(10,10))
#plot_tools.plot_contours(density, measurements, 0, -1, [0, 0], settings, axis=axis)
im = axis.imshow(velfield, origin='lower', extent=[-15, 15, -15, 15], cmap='gnuplot')
#axes[1].imshow(mom1, origin='lower', extent=[-15, 15, -15, 15], cmap='gnuplot')
fig.colorbar(im)
length = 10
thick = 0.1
codes = [Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
theta = 110
for i, r in enumerate(xrange(-5, 5, 1)):
x = r*np.sin(np.radians(theta))
y = r*np.cos(np.radians(theta))
verts = [
[length*np.cos(np.radians(theta))-thick*np.sin(np.radians(theta))-x,
length*np.sin(np.radians(theta))+thick*np.cos(np.radians(theta))+y],
[length*np.cos(np.radians(theta))+thick*np.sin(np.radians(theta))-x,
length*np.sin(np.radians(theta))-thick*np.cos(np.radians(theta))+y],
[-length*np.cos(np.radians(theta))+thick*np.sin(np.radians(theta))-x,
-length*np.sin(np.radians(theta))-thick*np.cos(np.radians(theta))+y],
[-length*np.cos(np.radians(theta))-thick*np.sin(np.radians(theta))-x,
-length*np.sin(np.radians(theta))+thick*np.cos(np.radians(theta))+y],
[0, 0]]
path = Path(verts, codes)
patch = patches.PathPatch(path, facecolor='none', lw=2, alpha=0.75)
axis.add_patch(patch)
axis.text((1+length)*np.cos(np.radians(theta))-thick*np.sin(np.radians(theta))-x,
(1+length)*np.sin(np.radians(theta))+thick*np.cos(np.radians(theta))+y,
str(i), fontsize=15, color='black')
#axes[0].set_xlim(-15,15)
#axes[0].set_ylim(-15,15)
# Now try with the velfield
length = 10
thick = 0.1
codes = [Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
fig, axes = plt.subplots(2, 5, figsize=(20, 6))
axes = axes.flatten()
theta = np.radians(110)
for i, r in enumerate(xrange(-5, 5, 1)):
x = r*np.sin(theta)
y = r*np.cos(theta)
verts = [
[length*np.cos(theta)-thick*np.sin(theta)-x,
length*np.sin(theta)+thick*np.cos(theta)+y],
[length*np.cos(theta)+thick*np.sin(theta)-x,
length*np.sin(theta)-thick*np.cos(theta)+y],
[-length*np.cos(theta)+thick*np.sin(theta)-x,
-length*np.sin(theta)-thick*np.cos(theta)+y],
[-length*np.cos(theta)-thick*np.sin(theta)-x,
-length*np.sin(theta)+thick*np.cos(theta)+y],
[0, 0]]
path = Path(verts, codes)
within_box = path.contains_points(np.array([X.flatten(), Y.flatten()]).T)
s = Y.flatten()[within_box].argsort()
dist = Y.flatten()[within_box][s]
vel = velfield.flatten()[within_box][s]
vel, binEdges, binNum = binned_statistic(dist, vel, bins=50)
axes[i].set_title(str(i))
divider = make_axes_locatable(axes[i])
axOff = divider.append_axes("bottom", size=1, pad=0.1)
axes[i].set_xticks([])
axOff.plot(binEdges[:-1], vel, 'b.')
axes[i].plot(binEdges[:-2], np.diff(vel), 'b.')
xcoord = binEdges[np.nanargmax(np.abs(np.diff(vel)))]
ycoord = np.tan(theta)*xcoord
print(xcoord, ycoord)
#plt.tight_layout()
plt.show()
!ls ../ | Astro715_HW3.ipynb Illustris.ipynb plottingtest.ipynb
Data Impact_parameter.ipynb PythonNotebooks
Dehnen_Burkert_fit.ipynb isima_notebooks Stream_Notebooks
Ellipse_Stuff.ipynb llustrisPlots.ipynb test.out
GalCoords.ipynb Mathematica_Notebooks TestPotential.ipynb
Haro11.ipynb Offsets_Notebooks vels_i45deg.txt
Haro11_planning.ipynb outfile.txt
| MIT | Measure_velocities.ipynb | stephenpardy/Offsets_Notebooks |
Plot 2d velocities vs. disk fit | names = [r'$\theta = 45$', r'$\theta = 90$',
r'$\theta = 0$', r'$\theta = 0$ - Retrograde']
fig = plt.figure(figsize=(15, 10))
colors = ['#332288', '#CC6677', '#6699CC', '#117733']
grid = Grid(fig, 111,
nrows_ncols=(2, 2),
axes_pad=0.0,
label_mode="L",
share_all=True
)
groups = ['45deg',
'90deg',
'0deg',
'0deg_retro']
for group, ax in zip(groups, grid):
with h5py.File('../Data/offSetsDehnen_best.hdf5', 'r') as offsets:
centers = offsets['/stars/%s/' % group]
haloCenters = centers['halo_pos'][()]
diskCenters = centers['disk_pos'][()]
times = centers['time'][()]
velcents2d = np.loadtxt('/usr/users/spardy/coors/data/2dVels/xy_%s.txt' % group)
velcents2d = np.array(velcents2d).reshape(len(velcents2d)/2, 2, order='F')
ax.plot(times[:-1],
np.sqrt(np.sum((diskCenters[:-1, :]-haloCenters[:-1, :])**2, axis=1)),
label='Photometric')
ax.plot(times[:-1],
np.sqrt(np.sum((velcents2d-haloCenters[:-1, :])**2, axis=1)),
label='2D Velocity', color=colors[1])
for i, (ax, name) in enumerate(zip(grid, names)):
if i == 0:
yticks = ax.yaxis.get_major_ticks()
yticks[0].label1.set_visible(False)
ax.set_xlim(0, 1.9)
#ax.set_ylim(0, 4.0)
#ax.errorbar([-0.75], [1.1], yerr=distErrs, label='Typical Error')
ax.legend(fancybox=True, loc='upper right')
if (i == 0) or (i == 2):
ax.set_ylabel('Offset from Halo \nCenter [kpc]', fontsize=20)
#axOff.set_ylabel('D$_{Disk}$ - D$_{Bar}$ \n [kpc]', fontsize=20)
ax.set_xlabel("Time [Gyr]", fontsize=20)
ax.annotate(name, xy=(0.05, 0.8), color='black', xycoords='axes fraction',
bbox=dict(facecolor='gray', edgecolor='black',
boxstyle='round, pad=1', alpha=0.5))
plt.subplots_adjust(wspace=0.04) # Default is 0.2
plt.savefig('../../Offsets_paper/plots/velocity_centers.pdf', dpi=600)
fig, axes = plt.subplots(1, 3, figsize=(22.5, 7.5))
with h5py.File('/usr/users/spardy/velocity_offsets.hdf5', 'r') as velFile:
grp = velFile['Dehnen_45deg/']
velcents2d = np.loadtxt('/usr/users/spardy/coors/data/2dVels/xy.txt')
velcents2d = np.array(velcents2d).reshape(len(velcents2d) // 2, 2, order='F')  # integer division so reshape gets an int
# Minor Axis
times = grp['time'][()]
velCenters = grp['velCenters'][()]  # keep as (N, 2) so the per-axis columns below are valid
velCent = velCenters[:, 1]
axes[0].plot(times, velCent, zorder=-1, label='Minor-Axis', color=colors[1], linestyle='--')
velCent = PD.rolling_mean(velCenters[:, 1], 3)
axes[0].plot(times, np.sqrt(np.sum(grp['diskCenters'][()]**2, axis=1)), label='Disk')
axes[0].plot(times, velCent, zorder=-1, label='Avg.', color='gray')
# major axis
velCent = velCenters[:, 0]
axes[1].plot(times, velCent, zorder=-1, label='Major-Axis', color=colors[1], linestyle='--')
velCent = PD.rolling_mean(velCenters[:, 0], 3)  # major-axis column (0), matching the line above
axes[1].plot(times, np.sqrt(np.sum(grp['diskCenters'][()]**2, axis=1)), label='Disk')
axes[1].plot(times, velCent, zorder=-1, label='Avg.', color='gray')
# 2d fit
axes[2].plot(times, np.sqrt(np.sum(grp['diskCenters'][()]**2, axis=1)), label='Disk')
axes[2].plot(times, np.sqrt(np.sum(velcents2d**2, axis=1)), label='2D Velocity', color=colors[1])
for axis in axes:
axis.legend()
axis.set_xlabel('Time [Gyr]')
axis.set_ylabel('Distance from Frame Center [kpc]')
data = np.loadtxt("/usr/users/spardy/coors/data/2dVels/vel008_0.txt", skiprows=3, usecols=(0,1,2))
model = np.loadtxt("/usr/users/spardy/coors/data/2dVels/LMC_OUT_0/vel008_0.mod", skiprows=2, usecols=(0,1,2))
#dataX = data[:, 0].reshape(256, 256)
#dataY = data[:, 1].reshape(256, 256)
dataZ = data[:, 2].reshape(256, 256)
print(dataZ.shape)
#dataF = interp2d(data[:, 0], data[:, 1], data[:, 2])
print(model.shape)
binsize = 20./256.
Xind = np.array(np.floor((model[:, 0]+10)/binsize)).astype(int)
Yind = np.array(np.floor((model[:, 1]+10)/binsize)).astype(int)
#modelX = model[:, 0].reshape(sz, sz)
#modelY = model[:, 1].reshape(sz, sz)
#modelZ = model[:, 2].reshape(sz, sz)
#XIND, YIND = np.meshgrid(Xind, Yind)
sparseImg = np.ones((256, 256))*np.nan
#sparseImg[XIND, YIND] = dataZ[XIND, YIND]
sparseModel = np.ones((256, 256))*np.nan
for xi, yi, z in zip(Xind, Yind, model[:, 2]):
sparseModel[xi, yi] = z
sparseImg[Xind, Yind] = dataZ[Xind, Yind]
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
axes[0].imshow(sparseImg, extent=[-10, 10, -10, 10])
axes[1].imshow(sparseModel.T, extent=[-10, 10, -10, 10])
axes[2].imshow(sparseModel.T-sparseImg, extent=[-10, 10, -10, 10])
#axes[1].plot(model[:, 0], model[:, 2]) | _____no_output_____ | MIT | Measure_velocities.ipynb | stephenpardy/Offsets_Notebooks |
OLD STUFF | theta = np.radians(20)
r = 0
x = r*np.sin(theta)
y = r*np.cos(theta)
verts = [
[length*np.cos(theta)-thick*np.sin(theta)-x,
length*np.sin(theta)+thick*np.cos(theta)+y],
[length*np.cos(theta)+thick*np.sin(theta)-x,
length*np.sin(theta)-thick*np.cos(theta)+y],
[-length*np.cos(theta)+thick*np.sin(theta)-x,
-length*np.sin(theta)-thick*np.cos(theta)+y],
[-length*np.cos(theta)-thick*np.sin(theta)-x,
-length*np.sin(theta)+thick*np.cos(theta)+y],
[0, 0]]
path = Path(verts, codes)
within_box = path.contains_points(np.array([X.flatten(), Y.flatten()]).T)
s = Y.flatten()[within_box].argsort()
dist = Y.flatten()[within_box][s]
vel = velfield.flatten()[within_box][s]
vel, binEdges, binNum = binned_statistic(dist, vel, bins=50)
xcoord = binEdges[np.nanargmax(np.abs(np.diff(vel)))]
ycoord = np.tan(theta)*xcoord
print(xcoord, ycoord)
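The four-corner slit construction above is repeated in several cells below; a small helper makes the geometry explicit. This is a sketch: the name `slit_vertices` and the symmetric offset convention are my own, not from the original notebook.

```python
import numpy as np

def slit_vertices(theta, length, thick, x0=0.0, y0=0.0):
    """Corners of a thin rectangle (half-length `length`, half-thickness
    `thick`) rotated by `theta` radians and offset to (x0, y0)."""
    c, s = np.cos(theta), np.sin(theta)
    return [
        [ length*c - thick*s + x0,  length*s + thick*c + y0],
        [ length*c + thick*s + x0,  length*s - thick*c + y0],
        [-length*c + thick*s + x0, -length*s - thick*c + y0],
        [-length*c - thick*s + x0, -length*s + thick*c + y0],
        [0, 0],  # dummy vertex consumed by Path.CLOSEPOLY
    ]
```

With `theta = 0` this reduces to an axis-aligned box, which is an easy sanity check on the corner ordering.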
# MOM1 maps
settings = plot_tools.make_defaults(first_only=True, com=True, xlen=20, ylen=20, in_min=0)
Z2 = snap.to_cube(theta=20, write=False, first_only=True, com=True)
mom1 = np.zeros((512, 512))
velocities = np.linspace(-200, 200, 100)
for i in range(Z2.shape[2]):
mom1 += Z2[:,:,i]*velocities[i]
mom1 /= np.sum(Z2, axis=2)
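The channel loop above computes an intensity-weighted mean velocity (a moment-1 map). For reference, the same reduction can be vectorized with `tensordot`; this is a sketch on a tiny synthetic cube, not the notebook's data.

```python
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((4, 4, 5))            # (x, y, velocity-channel) intensities
velocities = np.linspace(-200, 200, 5)  # channel velocities

# moment 1: sum_i I_i * v_i / sum_i I_i, per pixel
mom1 = np.tensordot(cube, velocities, axes=([2], [0])) / cube.sum(axis=2)

# same result as the explicit channel loop used in the notebook
loop = np.zeros((4, 4))
for i in range(cube.shape[2]):
    loop += cube[:, :, i] * velocities[i]
loop /= cube.sum(axis=2)
assert np.allclose(mom1, loop)
```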
x_axis = np.linspace(-15, 15, 512)
y_axis = x_axis
X, Y = np.meshgrid(x_axis, y_axis)
density = np.sum(Z2, axis=2)
density[density > 0] = np.log10(density[density > 0])
settings = plot_tools.make_defaults(xlen=20, ylen=20, in_min=0, in_max=6)
measurements = man.fit_contours(density, settings, plot=True)
#Using the moment1 map
length = 10
thick = 0.1
codes = [Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
fig, axes = plt.subplots(2, 5, figsize=(20, 6))
axes = axes.flatten()
theta = np.radians(measurements['angles'][0]-90)
for i, r in enumerate(range(-5, 5, 1)):
x = r*np.sin(theta)
y = r*np.cos(theta)
verts = [
[length*np.cos(theta)-thick*np.sin(theta)-x,
length*np.sin(theta)+thick*np.cos(theta)+y],
[length*np.cos(theta)+thick*np.sin(theta)-x,
length*np.sin(theta)-thick*np.cos(theta)+y],
[-length*np.cos(theta)+thick*np.sin(theta)-x,
-length*np.sin(theta)-thick*np.cos(theta)+y],
[-length*np.cos(theta)-thick*np.sin(theta)-x,
-length*np.sin(theta)+thick*np.cos(theta)+y],
[0, 0]]
path = Path(verts, codes)
within_box = path.contains_points(np.array([X.flatten(), Y.flatten()]).T)
s = X.flatten()[within_box].argsort()
dist = X.flatten()[within_box][s]
vel = mom1.flatten()[within_box][s]
vel, binEdges, binNum = binned_statistic(dist, vel, bins=50)
axes[i].set_title(str(i))
divider = make_axes_locatable(axes[i])
axOff = divider.append_axes("bottom", size=1, pad=0.1)
axes[i].set_xticks([])
axOff.plot(binEdges[:-1], vel, 'b.')
axes[i].plot(binEdges[:-2], np.diff(vel), 'b.')
#plt.tight_layout()
plt.show()
for i, theta in enumerate(range(0, 180, 18)):
verts = [
[length*np.cos(np.radians(theta))-thick*np.sin(np.radians(theta)),
length*np.sin(np.radians(theta))+thick*np.cos(np.radians(theta))],
[length*np.cos(np.radians(theta))+thick*np.sin(np.radians(theta)),
length*np.sin(np.radians(theta))-thick*np.cos(np.radians(theta))],
[-length*np.cos(np.radians(theta))+thick*np.sin(np.radians(theta)),
-length*np.sin(np.radians(theta))-thick*np.cos(np.radians(theta))],
[-length*np.cos(np.radians(theta))-thick*np.sin(np.radians(theta)),
-length*np.sin(np.radians(theta))+thick*np.cos(np.radians(theta))],
[0, 0]]
path = Path(verts, codes)
patch = patches.PathPatch(path, facecolor='none', lw=2, alpha=0.75)
axes[1].add_patch(patch)
axes[1].text((1+length)*np.cos(np.radians(theta))-thick*np.sin(np.radians(theta)),
(1+length)*np.sin(np.radians(theta))+thick*np.cos(np.radians(theta)),
str(i), fontsize=15)
x1 = 10
dy = 0.1
codes = [Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
fig, axes = plt.subplots(2, 5, figsize=(20, 4))
axes = axes.flatten()
for i, theta in enumerate(range(0, 180, 18)):
verts = [
[x1*np.cos(np.radians(theta))-dy*np.sin(np.radians(theta)),
x1*np.sin(np.radians(theta))+dy*np.cos(np.radians(theta))],
[x1*np.cos(np.radians(theta))+dy*np.sin(np.radians(theta)),
x1*np.sin(np.radians(theta))-dy*np.cos(np.radians(theta))],
[-x1*np.cos(np.radians(theta))+dy*np.sin(np.radians(theta)),
-x1*np.sin(np.radians(theta))-dy*np.cos(np.radians(theta))],
[-x1*np.cos(np.radians(theta))-dy*np.sin(np.radians(theta)),
-x1*np.sin(np.radians(theta))+dy*np.cos(np.radians(theta))],
[0, 0]]
path = Path(verts, codes)
within_box = path.contains_points(np.array([X.flatten(), Y.flatten()]).T)
axes[i].plot(X.flatten()[within_box], mom1.flatten()[within_box], 'b.')
axes[i].set_title(str(i))
plt.tight_layout()
plt.show()
x1 = 10
dy = 0.1
codes = [Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
fig, axes = plt.subplots(2, 5, figsize=(20, 4))
axes = axes.flatten()
for i, y in enumerate(range(-10, 10, 2)):
verts = [
[x1, dy+y],
[x1, -dy+y],
[-x1, -dy+y],
[-x1, dy+y],
[0, 0]]
path = Path(verts, codes)
patch = patches.PathPatch(path, facecolor='none', lw=2)
#axes[i].add_patch(patch)
#axes[i].set_xlim(-20,20)
#axes[i].set_ylim(-20,20)
within_box = path.contains_points(np.array([posx, posy]).T)
axes[i].plot(posx[within_box][::10], vely[within_box][::10], 'b.')  # apply the mask first, then take every 10th point
plt.show()
#fig = plt.figure()
#ax = fig.add_subplot(111)
#patch = patches.PathPatch(path, facecolor='none', lw=2)
#ax.add_patch(patch)
#ax.set_xlim(-20,20)
#ax.set_ylim(-20,20)
#plt.show() | _____no_output_____ | MIT | Measure_velocities.ipynb | stephenpardy/Offsets_Notebooks |
A CASE STUDY OF FACTORS AFFECTING LOAN APPROVAL

1. Defining the question

a) Specifying the analysis question
Is there a relationship between gender, credit history, the area one lives in, and loan status?

b) Defining the metric for success
Be able to run statistically correct hypothesis tests and reach a meaningful conclusion.

c) Understanding the context
In finance, a loan is the lending of money by one or more individuals, organizations, or other entities to other individuals and organizations. Repaying a loan well builds a good credit history and improves the chance of securing further loans. Loans matter because they provide funds when you don't have cash on hand and can be of great help whenever you are in a fix.

d) Recording the experimental design
We will conduct exploratory data analysis, including univariate, bivariate, and multivariate analysis. To answer our research question we will carry out hypothesis testing using the chi-square test to examine the relationships between our independent variables and the target variable, and draw significant conclusions.

e) Data Relevance
The dataset contains demographic information on factors that determine whether one gets a loan or not. The data was extracted from Kaggle, a reputable source, and is relevant for our analysis.

2. Importing relevant libraries | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import f_oneway
from scipy.stats import ttest_ind
import scipy.stats as stats
from sklearn.decomposition import PCA | _____no_output_____ | MIT | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT |
3. Loading and checking the data | # Loading our dataset
loans_df = pd.read_csv('loans.csv')
# Getting a preview of the first 10 rows
loans_df.head(10)
# Determining the number of rows and columns in the dataset
loans_df.shape
# Determining the names of the columns present in the dataset
loans_df.columns
# Description of the quantitative columns
loans_df.describe()
# Description of the qualitative columns
loans_df.describe(include = 'object')
# Checking if each column is of the appropriate data type
loans_df.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 614 entries, 0 to 613
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Loan_ID 614 non-null object
1 Gender 601 non-null object
2 Married 611 non-null object
3 Dependents 599 non-null object
4 Education 614 non-null object
5 Self_Employed 582 non-null object
6 ApplicantIncome 614 non-null int64
7 CoapplicantIncome 614 non-null float64
8 LoanAmount 592 non-null float64
9 Loan_Amount_Term 600 non-null float64
10 Credit_History 564 non-null float64
11 Property_Area 614 non-null object
12 Loan_Status 614 non-null object
dtypes: float64(4), int64(1), object(8)
memory usage: 62.5+ KB
| MIT | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT |
4. External data source validation

> We validated our dataset using information from the following link:
> http://calcnet.mth.cmich.edu/org/spss/prj_loan_data.htm

5. Data cleaning

Uniformity | # Changing all column names to lowercase, stripping white spaces
# and removing all underscores
loans_df.columns = loans_df.columns.str.lower().str.strip().str.replace("_","")
# Confirming the changes made
loans_df.head(5) | _____no_output_____ | MIT | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT |
Data Completeness | # Determining the number of null values in each column
loans_df.isnull().sum()
#Imputing Loan Amount with mean
loans_df['loanamount'] = loans_df['loanamount'].fillna(loans_df['loanamount'].mean())
#FowardFill For LoanTerm
loans_df['loanamountterm'] = loans_df['loanamountterm'].fillna(method = "ffill")
#Assuming Missing values imply bad credit History - replacing nulls with 0
loans_df['credithistory'] = loans_df['credithistory'].fillna(0)
#Imputing gender, married, and selfemployed
loans_df['dependents']=loans_df['dependents'].fillna(loans_df['dependents'].mode()[0])
loans_df['gender']=loans_df['gender'].fillna(loans_df['gender'].mode()[0])
loans_df['married']=loans_df['married'].fillna(loans_df['married'].mode()[0])
loans_df['selfemployed']=loans_df['selfemployed'].fillna(loans_df['selfemployed'].mode()[0])
# Confirming our changes after dealing with null values
loans_df.isnull().sum()
# Previewing the data
loans_df.head(10) | _____no_output_____ | MIT | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT |
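The three imputation strategies used above (column mean for a numeric field, forward fill for the loan term, mode for categoricals) can be seen in isolation on a toy frame. This is a sketch; the column values here are made up, not the loans data.

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'amount': [100.0, np.nan, 300.0],    # numeric -> mean
    'term':   [360.0, np.nan, 180.0],    # ordered -> forward fill
    'gender': ['Male', None, 'Male'],    # categorical -> mode
})
toy['amount'] = toy['amount'].fillna(toy['amount'].mean())     # NaN -> 200.0
toy['term'] = toy['term'].ffill()                              # same as fillna(method='ffill')
toy['gender'] = toy['gender'].fillna(toy['gender'].mode()[0])  # most frequent value
```

Forward fill makes the imputed value depend on row order, which is worth keeping in mind for the loan-term column.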
Data Consistency | # Checking if there are any duplicated rows
loans_df.duplicated().sum()
# Checking for any anomalies in the qualitative variables
qcol = ['gender', 'married', 'dependents', 'education',
'selfemployed','credithistory', 'propertyarea', 'loanstatus']
for col in qcol:
print(col, ':', loans_df[col].unique())
#Checking for Outliers
cols = ['applicantincome','coapplicantincome', 'loanamount', 'loanamountterm']
for column in cols:
plt.figure()
loans_df.boxplot([column], fontsize= 12)
plt.ylabel('count', fontsize = 12)
plt.title('Boxplot - {}'.format(column), fontsize = 16)
# Determining how many rows would be lost if outliers were removed
# Calculating our first, third quantiles and then later our IQR
# ---
Q1 = loans_df.quantile(0.25)
Q3 = loans_df.quantile(0.75)
IQR = Q3 - Q1
# Removing outliers based on the IQR range and storing the result in 'loans_df_new'
# ---
#
loans_df_new = loans_df[~((loans_df < (Q1 - 1.5 * IQR)) | (loans_df > (Q3 + 1.5 * IQR))).any(axis=1)]
# Printing the shape of our new dataset
# ---
#
print(loans_df_new.shape)
# Printing the shape of our old dataset
# ---
#
print(loans_df.shape)
# Number of rows removed
rows_removed = loans_df.shape[0] - loans_df_new.shape[0]
rows_removed
# Percentage of rows removed of the percentage
row_percent = (rows_removed/loans_df.shape[0]) * 100
row_percent
# Exporting our data
loans_df.to_csv('loanscleaned.csv') | _____no_output_____ | MIT | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT |
6. Exploratory Data Analysis

a) Univariate Analysis | # Previewing the dataset
loans_df.head(4)
# Loan Status
Yes = loans_df[loans_df["loanstatus"] == 'Y'].shape[0]
No = loans_df[loans_df["loanstatus"] == 'N'].shape[0]
print(f"Yes = {Yes}")
print(f"No = {No}")
print(' ')
print(f"Proportion of Yes = {(Yes / len(loans_df['loanstatus'])) * 100:.2f}%")
print(f"Proportion of No = {(No / len(loans_df['loanstatus'])) * 100:.2f}%")
print(' ')
plt.figure(figsize=(10, 8))
sns.countplot(x = loans_df["loanstatus"])
plt.xticks((0, 1), ["Yes", "No"], fontsize = 14)
plt.xlabel("Loan Approval Status", fontsize = 14)
plt.ylabel("Frequency", fontsize = 14)
plt.title("Number of Approved and Disapproved Loans", y=1, fontdict={"fontsize": 20});
# Pie Chart for Gender
gender = loans_df.gender.value_counts()
plt.figure(figsize= (8,5), dpi=100)
# Highlighting yes
explode = (0.1, 0)
colors = ['blue', 'orange']
# Plotting our pie chart
gender.plot.pie(explode = explode, colors = colors, autopct='%1.1f%%', shadow=True, startangle=140)
plt.axis('equal')
plt.title('Pie chart of Gender Distribution')
plt.show()
# Pie Chart for Education
education = loans_df.education.value_counts()
plt.figure(figsize= (8,5), dpi=100)
# Highlighting yes
explode = (0.1, 0)
colors = ['blue', 'orange']
# Plotting our pie chart
education.plot.pie(explode = explode, colors = colors, autopct='%1.1f%%', shadow=True, startangle=140)
plt.axis('equal')
plt.title('Pie chart of Education')
plt.show()
# Marital status
Yes = loans_df[loans_df["married"] == 'Yes'].shape[0]
No = loans_df[loans_df["married"] == 'No'].shape[0]
print(f"Yes = {Yes}")
print(f"No = {No}")
print(' ')
print(f"Proportion of Yes = {(Yes / len(loans_df['married'])) * 100:.2f}%")
print(f"Proportion of No = {(No / len(loans_df['married'])) * 100:.2f}%")
print(' ')
plt.figure(figsize=(10, 8))
sns.countplot(x = loans_df["married"])
plt.xticks((0, 1), ["No", "Yes"], fontsize = 14)
plt.xlabel("Marital Status", fontsize = 14)
plt.ylabel("Frequency", fontsize = 14)
plt.title("Marital Status", y=1, fontdict={"fontsize": 20});
# Frequency table for Property Area in percentage
round(loans_df.propertyarea.value_counts(normalize = True),2)
# Pie Chart for Credit History
credit = loans_df.credithistory.value_counts()
plt.figure(figsize= (8,5), dpi=100)
# Highlighting yes
explode = (0.1, 0)
colors = ['blue', 'orange']
# Plotting our pie chart
credit.plot.pie(explode = explode, colors = colors, autopct='%1.1f%%', shadow=True, startangle=140)
plt.axis('equal')
plt.title('Pie chart of Credit History')
plt.show()
# Frequency table for Self Employed status in percentage
round(loans_df.selfemployed.value_counts(normalize = True),2)
# Frequency table for Dependents in percentage
round(loans_df.dependents.value_counts(normalize = True),2)
# Histogram for Applicant Income
def histogram(var1, bins):
plt.figure(figsize= (10,8)),
sns.set_style('darkgrid'),
sns.set_palette('colorblind'),
sns.histplot(x = var1, data=loans_df, bins = bins , shrink= 0.9, kde = True)
histogram('applicantincome', 50)
plt.title('Histogram of the Applicant Income', fontsize = 16)
plt.xlabel('Applicant Income', fontsize = 14)
plt.ylabel('Count', fontsize = 14)
plt.xticks(fontsize = 14)
plt.yticks(fontsize = 14)
plt.show()
# Checking on coefficent of variance, skewness and kurtosis
print('The skewness is:', loans_df['applicantincome'].skew())
print('The kurtosis is:', loans_df['applicantincome'].kurt())
print('The coefficient of variation is:', loans_df['applicantincome'].std()/loans_df['applicantincome'].mean())
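For a reference point when reading these numbers: a distribution with a long right tail (like the incomes here) has positive skewness, while a symmetric one sits near zero. A sketch on synthetic data:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
right_skewed = rng.exponential(scale=1.0, size=10_000)  # long right tail
symmetric = rng.normal(size=10_000)

print(skew(right_skewed))  # positive (about 2 for an exponential)
print(skew(symmetric))     # close to 0
```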
# Histogram for Loan Amount
histogram('loanamount', 50)
plt.title('Histogram of the Loan Amount Given', fontsize = 16)
plt.xlabel('Applicant Income', fontsize = 14)
plt.ylabel('Count', fontsize = 14)
plt.xticks(fontsize = 14)
plt.yticks(fontsize = 14)
plt.show()
# Checking on coefficent of variance, skewness and kurtosis
print('The skewness is:', loans_df['loanamount'].skew())
print('The kurtosis is:', loans_df['loanamount'].kurt())
print('The coefficient of variation is:', loans_df['loanamount'].std()/loans_df['loanamount'].mean())
# Histogram for Co-applicant Income
histogram('coapplicantincome', 50)
plt.title('Histogram of the Co-applicant Income', fontsize = 16)
plt.xlabel('Co-applicant Income', fontsize = 14)
plt.ylabel('Count', fontsize = 14)
plt.xticks(fontsize = 14)
plt.yticks(fontsize = 14)
plt.show()
# Checking on coefficent of variance, skewness and kurtosis
print('The skewness is:', loans_df['coapplicantincome'].skew())
print('The kurtosis is:', loans_df['coapplicantincome'].kurt())
print('The coefficient of variation is:', loans_df['coapplicantincome'].std()/loans_df['coapplicantincome'].mean())
# Looking at the unique values of the loan amount term
loans_df.loanamountterm.unique()
# Measures of central tendency for our quantitative variables
loans_df.describe() | _____no_output_____ | MIT | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT |
b) Bivariate Analysis | # Preview of dataset
loans_df.head(3)
# Comparison of Self employment Status and Loan Status
table=pd.crosstab(loans_df['selfemployed'],loans_df['loanstatus'])
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize= (10,8), stacked=False)
plt.title('Bar Chart of Self Employment Status and Loan Status', fontsize = 16)
plt.xlabel('Self Employed', fontsize = 14)
plt.ylabel('Proportion of Respondents', fontsize = 14)
plt.xticks(rotation = 360, fontsize = 14)
plt.yticks(fontsize = 14)
plt.show()
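The `table.div(table.sum(1).astype(float), axis=0)` idiom used in these cells row-normalizes the crosstab; `pd.crosstab` can also do this directly with `normalize='index'`. A sketch on a toy table:

```python
import pandas as pd

df = pd.DataFrame({'group':  ['A', 'A', 'B', 'B', 'B'],
                   'status': ['Y', 'N', 'Y', 'Y', 'N']})

# normalize='index' divides each row by its row total,
# equivalent to table.div(table.sum(1), axis=0)
props = pd.crosstab(df['group'], df['status'], normalize='index')
print(props)
```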
# Comparison of Education and Loan Status
table=pd.crosstab(loans_df['education'],loans_df['loanstatus'])
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize = (10,8), stacked=False)
plt.title('Bar Chart of Education and Loan Status', fontsize = 16)
plt.xlabel('Education', fontsize = 14)
plt.ylabel('Proportion of Respondents', fontsize = 14)
plt.xticks(rotation = 360, fontsize = 14)
plt.yticks(fontsize = 14)
plt.show()
# Comparison of Gender and Loan Status
table=pd.crosstab(loans_df['gender'],loans_df['loanstatus'])
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar',figsize = (10,8), stacked=False)
plt.title('Bar Chart of Gender and Loan Status', fontsize = 16)
plt.xlabel('Gender', fontsize = 14)
plt.ylabel('Proportion of Respondents', fontsize = 14)
plt.xticks(rotation = 360, fontsize = 14)
plt.yticks(fontsize = 14)
plt.show()
# Comparison of Marital Status and Loan Status
table=pd.crosstab(loans_df['married'],loans_df['loanstatus'])
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize = (10,8), stacked=False)
plt.title('Bar Chart of Marital Status to Loan Status', fontsize = 16)
plt.xlabel('Marital Status',fontsize = 14)
plt.ylabel('Proportion of Respondents', fontsize = 14)
plt.xticks(rotation = 360, fontsize = 14)
plt.yticks(fontsize = 14)
plt.show()
# Comparison of Credit History and Loan Status
table=pd.crosstab(loans_df['credithistory'],loans_df['loanstatus'])
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize = (10,8), stacked=False)
plt.title('Bar Chart of Credit History and Loan Status', fontsize = 16)
plt.xlabel('Credit History', fontsize = 14)
plt.ylabel('Proportion of Respondents', fontsize = 14)
plt.xticks(rotation = 360, fontsize = 14)
plt.yticks(fontsize = 14)
plt.show()
# Comparison of Property Area and Loan Status
table=pd.crosstab(loans_df['propertyarea'],loans_df['loanstatus'])
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize = (10,8), stacked=False)
plt.title('Bar Chart of Area and Loan Status', fontsize = 16)
plt.xlabel('Area', fontsize = 14)
plt.ylabel('Proportion of Respondents', fontsize = 14)
plt.xticks(rotation = 360, fontsize = 14)
plt.yticks(fontsize = 14)
plt.show()
# Comparison of Dependents and Loan Status
table=pd.crosstab(loans_df['dependents'],loans_df['loanstatus'])
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize = (10,8), stacked=False)
plt.title('Bar Chart of Dependents and Loan Status', fontsize = 16)
plt.xlabel('Dependents', fontsize = 14)
plt.ylabel('Proportion of Respondents', fontsize = 14)
plt.xticks(rotation = 360, fontsize = 14)
plt.yticks(fontsize = 14)
plt.show()
#Scatterplot to show correlation between Applicant Income and Loan amount
plt.figure(figsize= (10,8))
sns.scatterplot(x= loans_df.applicantincome, y = loans_df.loanamount)
plt.title('Applicant Income Vs Loan Amount', fontsize = 16)
plt.ylabel('Loan Amount', fontsize=14)
plt.xlabel('Applicant Income', fontsize=14)
plt.xticks(rotation = 75, fontsize = 14)
plt.yticks(fontsize = 14)
plt.show()
# Correlation coefficient between applicant income and loan amount
loans_df['applicantincome'].corr(loans_df['loanamount'])
#Scatterplot to show correlation between Co-Applicant Income and Loan amount
plt.figure(figsize= (10,8))
sns.scatterplot(x= loans_df.coapplicantincome, y = loans_df.loanamount)
plt.title('Co-Applicant Income Vs Loan Amount', fontsize = 16)
plt.ylabel('Loan Amount', fontsize=14)
plt.xlabel('Co-Applicant Income', fontsize=14)
plt.xticks(rotation = 75, fontsize = 14)
plt.yticks(fontsize = 14)
plt.show()
# Correlation coefficient between loan amount and co-applicant income
loans_df['coapplicantincome'].corr(loans_df['loanamount'])
# Scatterplot of applicant income vs. loan amount for a sample of rows
# where co-applicant income is less than 2000
loans_df[loans_df['coapplicantincome'] < 2000].sample(200).plot.scatter(x='applicantincome', y='loanamount')
# Correlation Heatmap
plt.figure(figsize=(7,4))
sns.heatmap(loans_df.corr(),annot=True,cmap='cubehelix_r')
plt.show() | _____no_output_____ | MIT | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT |
c) Multivariate Analysis | # Analysis of Loan Status, Applicant income and Loan Amount
plt.figure(figsize=(10,8))
sns.scatterplot(x= loans_df['loanamount'], y=loans_df['applicantincome'], hue= loans_df['loanstatus'])
plt.title('Loan Amount vs Applicant Income vs Loan Status', fontsize = 16)
plt.xlabel('Loan Amount', fontsize = 14)
plt.ylabel('Applicant Income', fontsize = 14)
plt.xticks(fontsize = 14)
plt.yticks(fontsize = 14)
# Analysis of Loan Status, Applicant income and Credit History
plt.figure(figsize=(10,8))
sns.scatterplot(x= loans_df['loanamount'], y=loans_df['applicantincome'], hue= loans_df['credithistory'])
plt.title('Loan Amount vs Applicant Income vs Credit History', fontsize = 16)
plt.xlabel('Loan Amount', fontsize = 14)
plt.ylabel('Applicant Income', fontsize = 14)
plt.xticks(fontsize = 14)
plt.yticks(fontsize = 14) | _____no_output_____ | MIT | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT |
7. Hypothesis testing

- The chi-square test will be used for all hypothesis tests in our analysis.
- The level of significance used in all tests below will be 0.05 (5%).

**Hypothesis 1:**

Ho: There is no relationship between credit history and loan status

Ha: There is a relationship between credit history and loan status | # Creating a crosstab
tab = pd.crosstab(loans_df['loanstatus'], loans_df['credithistory'])
tab
# Obtaining the observed values
observed_values = tab.values
print('Observed values: -\n', observed_values)
# Creating the chi square contingency table
val = stats.chi2_contingency(tab)
val
# Obtaining the expected values
expected_values = val[3]
expected_values
# Obtaining the degrees of freedom
rows = len(tab.iloc[0:2, 0])
columns = len(tab.iloc[0, 0:2])
dof = (rows-1)*(columns-1)
print('Degrees of Freedom', dof)
# Obtaining the chi-square statistic
chi_square =sum([(o-e)**2./e for o,e in zip(observed_values,expected_values)])
chi_square
chi_square_statistic = chi_square[0]+chi_square[1]
chi_square_statistic
# Getting the critical value
alpha = 0.05
critical_value = stats.chi2.ppf(q = 1-alpha, df = dof)
print('Critical Value:', critical_value)
# Getting p value
p_value = 1 - stats.chi2.cdf(x = chi_square_statistic, df= dof)
p_value
# Conclusion
if chi_square_statistic>=critical_value:
print('Reject Null Hypothesis')
else:
print('Do not Reject Null Hypothesis') | Reject Null Hypothesis
| MIT | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT |
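The manual computation above mirrors what `stats.chi2_contingency` already returns: statistic, p-value, degrees of freedom, and expected counts. On a small made-up 2x2 table (not the loans data), the decision can be read straight off the p-value:

```python
import numpy as np
from scipy import stats

observed = np.array([[82, 7],     # made-up counts for illustration
                     [378, 97]])
chi2, p, dof, expected = stats.chi2_contingency(observed)

alpha = 0.05
decision = 'Reject Null Hypothesis' if p < alpha else 'Do not Reject Null Hypothesis'
print(chi2, p, dof, decision)
```

Comparing the p-value to alpha and comparing the statistic to the critical value are equivalent decision rules.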
The chi-square statistic is greater than the critical value, hence we reject the null hypothesis that there is no relationship between credit history and loan status.

At the 5% level of significance, there is enough evidence to conclude that there is a relationship between credit history and loan status.

**Hypothesis 2:**

Ho: There is no relationship between property area and loan status

Ha: There is a relationship between property area and loan status | # Creating a crosstab
tab = pd.crosstab(loans_df['loanstatus'], loans_df['propertyarea'])
tab
# Obtaining the observed values
observed_values = tab.values
print('Observed values: -\n', observed_values)
# Creating the chi square contingency table
val = stats.chi2_contingency(tab)
val
# Obtaining the expected values
expected_values = val[3]
expected_values
# Obtaining the degrees of freedom
# The property-area crosstab is 2x3, so read the dimensions from the shape
rows, columns = tab.shape
dof = (rows-1)*(columns-1)
print('Degrees of Freedom', dof)
# Obtaining the chi-square statistic
chi_square =sum([(o-e)**2./e for o,e in zip(observed_values,expected_values)])
chi_square
chi_square_statistic = chi_square.sum()  # sum over all three property-area columns, not just the first two
chi_square_statistic
# Getting the critical value
alpha = 0.05
critical_value = stats.chi2.ppf(q = 1-alpha, df = dof)
print('Critical Value:', critical_value)
# Getting p value
p_value = 1 - stats.chi2.cdf(x = chi_square_statistic, df= dof)
p_value
# Conclusion
if chi_square_statistic>=critical_value:
print('Reject Null Hypothesis')
else:
print('Do not Reject Null Hypothesis') | Reject Null Hypothesis
| MIT | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT |
The chi-square statistic is greater than the critical value, hence we reject the null hypothesis that there is no relationship between property area and loan status.

At the 5% level of significance, there is enough evidence to conclude that there is a relationship between property area and loan status.

**Hypothesis 3:**

Ho: There is no relationship between gender and loan status

Ha: There is a relationship between gender and loan status | # Creating a crosstab
tab = pd.crosstab(loans_df['loanstatus'], loans_df['gender'])
tab
# Obtaining the observed values
observed_values = tab.values
print('Observed values: -\n', observed_values)
# Creating the chi square contingency table
val = stats.chi2_contingency(tab)
val
# Obtaining the expected values
expected_values = val[3]
expected_values
# Obtaining the degrees of freedom
rows = len(tab.iloc[0:2, 0])
columns = len(tab.iloc[0, 0:2])
dof = (rows-1)*(columns-1)
print('Degrees of Freedom', dof)
# Obtaining the chi-square statistic
chi_square =sum([(o-e)**2./e for o,e in zip(observed_values,expected_values)])
chi_square
chi_square_statistic = chi_square[0]+chi_square[1]
chi_square_statistic
# Getting the critical value
alpha = 0.05
critical_value = stats.chi2.ppf(q = 1-alpha, df = dof)
print('Critical Value:', critical_value)
# Getting p value
p_value = 1 - stats.chi2.cdf(x = chi_square_statistic, df= dof)
p_value
# Conclusion
if chi_square_statistic>=critical_value:
print('Reject Null Hypothesis')
else:
print('Do not Reject Null Hypothesis') | Do not Reject Null Hypothesis
| MIT | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT |
The chi-square statistic is less than the critical value, hence we do not reject the null hypothesis that there is no relationship between gender and loan status.

At the 5% level of significance, there is not enough evidence to conclude that there is a relationship between gender and loan status.

8. Dimensionality reduction | # PCA analysis with One Hot Encoding
dummy_Gender = pd.get_dummies(loans_df['gender'], prefix = 'Gender')
dummy_Married = pd.get_dummies(loans_df['married'], prefix = "Married")
dummy_Education = pd.get_dummies(loans_df['education'], prefix = "Education")
dummy_Self_Employed = pd.get_dummies(loans_df['selfemployed'], prefix = "Selfemployed")
dummy_Property_Area = pd.get_dummies(loans_df['propertyarea'], prefix = "Property")
dummy_Dependents = pd.get_dummies(loans_df['dependents'], prefix = "Dependents")
dummy_Loan_status = pd.get_dummies(loans_df['loanstatus'], prefix = "Approve")
# Creating a list of our dummy data
frames = [loans_df,dummy_Gender,dummy_Married,dummy_Education,dummy_Self_Employed,dummy_Property_Area,dummy_Dependents,dummy_Loan_status]
# Combining the dummy data with our dataframe
df_train = pd.concat(frames, axis = 1)
# Previewing our training dataset
df_train.head(10)
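One caveat with the encoding above: keeping every dummy level creates perfectly collinear column pairs (e.g. `Gender_Male + Gender_Female = 1`). `get_dummies(..., drop_first=True)` avoids this; a sketch on a toy column:

```python
import pandas as pd

toy = pd.DataFrame({'gender': ['Male', 'Female', 'Male']})
full = pd.get_dummies(toy['gender'], prefix='Gender')                      # both levels kept
reduced = pd.get_dummies(toy['gender'], prefix='Gender', drop_first=True)  # first level dropped
print(list(full.columns), list(reduced.columns))
```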
# Dropping of non-numeric columns as part of pre-processing
df_train = df_train.drop(columns = ['loanid', 'gender', 'married', 'dependents', 'education','selfemployed', 'propertyarea','loanstatus','Approve_N'])
# Previewing the final dataset for our analysis
df_train
# Preprocessing
X=df_train.drop(['Approve_Y'],axis=1)
y=df_train['Approve_Y']
# Splitting into training and test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# Standardization (zero mean, unit variance)
# Note: the raw 'dependents' column contains '3+' and could not be scaled directly,
# which is why it was one-hot encoded above
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Applying PCA
from sklearn.decomposition import PCA
pca = PCA(n_components=6)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
# The explained variance ratio gives the fraction of total variance captured by each principal component
explained_variance = pca.explained_variance_ratio_
explained_variance
# Plotting our scree plot
plt.plot(pca.explained_variance_ratio_)
plt.xlabel('Number of components', fontsize = 14)
plt.ylabel('Explained variance', fontsize = 14)
plt.title('Scree Plot', fontsize = 16)
plt.show()
# Training and Making Predictions
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(max_depth=2, random_state=0)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Performance Evaluation
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
print('Accuracy', accuracy_score(y_test, y_pred)) | [[ 0 33]
[ 0 90]]
Accuracy 0.7317073170731707
| MIT | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT |
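The decision rule stated above (compare the chi-square statistic with the critical value at the 5% significance level) can be sketched with SciPy. The contingency table below is purely illustrative, not the project's data; rows stand for property areas and columns for loan status.

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Hypothetical counts: rows = property area (Rural, Semiurban, Urban),
# columns = loan status (N, Y). Illustrative only.
observed = np.array([[25, 60],
                     [30, 70],
                     [28, 65]])

stat, p_value, dof, expected = chi2_contingency(observed)

# Critical value at the 5% level for this table's degrees of freedom
critical = chi2.ppf(0.95, dof)

if stat < critical:
    print("Fail to reject H0: no evidence of a relationship")
else:
    print("Reject H0: evidence of a relationship")
```

Because these hypothetical counts are nearly proportional across rows, the statistic falls well below the critical value, mirroring the conclusion drawn above.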
Dependencies | import os
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
# Set seeds to make the experiment more reproducible.
from tensorflow import set_random_seed
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
    set_random_seed(seed)
seed_everything()
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore") | Using TensorFlow backend.
| MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
Load data | train = pd.read_csv('../input/aptos2019-blindness-detection/train.csv')
test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
print('Number of train samples: ', train.shape[0])
print('Number of test samples: ', test.shape[0])
# Preprocess data
train["id_code"] = train["id_code"].apply(lambda x: x + ".png")
test["id_code"] = test["id_code"].apply(lambda x: x + ".png")
train['diagnosis'] = train['diagnosis'].astype('str')
display(train.head()) | Number of train samples: 3662
Number of test samples: 1928
| MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
Model parameters | # Model parameters
BATCH_SIZE = 8
EPOCHS = 30
WARMUP_EPOCHS = 2
LEARNING_RATE = 1e-4
WARMUP_LEARNING_RATE = 1e-3
HEIGHT = 512
WIDTH = 512
CANAL = 3
N_CLASSES = train['diagnosis'].nunique()
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
def kappa(y_true, y_pred, n_classes=5):
y_trues = K.cast(K.argmax(y_true), K.floatx())
y_preds = K.cast(K.argmax(y_pred), K.floatx())
n_samples = K.cast(K.shape(y_true)[0], K.floatx())
distance = K.sum(K.abs(y_trues - y_preds))
max_distance = n_classes - 1
kappa_score = 1 - ((distance**2) / (n_samples * (max_distance**2)))
return kappa_score | _____no_output_____ | MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
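The Keras `kappa` metric above can be sanity-checked outside a TensorFlow session with a plain NumPy mirror of the same formula. This is a testing sketch, not part of the training code; `kappa_np` is a hypothetical helper name.

```python
import numpy as np

def kappa_np(y_true, y_pred, n_classes=5):
    """NumPy mirror of the Keras `kappa` metric defined above.

    y_true, y_pred: one-hot arrays of shape (n_samples, n_classes).
    """
    trues = np.argmax(y_true, axis=1).astype(float)
    preds = np.argmax(y_pred, axis=1).astype(float)
    n_samples = y_true.shape[0]
    distance = np.abs(trues - preds).sum()
    max_distance = n_classes - 1
    return 1.0 - distance**2 / (n_samples * max_distance**2)

eye = np.eye(5)
# Perfect agreement: distance is 0, so the score is 1
print(kappa_np(eye[[0, 1, 2]], eye[[0, 1, 2]]))
# A single maximally wrong prediction (class 0 vs class 4): score drops to 0
print(kappa_np(eye[[0]], eye[[4]]))
```

Note that this is a distance-based approximation scored per batch, not the exact quadratic weighted kappa used for the competition leaderboard.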
Train test split | X_train, X_val = train_test_split(train, test_size=0.25, random_state=0) | _____no_output_____ | MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
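The split above is purely random; since the diagnosis classes are imbalanced, passing `stratify` to `train_test_split` keeps the class proportions equal across both folds. A minimal sketch on synthetic labels (not the competition data):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced labels standing in for the diagnosis column
toy = pd.DataFrame({"diagnosis": ["0"] * 80 + ["2"] * 20})

tr, va = train_test_split(toy, test_size=0.25, random_state=0,
                          stratify=toy["diagnosis"])

# 75/25 row split, with the 80/20 class ratio preserved in each fold
print(len(tr), len(va))
print(va["diagnosis"].value_counts().to_dict())
```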
Data generator | train_datagen=ImageDataGenerator(rescale=1./255,
rotation_range=360,
brightness_range=[0.5, 1.5],
zoom_range=[1, 1.2],
                                 zca_whitening=True,  # note: ZCA whitening only takes effect after calling datagen.fit() on sample images
horizontal_flip=True,
vertical_flip=True,
fill_mode='constant',
cval=0.)
train_generator=train_datagen.flow_from_dataframe(
dataframe=X_train,
directory="../input/aptos2019-blindness-detection/train_images/",
x_col="id_code",
y_col="diagnosis",
batch_size=BATCH_SIZE,
class_mode="categorical",
target_size=(HEIGHT, WIDTH))
valid_generator=train_datagen.flow_from_dataframe(
dataframe=X_val,
directory="../input/aptos2019-blindness-detection/train_images/",
x_col="id_code",
y_col="diagnosis",
batch_size=BATCH_SIZE,
class_mode="categorical",
target_size=(HEIGHT, WIDTH))
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_dataframe(
dataframe=test,
directory = "../input/aptos2019-blindness-detection/test_images/",
x_col="id_code",
target_size=(HEIGHT, WIDTH),
batch_size=1,
shuffle=False,
class_mode=None) | Found 2746 validated image filenames belonging to 5 classes.
Found 916 validated image filenames belonging to 5 classes.
Found 1928 validated image filenames.
| MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
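The generators above use `fill_mode='constant'` with `cval=0.`, so pixels exposed by rotation or zoom are filled with black rather than reflected or stretched. The same boundary behavior can be illustrated with `scipy.ndimage.rotate` (a stand-in for demonstration, not the Keras generator itself):

```python
import numpy as np
from scipy.ndimage import rotate

# A bright square image; rotating by 45 degrees exposes the corners
img = np.ones((9, 9))
rotated = rotate(img, angle=45, reshape=False, order=1,
                 mode="constant", cval=0.0)

# Corners now lie outside the rotated frame and take the constant fill value,
# while the center pixel is unchanged
print(rotated[0, 0])
print(rotated[4, 4])
```

Other `fill_mode` options ('nearest', 'reflect', 'wrap') would instead extend edge content into those regions, which can invent spurious retina texture at the borders.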
Model | def create_model(input_shape, n_out):
input_tensor = Input(shape=input_shape)
base_model = applications.ResNet50(weights=None,
include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
x = Dropout(0.5)(x)
x = Dense(2048, activation='relu')(x)
x = Dropout(0.5)(x)
final_output = Dense(n_out, activation='softmax', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
model = create_model(input_shape=(HEIGHT, WIDTH, CANAL), n_out=N_CLASSES)
for layer in model.layers:
layer.trainable = False
for i in range(-5, 0):
model.layers[i].trainable = True
class_weights = class_weight.compute_class_weight('balanced', np.unique(train['diagnosis'].astype('int').values), train['diagnosis'].astype('int').values)
metric_list = ["accuracy", kappa]
optimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary() | __________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 512, 512, 3) 0
__________________________________________________________________________________________________
conv1_pad (ZeroPadding2D) (None, 518, 518, 3) 0 input_1[0][0]
__________________________________________________________________________________________________
conv1 (Conv2D) (None, 256, 256, 64) 9472 conv1_pad[0][0]
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization) (None, 256, 256, 64) 256 conv1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 256, 256, 64) 0 bn_conv1[0][0]
__________________________________________________________________________________________________
pool1_pad (ZeroPadding2D) (None, 258, 258, 64) 0 activation_1[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 128, 128, 64) 0 pool1_pad[0][0]
__________________________________________________________________________________________________
res2a_branch2a (Conv2D) (None, 128, 128, 64) 4160 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
bn2a_branch2a (BatchNormalizati (None, 128, 128, 64) 256 res2a_branch2a[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 128, 128, 64) 0 bn2a_branch2a[0][0]
__________________________________________________________________________________________________
res2a_branch2b (Conv2D) (None, 128, 128, 64) 36928 activation_2[0][0]
__________________________________________________________________________________________________
bn2a_branch2b (BatchNormalizati (None, 128, 128, 64) 256 res2a_branch2b[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 128, 128, 64) 0 bn2a_branch2b[0][0]
__________________________________________________________________________________________________
res2a_branch2c (Conv2D) (None, 128, 128, 256 16640 activation_3[0][0]
__________________________________________________________________________________________________
res2a_branch1 (Conv2D) (None, 128, 128, 256 16640 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
bn2a_branch2c (BatchNormalizati (None, 128, 128, 256 1024 res2a_branch2c[0][0]
__________________________________________________________________________________________________
bn2a_branch1 (BatchNormalizatio (None, 128, 128, 256 1024 res2a_branch1[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 128, 128, 256 0 bn2a_branch2c[0][0]
bn2a_branch1[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 128, 128, 256 0 add_1[0][0]
__________________________________________________________________________________________________
res2b_branch2a (Conv2D) (None, 128, 128, 64) 16448 activation_4[0][0]
__________________________________________________________________________________________________
bn2b_branch2a (BatchNormalizati (None, 128, 128, 64) 256 res2b_branch2a[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 128, 128, 64) 0 bn2b_branch2a[0][0]
__________________________________________________________________________________________________
res2b_branch2b (Conv2D) (None, 128, 128, 64) 36928 activation_5[0][0]
__________________________________________________________________________________________________
bn2b_branch2b (BatchNormalizati (None, 128, 128, 64) 256 res2b_branch2b[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 128, 128, 64) 0 bn2b_branch2b[0][0]
__________________________________________________________________________________________________
res2b_branch2c (Conv2D) (None, 128, 128, 256 16640 activation_6[0][0]
__________________________________________________________________________________________________
bn2b_branch2c (BatchNormalizati (None, 128, 128, 256 1024 res2b_branch2c[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 128, 128, 256 0 bn2b_branch2c[0][0]
activation_4[0][0]
__________________________________________________________________________________________________
activation_7 (Activation) (None, 128, 128, 256 0 add_2[0][0]
__________________________________________________________________________________________________
res2c_branch2a (Conv2D) (None, 128, 128, 64) 16448 activation_7[0][0]
__________________________________________________________________________________________________
bn2c_branch2a (BatchNormalizati (None, 128, 128, 64) 256 res2c_branch2a[0][0]
__________________________________________________________________________________________________
activation_8 (Activation) (None, 128, 128, 64) 0 bn2c_branch2a[0][0]
__________________________________________________________________________________________________
res2c_branch2b (Conv2D) (None, 128, 128, 64) 36928 activation_8[0][0]
__________________________________________________________________________________________________
bn2c_branch2b (BatchNormalizati (None, 128, 128, 64) 256 res2c_branch2b[0][0]
__________________________________________________________________________________________________
activation_9 (Activation) (None, 128, 128, 64) 0 bn2c_branch2b[0][0]
__________________________________________________________________________________________________
res2c_branch2c (Conv2D) (None, 128, 128, 256 16640 activation_9[0][0]
__________________________________________________________________________________________________
bn2c_branch2c (BatchNormalizati (None, 128, 128, 256 1024 res2c_branch2c[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, 128, 128, 256 0 bn2c_branch2c[0][0]
activation_7[0][0]
__________________________________________________________________________________________________
activation_10 (Activation) (None, 128, 128, 256 0 add_3[0][0]
__________________________________________________________________________________________________
res3a_branch2a (Conv2D) (None, 64, 64, 128) 32896 activation_10[0][0]
__________________________________________________________________________________________________
bn3a_branch2a (BatchNormalizati (None, 64, 64, 128) 512 res3a_branch2a[0][0]
__________________________________________________________________________________________________
activation_11 (Activation) (None, 64, 64, 128) 0 bn3a_branch2a[0][0]
__________________________________________________________________________________________________
res3a_branch2b (Conv2D) (None, 64, 64, 128) 147584 activation_11[0][0]
__________________________________________________________________________________________________
bn3a_branch2b (BatchNormalizati (None, 64, 64, 128) 512 res3a_branch2b[0][0]
__________________________________________________________________________________________________
activation_12 (Activation) (None, 64, 64, 128) 0 bn3a_branch2b[0][0]
__________________________________________________________________________________________________
res3a_branch2c (Conv2D) (None, 64, 64, 512) 66048 activation_12[0][0]
__________________________________________________________________________________________________
res3a_branch1 (Conv2D) (None, 64, 64, 512) 131584 activation_10[0][0]
__________________________________________________________________________________________________
bn3a_branch2c (BatchNormalizati (None, 64, 64, 512) 2048 res3a_branch2c[0][0]
__________________________________________________________________________________________________
bn3a_branch1 (BatchNormalizatio (None, 64, 64, 512) 2048 res3a_branch1[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, 64, 64, 512) 0 bn3a_branch2c[0][0]
bn3a_branch1[0][0]
__________________________________________________________________________________________________
activation_13 (Activation) (None, 64, 64, 512) 0 add_4[0][0]
__________________________________________________________________________________________________
res3b_branch2a (Conv2D) (None, 64, 64, 128) 65664 activation_13[0][0]
__________________________________________________________________________________________________
bn3b_branch2a (BatchNormalizati (None, 64, 64, 128) 512 res3b_branch2a[0][0]
__________________________________________________________________________________________________
activation_14 (Activation) (None, 64, 64, 128) 0 bn3b_branch2a[0][0]
__________________________________________________________________________________________________
res3b_branch2b (Conv2D) (None, 64, 64, 128) 147584 activation_14[0][0]
__________________________________________________________________________________________________
bn3b_branch2b (BatchNormalizati (None, 64, 64, 128) 512 res3b_branch2b[0][0]
__________________________________________________________________________________________________
activation_15 (Activation) (None, 64, 64, 128) 0 bn3b_branch2b[0][0]
__________________________________________________________________________________________________
res3b_branch2c (Conv2D) (None, 64, 64, 512) 66048 activation_15[0][0]
__________________________________________________________________________________________________
bn3b_branch2c (BatchNormalizati (None, 64, 64, 512) 2048 res3b_branch2c[0][0]
__________________________________________________________________________________________________
add_5 (Add) (None, 64, 64, 512) 0 bn3b_branch2c[0][0]
activation_13[0][0]
__________________________________________________________________________________________________
activation_16 (Activation) (None, 64, 64, 512) 0 add_5[0][0]
__________________________________________________________________________________________________
res3c_branch2a (Conv2D) (None, 64, 64, 128) 65664 activation_16[0][0]
__________________________________________________________________________________________________
bn3c_branch2a (BatchNormalizati (None, 64, 64, 128) 512 res3c_branch2a[0][0]
__________________________________________________________________________________________________
activation_17 (Activation) (None, 64, 64, 128) 0 bn3c_branch2a[0][0]
__________________________________________________________________________________________________
res3c_branch2b (Conv2D) (None, 64, 64, 128) 147584 activation_17[0][0]
__________________________________________________________________________________________________
bn3c_branch2b (BatchNormalizati (None, 64, 64, 128) 512 res3c_branch2b[0][0]
__________________________________________________________________________________________________
activation_18 (Activation) (None, 64, 64, 128) 0 bn3c_branch2b[0][0]
__________________________________________________________________________________________________
res3c_branch2c (Conv2D) (None, 64, 64, 512) 66048 activation_18[0][0]
__________________________________________________________________________________________________
bn3c_branch2c (BatchNormalizati (None, 64, 64, 512) 2048 res3c_branch2c[0][0]
__________________________________________________________________________________________________
add_6 (Add) (None, 64, 64, 512) 0 bn3c_branch2c[0][0]
activation_16[0][0]
__________________________________________________________________________________________________
activation_19 (Activation) (None, 64, 64, 512) 0 add_6[0][0]
__________________________________________________________________________________________________
res3d_branch2a (Conv2D) (None, 64, 64, 128) 65664 activation_19[0][0]
__________________________________________________________________________________________________
bn3d_branch2a (BatchNormalizati (None, 64, 64, 128) 512 res3d_branch2a[0][0]
__________________________________________________________________________________________________
activation_20 (Activation) (None, 64, 64, 128) 0 bn3d_branch2a[0][0]
__________________________________________________________________________________________________
res3d_branch2b (Conv2D) (None, 64, 64, 128) 147584 activation_20[0][0]
__________________________________________________________________________________________________
bn3d_branch2b (BatchNormalizati (None, 64, 64, 128) 512 res3d_branch2b[0][0]
__________________________________________________________________________________________________
activation_21 (Activation) (None, 64, 64, 128) 0 bn3d_branch2b[0][0]
__________________________________________________________________________________________________
res3d_branch2c (Conv2D) (None, 64, 64, 512) 66048 activation_21[0][0]
__________________________________________________________________________________________________
bn3d_branch2c (BatchNormalizati (None, 64, 64, 512) 2048 res3d_branch2c[0][0]
__________________________________________________________________________________________________
add_7 (Add) (None, 64, 64, 512) 0 bn3d_branch2c[0][0]
activation_19[0][0]
__________________________________________________________________________________________________
activation_22 (Activation) (None, 64, 64, 512) 0 add_7[0][0]
__________________________________________________________________________________________________
res4a_branch2a (Conv2D) (None, 32, 32, 256) 131328 activation_22[0][0]
__________________________________________________________________________________________________
bn4a_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4a_branch2a[0][0]
__________________________________________________________________________________________________
activation_23 (Activation) (None, 32, 32, 256) 0 bn4a_branch2a[0][0]
__________________________________________________________________________________________________
res4a_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_23[0][0]
__________________________________________________________________________________________________
bn4a_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4a_branch2b[0][0]
__________________________________________________________________________________________________
activation_24 (Activation) (None, 32, 32, 256) 0 bn4a_branch2b[0][0]
__________________________________________________________________________________________________
res4a_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_24[0][0]
__________________________________________________________________________________________________
res4a_branch1 (Conv2D) (None, 32, 32, 1024) 525312 activation_22[0][0]
__________________________________________________________________________________________________
bn4a_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4a_branch2c[0][0]
__________________________________________________________________________________________________
bn4a_branch1 (BatchNormalizatio (None, 32, 32, 1024) 4096 res4a_branch1[0][0]
__________________________________________________________________________________________________
add_8 (Add) (None, 32, 32, 1024) 0 bn4a_branch2c[0][0]
bn4a_branch1[0][0]
__________________________________________________________________________________________________
activation_25 (Activation) (None, 32, 32, 1024) 0 add_8[0][0]
__________________________________________________________________________________________________
res4b_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_25[0][0]
__________________________________________________________________________________________________
bn4b_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4b_branch2a[0][0]
__________________________________________________________________________________________________
activation_26 (Activation) (None, 32, 32, 256) 0 bn4b_branch2a[0][0]
__________________________________________________________________________________________________
res4b_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_26[0][0]
__________________________________________________________________________________________________
bn4b_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4b_branch2b[0][0]
__________________________________________________________________________________________________
activation_27 (Activation) (None, 32, 32, 256) 0 bn4b_branch2b[0][0]
__________________________________________________________________________________________________
res4b_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_27[0][0]
__________________________________________________________________________________________________
bn4b_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4b_branch2c[0][0]
__________________________________________________________________________________________________
add_9 (Add) (None, 32, 32, 1024) 0 bn4b_branch2c[0][0]
activation_25[0][0]
__________________________________________________________________________________________________
activation_28 (Activation) (None, 32, 32, 1024) 0 add_9[0][0]
__________________________________________________________________________________________________
res4c_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_28[0][0]
__________________________________________________________________________________________________
bn4c_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4c_branch2a[0][0]
__________________________________________________________________________________________________
activation_29 (Activation) (None, 32, 32, 256) 0 bn4c_branch2a[0][0]
__________________________________________________________________________________________________
res4c_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_29[0][0]
__________________________________________________________________________________________________
bn4c_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4c_branch2b[0][0]
__________________________________________________________________________________________________
activation_30 (Activation) (None, 32, 32, 256) 0 bn4c_branch2b[0][0]
__________________________________________________________________________________________________
res4c_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_30[0][0]
__________________________________________________________________________________________________
bn4c_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4c_branch2c[0][0]
__________________________________________________________________________________________________
add_10 (Add) (None, 32, 32, 1024) 0 bn4c_branch2c[0][0]
activation_28[0][0]
__________________________________________________________________________________________________
activation_31 (Activation) (None, 32, 32, 1024) 0 add_10[0][0]
__________________________________________________________________________________________________
res4d_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_31[0][0]
__________________________________________________________________________________________________
bn4d_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4d_branch2a[0][0]
__________________________________________________________________________________________________
activation_32 (Activation) (None, 32, 32, 256) 0 bn4d_branch2a[0][0]
__________________________________________________________________________________________________
res4d_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_32[0][0]
__________________________________________________________________________________________________
bn4d_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4d_branch2b[0][0]
__________________________________________________________________________________________________
activation_33 (Activation) (None, 32, 32, 256) 0 bn4d_branch2b[0][0]
__________________________________________________________________________________________________
res4d_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_33[0][0]
__________________________________________________________________________________________________
bn4d_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4d_branch2c[0][0]
__________________________________________________________________________________________________
add_11 (Add) (None, 32, 32, 1024) 0 bn4d_branch2c[0][0]
activation_31[0][0]
__________________________________________________________________________________________________
activation_34 (Activation) (None, 32, 32, 1024) 0 add_11[0][0]
__________________________________________________________________________________________________
res4e_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_34[0][0]
__________________________________________________________________________________________________
bn4e_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4e_branch2a[0][0]
__________________________________________________________________________________________________
activation_35 (Activation) (None, 32, 32, 256) 0 bn4e_branch2a[0][0]
__________________________________________________________________________________________________
res4e_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_35[0][0]
__________________________________________________________________________________________________
bn4e_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4e_branch2b[0][0]
__________________________________________________________________________________________________
activation_36 (Activation) (None, 32, 32, 256) 0 bn4e_branch2b[0][0]
__________________________________________________________________________________________________
res4e_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_36[0][0]
__________________________________________________________________________________________________
bn4e_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4e_branch2c[0][0]
__________________________________________________________________________________________________
add_12 (Add) (None, 32, 32, 1024) 0 bn4e_branch2c[0][0]
activation_34[0][0]
__________________________________________________________________________________________________
activation_37 (Activation) (None, 32, 32, 1024) 0 add_12[0][0]
__________________________________________________________________________________________________
res4f_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_37[0][0]
__________________________________________________________________________________________________
bn4f_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4f_branch2a[0][0]
__________________________________________________________________________________________________
activation_38 (Activation) (None, 32, 32, 256) 0 bn4f_branch2a[0][0]
__________________________________________________________________________________________________
res4f_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_38[0][0]
__________________________________________________________________________________________________
bn4f_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4f_branch2b[0][0]
__________________________________________________________________________________________________
activation_39 (Activation) (None, 32, 32, 256) 0 bn4f_branch2b[0][0]
__________________________________________________________________________________________________
res4f_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_39[0][0]
__________________________________________________________________________________________________
bn4f_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4f_branch2c[0][0]
__________________________________________________________________________________________________
add_13 (Add) (None, 32, 32, 1024) 0 bn4f_branch2c[0][0]
activation_37[0][0]
__________________________________________________________________________________________________
activation_40 (Activation) (None, 32, 32, 1024) 0 add_13[0][0]
__________________________________________________________________________________________________
res5a_branch2a (Conv2D) (None, 16, 16, 512) 524800 activation_40[0][0]
__________________________________________________________________________________________________
bn5a_branch2a (BatchNormalizati (None, 16, 16, 512) 2048 res5a_branch2a[0][0]
__________________________________________________________________________________________________
activation_41 (Activation) (None, 16, 16, 512) 0 bn5a_branch2a[0][0]
__________________________________________________________________________________________________
res5a_branch2b (Conv2D) (None, 16, 16, 512) 2359808 activation_41[0][0]
__________________________________________________________________________________________________
bn5a_branch2b (BatchNormalizati (None, 16, 16, 512) 2048 res5a_branch2b[0][0]
__________________________________________________________________________________________________
activation_42 (Activation) (None, 16, 16, 512) 0 bn5a_branch2b[0][0]
__________________________________________________________________________________________________
res5a_branch2c (Conv2D) (None, 16, 16, 2048) 1050624 activation_42[0][0]
__________________________________________________________________________________________________
res5a_branch1 (Conv2D) (None, 16, 16, 2048) 2099200 activation_40[0][0]
__________________________________________________________________________________________________
bn5a_branch2c (BatchNormalizati (None, 16, 16, 2048) 8192 res5a_branch2c[0][0]
__________________________________________________________________________________________________
bn5a_branch1 (BatchNormalizatio (None, 16, 16, 2048) 8192 res5a_branch1[0][0]
__________________________________________________________________________________________________
add_14 (Add) (None, 16, 16, 2048) 0 bn5a_branch2c[0][0]
bn5a_branch1[0][0]
__________________________________________________________________________________________________
activation_43 (Activation) (None, 16, 16, 2048) 0 add_14[0][0]
__________________________________________________________________________________________________
res5b_branch2a (Conv2D) (None, 16, 16, 512) 1049088 activation_43[0][0]
__________________________________________________________________________________________________
bn5b_branch2a (BatchNormalizati (None, 16, 16, 512) 2048 res5b_branch2a[0][0]
__________________________________________________________________________________________________
activation_44 (Activation) (None, 16, 16, 512) 0 bn5b_branch2a[0][0]
__________________________________________________________________________________________________
res5b_branch2b (Conv2D) (None, 16, 16, 512) 2359808 activation_44[0][0]
__________________________________________________________________________________________________
bn5b_branch2b (BatchNormalizati (None, 16, 16, 512) 2048 res5b_branch2b[0][0]
__________________________________________________________________________________________________
activation_45 (Activation) (None, 16, 16, 512) 0 bn5b_branch2b[0][0]
__________________________________________________________________________________________________
res5b_branch2c (Conv2D) (None, 16, 16, 2048) 1050624 activation_45[0][0]
__________________________________________________________________________________________________
bn5b_branch2c (BatchNormalizati (None, 16, 16, 2048) 8192 res5b_branch2c[0][0]
__________________________________________________________________________________________________
add_15 (Add) (None, 16, 16, 2048) 0 bn5b_branch2c[0][0]
activation_43[0][0]
__________________________________________________________________________________________________
activation_46 (Activation) (None, 16, 16, 2048) 0 add_15[0][0]
__________________________________________________________________________________________________
res5c_branch2a (Conv2D) (None, 16, 16, 512) 1049088 activation_46[0][0]
__________________________________________________________________________________________________
bn5c_branch2a (BatchNormalizati (None, 16, 16, 512) 2048 res5c_branch2a[0][0]
__________________________________________________________________________________________________
activation_47 (Activation) (None, 16, 16, 512) 0 bn5c_branch2a[0][0]
__________________________________________________________________________________________________
res5c_branch2b (Conv2D) (None, 16, 16, 512) 2359808 activation_47[0][0]
__________________________________________________________________________________________________
bn5c_branch2b (BatchNormalizati (None, 16, 16, 512) 2048 res5c_branch2b[0][0]
__________________________________________________________________________________________________
activation_48 (Activation) (None, 16, 16, 512) 0 bn5c_branch2b[0][0]
__________________________________________________________________________________________________
res5c_branch2c (Conv2D) (None, 16, 16, 2048) 1050624 activation_48[0][0]
__________________________________________________________________________________________________
bn5c_branch2c (BatchNormalizati (None, 16, 16, 2048) 8192 res5c_branch2c[0][0]
__________________________________________________________________________________________________
add_16 (Add) (None, 16, 16, 2048) 0 bn5c_branch2c[0][0]
activation_46[0][0]
__________________________________________________________________________________________________
activation_49 (Activation) (None, 16, 16, 2048) 0 add_16[0][0]
__________________________________________________________________________________________________
global_average_pooling2d_1 (Glo (None, 2048) 0 activation_49[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 2048) 0 global_average_pooling2d_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 2048) 4196352 dropout_1[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 2048) 0 dense_1[0][0]
__________________________________________________________________________________________________
final_output (Dense) (None, 5) 10245 dropout_2[0][0]
==================================================================================================
Total params: 27,794,309
Trainable params: 4,206,597
Non-trainable params: 23,587,712
__________________________________________________________________________________________________
| MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
Train top layers | STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
class_weight=class_weights,
verbose=1).history | Epoch 1/2
343/343 [==============================] - 564s 2s/step - loss: 1.3429 - acc: 0.6112 - kappa: 0.7074 - val_loss: 1.8074 - val_acc: 0.4890 - val_kappa: 0.2442
Epoch 2/2
343/343 [==============================] - 532s 2s/step - loss: 0.8423 - acc: 0.7008 - kappa: 0.8523 - val_loss: 2.3498 - val_acc: 0.4890 - val_kappa: 0.2517
| MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
Fine-tune the complete model | for layer in model.layers:
layer.trainable = True
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
callback_list = [es, rlrop]
optimizer = optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
history_finetunning = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
class_weight=class_weights,
verbose=1).history | Epoch 1/30
343/343 [==============================] - 567s 2s/step - loss: 0.7127 - acc: 0.7340 - kappa: 0.8797 - val_loss: 0.9326 - val_acc: 0.6927 - val_kappa: 0.8542
Epoch 2/30
343/343 [==============================] - 549s 2s/step - loss: 0.6119 - acc: 0.7693 - kappa: 0.9201 - val_loss: 0.4930 - val_acc: 0.8095 - val_kappa: 0.9478
Epoch 3/30
343/343 [==============================] - 549s 2s/step - loss: 0.5501 - acc: 0.7915 - kappa: 0.9390 - val_loss: 0.6268 - val_acc: 0.7335 - val_kappa: 0.8974
Epoch 4/30
343/343 [==============================] - 547s 2s/step - loss: 0.5339 - acc: 0.7937 - kappa: 0.9403 - val_loss: 0.6106 - val_acc: 0.7621 - val_kappa: 0.9342
Epoch 5/30
343/343 [==============================] - 544s 2s/step - loss: 0.5076 - acc: 0.8109 - kappa: 0.9487 - val_loss: 0.5792 - val_acc: 0.7797 - val_kappa: 0.9184
Epoch 00005: ReduceLROnPlateau reducing learning rate to 4.999999873689376e-05.
Epoch 6/30
343/343 [==============================] - 544s 2s/step - loss: 0.4532 - acc: 0.8254 - kappa: 0.9557 - val_loss: 0.4168 - val_acc: 0.8502 - val_kappa: 0.9576
Epoch 7/30
343/343 [==============================] - 547s 2s/step - loss: 0.4337 - acc: 0.8331 - kappa: 0.9593 - val_loss: 0.4258 - val_acc: 0.8447 - val_kappa: 0.9537
Epoch 8/30
343/343 [==============================] - 544s 2s/step - loss: 0.4126 - acc: 0.8469 - kappa: 0.9644 - val_loss: 0.4385 - val_acc: 0.8425 - val_kappa: 0.9597
Epoch 9/30
343/343 [==============================] - 549s 2s/step - loss: 0.3963 - acc: 0.8437 - kappa: 0.9615 - val_loss: 0.4241 - val_acc: 0.8458 - val_kappa: 0.9615
Epoch 00009: ReduceLROnPlateau reducing learning rate to 2.499999936844688e-05.
Epoch 10/30
343/343 [==============================] - 548s 2s/step - loss: 0.3610 - acc: 0.8637 - kappa: 0.9725 - val_loss: 0.3777 - val_acc: 0.8623 - val_kappa: 0.9705
Epoch 11/30
343/343 [==============================] - 551s 2s/step - loss: 0.3568 - acc: 0.8637 - kappa: 0.9712 - val_loss: 0.4111 - val_acc: 0.8480 - val_kappa: 0.9640
Epoch 12/30
343/343 [==============================] - 551s 2s/step - loss: 0.3377 - acc: 0.8688 - kappa: 0.9740 - val_loss: 0.4150 - val_acc: 0.8491 - val_kappa: 0.9636
Epoch 13/30
343/343 [==============================] - 550s 2s/step - loss: 0.3335 - acc: 0.8776 - kappa: 0.9751 - val_loss: 0.4253 - val_acc: 0.8612 - val_kappa: 0.9674
Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.249999968422344e-05.
Epoch 14/30
343/343 [==============================] - 549s 2s/step - loss: 0.3144 - acc: 0.8808 - kappa: 0.9775 - val_loss: 0.3882 - val_acc: 0.8656 - val_kappa: 0.9687
Epoch 15/30
343/343 [==============================] - 549s 2s/step - loss: 0.3115 - acc: 0.8827 - kappa: 0.9779 - val_loss: 0.3922 - val_acc: 0.8689 - val_kappa: 0.9705
Restoring model weights from the end of the best epoch
Epoch 00015: early stopping
| MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
Model loss graph | history = {'loss': history_warmup['loss'] + history_finetunning['loss'],
'val_loss': history_warmup['val_loss'] + history_finetunning['val_loss'],
'acc': history_warmup['acc'] + history_finetunning['acc'],
'val_acc': history_warmup['val_acc'] + history_finetunning['val_acc'],
'kappa': history_warmup['kappa'] + history_finetunning['kappa'],
'val_kappa': history_warmup['val_kappa'] + history_finetunning['val_kappa']}
sns.set_style("whitegrid")
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex='col', figsize=(20, 18))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
ax3.plot(history['kappa'], label='Train kappa')
ax3.plot(history['val_kappa'], label='Validation kappa')
ax3.legend(loc='best')
ax3.set_title('Kappa')
plt.xlabel('Epochs')
sns.despine()
plt.show() | _____no_output_____ | MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
Model Evaluation | lastFullTrainPred = np.empty((0, N_CLASSES))
lastFullTrainLabels = np.empty((0, N_CLASSES))
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_TRAIN+1):
im, lbl = next(train_generator)
scores = model.predict(im, batch_size=train_generator.batch_size)
lastFullTrainPred = np.append(lastFullTrainPred, scores, axis=0)
lastFullTrainLabels = np.append(lastFullTrainLabels, lbl, axis=0)
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(valid_generator)
scores = model.predict(im, batch_size=valid_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0) | _____no_output_____ | MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
Threshold optimization | def find_best_fixed_threshold(preds, targs, do_plot=True):
best_thr_list = [0 for i in range(preds.shape[1])]
for index in reversed(range(1, preds.shape[1])):
score = []
thrs = np.arange(0, 1, 0.01)
for thr in thrs:
preds_thr = [index if x[index] > thr else np.argmax(x) for x in preds]
score.append(cohen_kappa_score(targs, preds_thr))
score = np.array(score)
pm = score.argmax()
best_thr, best_score = thrs[pm], score[pm].item()
best_thr_list[index] = best_thr
print(f'thr={best_thr:.3f}', f'F2={best_score:.3f}')
if do_plot:
plt.plot(thrs, score)
plt.vlines(x=best_thr, ymin=score.min(), ymax=score.max())
plt.text(best_thr+0.03, best_score-0.01, ('Kappa[%s]=%.3f'%(index, best_score)), fontsize=14);
plt.show()
return best_thr_list
lastFullComPred = np.concatenate((lastFullTrainPred, lastFullValPred))
lastFullComLabels = np.concatenate((lastFullTrainLabels, lastFullValLabels))
complete_labels = [np.argmax(label) for label in lastFullComLabels]
threshold_list = find_best_fixed_threshold(lastFullComPred, complete_labels, do_plot=True)
threshold_list[0] = 0 # Default to label 0 when no other class clears its threshold
train_preds = [np.argmax(pred) for pred in lastFullTrainPred]
train_labels = [np.argmax(label) for label in lastFullTrainLabels]
validation_preds = [np.argmax(pred) for pred in lastFullValPred]
validation_labels = [np.argmax(label) for label in lastFullValLabels]
train_preds_opt = [0 for i in range(lastFullTrainPred.shape[0])]
for idx, thr in enumerate(threshold_list):
for idx2, pred in enumerate(lastFullTrainPred):
if pred[idx] > thr:
train_preds_opt[idx2] = idx
validation_preds_opt = [0 for i in range(lastFullValPred.shape[0])]
for idx, thr in enumerate(threshold_list):
for idx2, pred in enumerate(lastFullValPred):
if pred[idx] > thr:
validation_preds_opt[idx2] = idx | thr=0.390 F2=0.812
| MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
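The optimized predictions above override the plain argmax whenever a class's score clears its tuned threshold. A minimal sketch of that decision rule, using hypothetical scores and a hypothetical threshold (not the notebook's predictions):

```python
import numpy as np

def apply_threshold(preds, index, thr):
    """Assign class `index` whenever its score exceeds `thr`;
    otherwise fall back to the plain argmax decision."""
    return [index if p[index] > thr else int(np.argmax(p)) for p in preds]

# Hypothetical per-class scores for a 3-class problem
preds = np.array([[0.6, 0.3, 0.1],   # class 2 below threshold -> argmax 0
                  [0.5, 0.1, 0.4],   # class 2 above threshold -> 2
                  [0.2, 0.7, 0.1]])  # class 2 below threshold -> argmax 1

print(apply_threshold(preds, index=2, thr=0.35))  # [0, 2, 1]
```

In the notebook the loop runs over every class in index order, so when several thresholds fire, the highest class index overwrites the earlier assignments.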
Confusion Matrix | fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax2).set_title('Validation')
plt.show()
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
train_cnf_matrix = confusion_matrix(train_labels, train_preds_opt)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds_opt)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train optimized')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax2).set_title('Validation optimized')
plt.show() | _____no_output_____ | MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
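The heatmaps above use row-normalized confusion matrices: each row is divided by its true-class count, so the cells are fractions of a true class and the diagonal reads as per-class recall. A tiny sketch of the normalization with a made-up 2×2 matrix:

```python
import numpy as np

cm = np.array([[8, 2],
               [1, 9]])
# Divide each row by its sum so every row sums to 1
cm_norm = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]
print(cm_norm)
# [[0.8 0.2]
#  [0.1 0.9]]
```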
Quadratic Weighted Kappa | print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))
print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))
print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds+validation_preds, train_labels+validation_labels, weights='quadratic'))
print("Train optimized Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds_opt, train_labels, weights='quadratic'))
print("Validation optimized Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds_opt, validation_labels, weights='quadratic'))
print("Complete optimized set Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds_opt+validation_preds_opt, train_labels+validation_labels, weights='quadratic')) | Train Cohen Kappa score: 0.930
Validation Cohen Kappa score: 0.906
Complete set Cohen Kappa score: 0.924
Train optimized Cohen Kappa score: 0.915
Validation optimized Cohen Kappa score: 0.896
Complete optimized set Cohen Kappa score: 0.910
| MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
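Quadratic weighted kappa penalizes disagreements by the squared distance between classes, which is why it suits this ordinal grading task. As a sanity check, it can be computed from scratch; this sketch uses toy labels (not the notebook's predictions) and should agree with `cohen_kappa_score(..., weights='quadratic')`:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    # Observed confusion matrix
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights: w_ij = (i - j)^2 / (N - 1)^2
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], dtype=float) / (n_classes - 1) ** 2
    # Expected matrix under independence of the row/column marginals
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()

y_true = [0, 1, 2, 2, 4, 3]
y_pred = [0, 1, 1, 2, 4, 2]
print(round(quadratic_weighted_kappa(y_true, y_pred, 5), 6))  # 0.9
```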
Apply model to test set and output predictions | test_generator.reset()
STEP_SIZE_TEST = test_generator.n//test_generator.batch_size
preds = model.predict_generator(test_generator, steps=STEP_SIZE_TEST)
predictions = [np.argmax(pred) for pred in preds]
predictions_opt = [0 for i in range(preds.shape[0])]
for idx, thr in enumerate(threshold_list):
for idx2, pred in enumerate(preds):
if pred[idx] > thr:
predictions_opt[idx2] = idx
filenames = test_generator.filenames
results = pd.DataFrame({'id_code':filenames, 'diagnosis':predictions})
results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])
results_opt = pd.DataFrame({'id_code':filenames, 'diagnosis':predictions_opt})
results_opt['id_code'] = results_opt['id_code'].map(lambda x: str(x)[:-4]) | _____no_output_____ | MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
Predictions class distribution | fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d", ax=ax1)
sns.countplot(x="diagnosis", data=results_opt, palette="GnBu_d", ax=ax2)
sns.despine()
plt.show()
val_kappa = cohen_kappa_score(validation_preds, validation_labels, weights='quadratic')
val_opt_kappa = cohen_kappa_score(validation_preds_opt, validation_labels, weights='quadratic')
if val_kappa > val_opt_kappa:
results_name = 'submission.csv'
results_opt_name = 'submission_opt.csv'
else:
results_name = 'submission_norm.csv'
results_opt_name = 'submission.csv'
results.to_csv(results_name, index=False)
results.head(10)
results_opt.to_csv(results_opt_name, index=False)
results_opt.head(10) | _____no_output_____ | MIT | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection |
Task: Predict User Item response under uniform exposure while learning from biased training dataMany current applications use recommendations in order to modify the natural user behavior, such as to increase the number of sales or the time spent on a website. This results in a gap between the final recommendation objective and the classical setup where recommendation candidates are evaluated by their coherence with past user behavior, by predicting either the missing entries in the user-item matrix, or the most likely next event. To bridge this gap, we optimize a recommendation policy for the task of increasing the desired outcome versus the organic user behavior. We show this is equivalent to learning to predict recommendation outcomes under a fully random recommendation policy. To this end, we propose a new domain adaptation algorithm that learns from logged data containing outcomes from a biased recommendation policy and predicts recommendation outcomes according to random exposure. We compare our method against state-of-the-art factorization methods and new approaches of causal recommendation and show significant improvements. Dataset**MovieLens 100k dataset** was collected by the GroupLens Research Project at the University of Minnesota. This data set consists of: * 100,000 ratings (1-5) from 943 users on 1682 movies. * Each user has rated at least 20 movies. The data was collected through the MovieLens web site (movielens.umn.edu) during the seven-month period from September 19th, 1997 through April 22nd, 1998. Solution:**Causal Matrix Factorization** - for more details see: https://arxiv.org/abs/1706.07639 Metrics: * MSE - Mean Squared Error * NLL - Negative Log Likelihood * AUC - Area Under the Curve---------------------------------------------------------- Questions: Q1: Add the definition for create_counterfactual_regularizer() method Q2: Compare the results of using variable values for cf_pen hyperparameter (0 vs. 
bigger) Q3: Compare different types of optimizers Q4: Push the performance as high as possible! | %%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
import os
import string
import tempfile
import time
import numpy as np
import matplotlib.pyplot as plt
import csv
import random
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector
from tensorboard import summary as summary_lib
from __future__ import absolute_import
from __future__ import print_function
tf.set_random_seed(42)
tf.logging.set_verbosity(tf.logging.INFO)
print(tf.__version__)
# Hyper-Parameters
flags = tf.app.flags
tf.app.flags.DEFINE_string('f', '', 'kernel')
flags.DEFINE_string('data_set', 'user_prod_dict.skew.', 'Dataset string.') # Reg Skew
flags.DEFINE_string('adapt_stat', 'adapt_2i', 'Adapt String.') # Adaptation strategy
flags.DEFINE_string('model_name', 'cp2v', 'Name of the model for saving.')
flags.DEFINE_float('learning_rate', 1.0, 'Initial learning rate.')
flags.DEFINE_integer('num_epochs', 1, 'Number of epochs to train.')
flags.DEFINE_integer('num_steps', 100, 'Number of steps after which to test.')
flags.DEFINE_integer('embedding_size', 100, 'Size of each embedding vector.')
flags.DEFINE_integer('batch_size', 512, 'How big is a batch of training.')
flags.DEFINE_float('cf_pen', 10.0, 'Counterfactual regularizer hyperparam.')
flags.DEFINE_float('l2_pen', 0.0, 'L2 regularizer hyperparam.')
flags.DEFINE_string('cf_loss', 'l1', 'Use L1 or L2 for the loss .')
FLAGS = tf.app.flags.FLAGS
#_DATA_PATH = "/Users/f.vasile/MyFolders/MyProjects/1.MyPapers/2018_Q2_DS3_Course/code/cp2v/src/Data/"
_DATA_PATH = "./data/"
train_data_set_location = _DATA_PATH + FLAGS.data_set + "train." + FLAGS.adapt_stat + ".csv" # Location of train dataset
test_data_set_location = _DATA_PATH + FLAGS.data_set + "test." + FLAGS.adapt_stat + ".csv" # Location of the test dataset
validation_test_set_location = _DATA_PATH + FLAGS.data_set + "valid_test." + FLAGS.adapt_stat + ".csv" # Location of the validation dataset
validation_train_set_location = _DATA_PATH + FLAGS.data_set + "valid_train." + FLAGS.adapt_stat + ".csv" #Location of the validation dataset
model_name = FLAGS.model_name + ".ckpt"
print(train_data_set_location)
def calculate_vocab_size(file_location):
"""Calculate the total number of unique elements in the dataset"""
with open(file_location, 'r') as csvfile:
reader = csv.reader(csvfile, delimiter=',')
useridtemp = []
productid = []
for row in reader:
useridtemp.append(row[0])
productid.append(row[1])
userid_size = len(set(useridtemp))
productid_size = len(set(productid))
return userid_size, productid_size
userid_size, productid_size = calculate_vocab_size(train_data_set_location) # Calculate the total number of unique elements in the dataset
print(str(userid_size))
print(str(productid_size))
plot_gradients = False # Plot the gradients
cost_val = []
tf.set_random_seed(42)
def load_train_dataset(dataset_location, batch_size, num_epochs):
"""Load the training data using TF Dataset API"""
with tf.name_scope('train_dataset_loading'):
record_defaults = [[1], [1], [0.]] # Sets the type of the resulting tensors and default values
# Dataset is in the format - UserID ProductID Rating
dataset = tf.data.TextLineDataset(dataset_location).map(lambda line: tf.decode_csv(line, record_defaults=record_defaults))
dataset = dataset.shuffle(buffer_size=10000)
dataset = dataset.batch(batch_size)
dataset = dataset.cache()
dataset = dataset.repeat(num_epochs)
iterator = dataset.make_one_shot_iterator()
user_batch, product_batch, label_batch = iterator.get_next()
label_batch = tf.expand_dims(label_batch, 1)
return user_batch, product_batch, label_batch
def load_test_dataset(dataset_location):
"""Load the test and validation datasets"""
user_list = []
product_list = []
labels = []
with open(dataset_location, 'r') as f:
reader = csv.reader(f)
for row in reader:
user_list.append(row[0])
product_list.append(row[1])
labels.append(row[2])
labels = np.reshape(labels, [-1, 1])
cr = compute_empirical_cr(labels)
return user_list, product_list, labels, cr
def compute_2i_regularization_id(prods, num_products):
"""Compute the ID for the regularization for the 2i approach"""
reg_ids = []
# Loop through batch and compute if the product ID is greater than the number of products
for x in np.nditer(prods):
if x >= num_products:
reg_ids.append(x)
elif x < num_products:
reg_ids.append(x + num_products) # Add number of products to create the 2i representation
return np.asarray(reg_ids)
def generate_bootstrap_batch(seed, data_set_size):
"""Generate the IDs for the bootstap"""
random.seed(seed)
ids = [random.randint(0, data_set_size-1) for j in range(int(data_set_size*0.8))]
return ids
def compute_empirical_cr(labels):
"""Compute the cr from the empirical data"""
labels = labels.astype(np.float)
clicks = np.count_nonzero(labels)
views = len(np.where(labels==0)[0])
cr = float(clicks)/float(views)
return cr
def create_average_predictor_tensors(label_list_placeholder, logits_placeholder):
"""Create the tensors required to run the averate predictor for the bootstraps"""
with tf.device('/cpu:0'):
with tf.variable_scope('ap_logits'):
ap_logits = tf.reshape(logits_placeholder, [tf.shape(label_list_placeholder)[0], 1])
with tf.name_scope('ap_losses'):
ap_mse_loss = tf.losses.mean_squared_error(labels=label_list_placeholder, predictions=ap_logits)
ap_log_loss = tf.losses.log_loss(labels=label_list_placeholder, predictions=ap_logits)
with tf.name_scope('ap_metrics'):
# Add performance metrics to the tensorflow graph
ap_correct_predictions = tf.equal(tf.round(ap_logits), label_list_placeholder)
ap_accuracy = tf.reduce_mean(tf.cast(ap_correct_predictions, tf.float32))
return ap_mse_loss, ap_log_loss
def compute_bootstraps_2i(sess, model, test_user_batch, test_product_batch, test_label_batch, test_logits, running_vars_initializer, ap_mse_loss, ap_log_loss):
"""Compute the bootstraps for the 2i model"""
data_set_size = len(test_user_batch)
mse = []
llh = []
ap_mse = []
ap_llh = []
auc_list = []
mse_diff = []
llh_diff = []
    # Compute the bootstrap values for the test split - this computes the empirical CR as well for comparison
for i in range(30):
ids = generate_bootstrap_batch(i*2, data_set_size)
test_user_batch = np.asarray(test_user_batch)
test_product_batch = np.asarray(test_product_batch)
test_label_batch = np.asarray(test_label_batch)
# Reset the running variables used for the AUC
sess.run(running_vars_initializer)
# Construct the feed-dict for the model and the average predictor
feed_dict = {model.user_list_placeholder : test_user_batch[ids], model.product_list_placeholder: test_product_batch[ids], model.label_list_placeholder: test_label_batch[ids], model.logits_placeholder: test_logits[ids], model.reg_list_placeholder: test_product_batch[ids]}
# Run the model test step updating the AUC object
_, loss_val, mse_loss_val, log_loss_val = sess.run([model.auc_update_op, model.loss, model.mse_loss, model.log_loss], feed_dict=feed_dict)
auc_score = sess.run(model.auc, feed_dict=feed_dict)
# Run the Average Predictor graph
ap_mse_val, ap_log_val = sess.run([ap_mse_loss, ap_log_loss], feed_dict=feed_dict)
mse.append(mse_loss_val)
llh.append(log_loss_val)
ap_mse.append(ap_mse_val)
ap_llh.append(ap_log_val)
auc_list.append(auc_score)
for i in range(30):
mse_diff.append((ap_mse[i]-mse[i]) / ap_mse[i])
llh_diff.append((ap_llh[i]-llh[i]) / ap_llh[i])
print("MSE Mean Score On The Bootstrap = ", np.mean(mse))
print("MSE Mean Lift Over Average Predictor (%) = ", np.round(np.mean(mse_diff)*100, decimals=2))
print("MSE STD (%) =" , np.round(np.std(mse_diff)*100, decimals=2))
print("LLH Mean Over Average Predictor (%) =", np.round(np.mean(llh_diff)*100, decimals=2))
print("LLH STD (%) = ", np.round(np.std(llh_diff)*100, decimals=2))
print("Mean AUC Score On The Bootstrap = ", np.round(np.mean(auc_list), decimals=4), "+/-", np.round(np.std(auc_list), decimals=4)) | _____no_output_____ | MIT | application/part_3/CausalMF-questions.ipynb | vishalbelsare/intro-to-reco |
About Supervised Prod2vec - Class to define MF of the implicit feedback matrix (1/0/unk) of Users x Products- When called it creates the TF graph for the associated NN:Step1: self.create_placeholders() => Creates the input placeholdersStep2: self.build_graph() => Creates the 3 layers: - the user embedding layer - the product embedding layer - the output prediction layerStep3: self.create_losses() => Defines the loss function for predictionStep4: self.add_optimizer() => Defines the optimizerStep5: self.add_performance_metrics() => Defines the logging performance metrics ???Step6: self.add_summaries() => Defines the final performance stats | class SupervisedProd2vec():
def __init__(self, userid_size, productid_size, embedding_size, l2_pen, learning_rate):
self.userid_size = userid_size
self.productid_size = productid_size
self.embedding_size = embedding_size
self.l2_pen = l2_pen
self.learning_rate = learning_rate
# Build the graph
self.create_placeholders()
self.build_graph()
self.create_losses()
self.add_optimizer()
self.add_performance_metrics()
self.add_summaries()
def create_placeholders(self):
"""Create the placeholders to be used """
self.user_list_placeholder = tf.placeholder(tf.int32, [None], name="user_list_placeholder")
self.product_list_placeholder = tf.placeholder(tf.int32, [None], name="product_list_placeholder")
self.label_list_placeholder = tf.placeholder(tf.float32, [None, 1], name="label_list_placeholder")
# logits placeholder used to store the test CR for the bootstrapping process
self.logits_placeholder = tf.placeholder(tf.float32, [None], name="logits_placeholder")
def build_graph(self):
"""Build the main tensorflow graph with embedding layers"""
with tf.name_scope('embedding_layer'):
# User matrix and current batch
self.user_embeddings = tf.get_variable("user_embeddings", shape=[self.userid_size, self.embedding_size], initializer=tf.contrib.layers.xavier_initializer(), trainable=True)
self.user_embed = tf.nn.embedding_lookup(self.user_embeddings, self.user_list_placeholder) # Lookup the Users for the given batch
self.user_b = tf.Variable(tf.zeros([self.userid_size]), name='user_b', trainable=True)
self.user_bias_embed = tf.nn.embedding_lookup(self.user_b, self.user_list_placeholder)
# Product embedding
self.product_embeddings = tf.get_variable("product_embeddings", shape=[self.productid_size, self.embedding_size], initializer=tf.contrib.layers.xavier_initializer(), trainable=True)
            self.product_embed = tf.nn.embedding_lookup(self.product_embeddings, self.product_list_placeholder) # Look up the product embeddings for the given batch
self.prod_b = tf.Variable(tf.zeros([self.productid_size]), name='prod_b', trainable=True)
self.prod_bias_embed = tf.nn.embedding_lookup(self.prod_b, self.product_list_placeholder)
with tf.variable_scope('logits'):
self.b = tf.get_variable('b', [1], initializer=tf.constant_initializer(0.0, dtype=tf.float32), trainable=True)
self.alpha = tf.get_variable('alpha', [], initializer=tf.constant_initializer(0.00000001, dtype=tf.float32), trainable=True)
# alpha * <user_i, prod_j>
self.emb_logits = self.alpha * tf.reshape(tf.reduce_sum(tf.multiply(self.user_embed, self.product_embed), 1), [tf.shape(self.user_list_placeholder)[0], 1])
#prod_bias + user_bias + global_bias
self.logits = tf.reshape(tf.add(self.prod_bias_embed, self.user_bias_embed), [tf.shape(self.user_list_placeholder)[0], 1]) + self.b
self.logits = self.emb_logits + self.logits
self.prediction = tf.sigmoid(self.logits, name='sigmoid_prediction')
def create_losses(self):
"""Create the losses"""
with tf.name_scope('losses'):
#Sigmoid loss between the logits and labels
self.loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits, labels=self.label_list_placeholder))
#Adding the regularizer term on user vct and prod vct
self.loss = self.loss + self.l2_pen * tf.nn.l2_loss(self.user_embeddings) + self.l2_pen * tf.nn.l2_loss(self.product_embeddings) + self.l2_pen * tf.nn.l2_loss(self.prod_b) + self.l2_pen * tf.nn.l2_loss(self.user_b)
#Compute MSE loss
self.mse_loss = tf.losses.mean_squared_error(labels=self.label_list_placeholder, predictions=tf.sigmoid(self.logits))
#Compute Log loss
self.log_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits, labels=self.label_list_placeholder))
def add_optimizer(self):
"""Add the required optimiser to the graph"""
with tf.name_scope('optimizer'):
# Global step variable to keep track of the number of training steps
self.global_step = tf.Variable(0, dtype=tf.int32, trainable=False, name='global_step')
self.apply_grads = tf.train.GradientDescentOptimizer(self.learning_rate).minimize(self.loss, global_step=self.global_step)
def add_performance_metrics(self):
"""Add the required performance metrics to the graph"""
with tf.name_scope('performance_metrics'):
# Add performance metrics to the tensorflow graph
correct_predictions = tf.equal(tf.round(self.prediction), self.label_list_placeholder)
self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32), name="accuracy")
self.auc, self.auc_update_op = tf.metrics.auc(labels=self.label_list_placeholder, predictions=self.prediction, num_thresholds=1000, name="auc_metric")
def add_summaries(self):
"""Add the required summaries to the graph"""
with tf.name_scope('summaries'):
# Add loss to the summaries
tf.summary.scalar('total_loss', self.loss)
tf.summary.histogram('histogram_total_loss', self.loss)
# Add weights to the summaries
tf.summary.histogram('user_embedding_weights', self.user_embeddings)
tf.summary.histogram('product_embedding_weights', self.product_embeddings)
tf.summary.histogram('logits', self.logits)
tf.summary.histogram('prod_b', self.prod_b)
tf.summary.histogram('user_b', self.user_b)
tf.summary.histogram('global_bias', self.b)
tf.summary.scalar('alpha', self.alpha)
| _____no_output_____ | MIT | application/part_3/CausalMF-questions.ipynb | vishalbelsare/intro-to-reco |
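The prediction rule assembled in `build_graph` above is standard biased matrix factorization: `sigmoid(alpha * <user_i, prod_j> + user_bias + prod_bias + global_bias)`. As a sanity check, that scoring function can be sketched in plain NumPy (the function name `mf_predict` is ours, for illustration only — the class computes the same quantity with TF ops):

```python
import numpy as np

def mf_predict(user_vec, prod_vec, user_bias, prod_bias, global_bias, alpha):
    """Biased MF score: sigmoid(alpha * <u, p> + b_u + b_p + b)."""
    logit = alpha * np.dot(user_vec, prod_vec) + user_bias + prod_bias + global_bias
    return 1.0 / (1.0 + np.exp(-logit))

# With zero biases and orthogonal-to-zero embeddings the score is sigmoid(0) = 0.5,
# i.e. the model starts out maximally uncertain before training.
```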
CausalProd2Vec2i - inherits from SupervisedProd2vec- Class to define the causal version of MF of the implicit feedback matrix (1/0/unk) of Users x Products- When called it creates the TF graph for the associated NN:**Step1: Changed: +regularizer placeholder** self.create_placeholders() => Creates the input placeholders **Step2:** self.build_graph() => Creates the 3 layers: - the user embedding layer - the product embedding layer - the output prediction layer**New:** self.create_control_embeddings() self.create_counter_factual_loss()**Step3: Changed: +add regularizer between embeddings** self.create_losses() => Defines the loss function for prediction**Step4:** self.add_optimizer() => Defines the optimizer**Step5:** self.add_performance_metrics() => Defines the logging performance metrics ???**Step6:** self.add_summaries() => Defines the final performance stats | class CausalProd2Vec2i(SupervisedProd2vec):
def __init__(self, userid_size, productid_size, embedding_size, l2_pen, learning_rate, cf_pen, cf='l1'):
self.userid_size = userid_size
self.productid_size = productid_size * 2 # Doubled to accommodate the treatment embeddings
self.embedding_size = embedding_size
self.l2_pen = l2_pen
self.learning_rate = learning_rate
self.cf_pen = cf_pen
self.cf = cf
# Build the graph
self.create_placeholders()
self.build_graph()
self.create_control_embeddings()
#self.create_counterfactual_regularizer()
self.create_losses()
self.add_optimizer()
self.add_performance_metrics()
self.add_summaries()
def create_placeholders(self):
"""Create the placeholders to be used """
self.user_list_placeholder = tf.placeholder(tf.int32, [None], name="user_list_placeholder")
self.product_list_placeholder = tf.placeholder(tf.int32, [None], name="product_list_placeholder")
self.label_list_placeholder = tf.placeholder(tf.float32, [None, 1], name="label_list_placeholder")
self.reg_list_placeholder = tf.placeholder(tf.int32, [None], name="reg_list_placeholder")
# logits placeholder used to store the test CR for the bootstrapping process
self.logits_placeholder = tf.placeholder(tf.float32, [None], name="logits_placeholder")
def create_control_embeddings(self):
"""Create the control embeddings"""
with tf.name_scope('control_embedding'):
# Get the control embedding at id 0
self.control_embed = tf.stop_gradient(tf.nn.embedding_lookup(self.product_embeddings, self.reg_list_placeholder))
#################################
## SOLUTION TO Q1 GOES HERE! ##
#################################
#def create_counterfactual_regularizer(self):
# self.cf_reg
def create_losses(self):
"""Create the losses"""
with tf.name_scope('losses'):
#Sigmoid loss between the logits and labels
self.log_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits, labels=self.label_list_placeholder))
#Adding the regularizer term on user vct and prod vct and their bias terms
reg_term = self.l2_pen * ( tf.nn.l2_loss(self.user_embeddings) + tf.nn.l2_loss(self.product_embeddings) )
reg_term_biases = self.l2_pen * ( tf.nn.l2_loss(self.prod_b) + tf.nn.l2_loss(self.user_b) )
self.loss = self.log_loss + reg_term + reg_term_biases
#Adding the counterfactual regularizer term
# Q1: Write the method that computes the counterfactual regularizer
#self.create_counterfactual_regularizer()
#self.loss = self.loss + (self.cf_pen * self.cf_reg)
#Additionally compute the MSE loss
self.mse_loss = tf.losses.mean_squared_error(labels=self.label_list_placeholder, predictions=tf.sigmoid(self.logits))
| _____no_output_____ | MIT | application/part_3/CausalMF-questions.ipynb | vishalbelsare/intro-to-reco |
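For Q1, the general idea (this is our hedged sketch of the quantity involved, not the official solution) is to penalize the distance between each treatment product embedding and its stop-gradient control counterpart (`self.control_embed`), with the `cf` flag selecting an L1 or L2 distance. Stripped of TF plumbing, the computation might look like:

```python
import numpy as np

def cf_regularizer(treatment_embed, control_embed, cf='l1'):
    """Distance between treatment and (stop-gradient) control embeddings.

    treatment_embed, control_embed: arrays of shape [batch, embedding_size].
    Illustrative only; in the notebook this would be built from TF ops so it
    can be added to self.loss and differentiated.
    """
    diff = treatment_embed - control_embed
    if cf == 'l1':
        return np.sum(np.abs(diff))   # L1 distance
    return np.sum(diff ** 2)          # L2 (squared) distance
```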
Create the TF Graph | # Create graph object
graph = tf.Graph()
with graph.as_default():
with tf.device('/cpu:0'):
# Load the required graph
### Number of products and users
productid_size = 1683
userid_size = 944
model = CausalProd2Vec2i(userid_size, productid_size+1, FLAGS.embedding_size, FLAGS.l2_pen, FLAGS.learning_rate, FLAGS.cf_pen, cf=FLAGS.cf_loss)
ap_mse_loss, ap_log_loss = create_average_predictor_tensors(model.label_list_placeholder, model.logits_placeholder)
# Define initializer to initialize/reset running variables
running_vars = tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES, scope="performance_metrics/auc_metric")
running_vars_initializer = tf.variables_initializer(var_list=running_vars)
# Get train data batch from queue
next_batch = load_train_dataset(train_data_set_location, FLAGS.batch_size, FLAGS.num_epochs)
test_user_batch, test_product_batch, test_label_batch, test_cr = load_test_dataset(test_data_set_location)
val_test_user_batch, val_test_product_batch, val_test_label_batch, val_cr = load_test_dataset(validation_test_set_location)
val_train_user_batch, val_train_product_batch, val_train_label_batch, val_cr = load_test_dataset(validation_train_set_location)
# create the empirical CR test logits
test_logits = np.empty(len(test_label_batch))
test_logits.fill(test_cr)
| _____no_output_____ | MIT | application/part_3/CausalMF-questions.ipynb | vishalbelsare/intro-to-reco |
Launch the Session: Train the model | # Launch the Session
with tf.Session(graph=graph, config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=False)) as sess:
# initialise all the TF variables
init_op = tf.global_variables_initializer()
sess.run(init_op)
# Setup tensorboard: tensorboard --logdir=/tmp/tensorboard
time_tb = str(time.ctime(int(time.time())))
train_writer = tf.summary.FileWriter('/tmp/tensorboard' + '/train' + time_tb, sess.graph)
test_writer = tf.summary.FileWriter('/tmp/tensorboard' + '/test' + time_tb, sess.graph)
merged = tf.summary.merge_all()
# Embeddings viz (Possible to add labels for embeddings later)
saver = tf.train.Saver()
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = model.product_embeddings.name
projector.visualize_embeddings(train_writer, config)
# Variables used in the training loop
t = time.time()
step = 0
average_loss = 0
average_mse_loss = 0
average_log_loss = 0
# Start the training loop---------------------------------------------------------------------------------------------
print("Starting Training On Causal Prod2Vec")
print(FLAGS.cf_loss)
print("Num Epochs = ", FLAGS.num_epochs)
print("Learning Rate = ", FLAGS.learning_rate)
print("L2 Reg = ", FLAGS.l2_pen)
print("CF Reg = ", FLAGS.cf_pen)
try:
while True:
# Run the TRAIN for this step batch ---------------------------------------------------------------------
# Construct the feed_dict
user_batch, product_batch, label_batch = sess.run(next_batch)
# Treatment is the small set of samples from St, Control is the larger set of samples from Sc
reg_ids = compute_2i_regularization_id(product_batch, productid_size) # Compute the product ID's for regularization
feed_dict = {model.user_list_placeholder : user_batch, model.product_list_placeholder: product_batch, model.reg_list_placeholder: reg_ids, model.label_list_placeholder: label_batch}
# Run the graph
_, sum_str, loss_val, mse_loss_val, log_loss_val = sess.run([model.apply_grads, merged, model.loss, model.mse_loss, model.log_loss], feed_dict=feed_dict)
step +=1
average_loss += loss_val
average_mse_loss += mse_loss_val
average_log_loss += log_loss_val
# Every num_steps print average loss
if step % FLAGS.num_steps == 0:
if step > FLAGS.num_steps:
# The average loss is an estimate of the loss over the last set batches.
average_loss /= FLAGS.num_steps
average_mse_loss /= FLAGS.num_steps
average_log_loss /= FLAGS.num_steps
print("Average Training Loss on S_c (FULL, MSE, NLL) at step ", step, ": ", average_loss, ": ", average_mse_loss, ": ", average_log_loss, "Time taken (S) = " + str(round(time.time() - t, 1)))
average_loss = 0
t = time.time() # reset the time
train_writer.add_summary(sum_str, step) # Write the summary
# Run the VALIDATION for this step batch ---------------------------------------------------------------------
val_train_product_batch = np.asarray(val_train_product_batch, dtype=np.float32)
val_test_product_batch = np.asarray(val_test_product_batch, dtype=np.float32)
val_train_reg_ids = compute_2i_regularization_id(val_train_product_batch, productid_size) # Compute the product ID's for regularization
val_test_reg_ids = compute_2i_regularization_id(val_test_product_batch, productid_size) # Compute the product ID's for regularization
feed_dict_test = {model.user_list_placeholder : val_test_user_batch, model.product_list_placeholder: val_test_product_batch, model.reg_list_placeholder: val_test_reg_ids, model.label_list_placeholder: val_test_label_batch}
feed_dict_train = {model.user_list_placeholder : val_train_user_batch, model.product_list_placeholder: val_train_product_batch, model.reg_list_placeholder: val_train_reg_ids, model.label_list_placeholder: val_train_label_batch}
sum_str, loss_val, mse_loss_val, log_loss_val = sess.run([merged, model.loss, model.mse_loss, model.log_loss], feed_dict=feed_dict_train)
print("Validation loss on S_c (FULL, MSE, NLL) at step ", step, ": ", loss_val, ": ", mse_loss_val, ": ", log_loss_val)
sum_str, loss_val, mse_loss_val, log_loss_val = sess.run([merged, model.loss, model.mse_loss, model.log_loss], feed_dict=feed_dict_test)
cost_val.append(loss_val)
print("Validation loss on S_t (FULL, MSE, NLL) at step ", step, ": ", loss_val, ": ", mse_loss_val, ": ", log_loss_val)
print("####################################################################################################################")
test_writer.add_summary(sum_str, step) # Write the summary
except tf.errors.OutOfRangeError:
print("Reached the number of epochs")
finally:
saver.save(sess, os.path.join('/tmp/tensorboard', model_name), model.global_step) # Save model
train_writer.close()
print("Training Complete")
# Run the bootstrap for this model ---------------------------------------------------------------------------------------------------------------
print("Begin Bootstrap process...")
print("Running BootStrap On The Control Representations")
compute_bootstraps_2i(sess, model, test_user_batch, test_product_batch, test_label_batch, test_logits, running_vars_initializer, ap_mse_loss, ap_log_loss)
print("Running BootStrap On The Treatment Representations")
test_product_batch = [int(x) + productid_size for x in test_product_batch]
compute_bootstraps_2i(sess, model, test_user_batch, test_product_batch, test_label_batch, test_logits, running_vars_initializer, ap_mse_loss, ap_log_loss)
| _____no_output_____ | MIT | application/part_3/CausalMF-questions.ipynb | vishalbelsare/intro-to-reco |
Basketball Shooting Practice[Watch on YouTube](https://www.youtube.com/watch?v=Tm2ruZQLcqE&list=PL-j7ku2URmjZYtWzMCS4AqFS5SXPXRHwf)When Nick shoots a basketball, he either sinks the shot or misses. For each shot Nick sinks, he is given 5 points by his father. For each missed shot, Nick’s Dad takes 2 points away.Nick attempts a total of 28 shots and ends up with zero points (i.e. he breaks even). How many shots did Nick sink?from https://www.cemc.uwaterloo.ca/resources/potw/2019-20/English/POTWB-19-NN-01-P.pdf | import pandas as pd
shots = 28
final_score = 0
shots_dataframe = pd.DataFrame(columns=['Sunk', 'Missed', 'Points'])
for sunk in range(0,shots+1):
missed = shots - sunk
points = sunk * 5 - missed * 2
shots_dataframe = shots_dataframe.append({'Sunk':sunk, 'Missed':missed, 'Points':points}, ignore_index=True)
if points == final_score:
print('Nick ends up with', final_score, 'points if he sinks', sunk, 'shots.')
shots_dataframe
%matplotlib inline
shots_dataframe.plot() | _____no_output_____ | CC-BY-4.0 | notebooks/basketball-shooting-practice.ipynb | callysto/interesting-problems |
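The brute-force table above can be cross-checked algebraically: with `s` sunk shots the score is `5*s - 2*(28 - s) = 7*s - 56`, which is zero when `s = 8`. A quick check:

```python
shots = 28
# Solve 5*s - 2*(shots - s) = 0  =>  7*s = 2*shots  =>  s = 2*shots / 7
sunk = 2 * shots // 7
print(sunk)                            # 8
print(5 * sunk - 2 * (shots - sunk))   # 0 -- Nick breaks even
```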
Ray RLlib - Introduction to Reinforcement Learning© 2019-2021, Anyscale. All Rights Reserved_Reinforcement Learning_ is the category of machine learning that focuses on training one or more _agents_ to achieve maximal _rewards_ while operating in an environment. This lesson discusses the core concepts of RL, while subsequent lessons explore RLlib in depth. We'll use two examples with exercises to give you a taste of RL. If you already understand RL concepts, you can either skim this lesson or skip to the [next lesson](02-Introduction-to-RLlib.ipynb). What Is Reinforcement Learning?Let's explore the basic concepts of RL, specifically the _Markov Decision Process_ abstraction, and show its use in Python.Consider the following image:In RL, one or more **agents** interact with an **environment** to maximize a **reward**. The agents make **observations** about the **state** of the environment and take **actions** that are believed to maximize the long-term reward. However, at any particular moment, the agents can only observe the immediate reward. So, the training process usually involves lots and lots of replay of the game, the robot simulator traversing a virtual space, etc., so the agents can learn from repeated trials what decisions/actions work best to maximize the long-term, cumulative reward.The trial-and-error search and delayed reward are the distinguishing characteristics of RL vs. other ML methods ([Sutton 2018](06-RL-References.ipynb#Books)).The way to formalize trial and error is the **exploitation vs. exploration tradeoff**. When an agent finds what appears to be a "rewarding" sequence of actions, the agent may naturally want to continue to **exploit** these actions. However, even better actions may exist. An agent won't know whether alternatives are better or not unless some percentage of actions taken **explore** the alternatives. So, all RL algorithms include a strategy for exploitation and exploration. 
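One of the simplest exploration strategies is *epsilon-greedy*: with probability epsilon take a random action, otherwise take the action with the highest estimated value. A minimal sketch (illustrative only; this helper is not part of the lesson's code):

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng=np.random.default_rng(0)):
    """With probability epsilon explore (random action); otherwise exploit."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # explore: any action
    return int(np.argmax(q_values))              # exploit: best-valued action
```

With `epsilon = 0` this always exploits; with `epsilon = 1` it always explores. Practical agents often anneal epsilon from high to low over training.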
RL ApplicationsRL has many potential applications. RL became "famous" due to these successes, including achieving expert game play, training robots, autonomous vehicles, and other simulated agents:Credits:* [AlphaGo](https://www.youtube.com/watch?v=l7ngy56GY6k)* [Breakout](https://towardsdatascience.com/tutorial-double-deep-q-learning-with-dueling-network-architectures-4c1b3fb7f756) ([paper](https://arxiv.org/abs/1312.5602))* [Stacking Legos with Sawyer](https://robohub.org/soft-actor-critic-deep-reinforcement-learning-with-real-world-robots/)* [Walking Man](https://openai.com/blog/openai-baselines-ppo/)* [Autonomous Vehicle](https://www.daimler.com/innovation/case/autonomous/intelligent-drive-2.html)* ["Cassie": Two-legged Robot](https://mime.oregonstate.edu/research/drl/robots/cassie/) (Uses Ray!) Recently other industry applications have emerged, including the following:* **Process optimization:** industrial processes (factories, pipelines) and other business processes, routing problems, cluster optimization.* **Ad serving and recommendations:** Some of the traditional methods, including _collaborative filtering_, are hard to scale for very large data sets. RL systems are being developed to do an effective job more efficiently than traditional methods.* **Finance:** Markets are time-oriented _environments_ where automated trading systems are the _agents_.
It consists of the following:- a **state space** where the current state of the system is sometimes called the **context**.- a set of **actions** that can be taken at a particular state $s$ (or sometimes the same set for all states).- a **transition function** that describes the probability of being in a state $s'$ at time $t+1$ given that the MDP was in state $s$ at time $t$ and action $a$ was taken. The next state is selected stochastically based on these probabilities.- a **reward function**, which determines the reward received at time $t$ following action $a$, based on the decision of **policy** $\pi$. The goal of MDP is to develop a **policy** $\pi$ that specifies what action $a$ should be chosen for a given state $s$ so that the cumulative reward is maximized. When it is possible for the policy "trainer" to fully observe all the possible states, actions, and rewards, it can define a deterministic policy, fixing a single action choice for each state. In this scenario, the transition probabilities reduce to the probability of transitioning to state $s'$ given the current state is $s$, independent of actions, because the state now leads to a deterministic action choice. Various algorithms can be used to compute this policy. 
Put another way, if the policy isn't deterministic, then the transition probability to state $s'$ at a time $t+1$ when action $a$ is taken for state $s$ at time $t$, is given by:\begin{equation}P_a(s',s) = P(s_{t+1} = s'|s_t=s,a)\end{equation}When the policy is deterministic, this transition probability reduces to the following, independent of $a$:\begin{equation}P(s',s) = P(s_{t+1} = s'|s_t=s)\end{equation}To be clear, a deterministic policy means that one and only one action will always be selected for a given state $s$, but the next state $s'$ will still be selected stochastically.In the general case of RL, it isn't possible to fully know all this information, some of which might be hidden and evolving, so it isn't possible to specify a fully-deterministic policy. Often this cumulative reward is computed using the **discounted sum** over all rewards observed:\begin{equation}\arg\max_{\pi} \sum_{t=1}^T \gamma^t R_t(\pi),\end{equation}where $T$ is the number of steps taken in the MDP (this is a random variable and may depend on $\pi$), $R_t$ is the reward received at time $t$ (also a random variable which depends on $\pi$), and $\gamma$ is the **discount factor**. The value of $\gamma$ is between 0 and 1, meaning it has the effect of "discounting" earlier rewards vs. more recent rewards. The [Wikipedia page on MDP](https://en.wikipedia.org/wiki/Markov_decision_process) provides more details. Note what we said in the third bullet, that the new state only depends on the previous state and the action taken. The assumption is that we can simplify our effort by ignoring all the previous states except the last one and still achieve good results. This is known as the [Markov property](https://en.wikipedia.org/wiki/Markov_property). This assumption often works well and it greatly reduces the resources required. 
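The discounted sum above is straightforward to compute for a finished episode. A small sketch, using the common convention that discounting starts at $\gamma^0$ for the first reward:

```python
def discounted_return(rewards, gamma):
    """Sum of gamma**t * r_t over an episode's rewards (t starting at 0)."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# For rewards [1, 1, 1] and gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
```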
The Elements of RLHere are the elements of RL that expand on MDP concepts (see [Sutton 2018](https://mitpress.mit.edu/books/reinforcement-learning-second-edition) for more details): PoliciesUnlike MDP, the **transition function** probabilities are often not known in advance, but must be learned. Learning is done through repeated "play", where the agent interacts with the environment.This makes the **policy** $\pi$ harder to determine. Because the full state space usually can't be fully known, the choice of action $a$ for a given state $s$ almost always remains a stochastic choice, never deterministic, unlike MDP. Reward SignalThe idea of a **reward signal** encapsulates the desired goal for the system and provides feedback for updating the policy based on how well particular events or actions contribute rewards towards the goal. Value FunctionThe **value function** encapsulates the maximum cumulative reward likely to be achieved starting from a given state for an **episode**. This is harder to determine than the simple reward returned after taking an action. In fact, much of the research in RL over the decades has focused on finding better and more efficient implementations of value functions. To illustrate the challenge, repeatedly taking one sequence of actions may yield low rewards for a while, but eventually provide large rewards. Conversely, always choosing a different sequence of actions may yield a good reward at each step, but be suboptimal for the cumulative reward. EpisodeA sequence of steps by the agent starting in an initial state. At each step, the agent observes the current state, chooses the next action, and receives the new reward. Episodes are used for both training policies and replaying with an existing policy (called _rollout_). ModelAn optional feature, some RL algorithms develop or use a **model** of the environment to anticipate the resulting states and rewards for future actions. Hence, they are useful for _planning_ scenarios. 
Methods for solving RL problems that use models are called _model-based methods_, while methods that learn by trial and error are called _model-free methods_. Reinforcement Learning ExampleTo finish this introduction, let's learn about the popular "hello world" (1) example environment for RL, balancing a pole vertically on a moving cart, called `CartPole`. Then we'll see how to use RLlib to train a policy using a popular RL algorithm, _Proximal Policy Optimization_, again using `CartPole`.(1) In books and tutorials on programming languages, it is a tradition that the very first program shown prints the message "Hello World!". CartPole and OpenAIThe popular [OpenAI "gym" environment](https://gym.openai.com/) provides MDP interfaces to a variety of simulated environments. Perhaps the most popular for learning RL is `CartPole`, a simple environment that simulates the physics of balancing a pole on a moving cart. The `CartPole` problem is described at https://gym.openai.com/envs/CartPole-v1. Here is an image from that website, where the pole is currently falling to the right, which means the cart will need to move to the right to restore balance: This example fits into the MDP framework as follows:- The **state** consists of the position and velocity of the cart (moving in one dimension from left to right) as well as the angle and angular velocity of the pole that is balancing on the cart.- The **actions** are to decrease or increase the cart's velocity by one unit. A negative velocity means it is moving to the left.- The **transition function** is deterministic and is determined by simulating physical laws. Specifically, for a given **state**, what should we choose as the next velocity value? In the RL context, the correct velocity value to choose has to be learned. 
Hence, we learn a _policy_ that approximates the optimal transition function that could be calculated from the laws of physics.- The **reward function** is a constant 1 as long as the pole is upright, and 0 once the pole has fallen over. Therefore, maximizing the reward means balancing the pole for as long as possible.- The **discount factor** in this case can be taken to be 1, meaning we treat the rewards at all time steps equally and don't discount any of them.More information about the `gym` Python module is available at https://gym.openai.com/. The list of all the available Gym environments is in [this wiki page](https://github.com/openai/gym/wiki/Table-of-environments). We'll use a few more of them and even create our own in subsequent lessons. | import gym
import numpy as np
import pandas as pd
import json | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
The code below illustrates how to create and manipulate MDPs in Python. An MDP can be created by calling `gym.make`. Gym environments are identified by names like `CartPole-v1`. A **catalog of built-in environments** can be found at https://gym.openai.com/envs. | env = gym.make("CartPole-v1")
print("Created env:", env) | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Reset the state of the MDP by calling `env.reset()`. This call returns the initial state of the MDP. | state = env.reset()
print("The starting state is:", state) | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Recall that the state is the position of the cart, its velocity, the angle of the pole, and the angular velocity of the pole. The `env.step` method takes an action. In the case of the `CartPole` environment, the appropriate actions are 0 or 1, for pushing the cart to the left or right, respectively. `env.step()` returns a tuple of four things:1. the new state of the environment2. a reward3. a boolean indicating whether the simulation has finished4. a dictionary of miscellaneous extra informationLet's show what happens if we take one step with an action of 0. | action = 0
state, reward, done, info = env.step(action)
print(state, reward, done, info) | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
A **rollout** is a simulation of a policy in an environment. It is used both during training and when running simulations with a trained policy. The code below performs a rollout in a given environment. It takes **random actions** until the simulation has finished and returns the cumulative reward. | def random_rollout(env):
state = env.reset()
done = False
cumulative_reward = 0
# Keep looping as long as the simulation has not finished.
while not done:
# Choose a random action (either 0 or 1).
action = np.random.choice([0, 1])
# Take the action in the environment.
state, reward, done, _ = env.step(action)
# Update the cumulative reward.
cumulative_reward += reward
# Return the cumulative reward.
return cumulative_reward | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Try rerunning the following cell a few times. How much do the answers change? Note that the maximum possible reward for `CartPole-v1` is 500. You'll probably get numbers well under 500. | reward = random_rollout(env)
print(reward)
reward = random_rollout(env)
print(reward) | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Exercise 1Choosing actions at random in `random_rollout` is not a very effective policy, as the previous results showed. Finish implementing the `rollout_policy` function below, which takes an environment *and* a policy. Recall that the *policy* is a function that takes in a *state* and returns an *action*. The main difference is that instead of choosing a **random action**, like we just did (with poor results), the action should be chosen **with the policy** (as a function of the state).> **Note:** Exercise solutions for this tutorial can be found [here](solutions/Ray-RLlib-Solutions.ipynb). | def rollout_policy(env, policy):
state = env.reset()
done = False
cumulative_reward = 0
# EXERCISE: Fill out this function by copying the appropriate part of 'random_rollout'
# and modifying it to choose the action using the policy.
raise NotImplementedError
# Return the cumulative reward.
return cumulative_reward
def sample_policy1(state):
return 0 if state[0] < 0 else 1
def sample_policy2(state):
return 1 if state[0] < 0 else 0
reward1 = np.mean([rollout_policy(env, sample_policy1) for _ in range(100)])
reward2 = np.mean([rollout_policy(env, sample_policy2) for _ in range(100)])
print('The first sample policy got an average reward of {}.'.format(reward1))
print('The second sample policy got an average reward of {}.'.format(reward2))
assert 5 < reward1 < 15, ('Make sure that rollout_policy computes the action '
'by applying the policy to the state.')
assert 25 < reward2 < 35, ('Make sure that rollout_policy computes the action '
'by applying the policy to the state.') | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
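If you want to check your work (spoiler alert — the official solutions notebook may differ in details), one possible completion simply replaces the random choice in `random_rollout` with a call to `policy(state)`. The tiny `CountdownEnv` stand-in below is ours, added only so the sketch can run without `gym`:

```python
def rollout_policy(env, policy):
    state = env.reset()
    done = False
    cumulative_reward = 0
    # Keep looping as long as the simulation has not finished.
    while not done:
        action = policy(state)                  # choose the action with the policy
        state, reward, done, _ = env.step(action)
        cumulative_reward += reward
    return cumulative_reward

class CountdownEnv:
    """Hypothetical stand-in env: reward 1 per step, episode ends after 3 steps."""
    def reset(self):
        self.t = 0
        return [0.0]
    def step(self, action):
        self.t += 1
        return [0.0], 1, self.t >= 3, {}

# rollout_policy(CountdownEnv(), lambda s: 0) returns 3 (one reward per step).
```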
We'll return to `CartPole` in lesson [01: Application Cart Pole](explore-rllib/01-Application-Cart-Pole.ipynb) in the `explore-rllib` section. RLlib Reinforcement Learning Example: Cart Pole with Proximal Policy OptimizationThis section demonstrates how to use the _proximal policy optimization_ (PPO) algorithm implemented by [RLlib](http://rllib.io). PPO is a popular way to develop a policy. RLlib also uses [Ray Tune](http://tune.io), the Ray Hyperparameter Tuning framework, which is covered in the [Ray Tune Tutorial](../ray-tune/00-Ray-Tune-Overview.ipynb).We'll provide relatively little explanation of **RLlib** concepts for now, but explore them in greater depth in subsequent lessons. For more on RLlib, see the documentation at http://rllib.io. PPO is described in detail in [this paper](https://arxiv.org/abs/1707.06347). It is a variant of _Trust Region Policy Optimization_ (TRPO) described in [this earlier paper](https://arxiv.org/abs/1502.05477). [This OpenAI post](https://openai.com/blog/openai-baselines-ppo/) provides a more accessible introduction to PPO.PPO works in two phases. In the first phase, a large number of rollouts are performed in parallel. The rollouts are then aggregated on the driver and a surrogate optimization objective is defined based on those rollouts. In the second phase, we use SGD (_stochastic gradient descent_) to find the policy that maximizes that objective with a penalty term for diverging too much from the current policy.> **NOTE:** The SGD optimization step is best performed in a data-parallel manner over multiple GPUs. This is exposed through the `num_gpus` field of the `config` dictionary. Hence, for normal usage, one or more GPUs is recommended.(The original version of this example can be found [here](https://raw.githubusercontent.com/ucbrise/risecamp/risecamp2018/ray/tutorial/rllib_exercises/)). | import ray
from ray.rllib.agents.ppo import PPOTrainer, DEFAULT_CONFIG
from ray.tune.logger import pretty_print

# Imports used later in this notebook: json to serialize episode stats,
# gym for the rollout environment, pandas to tabulate results
import json
import gym
import pandas as pd | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
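The "penalty for diverging too much" described above is most commonly implemented as PPO's clipped surrogate objective. Here is a minimal NumPy sketch of that objective — an illustration of the math from the PPO paper, not RLlib's internal implementation:

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective from the PPO paper (to be maximized)."""
    ratio = np.exp(logp_new - logp_old)            # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # The elementwise minimum caps how much a large policy change can help,
    # which is what keeps the new policy close to the current one.
    return float(np.mean(np.minimum(unclipped, clipped)))

# A probability ratio far outside [1 - eps, 1 + eps] contributes only its
# clipped value, no matter how large the ratio itself is.
obj = ppo_clip_objective(np.log([2.0]), np.log([1.0]), np.array([1.0]))
```

Because the objective is capped, a single SGD step cannot be rewarded for moving the policy arbitrarily far from the one that collected the rollouts.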
Initialize Ray. If you are running these tutorials on your laptop, then a single-node Ray cluster will be started by the next cell. If you are running in the Anyscale platform, it will connect to the running Ray cluster. | info = ray.init(ignore_reinit_error=True, log_to_driver=False)
print(info) | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
> **Tip:** Having trouble starting Ray? See the [Troubleshooting](../reference/Troubleshooting-Tips-Tricks.ipynb) tips. The next cell prints the URL for the Ray Dashboard. **This is only correct if you are running this tutorial on a laptop.** Click the link to open the dashboard.If you are running on the Anyscale platform, use the URL provided by your instructor to open the Dashboard. | print("Dashboard URL: http://{}".format(info["webui_url"])) | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Instantiate a PPOTrainer object. We pass in a config object that specifies how the network and training procedure should be configured. Some of the parameters are the following.- `num_workers` is the number of actors that the agent will create. This determines the degree of parallelism that will be used. In a cluster, these actors will be spread over the available nodes.- `num_sgd_iter` is the number of epochs of SGD (stochastic gradient descent, i.e., passes through the data) that will be used to optimize the PPO surrogate objective at each iteration of PPO, for each _minibatch_ ("chunk") of training data. Using minibatches is more efficient than training with one record at a time.- `sgd_minibatch_size` is the SGD minibatch size (batches of data) that will be used to optimize the PPO surrogate objective.- `model` contains a dictionary of parameters describing the neural net used to parameterize the policy. The `fcnet_hiddens` parameter is a list of the sizes of the hidden layers. Here, we have two hidden layers of size 100, each.- `num_cpus_per_worker` when set to 0 prevents Ray from pinning a CPU core to each worker, which means we could run out of workers in a constrained environment like a laptop or a cloud VM. | config = DEFAULT_CONFIG.copy()
config['num_workers'] = 1
config['num_sgd_iter'] = 30
config['sgd_minibatch_size'] = 128
config['model']['fcnet_hiddens'] = [100, 100]
config['num_cpus_per_worker'] = 0
agent = PPOTrainer(config, 'CartPole-v1') | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
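To get an intuition for how `sgd_minibatch_size` and `num_sgd_iter` interact, here is a back-of-the-envelope calculation of gradient updates per training iteration. The 4000-sample `train_batch_size` is an assumed RLlib default, not something set explicitly above:

```python
import math

train_batch_size = 4000      # rollout samples per train() call (assumed default)
sgd_minibatch_size = 128     # as configured above
num_sgd_iter = 30            # SGD epochs over the collected batch

minibatches_per_epoch = math.ceil(train_batch_size / sgd_minibatch_size)
gradient_updates_per_iteration = num_sgd_iter * minibatches_per_epoch
# 30 epochs over 32 minibatches -> 960 gradient updates per iteration
```

Reducing `num_sgd_iter` or increasing `sgd_minibatch_size` directly reduces this update count, which is why the exercise below suggests both as speed-ups.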
Now let's train the policy on the `CartPole-v1` environment for `N` steps. The JSON object returned by each call to `agent.train()` contains a lot of information we'll inspect below. For now, we'll extract information we'll graph, such as `episode_reward_mean`. The _mean_ values are more useful for determining successful training. | N = 10
results = []
episode_data = []
episode_json = []
for n in range(N):
result = agent.train()
results.append(result)
episode = {'n': n,
'episode_reward_min': result['episode_reward_min'],
'episode_reward_mean': result['episode_reward_mean'],
'episode_reward_max': result['episode_reward_max'],
'episode_len_mean': result['episode_len_mean']}
episode_data.append(episode)
episode_json.append(json.dumps(episode))
print(f'{n:3d}: Min/Mean/Max reward: {result["episode_reward_min"]:8.4f}/{result["episode_reward_mean"]:8.4f}/{result["episode_reward_max"]:8.4f}') | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Now let's convert the episode data to a Pandas `DataFrame` for easy manipulation. The results indicate how much reward the policy is receiving (`episode_reward_*`) and how many time steps of the environment the policy ran (`episode_len_mean`). The maximum possible reward for this problem is `500`. The reward mean and trajectory length are very close because the agent receives a reward of one for every time step that it survives. However, this is specific to this environment and not true in general. | df = pd.DataFrame(data=episode_data)
df
df.columns.tolist() | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Let's plot the data. Since the episode-length and reward means are equal, we plot only the reward statistics (minimum, mean, and maximum): | df.plot(x="n", y=["episode_reward_mean", "episode_reward_min", "episode_reward_max"], secondary_y=True) | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
The model is quickly able to hit the maximum value of 500, but the mean is what's most valuable. After 10 steps, we're more than halfway there. FYI, here are two views of the full result object for one training iteration. First, a "pretty print" output.> **Tip:** The output will be long. When this happens for a cell, right click and select _Enable scrolling for outputs_. | print(pretty_print(results[-1])) | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
We'll learn about more of these values as we continue the tutorial.The whole, long JSON blob, which includes the historical stats about episode rewards and lengths: | results[-1] | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Let's plot the `episode_reward` values: | episode_rewards = results[-1]['hist_stats']['episode_reward']
df_episode_rewards = pd.DataFrame(data={'episode':range(len(episode_rewards)), 'reward':episode_rewards})
df_episode_rewards.plot(x="episode", y="reward") | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
For a well-trained model, most runs do very well while occasional runs do poorly. Try plotting the results of other training iterations by changing the array index in `results[-1]` to another number between `0` and `9`. (The length of `results` is `10`.) Exercise 2The current network and training configuration are too large and heavy-duty for a simple problem like `CartPole`. Modify the configuration to use a smaller network (the `config['model']['fcnet_hiddens']` setting) and to speed up the optimization of the surrogate objective. (Fewer SGD iterations and a larger batch size should help.) | # Make edits here:
config = DEFAULT_CONFIG.copy()
config['num_workers'] = 3
config['num_sgd_iter'] = 30
config['sgd_minibatch_size'] = 128
config['model']['fcnet_hiddens'] = [100, 100]
config['num_cpus_per_worker'] = 0
agent = PPOTrainer(config, 'CartPole-v1') | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Train the agent and try to get a reward of 500. If it's training too slowly you may need to modify the config above to use fewer hidden units, a larger `sgd_minibatch_size`, a smaller `num_sgd_iter`, or a larger `num_workers`.This should take around `N` = 20 or 30 training iterations. | N = 5
results = []
episode_data = []
episode_json = []
for n in range(N):
result = agent.train()
results.append(result)
episode = {'n': n,
'episode_reward_mean': result['episode_reward_mean'],
'episode_reward_max': result['episode_reward_max'],
'episode_len_mean': result['episode_len_mean']}
episode_data.append(episode)
episode_json.append(json.dumps(episode))
print(f'Max reward: {episode["episode_reward_max"]}') | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Using CheckpointsYou checkpoint the current state of a trainer to save what it has learned. Checkpoints are used for subsequent _rollouts_ and also to continue training later from a known-good state. Calling `agent.save()` creates the checkpoint and returns the path to the checkpoint file, which can be used later to restore the current state to a new trainer. Here we'll load the trained policy into the same process, but often it would be loaded in a new process, for example on a production cluster for serving that is separate from the training cluster. | checkpoint_path = agent.save()
print(checkpoint_path) | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Now load the checkpoint in a new trainer: | trained_config = config.copy()
test_agent = PPOTrainer(trained_config, "CartPole-v1")
test_agent.restore(checkpoint_path) | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Use the previously-trained policy to act in an environment. The key line is the call to `test_agent.compute_action(state)` which uses the trained policy to choose an action. This is an example of _rollout_, which we'll study in a subsequent lesson.Verify that the cumulative reward received roughly matches up with the reward printed above. It will be at or near 500. | env = gym.make("CartPole-v1")
state = env.reset()
done = False
cumulative_reward = 0
while not done:
action = test_agent.compute_action(state) # key line; get the next action
state, reward, done, _ = env.step(action)
cumulative_reward += reward
print(cumulative_reward)
ray.shutdown() | _____no_output_____ | Apache-2.0 | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial |
Introduction- Define the metric properly- ref: > exploring-molecular-properties-data.ipynb Import everything I need :) | import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.metrics import mean_absolute_error | _____no_output_____ | MIT | src/17_Metric.ipynb | fkubota/kaggle-Predicting-Molecular-Properties |
Metric - This competition uses the following score- the mean, over coupling types, of the log of the per-type mean absolute error$$score = \frac{1}{T} \sum_{t=1}^{T} \log \left( \frac{1}{n_t} \sum_{i=1}^{n_t} \left| y_i - \hat{y}_i \right| \right)$$ | def metric(df, preds):
df["prediction"] = preds
maes = []
for t in df.type.unique():
y_true = df[df.type==t].scalar_coupling_constant.values
y_pred = df[df.type==t].prediction.values
mae = np.log(mean_absolute_error(y_true, y_pred))
maes.append(mae)
return np.mean(maes) | _____no_output_____ | MIT | src/17_Metric.ipynb | fkubota/kaggle-Predicting-Molecular-Properties |
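To double-check the score, here is the same computation done inline on a toy frame with hypothetical values, chosen so the per-type MAEs are exactly 0.5 and 0.25:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error

# Toy data (hypothetical): two coupling types, per-type MAE 0.5 and 0.25
toy = pd.DataFrame({
    "type": ["1JHC", "1JHC", "2JHN", "2JHN"],
    "scalar_coupling_constant": [10.0, 12.0, 3.0, 5.0],
})
toy["prediction"] = [10.5, 12.5, 3.25, 5.25]

maes = [
    np.log(mean_absolute_error(
        toy.loc[toy.type == t, "scalar_coupling_constant"],
        toy.loc[toy.type == t, "prediction"],
    ))
    for t in toy.type.unique()
]
score = np.mean(maes)  # (log(0.5) + log(0.25)) / 2, roughly -1.04
```

Note that lower (more negative) scores are better, since the log of a small MAE is a large negative number.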
T81-558: Applications of Deep Neural Networks**Module 5: Regularization and Dropout*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 5 Material* Part 5.1: Part 5.1: Introduction to Regularization: Ridge and Lasso [[Video]](https://www.youtube.com/watch?v=jfgRtCYjoBs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_1_reg_ridge_lasso.ipynb)* **Part 5.2: Using K-Fold Cross Validation with Keras** [[Video]](https://www.youtube.com/watch?v=maiQf8ray_s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_2_kfold.ipynb)* Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=JEWzWv1fBFQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_3_keras_l1_l2.ipynb)* Part 5.4: Drop Out for Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=bRyOi0L6Rs8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_4_dropout.ipynb)* Part 5.5: Benchmarking Keras Deep Learning Regularization Techniques [[Video]](https://www.youtube.com/watch?v=1NLBwPumUAs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_5_bootstrap.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. | try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False | Note: not using Google CoLab
| Apache-2.0 | t81_558_class_05_2_kfold.ipynb | machevres6/t81_558_deep_learning |
Part 5.2: Using K-Fold Cross-validation with KerasCross-validation can be used for a variety of purposes in predictive modeling. These include:* Generating out-of-sample predictions from a neural network* Estimating a good number of epochs to train a neural network for (early stopping)* Evaluating the effectiveness of certain hyperparameters, such as activation functions, neuron counts, and layer countsCross-validation uses a number of folds, and multiple models, to provide each segment of data a chance to serve as both the validation and training set. Cross-validation is shown in Figure 5.CROSS.**Figure 5.CROSS: K-Fold Cross-Validation**It is important to note that there will be one model (neural network) for each fold. To generate predictions for new data, which is data not present in the training set, predictions from the fold models can be handled in several ways:* Choose the model that had the highest validation score as the final model.* Present new data to the 5 models (one for each fold) and average the result (this is an [ensemble](https://en.wikipedia.org/wiki/Ensemble_learning)).* Retrain a new model (using the same settings as the cross-validation) on the entire dataset. Train for as many epochs, and with the same hidden layer structure.Generally, I prefer the last approach and will retrain a model on the entire data set once I have selected hyper-parameters. Of course, I will always set aside a final holdout set for model validation that I do not use in any aspect of the training process. Regression vs Classification K-Fold Cross-ValidationRegression and classification are handled somewhat differently with regard to cross-validation. Regression is the simpler case where you can simply break up the data set into K folds with little regard for where each item lands. For regression it is best that the data items fall into the folds as randomly as possible. 
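The ensemble option above — presenting new data to all fold models and averaging — can be sketched as follows, with stand-in constant models in place of the trained per-fold Keras networks (the stand-ins are hypothetical, for illustration only):

```python
import numpy as np

class ConstantModel:
    """Stand-in for a trained per-fold Keras model (hypothetical)."""
    def __init__(self, value):
        self.value = value

    def predict(self, x):
        # A real fold model would return its learned predictions here.
        return np.full(len(x), self.value)

def ensemble_predict(fold_models, x):
    """Present new data to every fold model and average the results."""
    return np.mean([m.predict(x) for m in fold_models], axis=0)

fold_models = [ConstantModel(v) for v in (1.0, 2.0, 3.0, 4.0, 5.0)]
x_new = np.zeros((4, 3))                  # 4 hypothetical new rows
avg_pred = ensemble_predict(fold_models, x_new)
```

With real fold models, `ensemble_predict` averages K prediction vectors elementwise, which typically reduces variance compared to any single fold model.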
It is also important to remember that not every fold will necessarily have exactly the same number of data items. It is not always possible for the data set to be evenly divided into K folds. For regression cross-validation we will use the Scikit-Learn class **KFold**.Cross-validation for classification could also use the **KFold** object; however, this technique would not ensure that the class balance remains the same in each fold as it was in the original. It is very important that the balance of classes a model encounters in use remains the same (or similar) as in its training set. A drift in this distribution is one of the most important things to monitor after a trained model has been placed into actual use. Because of this, we want to make sure that the cross-validation itself does not introduce an unintended shift. This is referred to as stratified sampling and is accomplished by using the Scikit-Learn object **StratifiedKFold** in place of **KFold** whenever you are using classification. In summary, the following two objects in Scikit-Learn should be used:* **KFold** When dealing with a regression problem.* **StratifiedKFold** When dealing with a classification problem.The following two sections demonstrate cross-validation with classification and regression. Out-of-Sample Regression Predictions with K-Fold Cross-ValidationThe following code trains the simple dataset using a 5-fold cross-validation. The expected performance of a neural network, of the type trained here, would be the score for the generated out-of-sample predictions. We begin by preparing a feature vector using the jh-simple-dataset to predict age. This is a regression problem. | import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values | _____no_output_____ | Apache-2.0 | t81_558_class_05_2_kfold.ipynb | machevres6/t81_558_deep_learning |
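As mentioned earlier, the folds need not all contain exactly the same number of items when the sample count is not divisible by K; Scikit-Learn makes the first `n_samples % n_splits` folds one item larger. A quick standalone check:

```python
import numpy as np
from sklearn.model_selection import KFold

X_demo = np.arange(10).reshape(-1, 1)   # 10 items split into 3 folds
fold_sizes = [len(test) for _, test in KFold(n_splits=3).split(X_demo)]
# 10 % 3 == 1, so exactly one fold carries the extra item
```

This is harmless for regression, but it is one more reason the per-fold scores below are not all computed on identically sized test sets.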
Now that the feature vector is created, a 5-fold cross-validation can be performed to generate out-of-sample predictions. We will assume 500 epochs and not use early stopping. Later we will see how to estimate a more optimal epoch count. | import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# Cross-Validate
kf = KFold(5, shuffle=True, random_state=42) # Use for KFold classification
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,
epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print(f"Final, out of sample score (RMSE): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
| Fold #1
Fold score (RMSE): 0.6245484893737087
Fold #2
Fold score (RMSE): 0.5802295511082306
Fold #3
Fold score (RMSE): 0.6300965769274195
Fold #4
Fold score (RMSE): 0.4550931884841248
Fold #5
Fold score (RMSE): 1.0517027192572377
Final, out of sample score (RMSE): 0.6981314007708873
| Apache-2.0 | t81_558_class_05_2_kfold.ipynb | machevres6/t81_558_deep_learning |
The per-fold RMSE values above give a sense of the variance across folds. If early stopping were used, a common technique would be to record the number of epochs each fold trained for and then train a final model on the entire dataset for the average of those epoch counts. Classification with Stratified K-Fold Cross-ValidationThe following code trains and fits the jh-simple-dataset with cross-validation to generate out-of-sample predictions. It also writes out the out-of-sample (predictions on the test set) results.It is good to perform a stratified k-fold cross-validation with classification data. This ensures that the percentages of each class remain the same across all folds. To do this, make use of the **StratifiedKFold** object, instead of the **KFold** object used in regression. | import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values | _____no_output_____ | Apache-2.0 | t81_558_class_05_2_kfold.ipynb | machevres6/t81_558_deep_learning |
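The class-balance guarantee behind **StratifiedKFold** can be verified directly on a small, deliberately imbalanced label vector — a standalone illustration, separate from the dataset prepared above:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# 90/10 class imbalance: stratification should place exactly 2 of the
# 10 minority-class items into each of the 5 test folds.
y_demo = np.array([0] * 90 + [1] * 10)
X_demo = np.zeros((100, 1))

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
minority_per_fold = [
    int(np.sum(y_demo[test] == 1)) for _, test in skf.split(X_demo, y_demo)
]
```

A plain `KFold` split of the same data offers no such guarantee — a shuffled fold could easily end up with 0 or 4 minority items, shifting the class balance that each fold model trains on.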
We will assume 500 epochs and not use early stopping. Later we will see how to estimate a more optimal epoch count. | import pandas as pd
import os
import numpy as np
from sklearn import metrics
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# np.argmax(pred,axis=1)
# Cross-validate
# Use for StratifiedKFold classification
kf = StratifiedKFold(5, shuffle=True, random_state=42)
oos_y = []
oos_pred = []
fold = 0
# Must specify y StratifiedKFold for
for train, test in kf.split(x,df['product']):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(50, input_dim=x.shape[1], activation='relu')) # Hidden 1
model.add(Dense(25, activation='relu')) # Hidden 2
model.add(Dense(y.shape[1],activation='softmax')) # Output
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
# raw probabilities to chosen class (highest probability)
pred = np.argmax(pred,axis=1)
oos_pred.append(pred)
# Measure this fold's accuracy
y_compare = np.argmax(y_test,axis=1) # For accuracy calculation
score = metrics.accuracy_score(y_compare, pred)
print(f"Fold score (accuracy): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
oos_y_compare = np.argmax(oos_y,axis=1) # For accuracy calculation
score = metrics.accuracy_score(oos_y_compare, oos_pred)
print(f"Final score (accuracy): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
| Fold #1
Fold score (accuracy): 0.6766169154228856
Fold #2
Fold score (accuracy): 0.6691542288557214
Fold #3
Fold score (accuracy): 0.6907730673316709
Fold #4
Fold score (accuracy): 0.6733668341708543
Fold #5
Fold score (accuracy): 0.654911838790932
Final score (accuracy): 0.673
| Apache-2.0 | t81_558_class_05_2_kfold.ipynb | machevres6/t81_558_deep_learning |
Training with both a Cross-Validation and a Holdout SetIf you have a considerable amount of data, it is always valuable to set aside a holdout set before you cross-validate. This holdout set will be the final evaluation before you put your model to real-world use. Figure 5.HOLDOUT shows this division.**Figure 5.HOLDOUT: Cross-Validation and a Holdout Set**The following program makes use of a holdout set, and then still cross-validates. | import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
from sklearn.model_selection import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
# Keep a 10% holdout
x_main, x_holdout, y_main, y_holdout = train_test_split(
x, y, test_size=0.10)
# Cross-validate
kf = KFold(5)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x_main):
fold+=1
print(f"Fold #{fold}")
x_train = x_main[train]
y_train = y_main[train]
x_test = x_main[test]
y_test = y_main[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),
verbose=0,epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure accuracy
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print()
print(f"Cross-validated score (RMSE): {score}")
# Write the cross-validated prediction (from the last neural network)
holdout_pred = model.predict(x_holdout)
score = np.sqrt(metrics.mean_squared_error(holdout_pred,y_holdout))
print(f"Holdout score (RMSE): {score}")
| Fold #1
Fold score (RMSE): 24.299626704604506
Fold #2
Fold score (RMSE): 0.6609159891625663
Fold #3
Fold score (RMSE): 0.4997884237817687
Fold #4
Fold score (RMSE): 1.1084218284103058
Fold #5
Fold score (RMSE): 0.614899992174395
Cross-validated score (RMSE): 10.888206072135832
Holdout score (RMSE): 0.6283593821273058
| Apache-2.0 | t81_558_class_05_2_kfold.ipynb | machevres6/t81_558_deep_learning |
HLCA Figure 2 Here we will generate the figures from the HLCA pre-print, figure 2. Figure 2d was generated separately in R, using code from the integration benchmarking framework 'scIB'. Import modules, set paths and parameters: | import scanpy as sc
import pandas as pd
import numpy as np
import sys
import os
from collections import Counter
sys.path.append("../../scripts/")
import reference_based_harmonizing
import celltype_composition_plotting
import plotting
import sankey
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import to_hex
import ast
sc.set_figure_params(
dpi=140,
fontsize=12,
frameon=False,
transparent=True,
)
sns.set_style(style="white")
sns.set_context(context="paper") | _____no_output_____ | MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
for pretty code formatting (not needed to run notebook): | %load_ext lab_black | _____no_output_____ | MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
paths: | path_HLCA = "../../data/HLCA_core_h5ads/HLCA_v1.h5ad"
path_celltype_reference = "../../supporting_files/metadata_harmonization/HLCA_cell_type_reference_mapping_20211103.csv"
dir_figures = "../../figures" | _____no_output_____ | MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
Generate figures: initiate empty dictionary in which to store paper figures. | FIGURES = dict() | _____no_output_____ | MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
Read in the integrated HLCA object: | adata = sc.read(path_HLCA) | _____no_output_____ | MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
Overview of stats (number of studies, cells, annotations etc.): Number of studies, datasets, subjects, samples, cells: | print("Number of studies:", len(set(adata.obs.study)))
print("Number of datasets:", len(set(adata.obs.dataset)))
print("Number of subjects:", len(set(adata.obs.subject_ID)))
print("Number of samples:", len(set(adata.obs["sample"])))
print("Number of cells:", adata.obs.shape[0]) | Number of studies: 11
Number of datasets: 14
Number of subjects: 107
Number of samples: 166
Number of cells: 584884
| MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
Proportions of cell compartments in the HLCA: | original_ann_lev_1_percs = np.round(
adata.obs.original_ann_level_1.value_counts() / adata.n_obs * 100, 1
)
print("Original annotation proportions (level 1):")
print(original_ann_lev_1_percs) | Original annotation proportions (level 1):
Epithelial 48.1
Immune 38.7
Endothelial 8.5
Stroma 4.3
Proliferating cells 0.3
Name: original_ann_level_1, dtype: float64
| MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
Perc. of cells annotated per level: | for level in range(1, 6):
n_unannotated = np.sum(
[
isnone or isnull
for isnone, isnull in zip(
adata.obs[f"original_ann_level_{level}_clean"].values == "None",
pd.isnull(adata.obs[f"original_ann_level_{level}_clean"].values),
)
]
)
n_annotated = adata.n_obs - n_unannotated
print(
f"Perc. originally annotated at level {level}: {round(n_annotated/adata.n_obs*100,1)}"
) | Perc. originally annotated at level 1: 100.0
Perc. originally annotated at level 2: 98.8
Perc. originally annotated at level 3: 93.6
Perc. originally annotated at level 4: 65.7
Perc. originally annotated at level 5: 6.8
| MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
Distribution of demographics: | print(f"Min. and max. age: {adata.obs.age.min()}, {adata.obs.age.max()}")
adata.obs.sex.value_counts() / adata.n_obs * 100
adata.obs.ethnicity.value_counts() / adata.n_obs * 100
print(f"Min. and max. BMI: {adata.obs.BMI.min()}, {adata.obs.BMI.max()}")
adata.obs.smoking_status.value_counts() / adata.n_obs * 100 | _____no_output_____ | MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
figures: Overview of subjects, samples, and cells per study (not in the paper): | plotting.plot_dataset_statistics(adata, fontsize=8, figheightscale=3.5) | _____no_output_____ | MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
2a Subject/sample distributions Re-map ethnicities: | ethnicity_remapper = {
"asian": "asian",
"black": "black",
"latino": "latino",
"mixed": "mixed",
"nan": "nan",
"pacific islander": "other",
"white": "white",
}
adata.obs.ethnicity = adata.obs.ethnicity.map(ethnicity_remapper) | _____no_output_____ | MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
Plot subject demographic and sample anatomical location distributions: | FIGURES["2a_subject_and_sample_stats"] = plotting.plot_subject_and_sample_stats_incl_na(
adata, return_fig=True
) | age: 99% annotated
BMI: 70% annotated
sex: 100% annotated)
ethnicity 93% annotated
smoking_status: 92% annotated
| MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
2b Cell type composition sankey plot, level 1-3: First, generate a color mapping. We want to map cell types from the same compartment in the same shade (e.g. epithelial orange/red, endothelial purple), at all levels. We'll need to incorporate our hierarchical cell type reference for that, and then calculate the colors per level. That is done with the code below: | harmonizing_df = reference_based_harmonizing.load_harmonizing_table(
path_celltype_reference
)
consensus_df = reference_based_harmonizing.create_consensus_table(harmonizing_df)
max_level = 5
color_prop_df = celltype_composition_plotting.calculate_hierarchical_coloring_df(
adata,
consensus_df,
max_level,
lev1_colormap_dict={
"Epithelial": "Oranges",
"Immune": "Greens",
"Endothelial": "Purples",
"Stroma": "Blues",
"Proliferating cells": "Reds",
},
ann_level_name_prefix="original_ann_level_",
) | /home/icb/lisa.sikkema/miniconda3/envs/scRNAseq_analysis/lib/python3.7/site-packages/pandas/core/indexing.py:671: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._setitem_with_indexer(indexer, value)
| MIT | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility |
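The shade-per-compartment coloring described above amounts to sampling each compartment's sequential colormap at graded positions. A rough standalone sketch of that idea with matplotlib — not the actual helper in `celltype_composition_plotting`:

```python
import matplotlib.pyplot as plt
from matplotlib.colors import to_hex

def compartment_shades(cmap_name, n):
    """Sample n graded shades from a sequential colormap (illustrative)."""
    cmap = plt.get_cmap(cmap_name)
    # Skip the near-white low end of the colormap so every shade stays visible.
    return [to_hex(cmap(0.3 + 0.7 * i / max(n - 1, 1))) for i in range(n)]

# e.g. four orange shades for hypothetical level-2 epithelial cell types
epithelial_shades = compartment_shades("Oranges", 4)
```

Sampling "Oranges" for epithelial types, "Greens" for immune, and so on keeps every level of the hierarchy visually grouped by its level-1 compartment.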