# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <p align="center"> # <img src="https://github.com/jessepisel/energy_analytics/blob/master/EA_logo.jpg?raw=true" width="220" height="240" /> # # </p> # # ## Regular Gridded Data Structures / ndarrays in Python for Engineers and Geoscientists # ### <NAME>, Associate Professor, University of Texas at Austin # # #### Contacts: [Twitter/@GeostatsGuy](https://twitter.com/geostatsguy) | [GitHub/GeostatsGuy](https://github.com/GeostatsGuy) | [www.michaelpyrcz.com](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) # # This is a tutorial for / demonstration of **Regular Gridded Data Structures in Python**. In Python, a common tool for dealing with Regular Gridded Data Structures is the *ndarray* from the **NumPy Python package** (by <NAME> et al.). # # This tutorial includes the methods and operations that would commonly be required for Engineers and Geoscientists working with Regularly Gridded Data Structures for the purpose of: # # 1. Data Checking and Cleaning # 2. Data Mining / Inferential Data Analysis # 3. Predictive Modeling # # for Data Analytics, Geostatistics and Machine Learning. # # ##### Regular Data Structures # # In Python we will commonly store our data in two formats, tables and arrays. For sample data with typically multiple features $1,\ldots,m$ over $1,\ldots,n$ samples we will work with tables. For exhaustive 2D maps and 3D models (usually representing a single feature) on a regular grid over $[1,\ldots,n_{1}], [1,\ldots,n_{2}],\ldots,[1,\ldots,n_{ndim}]$, where $n_{dim}$ is the number of dimensions, we will work with arrays. 
Of course, it is always possible to add another dimension to our array to include multiple features, $1,\ldots,m$, over all locations. # # In geostatistical workflows the tables are typically sample data from wells and drill holes and the grids are the interpolated or simulated models or secondary data from sources such as seismic inversion. # # The NumPy package provides a convenient *ndarray* object for working with regularly gridded data. In the following tutorial we will focus on practical methods with *ndarray*s. There is another section available on Tabular Data Structures that focuses on DataFrames at https://github.com/GeostatsGuy/PythonNumericalDemos/blob/master/PythonDataBasics_DataFrame.ipynb. # # #### Project Goal # # Learn the basics for working with Regular Gridded Data Structures in Python to build practical subsurface modeling and machine learning workflows. # # #### Caveats # # I included methods that I have found useful for building my geo-engineering workflows for subsurface modeling. I think they should be accessible to most geoscientists and engineers. Certainly, there are more advanced, more compact, more efficient methods to accomplish the same tasks. I tried to keep the methods simple. I appreciate feedback and I will use it to improve this tutorial periodically. # # #### Load the required libraries # # The following code loads the required libraries. # import os # set current working directory import numpy as np # ndarrays import matplotlib.pyplot as plt # plotting from scipy import stats # summary stats # If you get a package import error, you may have to first install some of these packages. This can usually be accomplished by opening up a command window on Windows and then typing 'python -m pip install [package-name]'. More assistance is available with the respective package docs. # # #### Declare functions # # These are the functions we have included here: # # 1.
GSLIB2ndarray - load GSLIB Geo-EAS format regular grid data 1D or 2D to NumPy *ndarray* # 2. ndarray2GSLIB - write NumPy array to GSLIB Geo-EAS format regular grid data 1D or 2D # 3. pixelplt - plot 2D NumPy arrays with same parameters as GSLIB's pixelplt # # I include and demonstrate the GSLIB Geo-EAS file read and write functions, because (1) *ndarray* read and write member functions are convenience functions that are limited and (2) for geostatistical modeling it is convenient to read and write the Geo-EAS format used in GSLIB by Deutsch and Journel (1998). Also, I included a function that reimplements the 2D array plotting program 'pixelplt' from GSLIB. The inputs are simple and the method is consistent with GSLIB, and by using it we postpone having to learn the MatPlotLib package for plotting. # # Warning: there has been no attempt to make these functions robust in the presence of bad inputs. If you get a crazy error, check the inputs. Are the arrays the correct dimension? Is the parameter order mixed up? Make sure the inputs are consistent with the descriptions in this document.
# + # utility to convert 1D or 2D numpy ndarray to a GSLIB Geo-EAS file for use with GSLIB methods def ndarray2GSLIB(array, data_file, col_name): file_out = open(data_file, "w") file_out.write(data_file + "\n") file_out.write("1 \n") file_out.write(col_name + "\n") if array.ndim == 2: ny = array.shape[0] nx = array.shape[1] ncol = 1 for iy in range(0, ny): for ix in range(0, nx): file_out.write(str(array[ny - 1 - iy, ix]) + "\n") elif array.ndim == 1: nx = len(array) for ix in range(0, nx): file_out.write(str(array[ix]) + "\n") else: print("Error: must use a 1D or 2D array") file_out.close() return file_out.close() # utility to convert GSLIB Geo-EAS files to a 1D or 2D numpy ndarray for use with Python methods def GSLIB2ndarray(data_file, kcol, nx, ny): colArray = [] if ny > 1: array = np.ndarray(shape=(ny, nx), dtype=float, order="F") else: array = np.zeros(nx) with open(data_file) as myfile: # read first two lines head = [next(myfile) for x in range(2)] line2 = head[1].split() ncol = int(line2[0]) # get the number of columns for icol in range(0, ncol): # read over the column names head = [next(myfile) for x in range(1)] if icol == kcol: col_name = head[0].split()[0] if ny > 1: for iy in range(0, ny): for ix in range(0, nx): head = [next(myfile) for x in range(1)] array[ny - 1 - iy][ix] = head[0].split()[kcol] else: for ix in range(0, nx): head = [next(myfile) for x in range(1)] array[ix] = head[0].split()[kcol] return array, col_name # pixel plot, reimplementation in Python of GSLIB pixelplt with MatPlotLib methods (commented out image file creation) def pixelplt( array, xmin, xmax, ymin, ymax, step, vmin, vmax, title, xlabel, ylabel, vlabel, cmap, fig_name, ): xx, yy = np.meshgrid( np.arange(xmin, xmax, step), np.arange(ymax, ymin, -1 * step) ) plt.figure(figsize=(8, 6)) im = plt.contourf( xx, yy, array, cmap=cmap, vmin=vmin, vmax=vmax, levels=np.linspace(vmin, vmax, 100), ) plt.title(title) plt.xlabel(xlabel) plt.ylabel(ylabel) cbar = plt.colorbar( im,
orientation="vertical", ticks=np.linspace(vmin, vmax, 10) ) cbar.set_label(vlabel, rotation=270, labelpad=20) # plt.savefig(fig_name + '.' + image_type,dpi=dpi) plt.show() return im # - # #### Set the working directory # # I always like to do this so I don't lose files and to simplify subsequent reads and writes (avoid including the full address each time). Also, in this case make sure to place the required (see below) data file in this directory. When we are done with this tutorial we will write our new dataset back to this directory. os.chdir("c:/PGE383") # set the working directory # #### Loading and Writing # # Let's load the 2D porosity map from the provided binary file. This file was created with the NumPy *ndarray* member function 'tofile'. Note: this and the read-from-file member function, *fromfile*, are convenience functions. They do not store any information about the array, so when we read our 100 x 100 array this results in a 10,000-node 1D array. Let's try it for ourselves. We can read the binary to an array like this: porosity_map = np.fromfile("porosity_truth_map.dat") # Next, let's look at the shape member: porosity_map.shape # Confirmed, the shape is (10000,), a 10,000-node 1D array. Given we know it is actually a 100x100 array, we can use the *ndarray* member function *reshape* to correct this. Note, you get an error if the sizes are inconsistent, $\prod_{i} n_{i} \neq n_{1D}$ where $n_{i}$ is the number of nodes for axis $i$ and $n_{1D}$ is the number of nodes in the 1D vector that was read in. We reshape the array to 100x100, print the results and then get the 'ndarray' member 'shape' elements 0 and 1 to confirm that $n_{1} = n_{2} = 100$. porosity_map = np.reshape( porosity_map, [100, 100] ) # reshape the array to 100 x 100 print(porosity_map.shape) ny = porosity_map.shape[0] # get the array ny nx = porosity_map.shape[1] # get the array nx print( "Our 2D array has number of x cells = " + str(nx) + ", and y cells = " + str(ny) + "."
) # Let's close the loop and write out the array and read it back in, to demonstrate the *ndarray* writing member function *tofile*. porosity_map.tofile( "porosity_test.dat" ) # save our 2D array to a 1D binary file porosity_test = np.fromfile( "porosity_test.dat" ) # read the 1D binary back to a 1D array check = np.array_equal( porosity_map.flatten(), porosity_test ) # check if the read-in array is the same as the flattened original print( "The array we wrote out and read back in are the same, we closed the loop, " + str(check) + "." ) # It worked! We used the NumPy function 'array_equal' to test if the arrays are the same. Did you notice I added the *flatten* member function? This caused the 100x100 'porosity_map' array to be passed to *array_equal* as a 10,000-node 1D array, the same as the 'porosity_test' array was loaded. We can write an array and read it back in and we get the same thing. # # Let's check out using .csv files to store a 2D ndarray. np.savetxt("porosity_map.csv", porosity_map, delimiter=",") # The 2D ndarray is saved with each line containing a row and each column delimited by a comma. In this format the 2D grid can be directly loaded into Excel. One can use conditional formatting to conduct a very quick check of the 'look' of the data, e.g. confirm that it is not upside down, scrambled, etc. porosity_map_test = np.loadtxt( "porosity_map.csv", delimiter="," ) # load the csv file back into a 2D ndarray test = np.array_equal( porosity_map, porosity_map_test ) # check if the arrays are the same print(test) # OK, we confirmed that the saved and reloaded 2D ndarray is the same as the original 2D ndarray. This save and load method works. Let's perform the same test for the included GeostatsPy functions to save and load gridded data in Geo-EAS format (this is the format used by GSLIB programs).
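As an aside (an addition beyond the original tutorial): if you do not need an ASCII format, NumPy's own *np.save* and *np.load* write *.npy* files that store the shape and dtype in a header, so the array comes back without any *reshape* step. A sketch with a small random array and a temporary file name chosen purely for illustration:

```python
import os
import tempfile

import numpy as np

arr = np.random.rand(10, 10)  # small stand-in for the porosity map

# .npy files carry shape and dtype in a header, unlike tofile/fromfile
fname = os.path.join(tempfile.gettempdir(), "npy_demo.npy")
np.save(fname, arr)
back = np.load(fname)

round_trip_ok = back.shape == (10, 10) and np.array_equal(arr, back)
os.remove(fname)  # clean up the temporary file
```

The Geo-EAS round trip below performs the same kind of check with the GeostatsPy functions.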
ndarray2GSLIB( porosity_map, "porosity_map_GSLIB.out", "porosity" ) # save the gridded data to Geo-EAS format porosity_map_test2, col_name = GSLIB2ndarray( "porosity_map_GSLIB.out", 0, nx, ny ) test = np.array_equal( porosity_map, porosity_map_test2 ) # check if the arrays are the same print(test) # OK, we confirmed that the GeostatsPy methods for saving and loading 2D gridded data work. # #### Visualization # # Let's look at the dataset that we loaded. Instead of working with the MatPlotLib package directly (a common data visualization package for Python) we will use the *pixelplt* reimplementation from our set of functions from my effort to bring GSLIB to Python, the 'in-progress' GeostatsPy package. This function uses MatPlotLib with the function parameters to build a nice figure, so we can procrastinate learning MatPlotLib for now! First let's set some parameters, including the spatial limits of the plot, the cell sizes in the plot and the min and max feature values and color map for the color bar. Our regular grid is 100 x 100 cells of 10 m square cells, 1,000 x 1,000 m in extent, and we assume the origin, the lower left corner, is at coordinate 0,0. Our porosity values are contained within the interval between 4 and 16%. xmin = 0.0 xmax = 1000.0 ymin = 0.0 ymax = 1000.0 cell_size = 10.0 vmin = 4.0 vmax = 16.0 cmap = plt.cm.plasma # Now we are ready to plot the 2D array with the *pixelplt* reimplementation from our GSLIB in Python. pixelplt( porosity_map, xmin, xmax, ymin, ymax, cell_size, vmin, vmax, "Porosity Truth Map", "X(m)", "Y(m)", "Porosity (%)", cmap, "Porosity_Map", ) # The NumPy package *ndarray* docs recommend that users consider making their own functions to read and write *ndarray*s from ASCII files. We have coded functions to do this using the GSLIB Geo-EAS format, to support geostatistical workflows that utilize GSLIB programs as part of the GeostatsPy package that we are developing.
We included the read and write functions here for this tutorial. # You can look at a truncated representation of the *ndarray* like this. Sometimes a good way to check data is to just look at it. print(porosity_map) # You can see that the 2D array is actually an array of arrays, i.e. an array of $n_{y}$ rows, each an array of $n_{x}$ values. To show this we can include a single index and we will get a slice of all values in that row. Let's look at the first row (index zero). porosity_map[0] # If we add another index we get a single node from the 2D array. Let's get the first and last values from this first row. We will print them and you can confirm they are the first and last values from the output above. print(porosity_map[0][0]) # get first and last value of the first row print(porosity_map[0][99]) # Alternatively, you can use this notation to access a single cell in a *ndarray*. print(porosity_map[0, 0]) # get first and last value of the first row print(porosity_map[0, 99]) # You can also access a range of values of the array like this (see below). We get the first 10 values of the first row. print(porosity_map[0][0:10]) # get first 10 values of the first row # If you want to see the entire array without truncated representation then you raise the print options threshold in NumPy like this. Recent NumPy versions require an integer threshold (threshold=np.nan raises an error), so we use sys.maxsize. Note, this is probably not a good idea if you are working with very large arrays. For this example you can literally look through 10,000 values! import sys # for sys.maxsize np.set_printoptions( threshold=sys.maxsize ) # remove truncation from array visualization print(porosity_map) # #### Summary Statistics # # Let's try some summary statistics. Here's a convenient method from SciPy. Like many of the methods it anticipates a 1D array so we do a *flatten* on the 2D array to convert it to a 1D array before passing it.
por_stats = stats.describe(porosity_map.flatten()) # array summary statistics (avoid naming the result 'stats', which would shadow the scipy module) por_stats # We also have a variety of built-in summary statistic calculations that we may apply on *ndarray*s. Note, these methods work directly with our 2D array; therefore, they do not require flattening to a 1D array. mean_por = porosity_map.mean() # array summary statistics stdev_por = porosity_map.std() min_por = porosity_map.min() max_por = porosity_map.max() print( "Summary Statistics of Porosity \n Mean = " + str(mean_por) + ", StDev = " + str(stdev_por) ) print(" Min = " + str(min_por) + ", Max = " + str(max_por)) # We can also do this with NumPy functions that work with arrays that calculate the previous summary statistics and more. Note, *np.percentile* expects percentages on a 0 to 100 scale, so the P10 and P90 are requested as 10 and 90 (not 0.10 and 0.90). mean_por = np.mean(porosity_map) # array summary statistics stdev_por = np.std(porosity_map) min_por = np.min(porosity_map) max_por = np.max(porosity_map) P10_por, P90_por = np.percentile(porosity_map, [10, 90]) print( "Summary Statistics of Porosity \n Mean = " + str(mean_por) + ", StDev = " + str(stdev_por) ) print(" Min = " + str(min_por) + ", Max = " + str(max_por)) print(" P10 = " + str(P10_por) + ", P90 = " + str(P90_por)) # #### Checking and Manipulating # # We can read and write individual values of our array with indices $ix = 0,\ldots,nx-1$ and $iy = 0,\ldots,ny-1$. local_por = porosity_map[0, 0] # get porosity at location 0,0 print("Porosity at location 0,0 in our ndarray is " + str(local_por) + ".") porosity_map[0, 0] = 10.0000 # change the porosity value at location 0,0 print( "Porosity at location 0,0 in our ndarray is now " + str(porosity_map[0, 0]) + "." ) # We can also check for *NaN*s, invalid or missing values in our *ndarray*. porosity_map[0, 0] = np.nan print( "Porosity at location 0,0 in our ndarray is now " + str(porosity_map[0, 0]) + "." ) # We can check for any *NaN*'s in our array with the following code. First, let's add a couple more *NaN* values to make this example more interesting.
porosity_map[0, 1] = np.nan # add another NaN porosity_map[2, 1] = np.nan # add another NaN result = np.isnan(porosity_map).any() result # OK, so now we know that we have *NaN*'s in our array. This could cause issues with our calculations. We can get a list of indices with *NaN*'s in our *ndarray*. nan_list = np.argwhere( np.isnan(porosity_map) ) # get list of indices of array with NaNs print(nan_list) # We now have a list of the indices (0,0), (0,1) and (2,1) with *NaN*'s. These are exactly the array indices that we assigned to NaN. If we map this list of indices to *tuple*s with *map* and make that into a new list, we get something we can use to directly interact with the *NaN*'s in our 2D *ndarray*. nan_list_tuple = list(map(tuple, nan_list)) # convert index list to tuple list print(nan_list_tuple) # check the tuple list print(porosity_map[nan_list_tuple[0]]) # get the values at the indices print(porosity_map[nan_list_tuple[1]]) print(porosity_map[nan_list_tuple[2]]) # Now that we have this list of array coordinates (a list of tuples), we can use it to access those locations. Here we use those locations (there should be 3 *NaN*'s) to replace the missing values with a very small porosity value (0.001). print( "Value at the first NaN indices is " + str(porosity_map[nan_list_tuple[0]]) + "." ) # get value at first index porosity_map[ nan_list_tuple[0] ] = 0.001 # set the NaN's to a low porosity value porosity_map[nan_list_tuple[1]] = 0.001 porosity_map[nan_list_tuple[2]] = 0.001 print( "Value at the first NaN indices after setting to 0.001 is " + str(porosity_map[nan_list_tuple[0]]) + "." ) # #### Making Arrays # # There are various methods to make *ndarray*s from scratch. In some cases, our arrays are small enough that we can just write them out like this. my_array = np.array( [[0, 1, 2], [4, 5, 6], [7, 8, 9]] ) # make an ndarray by scratch print(my_array.shape) my_array # We now have a 3 x 3 *ndarray*.
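Beyond writing small arrays by hand, NumPy offers several other constructors worth knowing; a quick self-contained sketch (separate from the porosity example):

```python
import numpy as np

ones_arr = np.ones((3, 3))         # 3 x 3 array of 1's
full_arr = np.full((3, 3), 7.5)    # 3 x 3 array of a constant value
seq = np.arange(0.0, 1.0, 0.25)    # start, stop (exclusive), step
ticks = np.linspace(0.0, 1.0, 5)   # start, stop (inclusive), number of values
```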
# # We can also use NumPy's *rand* to make an *ndarray* of any shape with random values between 0 and 1 and *zeros* to make an array of any shape with 0's. # + from scipy import stats # summary stats rand_array = np.random.rand( 100, 100 ) # make 100 x 100 node array with random values print("Shape of the random array = " + str(rand_array.shape)) print(stats.describe(rand_array.flatten())) pixelplt( rand_array, xmin, xmax, ymin, ymax, cell_size, 0, 1, "Random Values", "X(m)", "Y(m)", "Random", cmap, "random", ) zero_array = np.zeros((100, 100)) # make 100 x 100 node array with zeros print("Shape of the zero array = " + str(zero_array.shape)) print(stats.describe(zero_array.flatten())) pixelplt( zero_array, xmin, xmax, ymin, ymax, cell_size, -1, 1, "Zeros", "X(m)", "Y(m)", "Zeros", cmap, "zeros", ) # - # #### Operations # # We can search for values in our array with any criteria we like. In this example we identify all nodes with porosity values greater than 15%; the result of *porosity_map > 15.0* is a boolean array (true and false) with true when that criteria is met. We apply that to the *porosity_map* *ndarray* to return all node values with true in a new array. We can check the size of that array to get the total number of nodes with porosity values greater than 15. greater_than = porosity_map[ porosity_map > 15.0 ] # make boolean array and get values that meet criteria print(greater_than) print( "There are " + str(greater_than.size) + " of a total of " + str(porosity_map.flatten().size) + "." ) # We can actually plot the boolean array (true = 1 and false = 0 numerically) to get a map of the nodes that meet the criteria. We do that below with porosity > 13% because it looks more interesting than only 25 nodes for the porosity > 15% case.
thresh_porosity_map = porosity_map > 13.0 pixelplt( thresh_porosity_map, xmin, xmax, ymin, ymax, cell_size, 0, 1, "Porosity > 13%", "X(m)", "Y(m)", "Boolean", cmap, "threshold", ) # How would you get a list of the indices that meet the criteria in the *porosity_map* array? We repeat the command to make a list of locations with porosity > 15%, *loc_high_por*. Then we simply grab the ix and iy index values from this list. The list is set up like this: my_list[0 for iy, 1 for ix][0 to number of nodes - 1]. loc_high_por = np.nonzero( porosity_map > 15 ) # get the indices with high porosity print( "Loc #1, ix = " + str(loc_high_por[1][0]) + " and iy = " + str(loc_high_por[0][0]) + "." ) print( " With a value of ", str(porosity_map[loc_high_por[0][0], loc_high_por[1][0]]) + ".", ) print( "Loc #2, ix = " + str(loc_high_por[1][1]) + " and iy = " + str(loc_high_por[0][1]) + "." ) print( " With a value of ", str(porosity_map[loc_high_por[0][1], loc_high_por[1][1]]) + ".", ) loc_high_por # Perhaps you want to do something more creative with your *ndarray*. The most flexible approach is to use a loop and iterate over the array. Let's add noise to our porosity map. To do this we take the previously calculated random array and center it (set the mean to 0.0 by subtracting the expected mean of 0.5), multiply it by a factor of 5 so that the result is more noticeable, and add it to the *porosity_map* array. # + porosity_map_noise = np.zeros( (100, 100) ) # use of loops to manipulate ndarrays for iy in range(ny): for ix in range(nx): porosity_map_noise[iy, ix] = ( porosity_map[iy, ix] + (rand_array[iy, ix] - 0.5) * 5 ) print(stats.describe(porosity_map_noise.flatten())) pixelplt( porosity_map_noise, xmin, xmax, ymin, ymax, cell_size, 0, 16, "Porosity With Noise", "X(m)", "Y(m)", "Porosity (%)", cmap, "Residual", ) # - # We could have done the above without the loops, by using the simple statement below.
We can use algebraic operators on *ndarray*s, like this example below, if the *ndarray*s are all the same shape. porosity_map_noise2 = ( porosity_map + (rand_array - 0.5) * 5 ) # using matrix algebra to repeat the previous looped method print(stats.describe(porosity_map_noise2.flatten())) pixelplt( porosity_map_noise2, xmin, xmax, ymin, ymax, cell_size, 0, 16, "Porosity With Noise", "X(m)", "Y(m)", "Porosity (%)", cmap, "Residual2", ) # Let's write our new *ndarray* to a file for storage and to apply with other software such as GSLIB. ndarray2GSLIB( porosity_map_noise, "porosity_noise_GSLIB.dat", "porosity_noise" ) # write out 2D array to a Geo-EAS ASCII file # #### More Exercises # # There are so many more exercises and tests that one could attempt to gain experience with the NumPy package and *ndarray* objects in Python. I'll end here for brevity, but I invite you to continue. Check out the docs at https://docs.scipy.org/doc/numpy-1.14.0/reference/arrays.ndarray.html. I'm always happy to discuss, # # *Michael* # # <NAME>, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin # On twitter I'm the @GeostatsGuy. #
Python Workflows/Week_05b_PythonDataBasics_ndarrays.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: super-duper-fiesta # language: python # name: super-duper-fiesta # --- import matplotlib.pyplot as plt import numpy as np from scipy.optimize import curve_fit cross_sections = "./../data/nu_xs_12.txt" def read_xs_file(f): d={} log_e, cc_nu, nc_nu, cc_nu_bar, nc_nu_bar = ([] for i in range(5)) File = open(f,"r") lines = File.readlines() for line in lines: columns = line.split() # split on any whitespace (robust to repeated spaces) log_e.append(float(columns[0])) cc_nu.append(float(columns[1])) nc_nu.append(float(columns[2])) cc_nu_bar.append(float(columns[3])) nc_nu_bar.append(float(columns[4])) d["log_E"]=np.array(log_e) d["E"]=np.power(10, np.array(log_e)) d["cc_nu"]=np.array(cc_nu) d["nc_nu"]=np.array(nc_nu) d["cc_nu_bar"]=np.array(cc_nu_bar) d["nc_nu_bar"]=np.array(nc_nu_bar) File.close() return d xs=read_xs_file(cross_sections) # + #plt.plot(xs['E'], xs['nc_nu']) # + # def xs_exp_model(x, a, b): # return np.exp(a+b*x) # + # a,b = curve_fit(xs_exp_model, np.log10(xs["E"])[950:], xs["nc_nu"][950:], maxfev=20000 ) # + # # plt.yscale("log") # # plt.xscale("log") # #plt.plot(xs["E"][950:], xs_log(xs["E"][950:], a[0], a[1], a[2]), linewidth=3,label='Fitted log') # plt.plot(np.log10(xs["E"])[950:], xs_exp(np.log10(xs["E"])[950:], c[0], c[1]), linewidth=3,label='Fitted exp') # #plt.plot(xs["E"], xs_log(xs["E"], a[0], a[1], a[2]), label='Fitted function') # #plt.plot(xs["E"][1200:], xs_log(xs["E"][1200:], a[0], a[1], a[2]), label='Fitted function') # plt.plot(np.log10(xs["E"]),xs["nc_nu"], linewidth=3, alpha=0.5, label="data") # plt.legend() # - len(xs['E']) def expo_root(x): return np.exp(-8.17236*10 + x*0.812287) # This function contains the result of fitting to a model. The fit was obtained with ROOT for numu_nc.
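The commented-out cells above sketch the same idea with SciPy's `curve_fit` instead of ROOT. For reference, a self-contained version of that exponential fit on synthetic data (the coefficients 2.0 and 0.3 are made up for illustration, not the ROOT results):

```python
import numpy as np
from scipy.optimize import curve_fit

def xs_exp_model(x, a, b):
    # same exponential model form as the commented-out cell above
    return np.exp(a + b * x)

# synthetic cross-section-like data generated from known coefficients
x_data = np.linspace(6.0, 10.0, 50)
y_data = xs_exp_model(x_data, 2.0, 0.3)

# on this noise-free data the fit should recover the generating coefficients
popt, pcov = curve_fit(xs_exp_model, x_data, y_data, p0=(1.0, 0.5), maxfev=20000)
```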
def expo_composite_nc(x): return np.piecewise(x, [x < 8, x >= 8], [lambda x : np.exp(-8.38165*10 + x*1.07417), lambda x : np.exp(-8.18376*10 + x*0.822837)]) plt.yscale("log") plt.plot(np.log10(xs["E"][650:]), expo_composite_nc(np.log10(xs["E"][650:])), linewidth=3,label='Fitted exp') plt.plot(np.log10(xs["E"]), xs['nc_nu'], linewidth=1,label='data') plt.legend() def expo_composite_cc(x): return np.piecewise(x, [x < 8, x >= 8], [lambda x : np.exp(-8.26068*10 + x*1.03968), lambda x : np.exp(-8.08147*10 + x*0.812867)]) plt.yscale("log") plt.plot(np.log10(xs["E"][650:]), expo_composite_cc(np.log10(xs["E"][650:])), linewidth=3,label='Fitted exp') plt.plot(np.log10(xs["E"]), xs['cc_nu'], linewidth=1,label='data') plt.legend()
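Both composite models rely on `np.piecewise`, which evaluates each element with the function paired to the condition it satisfies. A minimal sketch with made-up coefficients (not the fitted values above):

```python
import numpy as np

x = np.array([6.0, 7.0, 9.0, 10.0])

# two exponential branches split at x = 8, mirroring the composite fits above
y = np.piecewise(
    x,
    [x < 8, x >= 8],
    [lambda x: np.exp(1.0 + 0.5 * x),   # branch below the break (illustrative coefficients)
     lambda x: np.exp(2.0 + 0.4 * x)],  # branch above the break (illustrative coefficients)
)
```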
notebooks/CrossSection.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Combined Cycle Power Plant Analysis # In Combined Cycle Power Plant analysis with the given dataset, we have sensor data for Temperature (T), Ambient Pressure (AP), Relative Humidity (RH) and Exhaust Vacuum (V) of the power plant, together with the net hourly electrical energy output (EP) it produces. We will use data science techniques to find patterns in the available data and check which features impact the label most. # # A combined cycle power plant (CCPP) is composed of gas turbines (GT), steam turbines (ST) and heat recovery steam generators. In a CCPP, the electricity is generated by gas and steam turbines, which are combined in one cycle, and is transferred from one turbine to another. While the Vacuum is collected from, and has its effect on, the Steam Turbine, the other three ambient variables affect the GT performance. # # The Dataset is taken from here – [Combined Cycle Dataset](https://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant) # # 1. Importing the Library #Importing the necessary packages to process or plot the data import numpy as np import pandas as pd from sklearn import preprocessing import matplotlib.pyplot as plt import seaborn as sns # # 2. Loading the Dataset #Getting the data data = pd.read_csv("0000000000002419_training_ccpp_x_y_train.csv", delimiter=",") #Print the head of the Dataframe data.head() '''As we know the EP (electrical energy output) is our target value, so we are going to save it''' y_train=data[' EP'] del data[' EP'] #Print the y_train y_train # # 3.
Structure of the Dataset #Print the Describe function for the Dataframe data.describe() #Print the shape of the Dataframe data.shape #Print the shape of the y_train y_train.shape # ## Missing or Null Points data.isnull().sum() data.isna().sum() # As shown above, there are no missing values in this dataset, so we will proceed further. # # 4. Exploration of the Dataset # ## Statistics # For our very first coding implementation, we will calculate descriptive statistics about the Combined Cycle Power Plant. Since numpy has already been imported for us, we will use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model. # # In the code cell below, we will need to implement the following: # # * Calculate the minimum, maximum, mean, median, and standard deviation of 'EP'. # * Store each calculation in its respective variable.
# + # TODO: Minimum EP of the data minimum_EP= np.min(y_train) # Alternative using pandas # minimum_EP = y_train.min() # TODO: Maximum EP of the data maximum_EP = np.max(y_train) # Alternative using pandas # maximum_EP = y_train.max() # TODO: Mean EP of the data mean_EP = np.mean(y_train) # Alternative using pandas # mean_EP = y_train.mean() # TODO: Median EP of the data median_EP = np.median(y_train) # Alternative using pandas # median_EP = y_train.median() # TODO: Standard deviation of EP of the data std_EP = np.std(y_train) # Alternative using pandas # std_EP = y_train.std(ddof=0) # There are other statistics you can calculate too, like quartiles first_quartile = np.percentile(y_train, 25) third_quartile = np.percentile(y_train, 75) inter_quartile = third_quartile - first_quartile # Show the calculated statistics print("Statistics for Combined Cycle Power Plant dataset:\n") print("Minimum EP: ",minimum_EP) print("Maximum EP: ",maximum_EP) print("Mean EP: ",mean_EP) print("Median EP: ",median_EP) print("Standard deviation of EP: ",std_EP) print("First quartile of EP: ",first_quartile) print("Third quartile of EP: ",third_quartile) print("Interquartile (IQR) of EP: ",inter_quartile) # - # Let's analyse the EP in graphical format sns.set(rc={'figure.figsize':(11.7,8.27)}) sns.distplot(y_train, bins=30,color='orange') plt.show() # From the above analysis it can be seen that no outliers are present in the target values.
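One way to back the "no outliers" reading with a number is Tukey's 1.5 x IQR fence. A quick sketch on synthetic EP-like values (not the actual dataset; the same idea applies to `y_train`):

```python
import numpy as np

# synthetic EP-like values for illustration only
values = np.array([420.0, 435.5, 440.2, 445.0, 450.3,
                   455.1, 460.0, 470.8, 480.5, 495.0])

q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # Tukey fences
outliers = values[(values < lower) | (values > upper)]  # empty if no outliers
```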
# ## Heat Map '''In this we will examine the correlations in the Dataframe'''; #Here we will create new dataframe with EP for correlation correlation_df=data.copy() correlation_df['EP']=y_train correlation_df.head() #Using the seaborn library for the heat map sns.set(style='ticks', color_codes=True) plt.figure(figsize=(14, 12)) sns.heatmap(correlation_df.astype(float).corr(), linewidths=0.1, square=True, linecolor='white', annot=True) plt.show() # A heat map uses a warm-to-cool color spectrum to show dataset analytics, namely which parts of data receive the most attention. # # The correlation coefficient ranges from -1 to 1. If the value is close to 1, it means that there is a strong positive correlation between the two variables. When it is close to -1, the variables have a strong negative correlation. # **Is there any correlation among the features?** # * T has a negative correlation of -0.95 with EP, which is very close to -1. Thus T may have an inverse relation with EP, and the value of EP may decrease linearly with an increase in the value of T, which supports the previous analysis. # # * V has a negative correlation of -0.87 with EP, which is close to -1. Thus V may also have an inverse relation with EP, and the value of EP may also decrease linearly with an increase in V. # # * AP has a correlation of 0.52, which indicates that there may be an increase in the value of EP with an increase in AP. # # * RH has a correlation of 0.39, which indicates that there may be a slight increase in the value of EP with an increase in RH.
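The coefficients quoted above are Pearson correlations, the default of pandas `DataFrame.corr()`. A tiny synthetic example of producing and reading one such coefficient (the variable names and numbers here are illustrative only, not the CCPP data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)
t = rng.normal(20.0, 5.0, 200)                   # synthetic temperature-like feature
ep = 480.0 - 2.0 * t + rng.normal(0.0, 1.0, 200) # target that decreases strongly with t

df = pd.DataFrame({"T": t, "EP": ep})
corr = df.corr().loc["T", "EP"]  # Pearson correlation coefficient, in [-1, 1]
```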
# # ## Features Plot #Print all features in the Dataframe data.columns plt.plot(correlation_df["# T"],correlation_df["EP"], '+',color='green') plt.plot(np.unique(correlation_df["# T"]), np.poly1d(np.polyfit(correlation_df["# T"], correlation_df["EP"], 1))(np.unique(correlation_df["# T"])),color='yellow') plt.show() plt.plot(correlation_df[" V"],correlation_df["EP"], 'o',color='pink') plt.plot(np.unique(correlation_df[" V"]), np.poly1d(np.polyfit(correlation_df[" V"], correlation_df["EP"], 1))(np.unique(correlation_df[" V"])),color='blue') plt.show() # By analyzing the above graphs we get to understand that T and V may have inverse proportionality with EP, i.e. as their values increase there may be a decrease in the energy EP released from the combined cycle power plant. plt.plot(correlation_df[" AP"],correlation_df["EP"], '*',color='blue') plt.plot(np.unique(correlation_df[" AP"]), np.poly1d(np.polyfit(correlation_df[" AP"], correlation_df["EP"], 1))(np.unique(correlation_df[" AP"])),color='maroon') plt.show() # The ambient pressure AP has a somewhat linear distribution plot, which suggests that there may be a slight increase in EP with an increase in AP. plt.plot(correlation_df[" RH"],correlation_df["EP"], 'o',color='yellow') plt.plot(np.unique(correlation_df[" RH"]), np.poly1d(np.polyfit(correlation_df[" RH"], correlation_df["EP"], 1))(np.unique(correlation_df[" RH"])),color='red') plt.show() # Relative humidity RH has a graph similar to AP vs EP but somewhat more uniform, which implies RH may have less effect on EP; still, there may be a slight increase in EP with an increase in RH.
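The trend lines in these plots come from a degree-1 `np.polyfit` wrapped in `np.poly1d`. A minimal sketch on exact synthetic points, showing that the sign of the recovered slope is what signals the direction of the relationship:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 10.0 - 2.0 * x  # perfectly linear, decreasing (synthetic data)

slope, intercept = np.polyfit(x, y, 1)  # degree-1 (straight line) least-squares fit
trend = np.poly1d([slope, intercept])   # callable polynomial, as used in the plots above
```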
#Using Seaborn for Better Understanding of the Filter_data Features fig, axs = plt.subplots(ncols=4, nrows=1, figsize=(20, 10)) index = 0 axs = axs.flatten() for k,v in data.items(): sns.boxplot(y=k, data=data, ax=axs[index],color='orangered') index += 1 plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=5.0) # + #Printing the graphical representation sns.set(style='whitegrid', context='notebook') features_plot = data.columns sns.pairplot(data[features_plot], height=2.0); plt.tight_layout() plt.show() # - # **Features Observation Outcomes** # * As we can see that the range of value in AP is more different that the value in other features. # * So, in order to bring the features value in same range, we can use Feature Scaling Method #Feature Scaling scaler=preprocessing.StandardScaler() scaler.fit(data) scaler.transform(data) # # 5 .Gradient Descent Implementation # y = mx + b # m is slope, b is y-intercept '''Here we will compute the error for given points in the training dataset''' def compute_error_for_line_given_points(b, m, points): #take totalError inital as zero totalError = 0 #Iterate till len of points for i in range(0, len(points)): #Include x as 0th index of iterator #Include y as 1th index of iterator x = points[i, 0] y = points[i, 1] #Compute total error and add the new total error totalError += (y - (m * x + b)) ** 2 return totalError / float(len(points)) '''Here we will compute the step_gradient ''' def step_gradient(b_current, m_current, points, learningRate): #Intialise two variable one for b and other for m as zero #Take variable as length of points b_gradient = 0 m_gradient = 0 N = float(len(points)) for i in range(0, len(points)): #Include x as 0th index of iterator #Include y as 1th index of iterator x = points[i, 0] y = points[i, 1] #Update the new b and new m at every iteration by adding value as per formula given b_gradient += -(2/N) * (y - ((m_current * x) + b_current)) m_gradient += -(2/N) * x * (y - ((m_current * x) + b_current)) #Update new b and 
new m at last by subtracting the learningRate*new b for b and similarly for m new_b = b_current - (learningRate * b_gradient) new_m = m_current - (learningRate * m_gradient) return [new_b, new_m] '''Here we will implement the gradient_descent_runner function''' def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations): #Save the starting_b and starting_m in some new variable b = starting_b m = starting_m #Iterate over the num_iterations for i in range(num_iterations): '''Call step_gradient on new_b and new_m''' b, m = step_gradient(b, m, np.array(points), learning_rate) return [b, m] '''Here we will implement the run function, which is going to call above functions''' def run(): #save the training data in some variable,i.e points points = np.genfromtxt("0000000000002419_training_ccpp_x_y_train.csv", delimiter=",") #Intialise the learning_Rate,initial_b,initial_m and num_iterations learning_rate = 0.0001 initial_b = 0 # initial y-intercept guess initial_m = 0 # initial slope guess num_iterations = 1000 #Print the Starting b,Starting m and Starting Cost print("Starting gradient descent at b = {0}, m = {1}, error = {2}".format(initial_b, initial_m, compute_error_for_line_given_points(initial_b, initial_m, points))) print("Running...") #Print the New_b, New_m and New_Cost [b, m] = gradient_descent_runner(points, initial_b, initial_m, learning_rate, num_iterations) print("After {0} iterations b = {1}, m = {2}, error = {3}".format(num_iterations, b, m, compute_error_for_line_given_points(b, m, points))) '''Call the Run Function''' ans=run() ans # # 6. 
Inbuilt Gradient Boosting Regressor

#Getting the x_train
x_train=data

#Shape of x_train
x_train.shape,y_train.shape

# **Importing the library for Regression**

from sklearn.ensemble import GradientBoostingRegressor
alg=GradientBoostingRegressor(learning_rate=1.9,n_estimators=2000)
alg

# **Training**

#Training the data
alg.fit(x_train,y_train)

# **Prediction**

#Getting the test data
x_test=np.genfromtxt("0000000000002419_test_ccpp_x_test.csv",delimiter=",")
y_train.ravel(order='A')

#Prediction
y_predic=alg.predict(x_test)

#Print the y_predic
y_predic

# **Model Evaluation**

#Get the R2 score for the model
'''Using the inbuilt score function of the regression model'''
alg.score(x_train,y_train)

# **Saving the Prediction**

np.savetxt("Predict.csv",y_predic,fmt="%.5f")

# # 7. Conclusion
# * By analyzing the given data, we can say that EP decreases as T and V increase, while EP increases with AP and RH.
# * So, in order to increase the energy production of the power plant (EP), we need to operate the combined cycle power plant at low T, low V, high RH, and high AP.
# * During implementation we saw that changing the learning rate changes the slope and intercept found by gradient descent.
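The last bullet can be checked with a compact vectorized rewrite of the per-point gradient-descent loop from Section 5. This is a sketch on synthetic data, not the author's exact script:

```python
import numpy as np

def gradient_descent(x, y, lr, iters):
    # Vectorized form of the updates used in Section 5:
    # b_grad = -(2/N) * sum(y - (m*x + b))
    # m_grad = -(2/N) * sum(x * (y - (m*x + b)))
    b = m = 0.0
    n = float(len(x))
    for _ in range(iters):
        resid = y - (m * x + b)
        b -= lr * (-(2.0 / n) * resid.sum())
        m -= lr * (-(2.0 / n) * (x * resid).sum())
    return b, m

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 200)
y = 3.0 * x + 1.0 + rng.normal(0.0, 0.1, 200)  # true line: m = 3, b = 1

# A tiny learning rate makes little progress in 1000 iterations...
print(gradient_descent(x, y, lr=0.0001, iters=1000))
# ...while a larger one converges close to (b = 1.0, m = 3.0)
print(gradient_descent(x, y, lr=0.01, iters=5000))
```

Changing `lr` (or the iteration count) visibly changes the recovered slope and intercept, which is the behavior noted in the conclusion.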
Combined Cycle Power Plant/Gradient Descent - Combined Cycle Power Plant.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: LCLS-I py2 # language: python # name: ana1-current # --- # # Experiment and path specification # + # Specify the experiment for analysis experiment='cxix40218' runNumber = 52 # Set the installation and output path import os os.environ['INSTALLPATH']= '/cds/home/i/igabalsk/TRXS-Run18' os.environ['OUTPUTPATH']= '/cds/data/psdm/%s/%s/scratch' % (experiment[0:3],experiment) # - # # Import Libraries # + # Magic iPython command to enable plotting # %matplotlib inline # Load in the pythonBatchMagic library import sys sys.path.insert(0, os.environ['INSTALLPATH']+'/Libraries/pythonBatchMagic') from pythonBatchMagic import * # Determine current user currentUser, error = unixCMD("echo $USER") print(currentUser) os.environ['RESULTSPATH']= ('/cds/data/psdm/%s/%s/results/%s' % (experiment[0:3],experiment,currentUser)).strip() if not os.path.exists(os.environ['RESULTSPATH']): os.mkdir(os.environ['RESULTSPATH']) # - # # Leveraging the batch queue to quickly grab point data # ## Loading libraries # + sys.path.insert(0, os.environ['INSTALLPATH']+'/Libraries/LCLS') from LCLSdefault import * sys.path.insert(0, os.environ['INSTALLPATH']+'/Libraries/mattsTools') from picklez import * # Load in the get data library from dataAnalysis import * # Load in the batch library for lcls from lclsBatch import * # - # # Load timebinned CSPAD # ### This can either take a timebinned single run (Section 1) or a set of separate runs at different time delays and stitch them into a single ROI analysis (Section 2) # + # SECTION 1 timebins = np.load(os.environ['RESULTSPATH']+'/timebins-run-%d.npy' % runNumber ) CSPAD = np.load(os.environ['RESULTSPATH']+'/CSPAD-run-%d.npy' % runNumber) variance = np.load(os.environ['RESULTSPATH']+'/variance-run-%d.npy' % runNumber) counts = np.load(os.environ['RESULTSPATH']+'/counts-run-%d.npy' % 
runNumber) # END SECTION 1 # SECTION 2 # timebins_dict = {} # CSPAD_dict = {} # run_dict = {29:0, 31:1, 33:2, 32:3, 30:4, 28:5} # for run_number in [28, 29, 30, 31, 32, 33]: # timebins_dict[run_number] = np.load(os.environ['RESULTSPATH']+'/timebins-run-%d.npy' % run_number ) # CSPAD_dict[run_number] = np.load(os.environ['RESULTSPATH']+'/CSPAD-run-%d.npy' % run_number) # CSPAD_summed = np.zeros((8,512,1024,6)) # runs = [] # for key in CSPAD_dict.keys(): # runs.append(key) # CSPAD_run = np.nanmean(CSPAD_dict[key], axis=-1) # index = run_dict[key] # print key, index, np.mean(CSPAD_run), np.nansum(CSPAD_dict[key]) # CSPAD_summed[:,:,:,index] = CSPAD_run # print runs # END SECTION 2 # - # # Plot CSPAD CSPAD.shape # ### This has a modified plotCSPAD() function that can take in a list of ROIs [[x0,y0,dx,dy],...,[xn,yn,dx,dy]] and plot them on the detector as bright spots # + from IPython.display import clear_output from plotStyles import * def plotCSPAD( cspad , x , y, cspadMask=None, zLims = None, divergent=False, NTILE=8, ROIs=None ): figOpts = {'xLims':[-1e5,1e5],'yLims':[-1e5,1e5],'divergent':divergent, 'xIn':3, 'yIn':3*11.5/14.5} if zLims is not None: figOpts['zLims'] = zLims for iTile in range(NTILE): if cspadMask is not None: cspadTile = cspad[iTile,:,:] tileMask = ~cspadMask[iTile,:,:] cspadTile[tileMask] = 0 if ROIs: for mask in ROIs: x0 = mask[0] y0 = mask[1] dx = mask[2] dy = mask[3] roimask = ( x0 < x[iTile] ) & ( (x0+dx) > x[iTile] ) & ( y0 < y[iTile] ) & ( (y0+dy) > y[iTile] ) cspadTile[roimask] = 1000 if iTile == 0: newFigure = True else: newFigure = False clear_output() colorPlot( x[iTile,:,:], y[iTile,:,:], cspadTile , newFigure=newFigure, **figOpts); x,y = CSPADgeometry(detType='Jungfrau', run=runNumber, experiment=experiment) # cspadMask = createMask(experiment=experiment, run=runNumber, detType='Jungfrau').astype(bool) cspadMask = np.ones_like(x).astype(bool) print(cspadMask.shape) CSPADbinned = 1e7*np.copy(CSPAD) CSPADbinned[CSPADbinned>10]=0 # 
plotCSPAD( cspadMask, x , y , cspadMask=cspadMask, divergent=True ) # plotCSPAD( np.sum(CSPADbinned[:,:,:,:100], axis=-1)-np.sum(CSPADbinned[:,:,:,100:200], axis=-1), x , y , cspadMask=cspadMask, divergent=False, NTILE=8 ) # plotCSPAD( 3000*(CSPADbinned[:,:,:,2]-CSPADbinned[:,:,:,1]), x , y , zLims=[-100,100], # cspadMask=cspadMask, divergent=True, NTILE=8, ROIs=[[1e4,1e4,1e4,1e4],[5e4,1e4,1e4,1e4]] ) # plotCSPAD( 3e8*(CSPAD_summed[:,:,:,0]-0*CSPAD_summed[:,:,:,0]), x , y , zLims=[-100,100], # cspadMask=cspadMask, divergent=True, NTILE=8, ROIs=[[x1,y1,dx,dy]] ) plotCSPAD( 3e11*(CSPAD_summed[:,:,:,2]-CSPAD_summed[:,:,:,0]), x , y , zLims=[-100,100], cspadMask=cspadMask, divergent=True, NTILE=8, ROIs=[] ) # - # # ROI analysis # + def roiSummed( x0, y0, dx, dy, x, y, image ): idx = ( x0 < x ) & ( (x0+dx) > x ) & ( y0 < y ) & ( (y0+dy) > y ) return np.sum( image[idx , :] , 0 ) x0, y0 = -2e4, -2.5e4 x1, y1 = 5e4, -2e4 dx, dy = 1.5e4, 1.5e4 # roi1 = roiSummed( x0, y0, dx, dy, x, y, CSPADbinned ) # errroi1 = roiSummed( x0, y0, dx, dy, x, y, variance ) # roi2 = roiSummed( x1, y1, dx, dy, x, y, CSPADbinned ) # errroi2 = roiSummed( x1, y1, dx, dy, x, y, variance ) roi1 = roiSummed( x0, y0, dx, dy, x, y, CSPAD_summed ) # errroi1 = roiSummed( x0, y0, dx, dy, x, y, variance ) roi2 = roiSummed( x1, y1, dx, dy, x, y, CSPAD_summed ) # errroi2 = roiSummed( x1, y1, dx, dy, x, y, variance ) # errorratio = 1/roi2*np.sqrt(errroi1)+roi1/roi2**2*np.sqrt(errroi2) ratio = roi1/roi2 plotme = ratio[~np.isnan(ratio)]#-ratio[~np.isnan(ratio)].min() print ratio # linePlot( timebins[~np.isnan(ratio)], plotme , newFigure = True) linePlot( np.array([-100,-50,-25,25,50,100]), plotme , newFigure = True) # plt.errorbar( timebins[~np.isnan(ratio)], plotme, yerr = errorratio[~np.isnan(ratio)] ) # -
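The `roiSummed` helper above reduces to a boolean mask over the detector coordinate arrays. Here is a minimal standalone illustration on a toy 10x10 grid (not the real Jungfrau geometry, and without the trailing time-bin axis):

```python
import numpy as np

def roi_sum(x0, y0, dx, dy, x, y, image):
    # Select pixels strictly inside the rectangle (x0, x0+dx) x (y0, y0+dy),
    # mirroring the strict inequalities used by roiSummed above
    idx = (x0 < x) & (x < x0 + dx) & (y0 < y) & (y < y0 + dy)
    return image[idx].sum()

x, y = np.meshgrid(np.arange(10.0), np.arange(10.0))
image = np.ones_like(x)
print(roi_sum(0.5, 0.5, 3.0, 3.0, x, y, image))  # 9.0: the 3x3 block at (1..3, 1..3)
```

In the real analysis `image` has an extra time-bin axis, so the sum runs over the masked pixels per bin (`np.sum(image[idx, :], 0)`) rather than collapsing to a scalar.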
preproc/LSF/TRXS-Run18v4/Template-Notebooks/ROI-Analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np import re import string from nltk.corpus import stopwords from nltk.tokenize import word_tokenize from nltk.stem import PorterStemmer from nltk.stem import WordNetLemmatizer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn.naive_bayes import MultinomialNB from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from keras.layers.embeddings import Embedding from keras.preprocessing import sequence from keras.layers import Dropout import gensim np.random.seed(7) import warnings warnings.filterwarnings("ignore") # - import nltk nltk.download('punkt') nltk.download('stopwords') nltk.download('wordnet') stop_words = set(stopwords.words('english')) def delete_redundant_cols(df, cols): for col in cols: del df[col] return df # + def preprocess_review_text(review): review = review.lower() review = re.sub(r"http\S+|www\S+|https\S+","", review, flags=re.MULTILINE) review = review.translate(str.maketrans("","", string.punctuation)) review = re.sub(r'\@\w+|\#', "",review) review_tokens = word_tokenize(review) filtered_words = [word for word in review_tokens if word not in stop_words] ps = PorterStemmer() stemmed_words = [ps.stem(w) for w in filtered_words] lemmatizer = WordNetLemmatizer() lemma_words = [lemmatizer.lemmatize(w, pos='a') for w in stemmed_words] return " ".join(lemma_words) preprocess_review_text("Hi there, How are you preparing for your exams?") # - def 
get_feature_vector(train_fit): vector = TfidfVectorizer(sublinear_tf=True) vector.fit(train_fit) return vector df=pd.read_csv("datasets/amazon_fine_food_review.csv", encoding='latin-1') len(df["Text"]) df.head() df.columns #Deduplication of entries final=df.drop_duplicates(subset={"UserId","ProfileName","Time","Text"}, keep='first', inplace=False) final.shape review_length = 400 df_filtered = final[final["Text"].map(len) < review_length] len(df_filtered["Text"]) from wordcloud import WordCloud import matplotlib.pyplot as plt all_reviews = ''.join([sentence for sentence in df_filtered['Text']]) word_cloud = WordCloud(width=1000,height=700,random_state=21,max_font_size=119).generate(all_reviews) print("Most used words in the dataset") plt.figure(figsize=(10,10)) plt.imshow(word_cloud,interpolation='bilinear') plt.axis('off') plt.show() # + redundant_cols=['Id', 'ProductId', 'UserId', 'ProfileName', 'Time'] df_filtered2=delete_redundant_cols(df_filtered, redundant_cols) # - df_filtered2.columns df_filtered2["Score"].unique() def int_to_string(sentiment): if sentiment == 1: return "Highly Negative" elif sentiment == 2: return "Somewhat Negative" elif sentiment == 3: return "Neutral" elif sentiment == 4: return "Somewhat Positive" else: return "Highly Positive" df_filtered2.Text = df_filtered2["Text"].apply(preprocess_review_text) df_filtered2 df_filtered2.iloc[:, 4] df_filtered2.iloc[:, 2] # + #data sample is too large, we take part of it thresh = 20000 sample_data_X = df_filtered2.iloc[:, 4][:thresh] sample_data_Y = df_filtered2.iloc[:, 2][:thresh] tf_vector = get_feature_vector(np.array(sample_data_X).ravel()) X = tf_vector.transform(np.array(sample_data_X).ravel()) y = np.array(sample_data_Y).ravel() X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=30) print("X_train shape is:",X_train.shape) print("X_test shape is:",X_test.shape) # - string_data = [dt.split() for dt in sample_data_X] #convert data to interperable strings to run 
gensim model gensim_model = gensim.models.word2vec.Word2Vec(size=300, window=7, min_count=10, workers=8) gensim_model.build_vocab(string_data) gensim_results = gensim_model.wv.vocab.keys() vocabs= len(gensim_results) print("The total number of vocabs is: %d"%vocabs) # ### Using a Logistic Regression model to fit the data. LR_model = LogisticRegression(solver='lbfgs') LR_model.fit(X_train, y_train) y_predict_lr = LR_model.predict(X_test) print("The accuracy score of the Logistic Regression model is:",accuracy_score(y_test, y_predict_lr)) # ### Using a MultinomiaLNB model to fit the data. # NB_model = MultinomialNB() NB_model.fit(X_train, y_train) y_predict_nb = NB_model.predict(X_test) print("The accuracy score of the MultinomialNB model is:",accuracy_score(y_test, y_predict_nb)) from sklearn.feature_extraction.text import CountVectorizer count_vect = CountVectorizer() count_vect.fit(df_filtered2.iloc[:, 4]) vocabulary = count_vect.get_feature_names() print('Words in the Vocabulary : ',len(vocabulary)) # ### Using a Neural Network to fit the data # + sample_data_X = df_filtered2.iloc[:, 4][:thresh] sample_data_Y = df_filtered2.iloc[:, 2][:thresh] X_train, X_test, y_train, y_test = train_test_split(sample_data_X, sample_data_Y, test_size=0.3, random_state=30) print(X_train.shape,X_test.shape) # - from keras.preprocessing.text import Tokenizer tokeni = Tokenizer() tokeni.fit_on_texts(X_train) vocab_size = len(tokeni.word_index) + 1 print("Total words", vocab_size) #padding input sequences max_review_length = 100 X_train = sequence.pad_sequences(tokeni.texts_to_sequences(X_train), maxlen=max_review_length) X_test = sequence.pad_sequences(tokeni.texts_to_sequences(X_test), maxlen=max_review_length) from sklearn.preprocessing import LabelEncoder encoder = LabelEncoder() encoder.fit(y_train.tolist()) # + y_train = encoder.transform(y_train.tolist()) y_test = encoder.transform(y_test.tolist()) y_train = y_train.reshape(-1,1) y_test = y_test.reshape(-1,1) 
print("y_train",y_train.shape) print("y_test",y_test.shape) # + vocab_size = len(vocabulary) embedding_vecor_length = 32 epochs = 10 model = Sequential() model.add(Embedding(vocab_size+1,embedding_vecor_length,input_length= max_review_length)) model.add(LSTM(100,return_sequences=True, dropout=0.4, recurrent_dropout=0.4)) model.add(LSTM(100, dropout=0.4, recurrent_dropout=0.4)) model.add(Dense(5, activation='softmax')) print(model.summary()) # model = Sequential() # model.add(embedding_layer) # model.add(Dropout(0.5)) # model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2)) # + from keras.optimizers import adam opt = adam(lr=0.01) # Compile model model.compile( loss='sparse_categorical_crossentropy', optimizer=opt, metrics=['accuracy'] ) # + import time from keras import utils from keras.callbacks import ReduceLROnPlateau, EarlyStopping start_time = time.time() callbacks = [ ReduceLROnPlateau(monitor='val_loss', patience=5, cooldown=0), EarlyStopping(monitor='val_acc', min_delta=1e-4, patience=5)] history = model.fit(X_train, y_train,epochs=3,batch_size = 512,shuffle=True,validation_split=0.3,callbacks=callbacks) end_time = time.time() time_taken = end_time - start_time # + test_loss, test_acc = model.evaluate(X_test,y_test) print('Test Loss: {}'.format(test_loss)) print('Test Accuracy: {}'.format(test_acc)) # + plt.figure(figsize=(20,20)) plt.subplot(2,1,1) plt.suptitle("Optimizer: Adam", fontsize=10) plt.ylabel("Loss", fontsize=16) plt.plot(history.history['loss'],label='Training Loss') plt.plot(history.history['val_loss'],label='Validation Loss') plt.legend(loc='upper right') #do some printing to visualize accuracy = np.array(history.history['acc']) val_accuracy = np.array(history.history['val_acc']) avg = 20 a_sum = np.cumsum(accuracy) v_sum= np.cumsum(val_accuracy) a_sum[avg:] = a_sum[avg:] - a_sum[:-avg] a_plot =a_sum[avg-1:]/avg v_sum[avg:] = v_sum[avg:] - v_sum[:-avg] v_plot =v_sum[avg-1:]/avg plt.subplot(2,1,2) plt.ylabel("Accuracy", fontsize=16) 
plt.xlabel("Epochs",fontsize=16) # plt.plot(a_plot,label='Training Accuracy',color='blue',marker='o') plt.plot(history.history['acc'],label='T Accuracy',color='magenta') # plt.plot(v_plot,label='Validation Accuracy',marker='o',color='red') plt.plot(history.history['val_acc'],label='V Accuracy',color='black') plt.legend(loc='upper left') plt.show() # - def predict(text): # Tokenize text x_test = sequence.pad_sequences(tokeni.texts_to_sequences([text]), maxlen=max_review_length) # Predict score = model.predict([x_test])[0] # Decode sentiment pred = np.argmax(score) #get the index of the highest prediction 1,2,3,4,5 pred_score = score[pred] #get the highest score from the prediction # print(score) label = int_to_string(pred) #get the sentiment based on the prediction return {"label": label, "score": pred_score} # + test_a = "I loved the product" test_b = "My order came late, where can I make a complaint?" test_c = "The order was broken and was a mess, I will be asking for a refund" test_d = "The product was okay, not really great" test_e = "I am so happy I bought this product, I am going to buy more soon" tests = [test_a,test_b,test_c,test_d,test_e] for i,test in enumerate(tests): result = predict(test) print("Test "+str(i)+" result:",result) # -
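The `predict` helper above boils down to an argmax over the softmax output vector. A standalone sketch with a hypothetical score vector (same five sentiment classes):

```python
import numpy as np

LABELS = ["Highly Negative", "Somewhat Negative", "Neutral",
          "Somewhat Positive", "Highly Positive"]

def decode(score):
    # The index of the largest probability is the predicted class;
    # the probability at that index is the model's confidence
    pred = int(np.argmax(score))
    return {"label": LABELS[pred], "score": float(score[pred])}

print(decode(np.array([0.05, 0.10, 0.10, 0.15, 0.60])))
# {'label': 'Highly Positive', 'score': 0.6}
```

The only model-specific part of `predict` is producing the score vector; the decoding step is this argmax.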
code/.ipynb_checkpoints/Project-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # DAT210x - Programming with Python for DS # ## Module5- Lab8 # + import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt matplotlib.style.use('ggplot') # Look Pretty # - # ### A Convenience Function # This convenience method will take care of plotting your test observations, comparing them to the regression line, and displaying the R2 coefficient def drawLine(model, X_test, y_test, title): fig = plt.figure() ax = fig.add_subplot(111) ax.scatter(X_test, y_test, c='g', marker='o') ax.plot(X_test, model.predict(X_test), color='orange', linewidth=1, alpha=0.7) print("Est 2014 " + title + " Life Expectancy: ", model.predict([[2014]])[0]) print("Est 2030 " + title + " Life Expectancy: ", model.predict([[2030]])[0]) print("Est 2045 " + title + " Life Expectancy: ", model.predict([[2045]])[0]) score = model.score(X_test, y_test) title += " R2: " + str(score) ax.set_title(title) plt.show() # ### The Assignment # Load up the data here into a variable called `X`. As usual, do a .describe and a print of your dataset and compare it to the dataset loaded in a text file or in a spread sheet application: # .. your code here .. pd.read_csv('Datasets/life_expectancy.csv') # Create your linear regression model here and store it in a variable called `model`. Don't actually train or do anything else with it yet: # + # .. your code here .. # - # Slice out your data manually (e.g. don't use `train_test_split`, but actually do the indexing yourself. Set `X_train` to be year values LESS than 1986, and `y_train` to be corresponding 'WhiteMale' age values. You might also want to read the note about slicing on the bottom of this document before proceeding: # + # .. your code here .. 
# - # Train your model then pass it into `drawLine` with your training set and labels. You can title it 'WhiteMale'. `drawLine` will output to the console a 2014 extrapolation / approximation for what it believes the WhiteMale's life expectancy in the U.S. will be... given the pre-1986 data you trained it with. It'll also produce a 2030 and 2045 extrapolation: # + # .. your code here .. # - # Print the actual 2014 'WhiteMale' life expectancy from your loaded dataset # + # .. your code here .. # - # Repeat the process, but instead of for WhiteMale, this time select BlackFemale. Create a slice for BlackFemales, fit your model, and then call `drawLine`. Lastly, print out the actual 2014 BlackFemale life expectancy: # + # .. your code here .. # - # Lastly, print out a correlation matrix for your entire dataset, and display a visualization of the correlation matrix, just as we described in the visualization section of the course: # + # .. your code here .. # - plt.show() # ### Notes On Fitting, Scoring, and Predicting: # Here's a hint to help you complete the assignment without pulling your hair out! When you use `.fit()`, `.score()`, and `.predict()` on your model, SciKit-Learn expects your training data to be in spreadsheet (2D Array-Like) form. This means you can't simply pass in a 1D Array (slice) and get away with it. # # To properly prep your data, you have to pass in a 2D Numpy Array, or a dataframe. But what happens if you really only want to pass in a single feature? # # If you slice your dataframe using `df[['ColumnName']]` syntax, the result that comes back is actually a _dataframe_. Go ahead and do a `type()` on it to check it out. Since it's already a dataframe, you're good -- no further changes needed. # # But if you slice your dataframe using the `df.ColumnName` syntax, OR if you call `df['ColumnName']`, the result that comes back is actually a series (1D Array)! This will cause SKLearn to bug out. 
So if you are slicing using either of those two techniques, before sending your training or testing data to `.fit` / `.score`, do `any_column = my_column.values.reshape(-1,1)` (newer versions of Pandas removed `.reshape` on a Series, so go through the underlying `.values` array first).
#
# This will convert your 1D array of `[n_samples]` to a 2D array shaped like `[n_samples, 1]`: a single feature, with many samples.
#
# If you did something like `my_column = [my_column]`, that would produce an array in the shape of `[1, n_samples]`, which is incorrect because SKLearn expects your data to be arranged as `[n_samples, n_features]`. Keep in mind, all of the above only relates to your `X` or input data, and does not apply to your `y` or labels.
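The note above is easy to verify directly. The column values below are made up purely to show the shapes, not taken from the lab's dataset:

```python
import pandas as pd

df = pd.DataFrame({"Year": [1970, 1980, 1990],
                   "WhiteMale": [67.1, 70.7, 72.7]})  # illustrative values only

X_2d = df[["Year"]]                      # double brackets -> DataFrame
print(type(X_2d).__name__, X_2d.shape)   # DataFrame (3, 1): ready for sklearn

X_1d = df["Year"]                        # single brackets -> Series
print(type(X_1d).__name__, X_1d.shape)   # Series (3,): would make sklearn complain

X_fixed = X_1d.values.reshape(-1, 1)     # back to the (n_samples, 1) layout
print(X_fixed.shape)                     # (3, 1)
```

Either slicing style works, as long as what reaches `.fit` / `.score` is 2D with shape `[n_samples, n_features]`.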
Module5/.ipynb_checkpoints/Module5 - Lab8-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: antidote # language: python # name: antidote # --- # + [markdown] pycharm={} # ## Injection benchmark # # ### Setup # # An i7 7700K were used for the timings. # + pycharm={} import sys from antidote import __version__, is_compiled print(f"Antidote: {__version__()} {'(cython)' if is_compiled() else ''}") print(f"Python {sys.version}") # + [markdown] pycharm={} # ### Results # The key take away from those benchmarks, is to avoid using injection on short functions which are called repeatedly, in a loop typically. In the most common use case of dependency injection, service instantiation, the overhead should be negligible. # # It should be noted that in most cases the worst scenario is used, as functions do nothing. In the real world, pure python functions are a lot slower. So to put the following results into perspective, here is the time needed to decode this simple JSON. # + pycharm={} import json # %timeit json.loads('{ "name":"John", "age":30, "city":"New York"}') # + pycharm={} from antidote import world, register, inject @register class Service1: pass @register class Service2: def __init__(self, service1: Service1): self.service1 = service1 @register class Service3: def __init__(self, service1: Service1, service2: Service2): self.service1 = service1 self.service2 = service2 @register class Service4: def __init__(self, service1: Service1, service2: Service2, service3: Service3): self.service1 = service1 self.service2 = service2 self.service3 = service3 # + [markdown] pycharm={} # ### Function call # # Injection overhead is here measured with a function which does nothing. 
# + pycharm={} def f(s1: Service1, s2: Service2, s3: Service3, s4: Service4): return s1, s2, s3, s4 # + [markdown] pycharm={} # Time necessary to only execute the function, without retrieving the services # + pycharm={} args = (world.get(Service1), world.get(Service2), world.get(Service3), world.get(Service4)) # %timeit f(*args) # + [markdown] pycharm={} # Overhead of the injection when all argument must be retrieved from the container. # + pycharm={} f_injected = inject(f) assert f(*args) == f_injected() # %timeit f_injected() # + [markdown] pycharm={} # Overhead of the injection when no argument has to be retrieved. # + pycharm={} assert f(*args) == f_injected(*args) # %timeit f_injected(*args) # + [markdown] pycharm={} # ### Object instantiation # + pycharm={} class Obj: def __init__(self, s1: Service1, s2: Service2, s3: Service3, s4: Service4): self.s1 = s1 self.s2 = s2 self.s3 = s3 self.s4 = s4 # %timeit Obj(*args) # + pycharm={} @register class ObjInjected: def __init__(self, s1: Service1, s2: Service2, s3: Service3, s4: Service4): self.s1 = s1 self.s2 = s2 self.s3 = s3 self.s4 = s4 # %timeit ObjInjected() # + [markdown] pycharm={"metadata": false, "name": "#%% md\n"} # ### Configuration # # + pycharm={"is_executing": false, "metadata": false, "name": "#%%\n"} from antidote import LazyConfigurationMeta class Conf(metaclass=LazyConfigurationMeta): A = 'A' B = 'B' def __call__(self, key): return key # + pycharm={"metadata": false, "name": "#%%\n"} def g(a, b): return a, b # %timeit g('A', 'B') # + pycharm={"metadata": false, "name": "#%%\n"} conf = Conf() # %timeit g(conf('A'), conf('B')) # + pycharm={"metadata": false, "name": "#%%\n"} g_injected = inject(g, dependencies=(Conf.A, Conf.B)) assert g(conf('A'), conf('B')) == g_injected() assert g(conf.A, conf.B) == g_injected() # %timeit g_injected()
benchmark.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="DnLV1HUefFtW" # # Text Features and Embeddings In CatBoost # + [markdown] colab_type="text" id="0UAHpnD8fFtZ" # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catboost/tutorials/blob/master/events/2020_11_18_catboost_tutorial/text_embedding_features.ipynb) # # # **Set GPU as hardware accelerator** # # First of all, you need to select GPU as hardware accelerator. There are two simple steps to do so: # Step 1. Navigate to **Runtime** menu and select **Change runtime type** # Step 2. Choose **GPU** as hardware accelerator. # That's all! # + [markdown] colab_type="text" id="9FM0IRyi8NOw" # Let's install CatBoost. # + colab={"base_uri": "https://localhost:8080/", "height": 361} colab_type="code" id="TpJdgt63fSOv" outputId="d62a776e-f741-4192-b919-91903ea0441b" # !pip install catboost # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="MNC1tP0UfFtd" outputId="2c0abe55-df9c-4a0f-daa4-dc8c8d858f63" import os import pandas as pd import numpy as np np.set_printoptions(precision=4) import catboost print(catboost.__version__) # + [markdown] colab_type="text" id="OkexL1k7fFti" # ## Preparing data # + [markdown] colab_type="text" id="viF18QJqfFtd" # In this tutorial we will use dataset **IMDB** from [Kaggle](https://www.kaggle.com) competition for our experiments. Data can be downloaded [here](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews). 
# - # !wget https://transfersh.com/ou7jB/imdb.csv -O imdb.csv df = pd.read_csv('imdb.csv') df['label'] = (df['sentiment'] == 'positive').astype(int) df.drop(['sentiment'], axis=1, inplace=True) df.head() # + from catboost import Pool from sklearn.model_selection import train_test_split train_df, test_df = train_test_split(df, train_size=0.8, random_state=0) y_train, X_train = train_df['label'], train_df.drop(['label'], axis=1) y_test, X_test = test_df['label'], test_df.drop(['label'], axis=1) train_pool = Pool(data=X_train, label=y_train, text_features=['review']) test_pool = Pool(data=X_test, label=y_test, text_features=['review']) print('Train dataset shape: {}\n'.format(train_pool.shape)) # + colab={"base_uri": "https://localhost:8080/", "height": 269} colab_type="code" id="VTi3eN58fFt6" outputId="e694fed2-1341-45a3-c799-334b32fbc01e" from catboost import CatBoostClassifier def fit_model(train_pool, test_pool, **kwargs): model = CatBoostClassifier( iterations=1000, learning_rate=0.05, eval_metric='AUC', **kwargs ) return model.fit( train_pool, eval_set=test_pool, verbose=100, ) model = fit_model(train_pool, test_pool, task_type='GPU') # + [markdown] colab_type="text" id="IiHpTGfbfFuV" # ## How it works? # # 1. **Text Tokenization** # 2. **Dictionary Creation** # 3. **Feature Calculation** # + [markdown] colab_type="text" id="MszSnbqH8NR3" # ## Text Tokenization # + [markdown] colab_type="text" id="mOBGuexjb8tr" # Usually we get our text as a sequence of Unicode symbols. So, if the task isn't a DNA classification we don't need such granularity, moreover, we need to extract more complicated entities, e.g. words. The process of extraction tokens -- words, numbers, punctuation symbols or special symbols which defines emoji from a sequence is called **tokenization**.<br> # # Tokenization is the first part of text preprocessing in CatBoost and performed as a simple splitting a sequence on a string pattern (e.g. space). 
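Before reaching for `catboost.text_processing.Tokenizer` below, the core idea can be shown with plain Python string splitting (a simplification: real tokenizers also handle punctuation, token types, and sense-aware separators):

```python
def tokenize(text):
    # Split on runs of whitespace -- the simplest string-pattern tokenizer
    return text.split()

print(tokenize("Cats are so cute :)"))
# ['Cats', 'are', 'so', 'cute', ':)']
```

Note the smiley survives as its own token here only because it is space-separated; gluing it to a word would break this naive scheme, which is exactly what the richer tokenizer settings below address.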
# + colab={} colab_type="code" id="NAeELULufFuV" text_small = [ "Cats are so cute :)", "Mouse scare...", "The cat defeated the mouse", "Cute: Mice gather an army!", "Army of mice defeated the cat :(", "Cat offers peace", "Cat is scared :(", "Cat and mouse live in peace :)" ] target_small = [1, 0, 1, 1, 0, 1, 0, 1] # + colab={"base_uri": "https://localhost:8080/", "height": 161} colab_type="code" id="E21CQ8ocfFuX" outputId="f78b995b-29fc-41c9-b28c-b3adee167ba7" from catboost.text_processing import Tokenizer simple_tokenizer = Tokenizer() def tokenize_texts(texts): return [simple_tokenizer.tokenize(text) for text in texts] simple_tokenized_text = tokenize_texts(text_small) simple_tokenized_text # + [markdown] colab_type="text" id="ChZQ5cpJfFuZ" # ### More preprocessing! # # Lets take a closer look on the tokenization result of small text example -- the tokens contains a lot of mistakes: # # 1. They are glued with punctuation 'Cute:', 'army!', 'skare...'. # 2. The words 'Cat' and 'cat', 'Mice' and 'mice' seems to have same meaning, perhaps they should be the same tokens. # 3. The same problem with tokens 'are'/'is' -- they are inflected forms of same token 'be'. # # **Punctuation handling**, **lowercasing**, and **lemmatization** processes help to overcome these problems. # + [markdown] colab_type="text" id="qaoTjEmR8NSM" # ### Punctuation handling and lowercasing # + colab={"base_uri": "https://localhost:8080/", "height": 161} colab_type="code" id="6cPpYpmtfFuZ" outputId="2bc7abef-5828-43af-d588-48edb490eed9" tokenizer = Tokenizer( lowercasing=True, separator_type='BySense', token_types=['Word', 'Number'] ) tokenized_text = [tokenizer.tokenize(text) for text in text_small] tokenized_text # + [markdown] colab_type="text" id="JDhBkZzJfFua" # ### Removing stop words # # **Stop words** - the words that are considered to be uninformative in this task, e.g. function words such as *the, is, at, which, on*. 
# Usually stop words are removed during text preprocessing to reduce the amount of information that subsequent algorithms have to consider. # Stop words are collected manually (in dictionary form) or automatically, for example by taking the most frequent words. # + colab={"base_uri": "https://localhost:8080/", "height": 161} colab_type="code" id="d1MYzKgTfFub" outputId="865f655e-0cb9-4626-9d40-e459b9487b0f" stop_words = set(('be', 'is', 'are', 'the', 'an', 'of', 'and', 'in')) def filter_stop_words(tokens): return list(filter(lambda x: x not in stop_words, tokens)) tokenized_text_no_stop = [filter_stop_words(tokens) for tokens in tokenized_text] tokenized_text_no_stop # + [markdown] colab_type="text" id="vxofPVc1fFuc" # ### Lemmatization # # A lemma (Wikipedia) is the canonical form, dictionary form, or citation form of a set of words.<br> # For example, the lemma "go" represents the inflected forms "go", "goes", "going", "went", and "gone".<br> # The process of converting a word to its lemma is called **lemmatization**. # # + colab={"base_uri": "https://localhost:8080/", "height": 89} colab_type="code" id="HWrijpMGfFud" outputId="1b6b8015-8cf9-47c5-89cf-5d5fc8b5f794" import os import nltk nltk_data_path = os.path.join(os.path.dirname(nltk.__file__), 'nltk_data') nltk.data.path.append(nltk_data_path) nltk.download('wordnet', nltk_data_path) lemmatizer = nltk.stem.WordNetLemmatizer() def lemmatize_tokens_nltk(tokens): return list(map(lambda t: lemmatizer.lemmatize(t), tokens)) # + colab={"base_uri": "https://localhost:8080/", "height": 161} colab_type="code" id="XfyhV9ONfFuf" outputId="4b0568c9-3bb8-483a-8f86-dd358c6fd2c5" text_small_lemmatized_nltk = [lemmatize_tokens_nltk(tokens) for tokens in tokenized_text_no_stop] text_small_lemmatized_nltk # + [markdown] colab_type="text" id="y63KVna4fFui" # Now words with the same meaning are represented by the same token, and tokens are no longer glued to punctuation.
# # <span style="color:red">Be careful.</span> You should verify for your own task:<br> # Is it really necessary to remove punctuation, lowercase sentences, perform lemmatization, and/or tokenize by word?<br> # + [markdown] colab_type="text" id="qFWoSX-kfFui" # ### Let's check the accuracy with the new text preprocessing # # Since CatBoost doesn't split off punctuation, lowercase letters, or perform lemmatization, we need to preprocess the text manually and then pass it to the learning algorithm. # # Since the only natural-text feature here is the review, we will preprocess only it. # + colab={"base_uri": "https://localhost:8080/", "height": 53} colab_type="code" id="ZHL3x7NwfFuj" outputId="85135452-02ea-4644-882d-726fcc568605" # %%time def preprocess_data(X): X_preprocessed = X.copy() X_preprocessed['review'] = X['review'].apply(lambda x: ' '.join(lemmatize_tokens_nltk(tokenizer.tokenize(x)))) return X_preprocessed X_preprocessed_train = preprocess_data(X_train) X_preprocessed_test = preprocess_data(X_test) train_processed_pool = Pool( X_preprocessed_train, y_train, text_features=['review'], ) test_processed_pool = Pool( X_preprocessed_test, y_test, text_features=['review'], ) # + colab={"base_uri": "https://localhost:8080/", "height": 269} colab_type="code" id="0jJJSrFJfFuk" outputId="6baeef42-d430-4793-fc33-556095416a9b" model_on_processed_data = fit_model(train_processed_pool, test_processed_pool, task_type='GPU') # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="AXDdPAgyfFum" outputId="61e26e81-b858-4675-ab58-aaf3384428ae" def print_score_diff(first_model, second_model): first_accuracy = first_model.best_score_['validation']['AUC'] second_accuracy = second_model.best_score_['validation']['AUC'] gap = (second_accuracy - first_accuracy) / first_accuracy * 100 print('{} vs {} ({:+.2f}%)'.format(first_accuracy, second_accuracy, gap)) print_score_diff(model, model_on_processed_data) # + [markdown] colab_type="text"
id="CJr7fXN7fFun" # ## Dictionary Creation # # After the first stage, text preprocessing and tokenization, the second stage starts. The second stage uses the prepared text to select a set of units, which will be used for building new numerical features. # # The set of selected units is called a dictionary. It might contain words, word bigrams, or character n-grams. # + colab={} colab_type="code" id="D6H1MXf9fFuo" from catboost.text_processing import Dictionary # - text_small_lemmatized_nltk # + colab={} colab_type="code" id="Rn402k78fFuq" dictionary = Dictionary(occurence_lower_bound=0, max_dictionary_size=10) dictionary.fit(text_small_lemmatized_nltk); #dictionary.fit(text_small, tokenizer) # + colab={"base_uri": "https://localhost:8080/", "height": 253} colab_type="code" id="KJr0UBzOfFur" outputId="4ab23b42-0fb7-4ac4-c878-63da839c8635" dictionary.save('dictionary.tsv') # !cat dictionary.tsv # - dictionary.apply([text_small_lemmatized_nltk[0]]) # + [markdown] colab_type="text" id="U1wLb5MX8NTY" # ## Feature Calculation # + [markdown] colab_type="text" id="KYzNqXgcfFut" # ### Conversion into fixed-size vectors # # The majority of classic ML algorithms compute and perform predictions on a fixed number of features $F$.<br> # That means the learning set $X = \{x_i\}$ contains vectors $x_i = (a_0, a_1, ..., a_F)$ where $F$ is constant. # # Since a text object $x$ is not a fixed-length vector, we need to preprocess the original set $D$.<br> # One of the simplest text-to-vector encoding techniques is **Bag of words (BoW)**. # # ### Bag of words algorithm # # The algorithm takes in a dictionary and a text.<br> # During the algorithm the text $x = (a_0, a_1, ..., a_k)$ is converted into a vector $\tilde x = (b_0, b_1, ..., b_F)$,<br> where $b_i$ is 0/1 (depending on whether the word with id=$i$ from the dictionary occurs in the text $x$).
# + X_proc_train_small, y_train_small = X_preprocessed_train[:1000]['review'].to_list(), y_train[:1000] X_proc_train_small = list(map(simple_tokenizer.tokenize, X_proc_train_small)) X_proc_test_small, y_test_small = X_preprocessed_test[:1000]['review'].to_list(), y_test[:1000] X_proc_test_small = list(map(simple_tokenizer.tokenize, X_proc_test_small)) dictionary = Dictionary(max_dictionary_size=100) dictionary.fit(X_proc_train_small); # + colab={"base_uri": "https://localhost:8080/", "height": 305} colab_type="code" id="ga0AfpT8fFuv" outputId="6b6e9abb-3e2a-4a8e-eac9-dacbac3c33fd" def bag_of_words(tokenized_text, dictionary): features = np.zeros((len(tokenized_text), dictionary.size)) for i, tokenized_sentence in enumerate(tokenized_text): indices = np.array(dictionary.apply([tokenized_sentence])[0]) if len(indices) > 0: features[i, indices] = 1 return features X_bow_train_small = bag_of_words(X_proc_train_small, dictionary) X_bow_test_small = bag_of_words(X_proc_test_small, dictionary) X_bow_train_small.shape # + colab={} colab_type="code" id="vhr-EyPyfFuy" from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import MultinomialNB from scipy.sparse import csr_matrix from sklearn.metrics import roc_auc_score def fit_linear_model(X, y): model = LogisticRegression() model.fit(X, y) return model def evaluate_model_auc(model, X, y): y_pred = model.predict_proba(X)[:,1] metric = roc_auc_score(y, y_pred) print('AUC: ' + str(metric)) # + colab={"base_uri": "https://localhost:8080/", "height": 125} colab_type="code" id="GekNCx5ofFuz" outputId="5b218b73-c7fd-4628-f218-29d0d30686eb" def evaluate_models(X_train, y_train, X_test, y_test): linear_model = fit_linear_model(X_train, y_train) print('Linear model') evaluate_model_auc(linear_model, X_test, y_test) print('Comparing to constant prediction') auc_constant_prediction = roc_auc_score(y_test, np.ones(shape=(len(y_test), 1)) * 0.5) print('AUC: ' + str(auc_constant_prediction)) 
evaluate_models(X_bow_train_small, y_train_small, X_bow_test_small, y_test_small) # + colab={"base_uri": "https://localhost:8080/", "height": 125} colab_type="code" id="uFsAWNE9fFu2" outputId="7197acdf-71ac-4c81-b507-4f06cafdbea8" unigram_dictionary = Dictionary(occurence_lower_bound=0, max_dictionary_size=1000) unigram_dictionary.fit(X_proc_train_small) X_bow_train_small = bag_of_words(X_proc_train_small, unigram_dictionary) X_bow_test_small = bag_of_words(X_proc_test_small, unigram_dictionary) print(X_bow_train_small.shape) evaluate_models(X_bow_train_small, y_train_small, X_bow_test_small, y_test_small) # + [markdown] colab_type="text" id="yvjUACB_fFu6" # ### Looking at sequences of letters / words # # Let's look at an example: the texts 'The cat defeated the mouse' and 'Army of mice defeated the cat :('<br> # Simplifying, we have three tokens in each sentence: 'cat defeat mouse' and 'mouse defeat cat'.<br> # After applying BoW we get two equal vectors with opposite meanings: # # | cat | mouse | defeat | # |-----|-------|--------| # | 1 | 1 | 1 | # | 1 | 1 | 1 | # # How can we distinguish them? # Let's add sequences of words as single tokens into our dictionary: # # | cat | mouse | defeat | cat_defeat | mouse_defeat | defeat_cat | defeat_mouse | # |-----|-------|--------|------------|--------------|------------|--------------| # | 1 | 1 | 1 | 1 | 0 | 0 | 1 | # | 1 | 1 | 1 | 0 | 1 | 1 | 0 | # # An **n-gram** is a contiguous sequence of $n$ items from a given sample of text or speech (Wikipedia).<br> # In the example above we used word bigrams (2-grams). # # N-grams add more information about text structure into the vectors; moreover, some n-grams have no meaning when their parts are taken separately, for example, 'Mickey Mouse company'.
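# The bigram construction shown in the table above can be sketched with a small helper. This is a hypothetical illustration only; CatBoost builds n-grams internally via the dictionary's `gram_order` option.

```python
# Glue each pair of adjacent tokens with '_' to form word bigram tokens,
# mirroring the cat_defeat / defeat_mouse columns in the table above.
def word_ngrams(tokens, n=2):
    return ['_'.join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

a = word_ngrams(['cat', 'defeat', 'mouse'])
b = word_ngrams(['mouse', 'defeat', 'cat'])
print(a)  # ['cat_defeat', 'defeat_mouse']
print(b)  # ['mouse_defeat', 'defeat_cat']
```

# Unlike the unigram bag of words, the two sentences now produce different feature vectors.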
# + colab={"base_uri": "https://localhost:8080/", "height": 379} colab_type="code" id="WU6iWFPZClrf" outputId="b666b9a2-0782-472a-a729-0fa1b15bd9f2" dictionary = Dictionary(occurence_lower_bound=0, gram_order=2) dictionary.fit(text_small_lemmatized_nltk) dictionary.save('dictionary.tsv') # !cat dictionary.tsv # + colab={"base_uri": "https://localhost:8080/", "height": 125} colab_type="code" id="ypPTi_XXfFu7" outputId="59136696-c457-4f99-b884-cf1e2e68fb80" bigram_dictionary = Dictionary(occurence_lower_bound=0, max_dictionary_size=5000, gram_order=2) bigram_dictionary.fit(X_proc_train_small) X_bow_train_small = bag_of_words(X_proc_train_small, bigram_dictionary) X_bow_test_small = bag_of_words(X_proc_test_small, bigram_dictionary) print(X_bow_train_small.shape) evaluate_models(X_bow_train_small, y_train_small, X_bow_test_small, y_test_small) # + [markdown] colab_type="text" id="1uLlIfJHodEL" # ### Unigram + Bigram # + colab={"base_uri": "https://localhost:8080/", "height": 125} colab_type="code" id="XaRC74kNfFu8" outputId="f67a5ea4-0795-4b16-db80-2bff733109e9" X_bow_train_small = np.concatenate(( bag_of_words(X_proc_train_small, unigram_dictionary), bag_of_words(X_proc_train_small, bigram_dictionary) ), axis=1) X_bow_test_small = np.concatenate(( bag_of_words(X_proc_test_small, unigram_dictionary), bag_of_words(X_proc_test_small, bigram_dictionary) ), axis=1) print(X_bow_train_small.shape) evaluate_models(X_bow_train_small, y_train_small, X_bow_test_small, y_test_small) # + [markdown] colab_type="text" id="oFR_rMfH8NT_" # ## CatBoost Configuration # + [markdown] colab_type="text" id="8xoFAOiz8NT_" # Parameter names: # # 1. **Text Tokenization** - `tokenizers` # 2. **Dictionary Creation** - `dictionaries` # 3. 
**Feature Calculation** - `feature_calcers` # # \* More complex configuration with `text_processing` parameter # + [markdown] colab_type="text" id="Wntt3XrYgkhf" # ### `tokenizers` # # Tokenizers are used to preprocess Text type feature columns before creating the dictionary. # # [Documentation](https://catboost.ai/docs/references/tokenizer_options.html). # # ``` # tokenizers = [{ # 'tokenizerId': 'Space', # 'delimiter': ' ', # 'separator_type': 'ByDelimiter', # },{ # 'tokenizerId': 'Sense', # 'separator_type': 'BySense', # }] # ``` # + [markdown] colab_type="text" id="aKqHyav7fFu-" # ### `dictionaries` # # Dictionaries are used to preprocess Text type feature columns. # # [Documentation](https://catboost.ai/docs/references/dictionaries_options.html). # # ``` # dictionaries = [{ # 'dictionaryId': 'Unigram', # 'max_dictionary_size': '50000', # 'gram_count': '1', # },{ # 'dictionaryId': 'Bigram', # 'max_dictionary_size': '50000', # 'gram_count': '2', # },{ # 'dictionaryId': 'Trigram', # 'token_level_type': 'Letter', # 'max_dictionary_size': '50000', # 'gram_count': '3', # }] # ``` # + [markdown] colab_type="text" id="JT6I_LN98NUC" # ### `feature_calcers` # # Feature calcers are used to calculate new features based on preprocessed Text type feature columns. # # 1. **`BoW`**<br> # Bag of words: 0/1 features (whether or not the text sample contains token_id).<br> # Number of produced numeric features = dictionary size.<br> # Parameters: `top_tokens_count` - maximum number of tokens that will be used for vectorization in bag of words; the most frequent $n$ tokens are taken (**highly affects both CPU and GPU RAM usage**). # # 2. **`NaiveBayes`**<br> # NaiveBayes: [Multinomial naive bayes](https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Multinomial_naive_Bayes) model. As many new features as classes are added. This feature is calculated by analogy with counters in CatBoost by permutation ([estimation of CTRs](https://catboost.ai/docs/concepts/algorithm-main-stages_cat-to-numberic.html)).
In other words, a random permutation is made, and then we go through the dataset from top to bottom and, for each object, calculate the probability of its belonging to each class. # # 3. **`BM25`**<br> # [BM25](https://en.wikipedia.org/wiki/Okapi_BM25). As many new features as classes are added. The idea is the same as in Naive Bayes, but for each class we calculate not the conditional probability but a relevance score, which is similar to tf-idf with tokens in place of words and classes in place of documents (or rather, the concatenation of all texts of a class). Only the tf multiplier in BM25 is replaced with another multiplier, which gives an advantage to classes that contain rare tokens. # # ``` # feature_calcers = [ # 'BoW:top_tokens_count=1000', # 'NaiveBayes', # 'BM25', # ] # ``` # + [markdown] colab_type="text" id="02lH5f1PgpYM" # ### `text_processing` # # ``` # text_processing = { # "tokenizers" : [{ # "tokenizer_id" : "Space", # "separator_type" : "ByDelimiter", # "delimiter" : " " # }], # # "dictionaries" : [{ # "dictionary_id" : "BiGram", # "max_dictionary_size" : "50000", # "occurrence_lower_bound" : "3", # "gram_order" : "2" # }, { # "dictionary_id" : "Word", # "max_dictionary_size" : "50000", # "occurrence_lower_bound" : "3", # "gram_order" : "1" # }], # # "feature_processing" : { # "default" : [{ # "dictionaries_names" : ["BiGram", "Word"], # "feature_calcers" : ["BoW"], # "tokenizers_names" : ["Space"] # }, { # "dictionaries_names" : ["Word"], # "feature_calcers" : ["NaiveBayes"], # "tokenizers_names" : ["Space"] # }], # } # } # ``` # + [markdown] colab_type="text" id="xlo77dzufFvE" # ## Summary: Text features in CatBoost # # ### The algorithm: # 1. Input text is loaded as a usual column. ``text_column: [string]``. # 2. Each text sample is tokenized via splitting by space. ``tokenized_column: [[string]]``. # 3. Dictionary estimation. # 4. Each string in tokenized column is converted into token_id from dictionary. ``text: [[token_id]]``.
# 5. Depending on the parameters, CatBoost produces features based on the resulting text column: Bag of words, Multinomial naive Bayes, or BM25. # 6. Computed float features are passed into the usual CatBoost learning algorithm. # + [markdown] colab={} colab_type="code" id="_A87DhGF8SIa" # # Embeddings In CatBoost # - # ### Get Embeddings # + # from sentence_transformers import SentenceTransformer # big_model = SentenceTransformer('roberta-large-nli-stsb-mean-tokens') # X_embed_train = big_model.encode(X_train['review'].to_list()) # X_embed_test = big_model.encode(X_test['review'].to_list()) # !wget https://transfersh.com/HDHxy/embedded_train.npy -O embedded_train.npy X_embed_train = np.load('embedded_train.npy') # !wget https://transfersh.com/whOm3/embedded_test.npy -O embedded_test.npy X_embed_test = np.load('embedded_test.npy') # - # ### Experiments X_embed_first_train_small, y_first_train_small = X_embed_train[:5000], y_train[:5000] X_embed_second_train_small, y_second_train_small = X_embed_train[5000:10000], y_train[5000:10000] X_embed_test_small, y_test_small = X_embed_test[:5000], y_test[:5000] # #### Pure embeddings evaluate_models(X_embed_second_train_small, y_second_train_small, X_embed_test_small, y_test_small) # #### Linear Discriminant Analysis # + from sklearn.discriminant_analysis import LinearDiscriminantAnalysis lda = LinearDiscriminantAnalysis(solver='svd') lda.fit(X_embed_first_train_small, y_first_train_small) X_lda_train_small = lda.transform(X_embed_second_train_small) X_lda_test_small = lda.transform(X_embed_test_small) print(X_lda_train_small.shape) evaluate_models(X_lda_train_small, y_second_train_small, X_lda_test_small, y_test_small) # - # ### Embeddings in CatBoost # + import csv with open('train_embed_text.tsv', 'w') as f: writer = csv.writer(f, delimiter='\t', quotechar='"') for y, text, row in zip(y_train, X_preprocessed_train['review'].to_list(), X_embed_train): writer.writerow((str(y), text, ';'.join(map(str, row)))) with
open('test_embed_text.tsv', 'w') as f: writer = csv.writer(f, delimiter='\t', quotechar='"') for y, text, row in zip(y_test, X_preprocessed_test['review'].to_list(), X_embed_test): writer.writerow((str(y), text, ';'.join(map(str, row)))) with open('pool_text.cd', 'w') as f: f.write( '0\tLabel\n'\ '1\tText\n'\ '2\tNumVector' ) # - from catboost import Pool train_embed_pool = Pool('train_embed_text.tsv', column_description='pool_text.cd') test_embed_pool = Pool('test_embed_text.tsv', column_description='pool_text.cd') model_text_embeddings = fit_model(train_embed_pool, test_embed_pool) print_score_diff(model, model_text_embeddings) # # Thanks!
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/JSJeong-me/Machine_Learning/blob/main/ML/11_RF_profiling-class-A.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="M2FXUNW0oRtV" # #!pip install -U pandas-profiling # + id="l4K4GyEunqsF" from sklearn.ensemble import RandomForestClassifier # + id="5dLImKxXnqsJ" from sklearn.model_selection import train_test_split # + id="7lGIAbd4nqsJ" import pandas as pd import numpy as np import pandas_profiling # + id="wUQXB8ElnqsK" # %matplotlib inline # + id="4-gBbOSTnqsL" df = pd.read_csv("./credit_cards_dataset.csv") # + colab={"base_uri": "https://localhost:8080/"} id="z1TS_DhXR-1K" outputId="32ce93f8-940d-4231-c861-e46f480d42f7" df.columns # + id="yXoMXHubSFKN" df['PAY_AVR'] = df[['PAY_0','PAY_2', 'PAY_3', 'PAY_4', 'PAY_5', 'PAY_6']].mean(axis=1) # + id="vZwuGI5LSakI" df['BILL_AMT'] = df[['BILL_AMT1', 'BILL_AMT2','BILL_AMT3', 'BILL_AMT4', 'BILL_AMT5', 'BILL_AMT6']].mean(axis=1) # + id="tReiOzJ1St8Q" df['PAY_AMT'] = df[['PAY_AMT1', 'PAY_AMT2', 'PAY_AMT3', 'PAY_AMT4', 'PAY_AMT5', 'PAY_AMT6']].mean(axis=1) # + colab={"base_uri": "https://localhost:8080/"} id="sM5QECVkS5UA" outputId="42d4c8f3-3872-42c1-f988-16509b3f0a3e" df_featured = df.drop(['PAY_0','PAY_2', 'PAY_3', 'PAY_4', 'PAY_5', 'PAY_6', 'BILL_AMT1', 'BILL_AMT2', 'BILL_AMT3', 'BILL_AMT4', 'BILL_AMT5', 'BILL_AMT6', 'PAY_AMT1', 'PAY_AMT2', 'PAY_AMT3', 'PAY_AMT4', 'PAY_AMT5', 'PAY_AMT6'], 1) # + id="IW1NNpylTjII" columns = list(df_featured.columns) # + id="98A2ml2cTsET" columns = ['ID', 'LIMIT_BAL', 'SEX', 'EDUCATION', 'MARRIAGE', 'AGE', 'PAY_AVR', 'BILL_AMT', 'PAY_AMT', 'default.payment.next.month'] # + id="2WB9tGp5UHmA" 
df_featured =df_featured[columns] # + id="j22pah5LUSfG" df = df_featured # + id="wmGmI4tonqsM" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="d504524d-57d4-4229-bf2c-dc7ba5ffcb03" df.describe() # + id="9Ab6xjChnqsO" # df.profile_report() # + id="L1-PPu2knqsO" #print("Original shape of the data: "+ str(df.shape)) features_names = df.columns # + id="Q8dSAhjCnqsO" #df.describe() # + colab={"base_uri": "https://localhost:8080/"} id="CfxpSDc0nqsO" outputId="dcb5219d-ef48-4a67-d448-9a6b9f41242c" X = df.drop('default.payment.next.month', axis =1).values y = df['default.payment.next.month'].values print(X.shape) print(y.shape) # + [markdown] id="gKHWf0h1nqsP" # Split my data into training and testing # + id="oP6sqee4nqsQ" X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=42, shuffle=True) # + [markdown] id="iZXUXJ0EnqsQ" # Instantiate the random forest model with 200 trees # + id="I9J6JPoznqsQ" rf = RandomForestClassifier(n_estimators=200, criterion='entropy', max_features='log2', max_depth=15) # + colab={"base_uri": "https://localhost:8080/"} id="HrM3FIvLnqsR" outputId="12afdc5e-a026-4538-e450-30fb64844a15" rf.fit(X_train, y_train) # + id="8tZq3jOlnqsR" y_predict = rf.predict(X_test) # + [markdown] id="UGMj3rQbnqsR" # Check feature importance # # + colab={"base_uri": "https://localhost:8080/"} id="PKDJb_XEnqsR" outputId="8fa51df9-88f6-4394-a979-5e0babf0192a" sorted(zip(rf.feature_importances_, features_names), reverse=True) # + colab={"base_uri": "https://localhost:8080/", "height": 462} id="QwwgspM0nqsR" outputId="5577e6d3-f204-49e0-8d69-2daace6a8da3" ## plot the importances ## import matplotlib.pyplot as plt importances = rf.feature_importances_ indices = np.argsort(importances)[::-1] plt.figure(figsize=(12,6)) plt.title("Feature importances by DecisionTreeClassifier") plt.bar(range(len(indices)), importances[indices], color='lightblue', align="center") plt.step(range(len(indices)), 
np.cumsum(importances[indices]), where='mid', label='Cumulative') plt.xticks(range(len(indices)), features_names[indices], rotation='vertical',fontsize=14) plt.xlim([-1, len(indices)]) plt.show() # + [markdown] id="TTN5J36qnqsS" # # Making my predictions and seeing how well my model predicted by checking recall, precision, and F1 score, and making a confusion matrix. # # Recall - tells us overall how well our model predicted the positives: correctly predicted positives / (correctly predicted positives + positives that were actually positive but predicted wrong). # # formula = TP/(TP+FN) # # Precision - gives us a true measure of how well our model predicted: correctly predicted positives / (correctly predicted positives + samples the model predicted to be positive but that were false). # # formula = TP/(TP+FP) # # F1 score - gives us the harmonic mean of precision and recall, a summary of how well the model did with respect to both. # # + id="iXb_-ki9nqsS" from sklearn.metrics import precision_recall_fscore_support from sklearn.metrics import recall_score from sklearn.metrics import classification_report from sklearn.model_selection import GridSearchCV # + colab={"base_uri": "https://localhost:8080/"} id="aQdSKjUFnqsS" outputId="c0fd2be5-0a13-4230-c1e0-ee0601b62932" X_test.shape # + id="8QAxtfdknqsS" #Make my predictions y_prediction = rf.predict(X_test) # + id="B4k2IDRynqsS" y_probability = rf.predict_proba(X_test) # + colab={"base_uri": "https://localhost:8080/"} id="46nJvI_7nqsT" outputId="88375d3e-ca0e-46bd-9532-e3ed8807c506" y_probability.shape # + id="2qC-BbD4rLbJ" # + colab={"base_uri": "https://localhost:8080/"} id="D99WK0WAnqsT" outputId="b445aa01-9231-484a-c3cb-4d9408105889" print("Recall score:"+ str(recall_score(y_test, y_prediction))) # + colab={"base_uri": "https://localhost:8080/"} id="pxejJ6q7nqsT" outputId="6903455f-4930-4af2-d52c-93dde66ead34" y_prediction.reshape(-1,1) # + colab={"base_uri": "https://localhost:8080/"} id="VL5N5MPHnqsT"
outputId="5983263a-332c-4a60-ded5-70109e4291ca" # This shows the overall accuracy of how well it will predict default or non_default # The scores corresponding to every class will tell you the accuracy of the classifier # in classifying the data points in that particular class compared to all other classes. # The support is the number of samples of the true response that lie in that class. print(classification_report(y_test, y_prediction, target_names=["non_default", "default"])) # + id="537EH3GhnqsU" # Creating a confusion matrix gives us the ratio of non-default compared # to default. # + id="Q2-VR8DXnqsU" from sklearn.metrics import confusion_matrix import itertools import matplotlib import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # + id="aIlADg9wnqsU" def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") # + colab={"base_uri": "https://localhost:8080/", "height": 378} id="XP9RmawjnqsU" outputId="1d407136-f49b-40c9-a3f2-7c98bfd86bf3" cnf_matrix = confusion_matrix(y_test, y_prediction) plt.figure() plot_confusion_matrix(cnf_matrix, classes=['Non_Default','Default'], normalize=False, title='Non Normalized confusion matrix')
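# The recall and precision formulas above can be checked by hand from confusion-matrix counts. The counts below are made up for illustration; they are not this model's actual results.

```python
# Hypothetical confusion-matrix counts for the positive ("default") class.
tp, fp, fn = 80, 20, 40

recall = tp / (tp + fn)      # TP / (TP + FN)
precision = tp / (tp + fp)   # TP / (TP + FP)
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(round(recall, 3), round(precision, 3), round(f1, 3))  # 0.667 0.8 0.727
```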
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### https://rsandstroem.github.io/sparkkmeans.html # ### https://datascience-enthusiast.com/Python/ml_RDD_DataFrames.html # ### https://github.com/jadianes/spark-py-notebooks # ### https://github.com/jlyang1990/Spark_Python_Do_Big_Data_Analytics from pyspark import SparkContext from pyspark.sql import SQLContext from pyspark.sql.types import * sc = SparkContext(appName="UseParquet") sqlContext = SQLContext(sc) # Read in the Parquet file created above. # Parquet files are self-describing so the schema is preserved. # The result of loading a parquet file is also a DataFrame. taxiparquet = sqlContext.read.parquet('./yellow_tripdata_2016-06-parquet') def prettySummary(df): """ Neat summary statistics of a Spark dataframe Args: pyspark.sql.dataframe.DataFrame (df): input dataframe Returns: pandas.core.frame.DataFrame: a pandas dataframe with the summary statistics of df """ import pandas as pd temp = df.describe().toPandas() temp.iloc[1:3,1:] = temp.iloc[1:3,1:].apply(pd.to_numeric, errors='coerce') pd.options.display.float_format = '{:,.2f}'.format return temp prettySummary(taxiparquet) taxi_select = taxiparquet.select(['passenger_count','trip_distance','total_amount','tip_amount','payment_type']) prettySummary(taxi_select) from __future__ import print_function from pyspark import SparkContext from pyspark.ml.regression import LinearRegression from pyspark.ml.feature import VectorAssembler features = ['passenger_count','trip_distance','total_amount','tip_amount'] assembler = VectorAssembler( inputCols=features, outputCol='features') assembled_taxi = assembler.transform(taxi_select) assembled_taxi.show(5) taxi_select.head(5) print(taxi_select) print(assembled_taxi) train, test = assembled_taxi.randomSplit([0.6, 0.4], seed=0) 
#assembled_taxi.sample(False,0.4, seed=0) lr = LinearRegression(maxIter=10).setLabelCol("payment_type").setFeaturesCol("features") model = lr.fit(train) testing_summary = model.evaluate(test) testing_summary.rootMeanSquaredError testing_summary.predictions.select('passenger_count','trip_distance','total_amount','tip_amount','payment_type','prediction').show(10) # + from __future__ import print_function import numpy as np import pandas as pd import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from sklearn.datasets.samples_generator import make_blobs from pyspark import SparkContext from pyspark.ml.clustering import KMeans, KMeansModel from pyspark.ml.feature import VectorAssembler from pyspark.sql import SQLContext from pyspark.mllib.linalg import Vectors from pyspark.sql.types import Row # - from pyspark.sql.functions import monotonically_increasing_id taxi_select = taxi_select.withColumn("id", monotonically_increasing_id()) #move id first (left) taxi_select = taxi_select.select(['id','passenger_count','trip_distance','total_amount','tip_amount','payment_type']) features = ['passenger_count','trip_distance','total_amount','tip_amount'] assembler = VectorAssembler( inputCols=features, outputCol='features') assembled_taxid = assembler.transform(taxi_select).select('id', 'features') assembled_taxid.show(5) cost = np.zeros(12) sample = assembled_taxid.sample(False,0.1, seed=0) for k in range(5,12): kmeans = KMeans().setK(k).setSeed(1).setFeaturesCol("features") model = kmeans.fit(sample) cost[k] = model.computeCost(sample) # requires Spark 2.0 or later fig, ax = plt.subplots(1,1, figsize =(8,6)) ax.plot(range(5,12),cost[5:12]) ax.set_xlabel('k') ax.set_ylabel('cost') # + k = 10 sample = assembled_taxid.sample(False,0.1, seed=0) kmeans = KMeans().setK(k).setSeed(1).setFeaturesCol("features") model = kmeans.fit(sample) centers = model.clusterCenters() print("Cluster Centers: ") for center in centers: print(center) # - transformed = 
model.transform(sample).drop("features").select('id', 'prediction') rows = transformed.collect() print(rows[:3]) # taxi_pred = sqlContext.createDataFrame(rows) taxi_pred.show() taxi_pred = taxi_pred.join(taxi_select, 'id') taxi_pred.show() taxi_pred = taxi_pred.toPandas().set_index('id') taxi_pred.head()
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Creating Data Structures # ## Initial Setup import pandas as pd # ## Series # # ### Creating a Series with a custom index from a list # + data = [1, 2, 3, 4, 5] index = ['Linha' + str(s) for s in range(len(data))] s = pd.Series(data = data, index = index) s # - # ### Operations with Series # Adds 2 to each entry of the Series s s2 = s + 2 s2 # Adds two Series s3 = s + s2 s3 # Multiplies two Series s4 = s2 * s3 s4 # ## DataFrame data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] index = ['Linha' + str(i) for i in range(len(data))] df1 = pd.DataFrame(data = data, index = index) df1 columns = ['Coluna' + str(i) for i in range(df1.shape[1])] df1.columns = columns df1 # ### Creating from a dictionary # Converting the DataFrame to a dictionary data = df1.to_dict() # Loading the dictionary into another DataFrame df2 = pd.DataFrame(data) df2 # ### Creating from a list of tuples data = [(1, 2, 3), (4, 5, 6), (7, 8, 9)] df3 = pd.DataFrame(data = data, index = index, columns = columns) df3 # ### Operations in assignments df1[df1 > 0] = 'A' df2[df2 > 0] = 'B' df3[df3 > 0] = 'C' # ### Operations between DataFrames # #### Concatenation df4 = pd.concat([df1, df2, df3]) df4 df5 = pd.concat([df1, df2, df3], axis = 1) df5 # ## Exercises df1 = pd.DataFrame({'A': {'X': 1}, 'B': {'X': 2}}) df2 = pd.DataFrame({'C': {'X': 3}, 'D': {'X': 4}}) pd.concat([df1, df2]) dados = [('A', 'B'), ('C', 'D')] df = pd.DataFrame(dados, columns = ['L1', 'L2'], index = ['C1', 'C2']) df dados = [[1, 2, 3], [4, 5, 6]] index = 'X,Y'.split(',') columns = list('CBA')[::-1] df = pd.DataFrame(dados, index, columns) df dados = {'A': {'X': 1, 'Y': 3}, 'B': {'X': 2, 'Y': 4}} df = pd.DataFrame(dados) df
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tempfile, shutil, os if os.environ.get('GOOGLE_APPLICATION_CREDENTIALS', None): # !gcloud auth activate-service-account --key-file $GOOGLE_APPLICATION_CREDENTIALS # + snapshot_path = os.environ.get('PROMETHEUS_SNAPSHOT_PATH', '/data/snapshots') bucket_name = os.environ.get('GCS_BUCKET', None) prom_host = os.environ.get('PROMETHEUS_HOST', 'localhost') prom_port = int(os.environ.get('PROMETHEUS_PORT', 9090)) slack_url = os.environ.get('SLACK_API_URL', None) prom_namespace = os.environ.get('PROMETHEUS_NAMESPACE', 'default') prom_container = os.environ.get('PROMETHEUS_CONTAINER', 'prometheus-server') if bucket_name is None: raise Exception("GCS_BUCKET must be specified!") if slack_url is None: raise Exception("SLACK_API_URL must be specified!") # - import http.client, datetime, json, requests def create_snapshot(https, host, port): conn = None try: if https: conn = http.client.HTTPSConnection(host, port=port) else: conn = http.client.HTTPConnection(host, port=port) conn.request("POST", "/api/v1/admin/tsdb/snapshot") response = conn.getresponse() except Exception as e: print(e) return (False, str(e)) if response.status != 200: msg = f"Request failed on snapshot creation! Status code: {response.status} ({response.reason}) {response.read()}" print(msg) return (False, msg) result = json.load(response) if conn is not None: conn.close() return (True, result['data']['name']) # + (success, snapshot_name) = create_snapshot(False, prom_host, prom_port) if success: tempdir = tempfile.mkdtemp() try: _, prom_pod = !kubectl get pod -l app=prometheus,component=server -o custom-columns=:metadata.name if snapshot_path == '' or snapshot_name == '': raise Exception(f'snapshot_path and snapshot_name cannot be empty! 
(snapshot_name: {snapshot_name}, snapshot_path: {snapshot_path})') # !kubectl cp -n {prom_namespace} -c {prom_container} {prom_pod}:{snapshot_path}/{snapshot_name} {tempdir}/{snapshot_name} # !gsutil -m cp -r {tempdir}/{snapshot_name} gs://{bucket_name}/{snapshot_name} finally: if snapshot_path != '' and snapshot_name != '': # !kubectl exec -n {prom_namespace} -c {prom_container} {prom_pod} -- rm -rf {snapshot_path}/{snapshot_name} shutil.rmtree(tempdir) else: now = datetime.datetime.now() message = str(now) + "\n" + snapshot_name data = {'text': message} requests.post(slack_url, json=data)
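`create_snapshot` above pulls the snapshot name out of the admin API's JSON body; that parsing step can be exercised on its own, without a live Prometheus (the sample payload below is illustrative, shaped like the documented snapshot response):

```python
import json

def parse_snapshot_response(body: str):
    """Return (True, snapshot_name) for a well-formed snapshot response,
    or (False, error_message) if the payload is not in the expected shape."""
    try:
        payload = json.loads(body)
        return True, payload['data']['name']
    except (json.JSONDecodeError, KeyError, TypeError) as e:
        return False, str(e)

ok, name = parse_snapshot_response(
    '{"status":"success","data":{"name":"20210501T000000Z-0000000000000000"}}')
```

Splitting the parsing out of the connection handling also makes the failure path (malformed or truncated body) testable, which the inline `json.load(response)` call is not.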
save_prometheus_snapshot.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # What the data looks like # # for each episode there is a json-file with the following structure # # {"results":[{"alternatives":[{"transcript":"Hi, this is transcript of first segment","confidence":0.91,"words":[{ # "startTime":"0s", # "endTime":"0.300s", # "word":"Hi," # }, # {"startTime":"0.500s", # "endTime":"0.900s", # "word":"this"}]}]}]} # # # # # What I want the data to look like (kind of) # # for each show there should be one dict containing the show id, show name and all episodes of the show split into segments # # [{'show-id':'123','show-name':'alex & sigge','episodes':[{'name':'first episode','date':'2020-01-01','segmentTranscripts':[{'segNo':0,'startTime':'00:00s','endTime':'00:20s','transcript':'......',},{},]}]},{},] # # - show name would have to be looked up in the metadata.tsv # + tags=[] import json #for each episode extract {word:'hello',timestamp:'00.00s'} def extract_segments(path): with open(path, "r") as read_file: episode = json.load(read_file) segments=[] #had to do "manual" iteration due to irregularities in data iter=0 for segment in episode["results"]: seg_result={} #make sure there is only one dict in this list (should be true according to dataset description) assert len(segment["alternatives"])==1 segment_dict=segment["alternatives"][0] #sometimes "alternatives" dict is empty... 
if "words" in segment_dict and "transcript" in segment_dict: #add segment number seg_result["segNum"]=iter #add timestamp of the first word in this segment seg_result["startTime"]=segment_dict["words"][0]["startTime"] #add timestamp of the last word in this segment seg_result["endTime"]=segment_dict["words"][-1]["endTime"] #add transcript of this segment seg_result["transcript"]=segment_dict["transcript"] segments.append(seg_result) iter+=1 return segments # - extract_segments('/Users/Simpan/spotify/spotify-podcasts-2020/podcasts-transcripts/7/0/show_70AtBgIej68YuFXDt6l0aB/3GsS8PAYL6u3ClsZFnW3RZ.json') # + tags=[] # !pwd # + tags=[] import json from training_data_collection import collect_training_episodes #TODO: formalize data organization, no hard codes input_file = '../data/podcasts_2020_train.1-8.qrels.txt' root_dir = '../data/podcasts-no-audio-13GB' training_episodes = collect_training_episodes(root_dir, input_file) training_segments = {} for episode in training_episodes: episode_id=episode.split('/')[-1].split('.json')[0] training_segments[episode_id]=extract_segments(episode) with open('../data/training_sub.json', 'w') as fout: json.dump(training_segments, fout) # + tags=[] import json path='../data/training_sub.json' with open(path,'r') as f: data=json.load(f) # + #fetch queries from xml and put into list #def fetch_queries(path) import xml.etree.ElementTree as ET tree = ET.parse('../../podcasts_2020_topics_train.xml') root = tree.getroot() topics=[{'topic-no':element[0].text,'query':element[2].text,'description':element[3].text} for element in root] # + tags=[] #fetch topic number, correct jump-in point for training set with open('../../podcasts_2020_train.1-8.qrels.txt','r') as f: contents=f.readlines() targets=[{'episode':line.split()[2].split('_')[0].split(':')[2],'topic-no':line[0],'target-time':line.split()[2].split('_')[1]} for line in contents] len(targets) # + tags=[] #TODO #match episodes, target times and queries #rewrite semi pseudo-code from 
sentence_transformers import SentenceTransformer, util import numpy as np embedder = SentenceTransformer('bert-base-nli-mean-tokens') #semi pseudo-code n_correct=0 for episode in data: episode_id= target_time= seg_data=extract_segments(path) segments=[item["transcript"] for item in seg_data] timespans=[(item["startTime"],item["endTime"]) for item in seg_data] query=topics[episode['episode-no']]['query'] episode_embeddings=embedder.encode(segments, convert_to_tensor=True) query_embedding=embedder.encode(query, convert_to_tensor=True) cos_scores = util.pytorch_cos_sim(query_embedding, episode_embeddings)[0] #if running on gpu #cos_scores = cos_scores.cpu() top_k=5 top_results = np.argpartition(-cos_scores, range(top_k))[0:top_k] idx_closest=top_results[0] pred_timespan=timespans[idx_closest] if pred_timespan[0]<=target_time<=pred_timespan[1]: n_correct+=1 accuracy=n_correct/len(data) # - segments_l # + tags=[] #dont run import os base_dir="/Users/Simpan/spotify/spotify-podcasts-2020/podcasts-transcripts" data=[] for sub_dir in os.listdir(base_dir): for sub_sub_dir in os.listdir(base_dir+'/'+sub_dir): for show_dir in os.listdir(base_dir+'/'+sub_dir+'/'+sub_sub_dir): show_dict={} show_dict["showId"]=show_dir show_dict["episodes"]=[] for i,episode in enumerate(os.listdir(base_dir+'/'+sub_dir+'/'+sub_sub_dir+'/'+show_dir)): episode_dict={} #remove .json ending episode_dict["episodeId"]=episode[:-5] episode_dict["episodeNo"]=i episode_path=base_dir+'/'+sub_dir+'/'+sub_sub_dir+'/'+show_dir+'/'+episode episode_dict["episodeSegments"]=extract_segments(episode_path) show_dict["episodes"].append(episode_dict) data.append(show_dict) with open("test.json", "w+") as write_file: json.dump(data, write_file)
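The segment-flattening walk in `extract_segments` can be checked without touching the 13 GB dataset by feeding the same shape of dict directly (a variant that takes an already-loaded dict; the sample episode below is invented, following the JSON structure described at the top of the notebook):

```python
def extract_segments_from_dict(episode):
    """Flatten one episode dict into a list of segment records."""
    segments = []
    for i, segment in enumerate(episode["results"]):
        alternatives = segment["alternatives"]
        if not alternatives:
            continue
        seg = alternatives[0]
        # skip irregular entries where the alternatives dict is empty
        if "words" in seg and "transcript" in seg:
            segments.append({
                "segNum": i,
                "startTime": seg["words"][0]["startTime"],
                "endTime": seg["words"][-1]["endTime"],
                "transcript": seg["transcript"],
            })
    return segments

episode = {"results": [
    {"alternatives": [{"transcript": "Hi, this is a segment",
                       "words": [{"startTime": "0s", "endTime": "0.300s", "word": "Hi,"},
                                 {"startTime": "0.500s", "endTime": "0.900s", "word": "segment"}]}]},
    {"alternatives": [{}]},   # irregular entry with no words/transcript
]}
segs = extract_segments_from_dict(episode)
```

The second (empty) alternative is silently skipped, which mirrors the `if "words" in ... and "transcript" in ...` guard in the notebook.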
src/data_extraction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # File-format statistics in the census of Diários Oficiais # # The [Censo](https://censo.ok.org.br/) site now offers a feature for downloading the mapping data. # From these data we can analyze the file formats being used by the official gazettes and identify opportunities for greater adherence to open-data principles. # # To reproduce this notebook: # 1. Go to the [census progress page](https://censo.ok.org.br/andamento/#view) and download the data # 2. Put the file in the `notebooks/` folder # import libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt # load the file df = pd.read_csv('base_mapeamento.csv', sep=',') pd.options.display.max_colwidth = 50 df.head() # ## Preliminary analysis: which file formats are most used in municipal gazettes? df["tipo_arquivo"].describe() df["tipo_arquivo"].value_counts() df[df["tipo_arquivo"]=="None"] df.shape # + # Drop from the dataframe the municipalities with no format information df = df[df['tipo_arquivo'] != "None"] # Check that the two records without a format indication were removed df.shape # - # Municipalities that publish the official gazette in "Outro formato" (other format) pd.options.display.max_colwidth = 1000 df[df['tipo_arquivo'] == 'Outro formato'][['municipio','observacoes']] # ### Preliminary analysis: results # - The dataframe holds records for 326 municipalities with more than 100,000 inhabitants, which use 4 main file formats for their official gazettes: searchable PDF ("PDF texto"), HTML, scanned PDF ("PDF imagem") and DOCX. 
# - Besides these, there are 2 records with no identified format ("None") (the municipalities of Rio Grande - RS and Viamão - RS) and 5 under the category "Outro formato" (Lagarto - SE and Poá - SP, which publish their gazettes in image formats; São Pedro da Aldeia - RJ, with no further information about the format; and Tatuí - SP and Tubarão - SC, which publish their gazettes in several formats, depending on the access point). # - The most used format is searchable PDF ("PDF texto"), followed by HTML. This is a positive finding, since these are open formats, more in line with open-data principles than scanned PDF (which makes locating and extracting data harder) and DOCX (a proprietary Microsoft format). # ## National analysis: what share of municipalities and of the population uses each format? # + # Share of municipalities using each format s_m = df.groupby(["tipo_arquivo"])["municipio"].nunique() num_municipios = df['municipio'].nunique() s_m_perc = s_m/num_municipios # Share of the population with access to each format s_p = df.groupby(["tipo_arquivo"])["populacao_2020"].sum() pop_total = df['populacao_2020'].sum() s_p_perc = s_p/pop_total # Shares of municipalities and of the population for each format s_m_perc = s_m_perc.reset_index() s_p_perc = s_p_perc.reset_index() df_m_p_perc = s_m_perc.merge(s_p_perc, on="tipo_arquivo") df_m_p_perc.sort_values(by='municipio', ascending=False).style.format({ 'municipio': '{:.2%}'.format, 'populacao_2020': '{:.2%}'.format, }) # - # Plot of the number of municipalities using each format s_m.sort_values().plot.barh() # Plot of the share of municipalities and of the population for each format df_m_p_perc.sort_values(by='municipio', ascending=False).plot.bar(x="tipo_arquivo") plt.legend(bbox_to_anchor=(1, 1.02)) plt.show() # ### National analysis: results # - Of the municipalities with more than 100 thousand inhabitants (excluding the 2 removed), almost 70% publish their gazettes in the searchable "PDF texto" format. 
These municipalities serve almost 80% of the population in this cut, which indicates that the most populous municipalities favor this format. # - The pattern reverses for the other formats (their share is larger in number of municipalities than in population served). This indicates that the other formats are mostly used by municipalities whose population is below the national average. # # ** Note: to be emphatic, all percentages mentioned in the analysis refer to the municipalities with more than 100 thousand inhabitants that have a confirmed publication file format (2 municipalities with no format information were excluded from the analysis). # ## Regional analysis: which regions use the best and the worst formats? # Number of municipalities using each format, by region df_r = df.groupby(["regiao","tipo_arquivo"])["municipio"].nunique() df_r = df_r.reset_index().pivot(index='regiao', columns='tipo_arquivo', values='municipio') df_r = df_r.fillna(0).astype(int) df_r.sort_values(by='PDF texto', ascending=False) # Share of municipalities using each format, by region df_r_perc = df_r.apply(lambda x: x/x.sum(), axis=1) df_r_perc.sort_values(by=['PDF texto'], ascending=False).style.format({ 'DOCX': '{:.2%}'.format, 'HTML': '{:.2%}'.format, 'Outro formato': '{:.2%}'.format, 'PDF imagem': '{:.2%}'.format, 'PDF texto': '{:.2%}'.format, }) # Plot of the share of municipalities using each format, by region ax = df_r_perc.sort_values(by=['PDF texto']).plot.barh(stacked=True) plt.legend(bbox_to_anchor=(1, 1.02)) ax.set_ylabel("Região") plt.show() # + # Population with access to each format, by region df_r_p = df.groupby(["regiao","tipo_arquivo"])["populacao_2020"].sum() df_r_p = df_r_p.reset_index().pivot(index='regiao', columns='tipo_arquivo', values='populacao_2020') df_r_p = df_r_p.fillna(0).astype(int) # Share of the population with access to each format, by region df_r_p_perc = df_r_p.apply(lambda x: x/x.sum(), axis=1) df_r_p_perc.sort_values(by=['PDF texto'], 
ascending=False).style.format({ 'DOCX': '{:.2%}'.format, 'HTML': '{:.2%}'.format, 'Outro formato': '{:.2%}'.format, 'PDF imagem': '{:.2%}'.format, 'PDF texto': '{:.2%}'.format, }) # - # Plot of the share of the population with access to each format, by region ax = df_r_p_perc.sort_values(by=['PDF texto']).plot.barh(stacked=True) plt.legend(bbox_to_anchor=(1, 1.02)) ax.set_ylabel("Região") plt.show() # ### Regional analysis: results # - The Southeast region offers the most gazettes in the searchable "PDF texto" format under every criterion measured: total number of municipalities, share of municipalities, and share of the population served. # - Given the drawbacks of the scanned "PDF imagem" format (information is hard to locate and extract), the negative highlight goes to the Northeast region, which has the largest share of municipalities and of the population served in that format. # # # ** Note: to be emphatic, all percentages mentioned in the analysis refer to the municipalities with more than 100 thousand inhabitants that have a confirmed publication file format (2 municipalities with no format information were excluded from the analysis). # ## Analysis by state (UF): which federative units use the best and the worst formats? 
# + # Number of municipalities using each format, by UF df_uf = df.groupby(["UF", "tipo_arquivo"])["municipio"].nunique() df_uf = df_uf.reset_index().pivot(index='UF', columns='tipo_arquivo', values='municipio') df_uf = df_uf.fillna(0).astype(int) # Share of municipalities using each format, by UF df_uf_perc = df_uf.apply(lambda x: x/x.sum(), axis=1) df_uf_perc.sort_values(by=['PDF texto'], ascending=False).style.format({ 'DOCX': '{:.2%}'.format, 'HTML': '{:.2%}'.format, 'Outro formato': '{:.2%}'.format, 'PDF imagem': '{:.2%}'.format, 'PDF texto': '{:.2%}'.format, }) # - # Plot of the share of municipalities using each format, by UF df_uf_perc.sort_values(by=['PDF texto'], ascending=False).plot.bar(stacked=True, figsize=(10,5)) plt.legend(bbox_to_anchor=(1, 1.02)) plt.show() # Population with access to each format, by UF df_uf_p = df.groupby(["UF", "tipo_arquivo"])["populacao_2020"].sum() df_uf_p = df_uf_p.reset_index().pivot(index='UF', columns='tipo_arquivo', values='populacao_2020') df_uf_p = df_uf_p.fillna(0).astype(int) # Share of the population with access to each format, by UF df_uf_p_perc = df_uf_p.apply(lambda x: x/x.sum(), axis=1) df_uf_p_perc.sort_values(by=['PDF texto'], ascending=False).style.format({ 'DOCX': '{:.2%}'.format, 'HTML': '{:.2%}'.format, 'Outro formato': '{:.2%}'.format, 'PDF imagem': '{:.2%}'.format, 'PDF texto': '{:.2%}'.format, }) # Plot of the share of the population with access to each format, by UF df_uf_p_perc.sort_values(by=['PDF texto'], ascending=False).plot.bar(stacked=True, figsize=(10,5)) plt.legend(bbox_to_anchor=(1, 1.02)) plt.show() # ### Analysis by UF: results # - Here the positive highlight goes to the states that publish 100% of their gazettes in the searchable "PDF texto" format: Acre, Maranhão, Roraima, Rio Grande do Norte, Piauí, Paraíba, Alagoas and Tocantins; plus the Distrito Federal. # - At the opposite extreme stands the state of Amapá, which publishes 100% of the sampled gazettes in the scanned "PDF imagem" format. 
# # ** Note: to be emphatic, all percentages mentioned in the analysis refer to the municipalities with more than 100 thousand inhabitants that have a confirmed publication file format (2 municipalities with no format information were excluded from the analysis).
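The groupby-and-normalize step used throughout the notebook can be mirrored in plain Python, which makes the arithmetic explicit (the three rows below are invented toy values, not census data):

```python
from collections import defaultdict

rows = [  # (municipality, file format, population) -- toy values
    ("A", "PDF texto", 500_000),
    ("B", "PDF texto", 300_000),
    ("C", "HTML", 200_000),
]

# equivalent of df.groupby("tipo_arquivo")["populacao_2020"].sum()
pop_by_format = defaultdict(int)
for _, fmt, pop in rows:
    pop_by_format[fmt] += pop

# equivalent of dividing by the grand total to get shares
total = sum(pop for _, _, pop in rows)
share = {fmt: pop / total for fmt, pop in pop_by_format.items()}
```

Here "PDF texto" ends up with 800,000 of 1,000,000 people, i.e. a 0.8 share, exactly the kind of figure the `{:.2%}` formatting above renders as 80.00%.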
notebooks/2021-04-10-faustosateles-file-types.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] Collapsed="false" # This is a basic LogisticRegression model trained using the data from https://www.kaggle.com/eoveson/convai-datasets-baseline-models # # The baseline model in that kernel is tuned a little to get the data for this kernel. This kernel scored 0.044 on the LB # + Collapsed="false" _cell_guid="eb9acbb1-40db-4a60-9c00-7e1134408cb1" _uuid="7e97dad72af19207237cb816bc898ca5818f4389" # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here are several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.linear_model import LogisticRegression from sklearn.pipeline import Pipeline from scipy import sparse # set stopwords # from subprocess import check_output # print(check_output(["ls", "../input"]).decode("utf8")) # Any results you write to the current directory are saved as output. 
# + Collapsed="false" _cell_guid="bb967e03-d30b-46ec-b9d2-c0f5d4c0ee68" _uuid="97b399586c43626b73bc77b50e58b952d86ea8da" train = pd.read_csv('train.csv') test = pd.read_csv('test.csv') # + Collapsed="false" _cell_guid="1eebb207-607e-4985-908e-9848888808b1" _uuid="3e90295dde0dd25158ea9e3464165aa8ea62fd1c" feats_to_concat = ['comment_text'] # combining test and train alldata = pd.concat([train[feats_to_concat], test[feats_to_concat]], axis=0) alldata.comment_text.fillna('unknown', inplace=True) # + Collapsed="false" _cell_guid="88a8e609-b287-4a7e-b72d-5dcac6f4a55f" _uuid="741273ee4b5122a37d978708ba29e16879e5b33f" vect_words = TfidfVectorizer(max_features=50000, analyzer='word', ngram_range=(1, 1)) vect_chars = TfidfVectorizer(max_features=20000, analyzer='char', ngram_range=(1, 3)) # + Collapsed="false" _cell_guid="6db22032-8e99-4848-8978-be7c68a1e936" _uuid="cf10b99072cef22bf87ee92c9aa51f035a26e893" all_words = vect_words.fit_transform(alldata.comment_text) all_chars = vect_chars.fit_transform(alldata.comment_text) # + Collapsed="false" _cell_guid="8f42e0d7-5938-4bb0-beb7-7ddf9f85685d" _uuid="d074b6b6c5271f462c129c534980c5a0d287599f" train_new = train test_new = test # + Collapsed="false" _cell_guid="c068c9bb-bf28-4342-aa71-e575c6d93788" _uuid="09975f14757c51e19876dab638a39671dfd555e4" train_words = all_words[:len(train_new)] test_words = all_words[len(train_new):] train_chars = all_chars[:len(train_new)] test_chars = all_chars[len(train_new):] # + Collapsed="false" _cell_guid="5d55e152-e1cb-4cf0-aa41-e3eec5850b3a" _uuid="0338f2d0b8f09c751f97afebf1cf8e77d8a10fe3" # 'toxic_level' and 'attack' are features produced by the ConvAI baseline kernel linked above; alldata is assumed to already carry them as columns feats = ['toxic_level', 'attack'] # make sparse matrix with needed data for train and test train_feats = sparse.hstack([train_words, train_chars, alldata[feats][:len(train_new)]]) test_feats = sparse.hstack([test_words, test_chars, alldata[feats][len(train_new):]]) # + Collapsed="false" _cell_guid="350aad79-ee6f-44bc-9d85-4e9652956bd3" _uuid="da2082c68a367369fac28ddc09eec2e5b6c718bb" 
jupyter={"outputs_hidden": true} col = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate'] only_col = ['toxic'] preds = np.zeros((test_new.shape[0], len(col))) for i, j in enumerate(col): print('===Fit '+j) model = LogisticRegression(C=4.0, solver='sag') print('Fitting model') model.fit(train_feats, train_new[j]) print('Predicting on test') preds[:,i] = model.predict_proba(test_feats)[:,1] # + Collapsed="false" _cell_guid="9d84b909-d93b-4778-b432-701f65a73d3c" _uuid="3605ca797e6d5e4d05ac2c63d70766c23d2a8cf1" jupyter={"outputs_hidden": true} subm = pd.read_csv('../input/jigsaw-toxic-comment-classification-challenge/sample_submission.csv') submid = pd.DataFrame({'id': subm["id"]}) submission = pd.concat([submid, pd.DataFrame(preds, columns = col)], axis=1) submission.to_csv('feat_lr_2cols.csv', index=False) # + Collapsed="false" _cell_guid="6d350714-1262-4f91-af11-a7f95750ec84" _uuid="be385cfe2683246d05dc872d7b09cb4608b73337" jupyter={"outputs_hidden": true}
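The word and char TF-IDF features above are just term counts reweighted by inverse document frequency; a hand-rolled sketch of the smoothed idf weighting that scikit-learn's `TfidfVectorizer` applies by default (toy two-document corpus, invented here):

```python
import math

docs = ["you are great", "you are toxic"]  # toy corpus

def smooth_idf(term, docs):
    """idf(t) = ln((1 + n) / (1 + df(t))) + 1, the smoothed variant."""
    n = len(docs)
    df = sum(term in doc.split() for doc in docs)
    return math.log((1 + n) / (1 + df)) + 1.0

idf_you = smooth_idf("you", docs)      # appears in both docs -> weight 1.0
idf_toxic = smooth_idf("toxic", docs)  # appears in one doc -> higher weight
```

This is why rare, discriminative tokens dominate the 50,000-feature word matrix while ubiquitous words contribute little to the logistic regression.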
Machine Learning/Natural Language Processing/toxic comments - fat and easy.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Sanjay_ddmmyy # # 1. Imports: import pandas as pd import numpy as np import datetime import time import requests import re from bs4 import BeautifulSoup # # 2. Functions def greaterbuysell(buy,sell): symbol = "" res = 0.0 if buy>sell: symbol = "B" res = np.round(buy/sell, decimals = 2) else: symbol = "S" res = np.round(sell/buy, decimals = 2) return symbol + " " + str(res) def con(arg1): res = [] for i in arg1: if i=='-': res.append(float(0)) continue temp = i.split(sep = ",") temp2 = "" for j in temp: temp2+=j res.append(float(temp2)) return res def df_beau_soup(result,str1): resdf = pd.DataFrame(columns = ["SYMBOL","LAST PRICE","TOT BUY Q","TOT SELL Q","TIME CPD"]) Base_url=str1 page=requests.get(Base_url) soup=BeautifulSoup(page.content,'html.parser') #price=soup.find('div',attrs={'id':'responseDiv'}) price=soup.find(id='responseDiv') print(str(price.text)) str1 = price.text str1 = str1.replace("\r","") str1 = str1.replace("\n","") str1 = str1.replace('false','"false"') d = eval(str1) #note: eval on scraped text is unsafe; json.loads would be preferable e = d["data"][0] ind = range(len(e)) df_test = pd.DataFrame(e, index = ind) df = pd.DataFrame(df_test.loc[0]) df = df.T resdf["SYMBOL"] = df["symbol"] resdf["LAST PRICE"] = df["lastPrice"] resdf["TOT BUY Q"] = df["totalBuyQuantity"] resdf["TOT SELL Q"] = df["totalSellQuantity"] t = datetime.datetime.now() h = str(t.hour) m = str(t.minute) s = str(t.second) temp = h+" : "+m+" : "+s resdf["TIME CPD"] = temp result = result.append(resdf) return result def task(result,df,cnt,len_rows): df["SB"+str(cnt)] = list(result["GREATER QUANT (S/B)"]) li = [] for i in range(len_rows): if df.loc[i]["SB"+str(cnt-1)]!=df.loc[i]["SB"+str(cnt)]: tem = str(df.loc[i]["SYMBOL"]) li += [str(df.loc[i]["SYMBOL"])+"_"+str(result.loc[tem]["TIME CPD"])] else: li+=["-"] if 
"RES" in df.columns: del df["RES"] df["RES"] = li return df # # 3. Codes: # + companies = ["AXISBANK","M%26M","IBULHSGFIN","INDUSINDBK","INFRATEL","TATAMOTORS","MARUTI","HINDUNILVR","EICHERMOT","BAJAJ-AUTO"] str_url1 = "https://www.nseindia.com/live_market/dynaContent/live_watch/get_quote/GetQuote.jsp?symbol=" url_companies = [str_url1 + i for i in companies] url_companies # - result = pd.DataFrame() for i in url_companies: result = df_beau_soup(result,i) result = result.set_index("SYMBOL") res_buy = con(result["TOT BUY Q"]) res_sell = con(result["TOT SELL Q"]) bool_sellbuy = list(map(lambda x,y: bool(x>y), res_buy, res_sell)) new_col = [] for i in bool_sellbuy: if i==False: new_col.append("S") else: new_col.append("B") result["GREATER QUANT (S/B)"] = new_col if_buy_greater = list(map(lambda x,y: x/y, res_buy, res_sell)) if_sell_greater = list(map(lambda x,y: y/x, res_buy, res_sell)) resbuy = [] for i in if_buy_greater: if i>1: resbuy.append(np.round(i,decimals = 2)) else: resbuy.append(0.00) ressell = [] for i in if_sell_greater: if i>1: ressell.append(np.round(i,decimals = 2)) else: ressell.append(0.00) new_col = list(map(lambda x,y: x+y, resbuy, ressell)) result["Times Greater"] = new_col new_col = [] for i in range(10): temp = str(result["GREATER QUANT (S/B)"][i])+str(" ")+str(result["Times Greater"][i]) new_col.append(temp) result["Greater Quant"] = new_col result df_task = pd.DataFrame() df_task["SYMBOL"] = list(result.index) df_task["SB1"] = list(result["GREATER QUANT (S/B)"]) df_task len_rows = len(df_task.index) len_rows cnt = 2 result = pd.DataFrame() for i in url_companies: result = df_beau_soup(result,i) result = result.set_index("SYMBOL") res_buy = con(result["TOT BUY Q"]) res_sell = con(result["TOT SELL Q"]) bool_sellbuy = list(map(lambda x,y: bool(x>y), res_buy, res_sell)) new_col = [] for i in bool_sellbuy: if i==False: new_col.append("S") else: new_col.append("B") result["GREATER QUANT (S/B)"] = new_col 
if_buy_greater = list(map(lambda x,y: x/y, res_buy, res_sell)) if_sell_greater = list(map(lambda x,y: y/x, res_buy, res_sell)) resbuy = [] for i in if_buy_greater: if i>1: resbuy.append(np.round(i,decimals = 2)) else: resbuy.append(0.00) ressell = [] for i in if_sell_greater: if i>1: ressell.append(np.round(i,decimals = 2)) else: ressell.append(0.00) new_col = list(map(lambda x,y: x+y, resbuy, ressell)) result["Times Greater"] = new_col new_col = [] for i in range(10): temp = str(result["GREATER QUANT (S/B)"][i])+str(" ")+str(result["Times Greater"][i]) new_col.append(temp) result["Greater Quant"] = new_col list(result["GREATER QUANT (S/B)"]) df_task["SB2"] = list(result["GREATER QUANT (S/B)"]) df_task x = str(df_task.loc[0]["SYMBOL"]) x "_" str(result.loc[x]["TIME CPD"]) df_task df_task = task(result,df_task,2,10) df_task result = pd.DataFrame() for i in url_companies: result = df_beau_soup(result,i) result = result.set_index("SYMBOL") res_buy = con(result["TOT BUY Q"]) res_sell = con(result["TOT SELL Q"]) bool_sellbuy = list(map(lambda x,y: bool(x>y), res_buy, res_sell)) new_col = [] for i in bool_sellbuy: if i==False: new_col.append("S") else: new_col.append("B") result["GREATER QUANT (S/B)"] = new_col if_buy_greater = list(map(lambda x,y: x/y, res_buy, res_sell)) if_sell_greater = list(map(lambda x,y: y/x, res_buy, res_sell)) resbuy = [] for i in if_buy_greater: if i>1: resbuy.append(np.round(i,decimals = 2)) else: resbuy.append(0.00) ressell = [] for i in if_sell_greater: if i>1: ressell.append(np.round(i,decimals = 2)) else: ressell.append(0.00) new_col = list(map(lambda x,y: x+y, resbuy, ressell)) result["Times Greater"] = new_col new_col = [] for i in range(10): temp = str(result["GREATER QUANT (S/B)"][i])+str(" ")+str(result["Times Greater"][i]) new_col.append(temp) result["Greater Quant"] = new_col
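The quantity strings scraped above carry comma thousands separators, which `con` strips before converting to float; a more direct equivalent (the function name `parse_quantity` is mine, not from the notebook):

```python
def parse_quantity(value: str) -> float:
    """Convert NSE quantity strings such as '12,34,567' to float; '-' means 0."""
    if value == '-':
        return 0.0
    return float(value.replace(',', ''))

quantities = [parse_quantity(v) for v in ['1,234', '-', '56,789.5']]
```

`str.replace` removes every comma at once, so the inner split-and-rejoin loop of `con` is unnecessary, and the '-' sentinel for "no data" is handled up front.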
Data-Science-HYD-2k19/Projects/codes/PROJECT 2 ( mercury) ( Web-Scrapping)/TataSteel/send_mercury_project.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/ChelseyGuasis/CPEN-21A-CPE-1-1/blob/main/Final_Exam.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="apQNF8yed4RJ" # ###Final Exam # + [markdown] id="2O-23OPjeOt3" # Problem Set 1 # + colab={"base_uri": "https://localhost:8080/"} id="Ktfaklj6eRxX" outputId="6c841d0c-5d32-40ed-aa31-14039d2656b6" sum = 0 number = [-6,-4,-2,-1,0,1,2,3,4,5] for x in (number): sum = sum + x print(sum) # + [markdown] id="lyUuHqr4j6B-" # Problem Set 2 # + colab={"base_uri": "https://localhost:8080/"} id="cNFy8qAmzakT" outputId="6ed0e0ff-a080-4c54-be3e-428fbcfcb5f8" z = int(input("1st number: ")) while (z !=0): n = int(input("2nd number: ")) o = int(input("3rd number: ")) p = int(input("4th number: ")) q = int(input("5th number: ")) break x = q while (x !=0): x = z + q print("The sum of first and last number is",x) z -= 1 break # + [markdown] id="owcUB9PlnVI1" # Problem Set 3 # + colab={"base_uri": "https://localhost:8080/"} id="5mUNEctfnX5Q" outputId="d5ef47c5-2e43-47e8-fb51-ace07a247c13" grade = float(input("Enter your numerical grade: ")) if grade >=90: print("Character Grade: A") elif grade >=80: print("Character Grade: B") elif grade >=70: print("Character Grade: C") elif grade >=60: print("Character Grade: D") else: print("Character Grade: F")
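Problem Set 3's if/elif chain can be folded into a small cutoff table that is easier to extend (the function name `letter_grade` is mine, not from the exam):

```python
def letter_grade(score: float) -> str:
    """Map a numerical grade to a character grade using the same cutoffs."""
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return letter
    return "F"

grades = [letter_grade(s) for s in (95, 80, 69.5, 42)]
```

Because the cutoffs are checked from highest to lowest, each score takes the first band it reaches, exactly as the elif chain does.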
Final_Exam.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (Torch) # language: python # name: torch # --- import sys sys.path.insert(0, "../..") # + import numpy as np import pandas as pd import torch from sklearn.model_selection import train_test_split from logistic_regression import LogisticRegression from extrapolation import * from experiments import RestartingExperiment # - df = pd.read_csv("../Epicurious/epi_r.csv") df = df.dropna(axis="rows") X = df.drop(["title", "dessert"], axis="columns").values y = df["dessert"].values y = np.where(y > 0.5, 1, -1) np.random.seed(2020) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15) # + device = "cuda" X_train = torch.tensor(X_train, device=device) X_test = torch.tensor(X_test, device=device) y_train = torch.tensor(y_train, device=device) y_test = torch.tensor(y_test, device=device) # - model = LogisticRegression(X_train, y_train, 1e-2, device=device) model.run_steps(50500) len(model.log) preds = model.predict(X_test) torch.mean((preds == y_test).double()) experiment = RestartingExperiment(model, 5, device="cuda") n = 8000 experiment.run_method("RRE+QR", RRE, n, {"qr": True}) experiment.run_method("RNA", RNA, n, {"lambda_range": (1e-15, 1e-2), "linesearch": False, "norm": False}) experiment.run_method("RNA+norm", RNA, n, {"lambda_range": (1e-15, 1e-2), "linesearch": False}) experiment.run_method("RNA+ls", RNA, n, {"lambda_range": (1e-15, 1e-2), "linesearch": True}) experiment.plot_values(n=35000, figsize=(12, 10), ylim=(6000, 10000)) experiment.plot_values(n=200, figsize=(12, 10), ylim=(7000, None)) experiment.plot_log_diff(n=30000, figsize=(12, 10), ylim=(None, 4)) experiment.save("epicurous-restarts-k=5.p") experiment = RestartingExperiment(model, 10, device="cuda") n = 4000 experiment.run_method("RRE+QR", RRE, n, {"qr": True}) experiment.run_method("RNA", RNA, n, 
{"lambda_range": (1e-15, 1e-2), "linesearch": False, "norm": False}) experiment.run_method("RNA+norm", RNA, n, {"lambda_range": (1e-15, 1e-2), "linesearch": False}) experiment.run_method("RNA+ls", RNA, n, {"lambda_range": (1e-15, 1e-2), "linesearch": True}) experiment.plot_values(n=35000, figsize=(12, 10), ylim=(6000, 10000)) experiment.plot_values(n=200, figsize=(12, 10), ylim=(7000, None)) experiment.plot_log_diff(n=30000, figsize=(12, 10), ylim=(None, 4)) experiment.save("epicurous-restarts-k=10.p")
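RRE and RNA accelerate a convergent iterate sequence by extrapolating its limit from the last few terms; their scalar ancestor, Aitken's delta-squared process, shows the idea in three lines (this is only an illustration of the principle, not what the notebook's `extrapolation` module implements):

```python
def aitken(x0, x1, x2):
    """Aitken's delta-squared extrapolation of three successive iterates."""
    denom = (x2 - x1) - (x1 - x0)
    if denom == 0:
        return x2  # differences already stalled; nothing to accelerate
    return x2 - (x2 - x1) ** 2 / denom

# geometric approach to the fixed point 1: x_k = 1 - 0.5**k
seq = [1 - 0.5 ** k for k in range(3)]   # 0.0, 0.5, 0.75
accelerated = aitken(*seq)               # recovers the limit 1.0 exactly
```

For an exactly geometric error the extrapolant hits the limit from just three iterates, which is why restarting the base iteration from the extrapolated point, as `RestartingExperiment` does with vector-valued RRE/RNA, can pay off.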
notebooks/logistic regression/epicurious-restarts.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <center> # <img src="../../img/ods_stickers.jpg"> # ## Open Machine Learning Course # <center> Author: <NAME> (@tetelias) # # <center>mlxtend</center> # This library provides extension modules and helper tools for the Python libraries used for data analysis and machine learning. # # This material covers: # # * the installation process; # * feature selection methods; # * other methods available in this library; # ### <center>Installation</center> # Installs in the standard Python way: # # pip install mlxtend # # Or via Anaconda: # # conda install -c conda-forge mlxtend # ### <center>Feature selection by exhaustive search</center> # First we look at a wrapper that exhaustively evaluates all possible feature combinations. For this we use the ExhaustiveFeatureSelector algorithm, which takes the following parameters: # * A model; # * The minimum number of features (`min_features=`); # * The maximum number of features (`max_features=`); # * The scoring metric (`scoring=`); # * The cross-validation parameter; # As the model, the algorithm accepts any scikit-learn classification or regression implementation. # # The available metrics are {"accuracy", "f1", "precision", "recall", "roc_auc"} for classification and {'mean_absolute_error', 'mean_squared_error', 'median_absolute_error', 'r2'} for regression. # # Let us walk through a simple example based on the standard __Iris__ dataset. 
Свойства `best_idx_` и `best_score_` метода ExhaustiveFeatureSelector позволяют получить список индексов в списке признаков и результат метрики для лучшего набора: # + from sklearn.neighbors import KNeighborsClassifier from sklearn.datasets import load_iris from mlxtend.feature_selection import ExhaustiveFeatureSelector as efs iris = load_iris() X = iris.data y = iris.target knn = KNeighborsClassifier(n_neighbors=3) fs1 = efs(knn, min_features=1, max_features=4, scoring='accuracy', print_progress=True, cv=5) fs1 = fs1.fit(X, y) print('Best accuracy score: %.2f' % fs1.best_score_) print('Best subset:', fs1.best_idx_) # - # С помощью свойства **subsets_** мы можем увидеть подробности каждого шага: fs1.subsets_ # Данный метод может также применяться в процессе подбора параметров моделей с помощью GridSearchCV. Для этого необходимо применение метода make_pipeline. Для извлечения лучшего набора признаков нужно указать **refit=True** в GridSearchCV: # + from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.pipeline import make_pipeline from sklearn.linear_model import LogisticRegression from mlxtend.feature_selection import ExhaustiveFeatureSelector as efs iris = load_iris() X, y = iris.data, iris.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=13) logit = LogisticRegression(multi_class='multinomial', solver='lbfgs', random_state=123) fs1 = efs(estimator=logit, min_features=2, max_features=3, scoring='accuracy', print_progress=False, clone_estimator=False, cv=5, n_jobs=1) pipe = make_pipeline(fs1, logit) param_grid = {'exhaustivefeatureselector__estimator__C': [0.1, 1.0, 10.0]} gs = GridSearchCV(estimator=pipe, param_grid=param_grid, scoring='accuracy', n_jobs=1, cv=5, verbose=1, refit=True) # run gridearch gs = gs.fit(X_train, y_train) # - # Лучшие параметры GridSearchCV выводятся стандартным gs.best_params_ # А вот индексы 
лучших признаков отыскать не так просто: gs.best_estimator_.steps[0][1].best_idx_ # ### <center>Отбор признаков методом последовательного перебора</center> # Полный перебор прост в исполнении, но количество вариантов растет пропорционально 2 в степени количества признаков. Для того, чтобы избежать такого массового перебора есть группа методов последовательного отбора признаков. Метод Sequential Forward Selection начинает с 0 признаков и выбирает тот, что максимально увеличивает заданную пользователем метрику. Затем к отобранному добавленяетсяием еще один и т.д.. Метод Sequential Backward Selection наоборот начинает с полного набора и отбрасывает по одному признаки менее всего положительно влияющие на заданную метрику. Есть еще надстройки над этими методами: Sequential Forward loating Selection и Sequential Backward Floating Selection. Они по сравнению с базовыми делают проверку, не улучшат ли уже отброшенные до этого признаки показатель метрики, если их все-таки добавить на текущем этапе работы алгоритма. # Необходимые параметры: # * Модель; # * Количество признаков, которое мы хотим получить на выходе, задаваемое через k_features; # * Направление прохождения алгоритма: от нулевого(`forward=True`) или полного(`forward=False`) набора признаков; # * Пытаться ли вернуть ранее отброшенные признаки: да(`floating=True`) или нет(`floating=False`); # * Метрика оценки(`scoring=`); # * Параметр кросс-валидации. 
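The growth rates above can be made concrete with a quick count (a rough illustration only; the helper names `exhaustive_subsets` and `forward_steps` are ours, and the true number of model fits is further multiplied by the number of CV folds):

```python
from math import comb

def exhaustive_subsets(m, min_features, max_features):
    # number of feature subsets an exhaustive search must score
    return sum(comb(m, k) for k in range(min_features, max_features + 1))

def forward_steps(m, k_features):
    # forward selection scores m candidates, then m - 1, ..., down to m - k + 1
    return sum(m - i for i in range(k_features))

print(exhaustive_subsets(20, 1, 20))  # 2**20 - 1 = 1048575 subsets
print(forward_steps(20, 10))          # only 155 candidate evaluations
```

With 20 features, exhaustive search is already over a million subsets, while greedy forward selection of 10 features needs only 155 candidate evaluations, which is why the sequential methods scale so much better.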
# # In addition to these, we also specify the number of features we want in the output # + from sklearn.neighbors import KNeighborsClassifier from sklearn.datasets import load_iris from mlxtend.feature_selection import SequentialFeatureSelector as sfs iris = load_iris() X = iris.data y = iris.target knn = KNeighborsClassifier(n_neighbors=4) sfs1 = sfs(knn, k_features=3, forward=True, floating=False, verbose=2, scoring='accuracy', cv=0) sfs1 = sfs1.fit(X, y) # - # As before, the **subsets_** attribute lets us see the details of every step: sfs1.subsets_ # The indices of the best feature subset can be obtained with the **`k_feature_idx_`** attribute. sfs1.k_feature_idx_ # Just as with exhaustive search, this method can be used together with GridSearchCV. Do not forget to specify **refit=True** in GridSearchCV: # + from sklearn.neighbors import KNeighborsClassifier from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.pipeline import Pipeline from mlxtend.feature_selection import SequentialFeatureSelector as sfs import mlxtend iris = load_iris() X, y = iris.data, iris.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=13) knn = KNeighborsClassifier(n_neighbors=2) sfs1 = sfs(estimator=knn, k_features=3, forward=True, floating=False, scoring='accuracy', cv=5) pipe = Pipeline([('sfs', sfs1), ('knn', knn)]) param_grid = [ {'sfs__k_features': [1, 2, 3, 4], 'sfs__estimator__n_neighbors': [1, 2, 3, 4]} ] gs = GridSearchCV(estimator=pipe, param_grid=param_grid, scoring='accuracy', n_jobs=1, cv=5, verbose=1, refit=True) # run gridsearch gs = gs.fit(X_train, y_train) # - # And again, retrieving the best feature subset does not look elegant: gs.best_estimator_.steps[0][1].k_feature_idx_ # ### <center>Other interesting methods in the library</center> # The library also contains the only Python implementation of the Apriori algorithm 
for building association rules: # a very simple data exploration method, but one with exceptional interpretability. Besides [a short explanation by the author](http://rasbt.github.io/mlxtend/user_guide/frequent_patterns/apriori/), I also recommend reading [this article](http://pbpython.com/market-basket-analysis.html). # # # And finally: the library also has text-processing tools, including the ability to handle emoticons. You can try to build your own pattern with the __re__ library, or simply use # + from mlxtend.text import tokenizer_emoticons tokenizer_emoticons('</a>This :) is :( a test ;-)!') # - # Emoticons can be extracted not only on their own, but together with the text as well: # + from mlxtend.text import tokenizer_words_and_emoticons tokenizer_words_and_emoticons('</a>This :) is :( a test :-)!') # - # # References # * mlxtend [documentation](http://rasbt.github.io/mlxtend/) # * [GitHub repository](https://github.com/rasbt/mlxtend)
jupyter/tutorials/mlxtend_tetelias.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from sklearn import datasets wine_dataset = datasets.load_wine() X = wine_dataset.data y = wine_dataset.target # - import xgboost as xgb from sklearn.model_selection import StratifiedKFold # + from skopt import BayesSearchCV n_iterations = 50 # - estimator = xgb.XGBClassifier( n_jobs=-1, objective="multi:softmax", eval_metric="merror", verbosity=0, num_class=len(set(y)), ) search_space = { "learning_rate": (0.01, 1.0, "log-uniform"), "max_depth": (1, 50), "max_delta_step": (0, 10), "subsample": (0.01, 1.0, "uniform"), "colsample_bytree": (0.01, 1.0, "log-uniform"), "colsample_bylevel": (0.01, 1.0, "log-uniform"), "reg_lambda": (1e-9, 1000, "log-uniform"), "reg_alpha": (1e-9, 1.0, "log-uniform"), "gamma": (1e-9, 0.5, "log-uniform"), "min_child_weight": (0, 5), "n_estimators": (5, 5000), "scale_pos_weight": (1e-6, 500, "log-uniform"), } cv = StratifiedKFold(n_splits=3, shuffle=True) bayes_cv_tuner = BayesSearchCV( estimator=estimator, search_spaces=search_space, scoring="accuracy", cv=cv, n_jobs=-1, n_iter=n_iterations, verbose=0, refit=True, ) # + import pandas as pd import numpy as np def print_status(optimal_result): """Shows the best parameters found and accuracy attained of the search so far.""" models_tested = pd.DataFrame(bayes_cv_tuner.cv_results_) best_parameters_so_far = pd.Series(bayes_cv_tuner.best_params_) print( "Model #{}\nBest accuracy so far: {}\nBest parameters so far: {}\n".format( len(models_tested), np.round(bayes_cv_tuner.best_score_, 3), bayes_cv_tuner.best_params_, ) ) clf_type = bayes_cv_tuner.estimator.__class__.__name__ models_tested.to_csv(clf_type + "_cv_results_summary.csv") # - result = bayes_cv_tuner.fit(X, y, callback=print_status)
Chapter01/Hyperparameter Tuning with Scikit-Optimize/Hyperparameter Tuning with Scikit-Optimize.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf import tensorflow.keras as keras from tensorflow.keras import layers from tensorflow.keras.models import Sequential import matplotlib.pyplot as plt import numpy as np import os import PIL import pathlib print(tf.__version__) print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU'))) gpus = tf.config.experimental.list_physical_devices(device_type='GPU') tf.config.experimental.set_memory_growth(gpus[0], True) # 1. Load Data #Move Covid and Normal image folder to new train folder dataset_path = "COVID-19_Radiography_Dataset/train/" data_dir = pathlib.Path(dataset_path) batch_size = 32 img_height = 256 img_width = 256 train_ds = tf.keras.preprocessing.image_dataset_from_directory( data_dir, validation_split=0.2, subset="training", seed=123, image_size=(img_height, img_width), batch_size=batch_size) val_ds = tf.keras.preprocessing.image_dataset_from_directory( data_dir, validation_split=0.2, subset="validation", seed=123, image_size=(img_height, img_width), batch_size=batch_size) tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs") es_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3) # Check the class names assigned class_names = train_ds.class_names print(class_names) # + import matplotlib.pyplot as plt plt.figure(figsize=(10, 10)) for images, labels in train_ds.take(1): for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.imshow(images[i].numpy().astype("uint8")) plt.title(class_names[labels[i]]) plt.axis("off") # - # 2. 
Train Custom CNN Model # + AUTOTUNE = tf.data.AUTOTUNE train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE) val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE) # - # Data Augmentation data_augmentation_layer = keras.Sequential( [ layers.experimental.preprocessing.RandomFlip("horizontal"), layers.experimental.preprocessing.RandomRotation(0.1), ] ) plt.figure(figsize=(10, 10)) for images, _ in train_ds.take(1): for i in range(9): augmented_images = data_augmentation_layer(images) ax = plt.subplot(3, 3, i + 1) plt.imshow(augmented_images[0].numpy().astype("uint8")) plt.axis("off") normalization_layer = layers.experimental.preprocessing.Rescaling(1./255) # + num_classes = 2 model = Sequential([ layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)), layers.Conv2D(16, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(32, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(64, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Flatten(), layers.Dense(128, activation='relu'), layers.Dense(1,activation='sigmoid') ]) # - model.compile(optimizer='adam', loss=tf.keras.losses.binary_crossentropy, metrics=['accuracy']) model.summary() epochs=10 history = model.fit( train_ds, validation_data=val_ds, epochs=epochs ) model_2 = Sequential([ layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)), layers.Conv2D(64, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(32, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(16, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Flatten(), layers.Dense(128, activation='relu'), layers.Dense(1, activation='sigmoid') ]) model_2.compile(optimizer='adam', loss=tf.keras.losses.binary_crossentropy, metrics=['accuracy']) model_2.summary() history_2 = model_2.fit( train_ds, validation_data=val_ds, epochs=epochs ) 
model_3 = Sequential([ data_augmentation_layer, layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)), layers.Conv2D(16, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(32, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(64, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Flatten(), layers.Dense(128, activation='relu'), layers.Dense(1, activation='sigmoid') ]) model_3.compile(optimizer='adam', loss=tf.keras.losses.binary_crossentropy, metrics=['accuracy',keras.metrics.AUC(name='prc', curve='PR')]) history_3 = model_3.fit( train_ds, validation_data=val_ds, epochs=50, callbacks=[tb_callback,es_callback] ) # + img_path = "COVID-19_Radiography_Dataset/train/COVID/COVID-1002.png" img = keras.preprocessing.image.load_img( img_path, target_size=(img_height, img_width) ) img_array = keras.preprocessing.image.img_to_array(img) img_array = tf.expand_dims(img_array, 0) # Create a batch predictions = model_3.predict(img_array) score = predictions[0] print( "This image is %.2f percent Covid and %.2f percent Normal." 
% (100 * (1 - score), 100 * score) ) # - model_4 = Sequential([ data_augmentation_layer, layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)), layers.Conv2D(16, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(32, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(64, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(128, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(256, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(512, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Flatten(), layers.Dense(128, activation='relu'), layers.Dense(1, activation='sigmoid') ]) model_4.compile(optimizer='adam', loss=tf.keras.losses.binary_crossentropy, metrics=['accuracy',keras.metrics.AUC(name='prc', curve='PR')]) history_4 = model_4.fit( train_ds, validation_data=val_ds, epochs=50, callbacks=[tb_callback,es_callback] ) # 3. 
Transfer Learning IMG_SHAPE = (img_height, img_width) + (3,) base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') base_model.trainable = False base_model.summary() image_batch, label_batch = next(iter(train_ds)) feature_batch = base_model(image_batch) print(feature_batch.shape) global_average_layer = tf.keras.layers.GlobalAveragePooling2D() feature_batch_average = global_average_layer(feature_batch) print(feature_batch_average.shape) prediction_layer = tf.keras.layers.Dense(1) prediction_batch = prediction_layer(feature_batch_average) print(prediction_batch.shape) data_augmentation = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.RandomFlip('horizontal'), tf.keras.layers.experimental.preprocessing.RandomRotation(0.2), ]) preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input inputs = tf.keras.Input(shape=(256, 256, 3)) x = data_augmentation(inputs) x = preprocess_input(x) x = base_model(x, training=False) x = global_average_layer(x) x = tf.keras.layers.Dropout(0.2)(x) outputs = prediction_layer(x) model = tf.keras.Model(inputs, outputs) len(model.trainable_variables) base_learning_rate = 0.0001 model.compile(optimizer=tf.keras.optimizers.Adam(lr=base_learning_rate), loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy']) model.summary() # + initial_epochs = 10 loss0, accuracy0 = model.evaluate(val_ds) # - print("initial loss: {:.2f}".format(loss0)) print("initial accuracy: {:.2f}".format(accuracy0)) history = model.fit(train_ds, epochs=initial_epochs, validation_data=val_ds) base_model.trainable = True # + print("Number of layers in the base model: ", len(base_model.layers)) # Fine-tune from this layer onwards fine_tune_at = 100 # Freeze all the layers before the `fine_tune_at` layer for layer in base_model.layers[:fine_tune_at]: layer.trainable = False # - model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), optimizer = 
tf.keras.optimizers.RMSprop(lr=base_learning_rate/10), metrics=['accuracy']) model.summary() # + fine_tune_epochs = 10 total_epochs = initial_epochs + fine_tune_epochs history_fine = model.fit(train_ds, epochs=total_epochs, initial_epoch=history.epoch[-1], validation_data=val_ds) # + img_path = "COVID-19_Radiography_Dataset/train/COVID/COVID-103.png" img = keras.preprocessing.image.load_img( img_path, target_size=(img_height, img_width) ) img_array = keras.preprocessing.image.img_to_array(img) img_array = tf.expand_dims(img_array, 0) # Create a batch predictions = model.predict(img_array) score = tf.nn.sigmoid(predictions[0]) print( "This image is %.2f percent Covid and %.2f percent Normal." % (100 * (1 - score), 100 * score) ) # - model.save('tl_model_tf.h5',save_format='tf')
notebook/.ipynb_checkpoints/covid_19-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 (tensorflow) # language: python # name: tensorflow # --- # ## Credit Card Fraud Detection using CNN ''' Before running the program first download the dataset from Kaggle, using the link "https://www.kaggle.com/mlg-ulb/creditcardfraud" and place that dataset in the same folder as this program. ''' import tensorflow as tf from tensorflow import keras from tensorflow.keras import Sequential from tensorflow.keras.layers import Flatten, Dense, Dropout, BatchNormalization from tensorflow.keras.layers import Conv1D, MaxPool1D from tensorflow.keras.optimizers import Adam print(tf.__version__) import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler data = pd.read_csv('creditcard.csv') data.head() data.shape data.isnull().sum() data.info() data['Class'].value_counts() # ### Balance Dataset non_fraud = data[data['Class']==0] fraud = data[data['Class']==1] non_fraud.shape, fraud.shape non_fraud = non_fraud.sample(fraud.shape[0]) non_fraud.shape data = fraud.append(non_fraud, ignore_index=True) data data['Class'].value_counts() X = data.drop('Class', axis = 1) y = data['Class'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0, stratify = y) X_train.shape, X_test.shape scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) y_train = y_train.to_numpy() y_test = y_test.to_numpy() X_train.shape X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1) X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1) X_train.shape, X_test.shape # ### Build CNN # + epochs = 20 model = Sequential() model.add(Conv1D(32, 2, activation='relu', input_shape = X_train[0].shape)) 
model.add(BatchNormalization()) model.add(Dropout(0.2)) model.add(Conv1D(64, 2, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.5)) model.add(Flatten()) model.add(Dense(64, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) # - model.summary() model.compile(optimizer=Adam(lr=0.0001), loss = 'binary_crossentropy', metrics=['accuracy']) history = model.fit(X_train, y_train, epochs=epochs, validation_data=(X_test, y_test), verbose=1) def plot_learningCurve(history, epoch): # Plot training & validation accuracy values epoch_range = range(1, epoch+1) plt.plot(epoch_range, history.history['accuracy']) plt.plot(epoch_range, history.history['val_accuracy']) plt.title('Model accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Val'], loc='upper left') plt.show() # Plot training & validation loss values plt.plot(epoch_range, history.history['loss']) plt.plot(epoch_range, history.history['val_loss']) plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Val'], loc='upper left') plt.show() plot_learningCurve(history, epochs) # ### Adding MaxPool # + epochs = 50 model = Sequential() model.add(Conv1D(32, 2, activation='relu', input_shape = X_train[0].shape)) model.add(BatchNormalization()) model.add(MaxPool1D(2)) model.add(Dropout(0.2)) model.add(Conv1D(64, 2, activation='relu')) model.add(BatchNormalization()) model.add(MaxPool1D(2)) model.add(Dropout(0.5)) model.add(Flatten()) model.add(Dense(64, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer=Adam(lr=0.0001), loss = 'binary_crossentropy', metrics=['accuracy']) history = model.fit(X_train, y_train, epochs=epochs, validation_data=(X_test, y_test), verbose=1) plot_learningCurve(history, epochs)
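The "Balance Dataset" step above (sampling the majority class down to the fraud count) can be sketched in isolation; `toy_df` and its values below are made up for illustration, and `pd.concat` replaces the now-deprecated `DataFrame.append` used in the notebook:

```python
import pandas as pd

# hypothetical stand-in for the credit-card frame: 8 legitimate rows, 2 fraud rows
toy_df = pd.DataFrame({"amount": range(10),
                       "Class": [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]})

majority = toy_df[toy_df["Class"] == 0]
minority = toy_df[toy_df["Class"] == 1]

# undersample: draw a majority sample the same size as the minority class
balanced = pd.concat(
    [minority, majority.sample(len(minority), random_state=0)],
    ignore_index=True,
)
print(balanced["Class"].value_counts())
```

After this step both classes have equal counts, at the cost of discarding most majority-class rows, which is why the notebook's accuracy numbers should be read against a 50/50 baseline.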
Credit Card Transaction Fraud Detection using CNN.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:tensorflow2] # language: python # name: conda-env-tensorflow2-py # --- # # Question Answering BERT Testing Notebook # ## Libraries import json import os import random import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from transformers import BertTokenizer, TFBertModel, BertConfig from tokenizers import BertWordPieceTokenizer # ## Global Parameters MAX_LEN = 512 MODEL_NAME = "dbmdz/bert-base-turkish-cased" configuration = BertConfig() # ## Create Model def create_model(): ## BERT encoder encoder = TFBertModel.from_pretrained(MODEL_NAME) # QA model input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32) token_type_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32) attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32) embedding = encoder.bert(input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask)[0] start_logits = layers.Dense(1, name="start_logit", use_bias=False)(embedding) start_logits = layers.Flatten()(start_logits) end_logits = layers.Dense(1, name="end_logit", use_bias=False)(embedding) end_logits = layers.Flatten()(end_logits) start_probs = layers.Activation(keras.activations.softmax)(start_logits) end_probs = layers.Activation(keras.activations.softmax)(end_logits) model = keras.Model( inputs=[input_ids, token_type_ids, attention_mask], outputs=[start_probs, end_probs], ) loss = keras.losses.SparseCategoricalCrossentropy(from_logits=False) optimizer = keras.optimizers.Adam(lr=5e-5) model.compile(optimizer=optimizer, loss=[loss, loss]) return model # ## Load Tokenizer # + slow_tokenizer = BertTokenizer.from_pretrained(MODEL_NAME, do_lower_case=False) splitted_model = MODEL_NAME.split("/") save_path = "bert_base_turkish_cased/" if not os.path.exists(save_path): os.makedirs(save_path) 
slow_tokenizer.save_pretrained(save_path) tokenizer = BertWordPieceTokenizer(save_path + "vocab.txt", lowercase=False) # - # ## Create model and Load Weights model = create_model() model.summary() model.load_weights("bert_base_turkish_cased/weights/dbmdz-bert-base-turkish-cased_seqlen512_epochs10.h5") # ## Testing class WikiElement: def __init__(self, question, context): self.question = question self.context = context def preprocess(self): # tokenize context tokenized_context = tokenizer.encode(self.context) # tokenize question tokenized_question = tokenizer.encode(self.question) # create inputs input_ids = tokenized_context.ids + tokenized_question.ids[1:] token_type_ids = [0] * len(tokenized_context.ids) + [1] * len(tokenized_question.ids[1:]) attention_mask = [1] * len(input_ids) # padding for equal length sequences padding_length = MAX_LEN - len(input_ids) if padding_length > 0: # pad input_ids = input_ids + ([0] * padding_length) attention_mask = attention_mask + ([0] * padding_length) # len(input) [1] + padding [0] token_type_ids = token_type_ids + ([0] * padding_length) # context [0] + question [1] + padding [0] elif padding_length < 0: return self.input_ids = input_ids self.token_type_ids = token_type_ids self.attention_mask = attention_mask self.context_token_to_char = tokenized_context.offsets def create_input_targets(element): dataset_dict = { "input_ids": [], "token_type_ids": [], "attention_mask": [], } for key in dataset_dict: dataset_dict[key].append(getattr(element, key)) for key in dataset_dict: dataset_dict[key] = np.array(dataset_dict[key]) x = [ dataset_dict["input_ids"], dataset_dict["token_type_ids"], dataset_dict["attention_mask"], ] return x def predict_answer(question, context): element = WikiElement(question, context) element.preprocess() x = create_input_targets(element) pred_start, pred_end = model.predict(x) start = np.argmax(pred_start) end = np.argmax(pred_end) offsets = element.context_token_to_char pred_char_start = 
offsets[start][0] if end < len(offsets): pred_char_end = offsets[end][1] pred_ans = element.context[pred_char_start:pred_char_end] else: pred_ans = element.context[pred_char_start:] print("question: {}\n\npredicted_answer: {}\n\ncontext: {}".format(element.question, pred_ans, element.context)) result = {"question": element.question, "predicted_answer": pred_ans, "context": element.context} return result my_context = "Soy gaz veya asal gaz, standart şartlar altında her biri, diğer elementlere kıyasla daha düşük kimyasal reaktifliğe sahip, kokusuz, renksiz, tek atomlu gaz olan kimyasal element grubudur. Helyum (He), neon (Ne), argon (Ar), kripton (Kr), ksenon (Xe) ve radon (Rn) doğal olarak bulunan altı soy gazdır ve tamamı ametaldir. Her biri periyodik tablonun sırasıyla ilk altı periyodunda, 18. grubunda (8A) yer alır. Grupta yer alan oganesson (Og) için ise önceleri soy gaz olabileceği ihtimali üzerinde durulsa da günümüzde metalik görünümlü reaktif bir katı olduğu öngörülmektedir." my_question = "Doğal olarak bulunan altı soy gaz nelerdir?" result = predict_answer(my_question, my_context) model.save("bert_base_turkish_cased/dbmdz-bert-base-turkish-cased_seqlen512_epochs10") # For this error, see [github](https://github.com/huggingface/transformers/issues/3627) # # Save the model as a saved-model directory, otherwise loading takes quite a long time. # Have a service running in the background. Bert, Electra, Albert. # # model2 = keras.models.load_model('bert_base_turkish_cased/dbmdz-bert-base-turkish-cased_seqlen512_epochs10')
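The span-extraction logic inside `predict_answer` can be sketched in isolation with toy arrays (all the numbers, the context string, and the offsets below are made up): the start and end token indices are the argmaxes of the two probability vectors, then mapped back to character offsets in the original context.

```python
import numpy as np

# toy per-token start/end probabilities over a 6-token context (illustrative only)
pred_start = np.array([0.05, 0.70, 0.10, 0.05, 0.05, 0.05])
pred_end   = np.array([0.05, 0.05, 0.10, 0.70, 0.05, 0.05])

# character offsets of each token in the context, as a tokenizer would report them
context = "one two three four five six"
offsets = [(0, 3), (4, 7), (8, 13), (14, 18), (19, 23), (24, 27)]

start, end = int(np.argmax(pred_start)), int(np.argmax(pred_end))
answer = context[offsets[start][0]:offsets[end][1]]
print((start, end, answer))  # → (1, 3, 'two three four')
```

This mirrors the notebook's extraction step: the model outputs only token-level probabilities, and the offsets recorded in `context_token_to_char` do the token-to-character mapping.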
BERT Model Weights to Pb Files.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline from decoder_aux import collect_all_data, collect_all_data_final from plot import plot_all cache_file = 'fig3.hdf5' data_collect_dict = collect_all_data(cache_file, False) final_result = collect_all_data_final(data_collect_dict) final_result plot_all(final_result)
decoding/fig3.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "skip"} tags=[] # To convert this for slides: # # ``` # jupyter nbconvert "L23*.ipynb" --to slides --TagRemovePreprocessor.enabled=True --TagRemovePreprocessor.remove_input_tags remove_input # ``` # + [markdown] slideshow={"slide_type": "slide"} tags=[] # ## Today: # # 1. **What** to optimize (the "scoring" param in `cross_validate` and `gridsearch`) # 1. How to optimize # + [markdown] slideshow={"slide_type": "slide"} tags=[] # ## What to optimize # + [markdown] slideshow={"slide_type": "subslide"} tags=[] # ### Last time on "LendingClub2013"... # # We had two logistic models to predict loan default. # + [markdown] slideshow={"slide_type": "subslide"} tags=[] # **Model 1** used the default settings: # # ![image.png](attachment:2a8f4469-a3d2-4b9b-ab75-8d4cc9efe055.png) # + [markdown] slideshow={"slide_type": "subslide"} tags=[] # **Model 2** "improved" on this by using one possible fix for **imbalanced classes**: # # ![image.png](attachment:886d3446-e730-425c-b992-c6a981c2b31d.png) # + [markdown] slideshow={"slide_type": "subslide"} tags=[] # ### Exercises # # Make (reasonable) assumptions as needed to compare the two models: # 1. How much money does this improved model save us relative to the prior model from reduced charge-offs? # 1. How much foregone profit does this "improved" model have relative to the prior model? # 1. How does this improved model do from the standpoint of a profit maximizing lender? Meaning: Would you rather use the first or the second model, and why? # 1. Write down a profit function for firm using the cells of the confusion matrix. (The four cells of the matrix are TN, TP, FN, and FP.) # 1. 
Based on your profit function, which metric(s) do you want to maximize/minimize in this model? There might not be a clean answer, if so: Discuss candidates and what they capture correctly about the profit function and what they miss. # + [markdown] slideshow={"slide_type": "slide"} tags=[] # ## How to optimize (hyperparameters) # + [markdown] slideshow={"slide_type": "subslide"} tags=[] # [As the book says,](https://ledatascifi.github.io/ledatascifi-2022/content/05/04f_optimizing_a_model.html#tuning-hyperparameters) # 1. Set up your parameter grids: Start with a wide (and sparse) net. # 1. Set up `gridsearchCV` and then `.fit()` it. # 1. Plot the performance of the models. # # Repeat the above steps as needed, adjusting the parameter grids to hone in on best models, until you've found an optimized model. # # + [markdown] slideshow={"slide_type": "subslide"} tags=[] # ### Let's try that... # + [markdown] slideshow={"slide_type": "subslide"} tags=[] # I've loaded the LC data. Let's set up the model: # + slideshow={"slide_type": "skip"} tags=[] import matplotlib.pyplot as plt import pandas as pd import numpy as np from df_after_transform import df_after_transform from sklearn import set_config from sklearn.calibration import CalibrationDisplay from sklearn.compose import ( ColumnTransformer, make_column_selector, make_column_transformer, ) from sklearn.impute import SimpleImputer from sklearn.linear_model import LogisticRegression from sklearn.metrics import ( ConfusionMatrixDisplay, DetCurveDisplay, PrecisionRecallDisplay, RocCurveDisplay, classification_report, ) from sklearn.model_selection import ( GridSearchCV, KFold, cross_validate, train_test_split, ) from sklearn.pipeline import make_pipeline from sklearn.preprocessing import OneHotEncoder, StandardScaler set_config(display="diagram") # display='text' is the default pd.set_option( "display.max_colwidth", 1000, "display.max_rows", 50, "display.max_columns", None ) import warnings 
warnings.filterwarnings('ignore') # + slideshow={"slide_type": "skip"} tags=[] # load data loans = pd.read_csv('ML - prof only/inputs/2013_subsample.zip') # create holdout sample y = loans.loan_status == "Charged Off" y.value_counts() loans = loans.drop("loan_status", axis=1) X_train, X_test, y_train, y_test = train_test_split( loans, y, stratify=y, test_size=0.2, random_state=0 ) # (stratify will make sure that test/train both have equal fractions of outcome) # set up pipeline to clean each type of variable (1 pipe per var type) numer_pipe = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler()) cat_pipe = make_pipeline(OneHotEncoder(drop="first")) # combine those (make_column_transformer is like make_pipeline, maybe easier on the eyes!) preproc_pipe = make_column_transformer( (numer_pipe, ["annual_inc", "int_rate"]), (cat_pipe, ["grade"]), remainder="drop", ) # + slideshow={"slide_type": "fragment"} tags=[] clf_logit = make_pipeline(preproc_pipe, LogisticRegression(class_weight='balanced')) # + [markdown] slideshow={"slide_type": "subslide"} tags=[] # When I use `cross_validate` or `gridsearchCV`, I need to pick a scorer to optimize on. # # Let's score the models directly on profit. Following the docs, I'll make a custom scorer: # + slideshow={"slide_type": "fragment"} tags=[] # define the profit function def custom_prof_score(y, y_pred, roa=0.02, haircut=0.20): ''' Firm profit is this times the average loan size. We can ignore that term for the purposes of maximization. 
''' TN = sum((y_pred==0) & (y==0)) # count loans made and actually paid back FN = sum((y_pred==0) & (y==1)) # count loans made and actually defaulting return TN*roa - FN*haircut # so that we can use the fcn in sklearn, "make a scorer" out of that function from sklearn.metrics import make_scorer prof_score = make_scorer(custom_prof_score) # + [markdown] slideshow={"slide_type": "subslide"} tags=[] # In this example, we will see if the "regularization" parameter in the logit function matters. # # Here, I'm going to try a lot of small values, and then some higher values. # + slideshow={"slide_type": "fragment"} tags=[] parameters = {'logisticregression__C': list(np.linspace(0.00001,.5,25))+list(np.linspace(0.55,5.55,6)) } grid_search = GridSearchCV(estimator = clf_logit, param_grid = parameters, cv = 5, scoring=prof_score ) # + slideshow={"slide_type": "skip"} tags=[] results = grid_search.fit(X_train,y_train) # + [markdown] slideshow={"slide_type": "subslide"} tags=[] # After fitting the `grid_search`, we examine the performance: # + slideshow={"slide_type": "fragment"} tags=["remove_input"] fig, ax = plt.subplots(1,2,figsize=(12,5)) ( pd.DataFrame(results.cv_results_) # get as a DF [['param_logisticregression__C','mean_test_score']] .plot(kind='line',x='param_logisticregression__C',y='mean_test_score', legend=False,title='Avg Profit in Validation Samples', xlabel='Logit C parameter',ax=ax[0]) ) ( pd.DataFrame(results.cv_results_) # get as a DF [['param_logisticregression__C','mean_test_score']] .plot(kind='scatter',x='param_logisticregression__C',y='mean_test_score', legend=False,title='Avg Profit in Validation Samples', xlabel='Logit C parameter',ax=ax[0]) ) ( pd.DataFrame(results.cv_results_) # get as a DF [['std_test_score','mean_test_score']] .plot(kind='scatter',x='std_test_score',y='mean_test_score', legend=False,xlabel='STD',ylabel='Mean', title='Performance of estimators across CV folds', ax=ax[1]) ) # manually looked for my preferred model 
ax[1].scatter(5.127832, -2.088, color='red') ax[1].text(5.127832, -3.5, 'Opt A: C=0.020843',color='red') ax[1].scatter(5.236610 , -2.080, color='orange') ax[1].text(5.236610 , -4.1, 'Opt B: C=0.041676',color='orange') plt.show() # + [markdown] slideshow={"slide_type": "fragment"} tags=[] # - Do you like A or B more? # - How should we adjust the `parameters` grid? # + [markdown] slideshow={"slide_type": "subslide"} tags=[] # ### IMO: # # - Option A has nearly the same performance on average with a reasonable reduction in variance # - Option A is probably not the optimal value: Adjust the grid around Option A to look for possible improvements # + [markdown] slideshow={"slide_type": "fragment"} tags=[] # - To get large (not marginal) gains: We need to upgrade the model # - Add X vars? # - Different estimator? # + [markdown] slideshow={"slide_type": "subslide"} tags=[] # ### Next time on "LendingClub2013"... # # - Upgrading this model
lectures/L23_optimization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # 1.3 Some popular data sets in image classification # # In this subsection, we will introduce some popular and standard data sets in image # classification. # # ```{list-table} # :header-rows: 1 # :name: example-table # * - dataset # - training(N) # - test(M) # - classes(k) # - channels(c) # - input size(d) # * - MNIST # - 60K # - 10K # - 10 # - GRAYSCALE # - 28*28 # * - CIFAR-10 # - 50K # - 10K # - 10 # - RGB # - 32*32 # * - CIFAR-100 # - 50K # - 10K # - 100 # - RGB # - 32*32 # * - ImageNet # - 1.2M # - 50K # - 1000 # - RGB # - 224*224 # ``` # ## MNIST (Modified National Institute of Standards and Technology Database) # # ### This is a database for handwritten digits # - Training set : N = 60,000 # - Test set : M = 10,000 # - Image size : d = 28 * 28 * 1 = 784 # - Classes: k = 10 # ```{figure} ../figures/mnist2_1.png # :name: mnist # ``` # ## CIFAR # # ### CIFAR-10 # - Training set : N = 50,000 # - Test set : M = 10,000 # - Image size : d = 32 * 32 * 3 # - Classes: k = 10 # ```{figure} ../figures/mnist1_1.png # :name: cifar10 # ``` # # ### CIFAR-100 # - Training set : N = 50,000 # - Test set : M = 10,000 # - Image size : d = 32 * 32 * 3 # - Classes: k = 100 # ```{figure} ../figures/mnist1_1.png # :name: cifar100 # ``` # # ## ImageNet # # - All data set: N + M = 1,431,167 # - Image size : d = 224 * 224 * 3 # - Classes: k = 1,000 # ```{figure} ../figures/imagenet1_1.png # :name: imagenet1 # ``` # ```{figure} ../figures/imagenet1_2.png # :name: imagenet2 # ```
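As a quick sanity check on the table above, the flattened input size d is just height × width × channels. The sketch below recomputes d for each data set in plain Python (sizes taken from the table; no dataset downloads required):

```python
# Recompute the flattened input dimension d = height * width * channels
# for each data set listed in the table above.
datasets = {
    "MNIST": (28, 28, 1),       # grayscale
    "CIFAR-10": (32, 32, 3),    # RGB
    "CIFAR-100": (32, 32, 3),   # RGB
    "ImageNet": (224, 224, 3),  # RGB
}

dims = {name: h * w * c for name, (h, w, c) in datasets.items()}

for name, d in dims.items():
    print(f"{name}: d = {d}")
# MNIST gives d = 28*28*1 = 784, matching the MNIST section above.
```

Note that ImageNet's d = 150,528 is roughly 200 times MNIST's 784, which is part of why ImageNet-scale models are so much more expensive to train.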
docs/_sources/ch01/ch1_3_video.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # https://www.hackerrank.com/challenges/py-set-add/problem # + a_set = set() for i in range(int(input())): a_set.add(input()) print(len(a_set)) # + # using a set comprehension (no need to build an intermediate list) # - print(len({input() for i in range(int(input()))}))
04 - Sets/03 - Set .add().ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # NFQ # - # !nvidia-smi # + import warnings ; warnings.filterwarnings('ignore') import os import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import numpy as np from IPython.display import display from collections import namedtuple, deque import matplotlib.pyplot as plt import matplotlib.pylab as pylab from itertools import cycle, count from textwrap import wrap import matplotlib import subprocess import os.path import tempfile import random import base64 import pprint import glob import time import json import sys import gym import io import os import gc from gym import wrappers from subprocess import check_output from IPython.display import HTML LEAVE_PRINT_EVERY_N_SECS = 60 ERASE_LINE = '\x1b[2K' EPS = 1e-6 BEEP = lambda: os.system("printf '\a'") RESULTS_DIR = os.path.join('..', 'results') SEEDS = (12, 34, 56, 78, 90) # %matplotlib inline # - plt.style.use('fivethirtyeight') params = { 'figure.figsize': (15, 8), 'font.size': 24, 'legend.fontsize': 20, 'axes.titlesize': 28, 'axes.labelsize': 24, 'xtick.labelsize': 20, 'ytick.labelsize': 20 } pylab.rcParams.update(params) np.set_printoptions(suppress=True) torch.cuda.is_available() def get_make_env_fn(**kargs): def make_env_fn(env_name, seed=None, render=None, record=False, unwrapped=False, monitor_mode=None, inner_wrappers=None, outer_wrappers=None): mdir = tempfile.mkdtemp() env = None if render: try: env = gym.make(env_name, render=render) except: pass if env is None: env = gym.make(env_name) if seed is not None: env.seed(seed) env = env.unwrapped if unwrapped else env if inner_wrappers: for wrapper in inner_wrappers: env = wrapper(env) env = wrappers.Monitor( env, mdir, force=True, mode=monitor_mode, video_callable=lambda e_idx: record) if 
monitor_mode else env if outer_wrappers: for wrapper in outer_wrappers: env = wrapper(env) return env return make_env_fn, kargs def get_videos_html(env_videos, title, max_n_videos=5): videos = np.array(env_videos) if len(videos) == 0: return n_videos = max(1, min(max_n_videos, len(videos))) idxs = np.linspace(0, len(videos) - 1, n_videos).astype(int) if n_videos > 1 else [-1,] videos = videos[idxs,...] strm = '<h2>{}<h2>'.format(title) for video_path, meta_path in videos: video = io.open(video_path, 'r+b').read() encoded = base64.b64encode(video) with open(meta_path) as data_file: meta = json.load(data_file) html_tag = """ <h3>{0}<h3/> <video width="960" height="540" controls> <source src="data:video/mp4;base64,{1}" type="video/mp4" /> </video>""" strm += html_tag.format('Episode ' + str(meta['episode_id']), encoded.decode('ascii')) return strm def get_gif_html(env_videos, title, subtitle_eps=None, max_n_videos=4): videos = np.array(env_videos) if len(videos) == 0: return n_videos = max(1, min(max_n_videos, len(videos))) idxs = np.linspace(0, len(videos) - 1, n_videos).astype(int) if n_videos > 1 else [-1,] videos = videos[idxs,...] 
strm = '<h2>{}<h2>'.format(title) for video_path, meta_path in videos: basename = os.path.splitext(video_path)[0] gif_path = basename + '.gif' if not os.path.exists(gif_path): ps = subprocess.Popen( ('ffmpeg', '-i', video_path, '-r', '7', '-f', 'image2pipe', '-vcodec', 'ppm', '-crf', '20', '-vf', 'scale=512:-1', '-'), stdout=subprocess.PIPE) output = subprocess.check_output( ('convert', '-coalesce', '-delay', '7', '-loop', '0', '-fuzz', '2%', '+dither', '-deconstruct', '-layers', 'Optimize', '-', gif_path), stdin=ps.stdout) ps.wait() gif = io.open(gif_path, 'r+b').read() encoded = base64.b64encode(gif) with open(meta_path) as data_file: meta = json.load(data_file) html_tag = """ <h3>{0}<h3/> <img src="data:image/gif;base64,{1}" />""" prefix = 'Trial ' if subtitle_eps is None else 'Episode ' sufix = str(meta['episode_id'] if subtitle_eps is None \ else subtitle_eps[meta['episode_id']]) strm += html_tag.format(prefix + sufix, encoded.decode('ascii')) return strm class DiscountedCartPole(gym.Wrapper): def __init__(self, env): gym.Wrapper.__init__(self, env) def reset(self, **kwargs): return self.env.reset(**kwargs) def step(self, a): o, r, d, _ = self.env.step(a) (x, x_dot, theta, theta_dot) = o pole_fell = x < -self.env.unwrapped.x_threshold \ or x > self.env.unwrapped.x_threshold \ or theta < -self.env.unwrapped.theta_threshold_radians \ or theta > self.env.unwrapped.theta_threshold_radians r = -1 if pole_fell else 0 return o, r, d, _ # # NFQ class FCQ(nn.Module): def __init__(self, input_dim, output_dim, hidden_dims=(32,32), activation_fc=F.relu): super(FCQ, self).__init__() self.activation_fc = activation_fc self.input_layer = nn.Linear(input_dim, hidden_dims[0]) self.hidden_layers = nn.ModuleList() for i in range(len(hidden_dims)-1): hidden_layer = nn.Linear(hidden_dims[i], hidden_dims[i+1]) self.hidden_layers.append(hidden_layer) self.output_layer = nn.Linear(hidden_dims[-1], output_dim) device = "cpu" if torch.cuda.is_available(): device = "cuda:0" self.device 
= torch.device(device) self.to(self.device) def _format(self, state): x = state if not isinstance(x, torch.Tensor): x = torch.tensor(x, device=self.device, dtype=torch.float32) x = x.unsqueeze(0) return x def forward(self, state): x = self._format(state) x = self.activation_fc(self.input_layer(x)) for hidden_layer in self.hidden_layers: x = self.activation_fc(hidden_layer(x)) x = self.output_layer(x) return x def numpy_float_to_device(self, variable): variable = torch.from_numpy(variable).float().to(self.device) return variable def load(self, experiences): states, actions, rewards, new_states, is_terminals = experiences states = torch.from_numpy(states).float().to(self.device) actions = torch.from_numpy(actions).long().to(self.device) new_states = torch.from_numpy(new_states).float().to(self.device) rewards = torch.from_numpy(rewards).float().to(self.device) is_terminals = torch.from_numpy(is_terminals).float().to(self.device) return states, actions, rewards, new_states, is_terminals class GreedyStrategy(): def __init__(self): self.exploratory_action_taken = False def select_action(self, model, state): with torch.no_grad(): q_values = model(state).cpu().detach().data.numpy().squeeze() return np.argmax(q_values) class EGreedyStrategy(): def __init__(self, epsilon=0.1): self.epsilon = epsilon self.exploratory_action_taken = None def select_action(self, model, state): self.exploratory_action_taken = False with torch.no_grad(): q_values = model(state).cpu().detach().data.numpy().squeeze() if np.random.rand() > self.epsilon: action = np.argmax(q_values) else: action = np.random.randint(len(q_values)) self.exploratory_action_taken = action != np.argmax(q_values) return action class NFQ(): def __init__(self, value_model_fn, value_optimizer_fn, value_optimizer_lr, training_strategy_fn, evaluation_strategy_fn, batch_size, epochs): self.value_model_fn = value_model_fn self.value_optimizer_fn = value_optimizer_fn self.value_optimizer_lr = value_optimizer_lr 
self.training_strategy_fn = training_strategy_fn self.evaluation_strategy_fn = evaluation_strategy_fn self.batch_size = batch_size self.epochs = epochs def optimize_model(self, experiences): states, actions, rewards, next_states, is_terminals = experiences batch_size = len(is_terminals) max_a_q_sp = self.online_model(next_states).detach().max(1)[0].unsqueeze(1) target_q_s = rewards + self.gamma * max_a_q_sp * (1 - is_terminals) q_sa = self.online_model(states).gather(1, actions) td_errors = q_sa - target_q_s value_loss = td_errors.pow(2).mul(0.5).mean() self.value_optimizer.zero_grad() value_loss.backward() self.value_optimizer.step() def interaction_step(self, state, env): action = self.training_strategy.select_action(self.online_model, state) new_state, reward, is_terminal, info = env.step(action) is_truncated = 'TimeLimit.truncated' in info and info['TimeLimit.truncated'] is_failure = is_terminal and not is_truncated experience = (state, action, reward, new_state, float(is_failure)) self.experiences.append(experience) self.episode_reward[-1] += reward self.episode_timestep[-1] += 1 self.episode_exploration[-1] += int(self.training_strategy.exploratory_action_taken) return new_state, is_terminal def train(self, make_env_fn, make_env_kargs, seed, gamma, max_minutes, max_episodes, goal_mean_100_reward): training_start, last_debug_time = time.time(), float('-inf') self.checkpoint_dir = tempfile.mkdtemp() self.make_env_fn = make_env_fn self.make_env_kargs = make_env_kargs self.seed = seed self.gamma = gamma env = self.make_env_fn(**self.make_env_kargs, seed=self.seed) torch.manual_seed(self.seed) ; np.random.seed(self.seed) ; random.seed(self.seed) nS, nA = env.observation_space.shape[0], env.action_space.n self.episode_timestep = [] self.episode_reward = [] self.episode_seconds = [] self.evaluation_scores = [] self.episode_exploration = [] self.online_model = self.value_model_fn(nS, nA) self.value_optimizer = self.value_optimizer_fn(self.online_model, 
self.value_optimizer_lr) self.training_strategy = training_strategy_fn() self.evaluation_strategy = evaluation_strategy_fn() self.experiences = [] result = np.empty((max_episodes, 5)) result[:] = np.nan training_time = 0 for episode in range(1, max_episodes + 1): episode_start = time.time() state, is_terminal = env.reset(), False self.episode_reward.append(0.0) self.episode_timestep.append(0.0) self.episode_exploration.append(0.0) for step in count(): state, is_terminal = self.interaction_step(state, env) if len(self.experiences) >= self.batch_size: experiences = np.array(self.experiences) batches = [np.vstack(sars) for sars in experiences.T] experiences = self.online_model.load(batches) for _ in range(self.epochs): self.optimize_model(experiences) self.experiences.clear() if is_terminal: gc.collect() break # stats episode_elapsed = time.time() - episode_start self.episode_seconds.append(episode_elapsed) training_time += episode_elapsed evaluation_score, _ = self.evaluate(self.online_model, env) self.save_checkpoint(episode-1, self.online_model) total_step = int(np.sum(self.episode_timestep)) self.evaluation_scores.append(evaluation_score) mean_10_reward = np.mean(self.episode_reward[-10:]) std_10_reward = np.std(self.episode_reward[-10:]) mean_100_reward = np.mean(self.episode_reward[-100:]) std_100_reward = np.std(self.episode_reward[-100:]) mean_100_eval_score = np.mean(self.evaluation_scores[-100:]) std_100_eval_score = np.std(self.evaluation_scores[-100:]) lst_100_exp_rat = np.array( self.episode_exploration[-100:])/np.array(self.episode_timestep[-100:]) mean_100_exp_rat = np.mean(lst_100_exp_rat) std_100_exp_rat = np.std(lst_100_exp_rat) wallclock_elapsed = time.time() - training_start result[episode-1] = total_step, mean_100_reward, \ mean_100_eval_score, training_time, wallclock_elapsed reached_debug_time = time.time() - last_debug_time >= LEAVE_PRINT_EVERY_N_SECS reached_max_minutes = wallclock_elapsed >= max_minutes * 60 reached_max_episodes = episode >= 
max_episodes reached_goal_mean_reward = mean_100_eval_score >= goal_mean_100_reward training_is_over = reached_max_minutes or \ reached_max_episodes or \ reached_goal_mean_reward elapsed_str = time.strftime("%H:%M:%S", time.gmtime(time.time() - training_start)) debug_message = 'el {}, ep {:04}, ts {:06}, ' debug_message += 'ar 10 {:05.1f}\u00B1{:05.1f}, ' debug_message += '100 {:05.1f}\u00B1{:05.1f}, ' debug_message += 'ex 100 {:02.1f}\u00B1{:02.1f}, ' debug_message += 'ev {:05.1f}\u00B1{:05.1f}' debug_message = debug_message.format( elapsed_str, episode-1, total_step, mean_10_reward, std_10_reward, mean_100_reward, std_100_reward, mean_100_exp_rat, std_100_exp_rat, mean_100_eval_score, std_100_eval_score) print(debug_message, end='\r', flush=True) if reached_debug_time or training_is_over: print(ERASE_LINE + debug_message, flush=True) last_debug_time = time.time() if training_is_over: if reached_max_minutes: print(u'--> reached_max_minutes \u2715') if reached_max_episodes: print(u'--> reached_max_episodes \u2715') if reached_goal_mean_reward: print(u'--> reached_goal_mean_reward \u2713') break final_eval_score, score_std = self.evaluate(self.online_model, env, n_episodes=100) wallclock_time = time.time() - training_start print('Training complete.') print('Final evaluation score {:.2f}\u00B1{:.2f} in {:.2f}s training time,' ' {:.2f}s wall-clock time.\n'.format( final_eval_score, score_std, training_time, wallclock_time)) env.close() ; del env self.get_cleaned_checkpoints() return result, final_eval_score, training_time, wallclock_time def evaluate(self, eval_policy_model, eval_env, n_episodes=1): rs = [] for _ in range(n_episodes): s, d = eval_env.reset(), False rs.append(0) for _ in count(): a = self.evaluation_strategy.select_action(eval_policy_model, s) s, r, d, _ = eval_env.step(a) rs[-1] += r if d: break return np.mean(rs), np.std(rs) def get_cleaned_checkpoints(self, n_checkpoints=5): try: return self.checkpoint_paths except AttributeError: 
self.checkpoint_paths = {} paths = glob.glob(os.path.join(self.checkpoint_dir, '*.tar')) paths_dic = {int(path.split('.')[-2]):path for path in paths} last_ep = max(paths_dic.keys()) # checkpoint_idxs = np.geomspace(1, last_ep+1, n_checkpoints, endpoint=True, dtype=int)-1 checkpoint_idxs = np.linspace(1, last_ep+1, n_checkpoints, endpoint=True, dtype=int)-1 for idx, path in paths_dic.items(): if idx in checkpoint_idxs: self.checkpoint_paths[idx] = path else: os.unlink(path) return self.checkpoint_paths def demo_last(self, title='Fully-trained {} Agent', n_episodes=3, max_n_videos=3): env = self.make_env_fn(**self.make_env_kargs, monitor_mode='evaluation', render=True, record=True) checkpoint_paths = self.get_cleaned_checkpoints() last_ep = max(checkpoint_paths.keys()) self.online_model.load_state_dict(torch.load(checkpoint_paths[last_ep])) self.evaluate(self.online_model, env, n_episodes=n_episodes) env.close() data = get_gif_html(env_videos=env.videos, title=title.format(self.__class__.__name__), max_n_videos=max_n_videos) del env return HTML(data=data) def demo_progression(self, title='{} Agent progression', max_n_videos=5): env = self.make_env_fn(**self.make_env_kargs, monitor_mode='evaluation', render=True, record=True) checkpoint_paths = self.get_cleaned_checkpoints() for i in sorted(checkpoint_paths.keys()): self.online_model.load_state_dict(torch.load(checkpoint_paths[i])) self.evaluate(self.online_model, env, n_episodes=1) env.close() data = get_gif_html(env_videos=env.videos, title=title.format(self.__class__.__name__), subtitle_eps=sorted(checkpoint_paths.keys()), max_n_videos=max_n_videos) del env return HTML(data=data) def save_checkpoint(self, episode_idx, model): torch.save(model.state_dict(), os.path.join(self.checkpoint_dir, 'model.{}.tar'.format(episode_idx))) nfq_results = [] best_agent, best_eval_score = None, float('-inf') for seed in SEEDS: environment_settings = { 'env_name': 'CartPole-v1', 'gamma': 1.00, 'max_minutes': 20,
'max_episodes': 10000, 'goal_mean_100_reward': 475 } value_model_fn = lambda nS, nA: FCQ(nS, nA, hidden_dims=(512,128)) # value_optimizer_fn = lambda net, lr: optim.Adam(net.parameters(), lr=lr) value_optimizer_fn = lambda net, lr: optim.RMSprop(net.parameters(), lr=lr) value_optimizer_lr = 0.0005 training_strategy_fn = lambda: EGreedyStrategy(epsilon=0.5) # evaluation_strategy_fn = lambda: EGreedyStrategy(epsilon=0.05) evaluation_strategy_fn = lambda: GreedyStrategy() batch_size = 1024 epochs = 40 env_name, gamma, max_minutes, \ max_episodes, goal_mean_100_reward = environment_settings.values() agent = NFQ(value_model_fn, value_optimizer_fn, value_optimizer_lr, training_strategy_fn, evaluation_strategy_fn, batch_size, epochs) # make_env_fn, make_env_kargs = get_make_env_fn( # env_name=env_name, addon_wrappers=[DiscountedCartPole,]) make_env_fn, make_env_kargs = get_make_env_fn(env_name=env_name) result, final_eval_score, training_time, wallclock_time = agent.train( make_env_fn, make_env_kargs, seed, gamma, max_minutes, max_episodes, goal_mean_100_reward) nfq_results.append(result) if final_eval_score > best_eval_score: best_eval_score = final_eval_score best_agent = agent nfq_results = np.array(nfq_results) _ = BEEP() best_agent.demo_progression() best_agent.demo_last() # + nfq_max_t, nfq_max_r, nfq_max_s, \ nfq_max_sec, nfq_max_rt = np.max(nfq_results, axis=0).T nfq_min_t, nfq_min_r, nfq_min_s, \ nfq_min_sec, nfq_min_rt = np.min(nfq_results, axis=0).T nfq_mean_t, nfq_mean_r, nfq_mean_s, \ nfq_mean_sec, nfq_mean_rt = np.mean(nfq_results, axis=0).T nfq_x = np.arange(len(nfq_mean_s)) # nfq_max_t, nfq_max_r, nfq_max_s, \ # nfq_max_sec, nfq_max_rt = np.nanmax(nfq_results, axis=0).T # nfq_min_t, nfq_min_r, nfq_min_s, \ # nfq_min_sec, nfq_min_rt = np.nanmin(nfq_results, axis=0).T # nfq_mean_t, nfq_mean_r, nfq_mean_s, \ # nfq_mean_sec, nfq_mean_rt = np.nanmean(nfq_results, axis=0).T # nfq_x = np.arange(len(nfq_mean_s)) # + fig, axs = plt.subplots(5, 1, figsize=(15,30), 
sharey=False, sharex=True) # NFQ axs[0].plot(nfq_max_r, 'y', linewidth=1) axs[0].plot(nfq_min_r, 'y', linewidth=1) axs[0].plot(nfq_mean_r, 'y', label='NFQ', linewidth=2) axs[0].fill_between(nfq_x, nfq_min_r, nfq_max_r, facecolor='y', alpha=0.3) axs[1].plot(nfq_max_s, 'y', linewidth=1) axs[1].plot(nfq_min_s, 'y', linewidth=1) axs[1].plot(nfq_mean_s, 'y', label='NFQ', linewidth=2) axs[1].fill_between(nfq_x, nfq_min_s, nfq_max_s, facecolor='y', alpha=0.3) axs[2].plot(nfq_max_t, 'y', linewidth=1) axs[2].plot(nfq_min_t, 'y', linewidth=1) axs[2].plot(nfq_mean_t, 'y', label='NFQ', linewidth=2) axs[2].fill_between(nfq_x, nfq_min_t, nfq_max_t, facecolor='y', alpha=0.3) axs[3].plot(nfq_max_sec, 'y', linewidth=1) axs[3].plot(nfq_min_sec, 'y', linewidth=1) axs[3].plot(nfq_mean_sec, 'y', label='NFQ', linewidth=2) axs[3].fill_between(nfq_x, nfq_min_sec, nfq_max_sec, facecolor='y', alpha=0.3) axs[4].plot(nfq_max_rt, 'y', linewidth=1) axs[4].plot(nfq_min_rt, 'y', linewidth=1) axs[4].plot(nfq_mean_rt, 'y', label='NFQ', linewidth=2) axs[4].fill_between(nfq_x, nfq_min_rt, nfq_max_rt, facecolor='y', alpha=0.3) # ALL axs[0].set_title('Moving Avg Reward (Training)') axs[1].set_title('Moving Avg Reward (Evaluation)') axs[2].set_title('Total Steps') axs[3].set_title('Training Time') axs[4].set_title('Wall-clock Time') plt.xlabel('Episodes') axs[0].legend(loc='upper left') plt.show() # + nfq_root_dir = os.path.join(RESULTS_DIR, 'nfq') not os.path.exists(nfq_root_dir) and os.makedirs(nfq_root_dir) np.save(os.path.join(nfq_root_dir, 'x'), nfq_x) np.save(os.path.join(nfq_root_dir, 'max_r'), nfq_max_r) np.save(os.path.join(nfq_root_dir, 'min_r'), nfq_min_r) np.save(os.path.join(nfq_root_dir, 'mean_r'), nfq_mean_r) np.save(os.path.join(nfq_root_dir, 'max_s'), nfq_max_s) np.save(os.path.join(nfq_root_dir, 'min_s'), nfq_min_s ) np.save(os.path.join(nfq_root_dir, 'mean_s'), nfq_mean_s) np.save(os.path.join(nfq_root_dir, 'max_t'), nfq_max_t) np.save(os.path.join(nfq_root_dir, 'min_t'), nfq_min_t) 
np.save(os.path.join(nfq_root_dir, 'mean_t'), nfq_mean_t) np.save(os.path.join(nfq_root_dir, 'max_sec'), nfq_max_sec) np.save(os.path.join(nfq_root_dir, 'min_sec'), nfq_min_sec) np.save(os.path.join(nfq_root_dir, 'mean_sec'), nfq_mean_sec) np.save(os.path.join(nfq_root_dir, 'max_rt'), nfq_max_rt) np.save(os.path.join(nfq_root_dir, 'min_rt'), nfq_min_rt) np.save(os.path.join(nfq_root_dir, 'mean_rt'), nfq_mean_rt)
notebooks/chapter_08/chapter-08.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Combining Datasets: Merge and Join # One essential feature offered by Pandas is its high-performance, in-memory join and merge operations. # If you have ever worked with databases, you should be familiar with this type of data interaction. # The main interface for this is the ``pd.merge`` function, and we'll see a few examples of how this can work in practice. # # For convenience, we will start by redefining the ``display()`` functionality from the previous section: # + import pandas as pd import numpy as np class display(object): """Display HTML representation of multiple objects""" template = """<div style="float: left; padding: 10px;"> <p style='font-family:"Courier New", Courier, monospace'>{0}</p>{1} </div>""" def __init__(self, *args): self.args = args def _repr_html_(self): return '\n'.join(self.template.format(a, eval(a)._repr_html_()) for a in self.args) def __repr__(self): return '\n\n'.join(a + '\n' + repr(eval(a)) for a in self.args) # - # ## Relational Algebra # # The behavior implemented in ``pd.merge()`` is a subset of what is known as *relational algebra*, which is a formal set of rules for manipulating relational data, and forms the conceptual foundation of operations available in most databases. # The strength of the relational algebra approach is that it proposes several primitive operations, which become the building blocks of more complicated operations on any dataset. # With this lexicon of fundamental operations implemented efficiently in a database or other program, a wide range of fairly complicated composite operations can be performed. # # Pandas implements several of these fundamental building-blocks in the ``pd.merge()`` function and the related ``join()`` method of ``Series`` and ``DataFrame``s.
# As we will see, these let you efficiently link data from different sources. # ## Categories of Joins # # The ``pd.merge()`` function implements a number of types of joins: the *one-to-one*, *many-to-one*, and *many-to-many* joins. # All three types of joins are accessed via an identical call to the ``pd.merge()`` interface; the type of join performed depends on the form of the input data. # Here we will show simple examples of the three types of merges, and discuss detailed options further below. # ### One-to-one joins # # Perhaps the simplest type of merge expression is the one-to-one join, which is in many ways very similar to the column-wise concatenation seen in [Combining Datasets: Concat & Append](03.06-Concat-And-Append.ipynb). # As a concrete example, consider the following two ``DataFrame``s, which contain information on several employees in a company: df1 = pd.DataFrame({'employee': ['Bob', 'Jake', 'Lisa', 'Sue'], 'group': ['Accounting', 'Engineering', 'Engineering', 'HR']}) df2 = pd.DataFrame({'employee': ['Lisa', 'Bob', 'Jake', 'Sue'], 'hire_date': [2004, 2008, 2012, 2014]}) display('df1', 'df2') # To combine this information into a single ``DataFrame``, we can use the ``pd.merge()`` function: df3 = pd.merge(df1, df2) df3 # The ``pd.merge()`` function recognizes that each ``DataFrame`` has an "employee" column, and automatically joins using this column as a key. # The result of the merge is a new ``DataFrame`` that combines the information from the two inputs. # Notice that the order of entries in each column is not necessarily maintained: in this case, the order of the "employee" column differs between ``df1`` and ``df2``, and the ``pd.merge()`` function correctly accounts for this. # Additionally, keep in mind that the merge in general discards the index, except in the special case of merges by index (see the ``left_index`` and ``right_index`` keywords, discussed momentarily).
# ### Many-to-one joins # Many-to-one joins are joins in which one of the two key columns contains duplicate entries. # For the many-to-one case, the resulting ``DataFrame`` will preserve those duplicate entries as appropriate. # Consider the following example of a many-to-one join: df4 = pd.DataFrame({'group': ['Accounting', 'Engineering', 'HR'], 'supervisor': ['Carly', 'Guido', 'Steve']}) display('df3', 'df4', 'pd.merge(df3, df4)') # The resulting ``DataFrame`` has an additional column with the "supervisor" information, where the information is repeated in one or more locations as required by the inputs. # ### Many-to-many joins # Many-to-many joins are a bit confusing conceptually, but are nevertheless well defined. # If the key column in both the left and right array contains duplicates, then the result is a many-to-many merge. # This will be perhaps most clear with a concrete example. # Consider the following, where we have a ``DataFrame`` showing one or more skills associated with a particular group. # By performing a many-to-many join, we can recover the skills associated with any individual person: df5 = pd.DataFrame({'group': ['Accounting', 'Accounting', 'Engineering', 'Engineering', 'HR', 'HR'], 'skills': ['math', 'spreadsheets', 'coding', 'linux', 'spreadsheets', 'organization']}) display('df1', 'df5', "pd.merge(df1, df5)") # These three types of joins can be used with other Pandas tools to implement a wide array of functionality. # But in practice, datasets are rarely as clean as the one we're working with here. # In the following section we'll consider some of the options provided by ``pd.merge()`` that enable you to tune how the join operations work. # ## Specification of the Merge Key # We've already seen the default behavior of ``pd.merge()``: it looks for one or more matching column names between the two inputs, and uses this as the key.
# However, often the column names will not match so nicely, and ``pd.merge()`` provides a variety of options for handling this. # ### The ``on`` keyword # # Most simply, you can explicitly specify the name of the key column using the ``on`` keyword, which takes a column name or a list of column names: display('df1', 'df2', "pd.merge(df1, df2, on='employee')") # This option works only if both the left and right ``DataFrame``s have the specified column name. # ### The ``left_on`` and ``right_on`` keywords # # At times you may wish to merge two datasets with different column names; for example, we may have a dataset in which the employee name is labeled as "name" rather than "employee". # In this case, we can use the ``left_on`` and ``right_on`` keywords to specify the two column names: df3 = pd.DataFrame({'name': ['Bob', 'Jake', 'Lisa', 'Sue'], 'salary': [70000, 80000, 120000, 90000]}) display('df1', 'df3', 'pd.merge(df1, df3, left_on="employee", right_on="name")') # The result has a redundant column that we can drop if desired–for example, by using the ``drop()`` method of ``DataFrame``s: pd.merge(df1, df3, left_on="employee", right_on="name").drop('name', axis=1) # ### The ``left_index`` and ``right_index`` keywords # # Sometimes, rather than merging on a column, you would instead like to merge on an index. 
# For example, your data might look like this: df1a = df1.set_index('employee') df2a = df2.set_index('employee') display('df1a', 'df2a') # You can use the index as the key for merging by specifying the ``left_index`` and/or ``right_index`` flags in ``pd.merge()``: display('df1a', 'df2a', "pd.merge(df1a, df2a, left_index=True, right_index=True)") # For convenience, ``DataFrame``s implement the ``join()`` method, which performs a merge that defaults to joining on indices: display('df1a', 'df2a', 'df1a.join(df2a)') # If you'd like to mix indices and columns, you can combine ``left_index`` with ``right_on`` or ``left_on`` with ``right_index`` to get the desired behavior: display('df1a', 'df3', "pd.merge(df1a, df3, left_index=True, right_on='name')") # All of these options also work with multiple indices and/or multiple columns; the interface for this behavior is very intuitive. # For more information on this, see the ["Merge, Join, and Concatenate" section](http://pandas.pydata.org/pandas-docs/stable/merging.html) of the Pandas documentation. # ## Specifying Set Arithmetic for Joins # In all the preceding examples we have glossed over one important consideration in performing a join: the type of set arithmetic used in the join. # This comes up when a value appears in one key column but not the other. Consider this example: df6 = pd.DataFrame({'name': ['Peter', 'Paul', 'Mary'], 'food': ['fish', 'beans', 'bread']}, columns=['name', 'food']) df7 = pd.DataFrame({'name': ['Mary', 'Joseph'], 'drink': ['wine', 'beer']}, columns=['name', 'drink']) display('df6', 'df7', 'pd.merge(df6, df7)') # Here we have merged two datasets that have only a single "name" entry in common: Mary. # By default, the result contains the *intersection* of the two sets of inputs; this is what is known as an *inner join*. 
# We can specify this explicitly using the ``how`` keyword, which defaults to ``"inner"``: pd.merge(df6, df7, how='inner') # Other options for the ``how`` keyword are ``'outer'``, ``'left'``, and ``'right'``. # An *outer join* returns a join over the union of the input columns, and fills in all missing values with NAs: display('df6', 'df7', "pd.merge(df6, df7, how='outer')") # The *left join* and *right join* return joins over the left entries and right entries, respectively. # For example: display('df6', 'df7', "pd.merge(df6, df7, how='left')") # The output rows now correspond to the entries in the left input. Using # ``how='right'`` works in a similar manner. # # All of these options can be applied straightforwardly to any of the preceding join types. # ## Overlapping Column Names: The ``suffixes`` Keyword # Finally, you may end up in a case where your two input ``DataFrame``s have conflicting column names. # Consider this example: df8 = pd.DataFrame({'name': ['Bob', 'Jake', 'Lisa', 'Sue'], 'rank': [1, 2, 3, 4]}) df9 = pd.DataFrame({'name': ['Bob', 'Jake', 'Lisa', 'Sue'], 'rank': [3, 1, 4, 2]}) display('df8', 'df9', 'pd.merge(df8, df9, on="name")') # Because the output would have two conflicting column names, the merge function automatically appends a suffix ``_x`` or ``_y`` to make the output columns unique. # If these defaults are inappropriate, it is possible to specify a custom suffix using the ``suffixes`` keyword: display('df8', 'df9', 'pd.merge(df8, df9, on="name", suffixes=["_L", "_R"])') # These suffixes work in any of the possible join patterns, and work also if there are multiple overlapping columns. # For more information on these patterns, see [Aggregation and Grouping](03.08-Aggregation-and-Grouping.ipynb) where we dive a bit deeper into relational algebra. # Also see the [Pandas "Merge, Join and Concatenate" documentation](http://pandas.pydata.org/pandas-docs/stable/merging.html) for further discussion of these topics. 
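One more ``pd.merge`` option that is handy when reasoning about the set arithmetic above is the ``indicator`` keyword, which tags each output row with the side(s) its key came from. A small sketch, redefining the ``df6``/``df7`` frames from above so it is self-contained:

```python
import pandas as pd

df6 = pd.DataFrame({'name': ['Peter', 'Paul', 'Mary'],
                    'food': ['fish', 'beans', 'bread']})
df7 = pd.DataFrame({'name': ['Mary', 'Joseph'],
                    'drink': ['wine', 'beer']})

# indicator=True adds a "_merge" column telling, for each output row,
# whether its key appeared in the left input only, the right input only,
# or in both -- useful for auditing an outer join
audited = pd.merge(df6, df7, how='outer', indicator=True)
print(audited['_merge'].value_counts())
```

Here only Mary lands in ``both``; Peter and Paul are ``left_only`` and Joseph is ``right_only``, which makes it easy to spot rows that an inner join would have silently dropped.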
# ## Example: US States Data # # Merge and join operations come up most often when combining data from different sources. # Here we will consider an example of some data about US states and their populations. # The data files can be found at http://github.com/jakevdp/data-USstates/: # + # Following are shell commands to download the data (already downloaded) # #!curl -O https://raw.githubusercontent.com/jakevdp/data-USstates/master/state-population.csv # #!curl -O https://raw.githubusercontent.com/jakevdp/data-USstates/master/state-areas.csv # #!curl -O https://raw.githubusercontent.com/jakevdp/data-USstates/master/state-abbrevs.csv # - # Let's take a look at the three datasets, using the Pandas ``read_csv()`` function: # + pop = pd.read_csv('../../data/state-population.csv') areas = pd.read_csv('../../data/state-areas.csv') abbrevs = pd.read_csv('../../data/state-abbrevs.csv') display('pop.head()', 'areas.head()', 'abbrevs.head()') # - # Given this information, say we want to compute a relatively straightforward result: rank US states and territories by their 2010 population density. # We clearly have the data here to find this result, but we'll have to combine the datasets to get it. # # We'll start with a many-to-one merge that will give us the full state name within the population ``DataFrame``. # We want to merge based on the ``state/region`` column of ``pop``, and the ``abbreviation`` column of ``abbrevs``. # We'll use ``how='outer'`` to make sure no data is thrown away due to mismatched labels. merged = pd.merge(pop, abbrevs, how='outer', left_on='state/region', right_on='abbreviation') merged = merged.drop('abbreviation', axis=1) # drop duplicate info merged.head() # Let's double-check whether there were any mismatches here, which we can do by looking for rows with nulls: merged.isnull().any() # Some of the ``population`` info is null; let's figure out which these are! 
merged[merged['population'].isnull()].head() # It appears that all the null population values are from Puerto Rico prior to the year 2000; this is likely due to this data not being available from the original source. # # More importantly, we also see that some of the new ``state`` entries are null, which means that there was no corresponding entry in the ``abbrevs`` key! # Let's figure out which regions lack this match: merged.loc[merged['state'].isnull(), 'state/region'].unique() # We can quickly infer the issue: our population data includes entries for Puerto Rico (PR) and the United States as a whole (USA), while these entries do not appear in the state abbreviation key. # We can fix these quickly by filling in appropriate entries: merged.loc[merged['state/region'] == 'PR', 'state'] = 'Puerto Rico' merged.loc[merged['state/region'] == 'USA', 'state'] = 'United States' merged.isnull().any() # No more nulls in the ``state`` column: we're all set! # # Now we can merge the result with the area data using a similar procedure. # Examining our results, we will want to join on the ``state`` column in both: final = pd.merge(merged, areas, on='state', how='left') final.head() # Again, let's check for nulls to see if there were any mismatches: final.isnull().any() # There are nulls in the ``area`` column; we can take a look to see which regions were ignored here: final['state'][final['area (sq. mi)'].isnull()].unique() # We see that our ``areas`` ``DataFrame`` does not contain the area of the United States as a whole. # We could insert the appropriate value (using the sum of all state areas, for instance), but in this case we'll just drop the null values because the population density of the entire United States is not relevant to our current discussion: final.dropna(inplace=True) final.head() # Now we have all the data we need. To answer the question of interest, let's first select the portion of the data corresponding with the year 2010, and the total population. 
# We'll use the ``query()`` function to do this quickly (this requires the ``numexpr`` package to be installed; see [High-Performance Pandas: ``eval()`` and ``query()``](03.12-Performance-Eval-and-Query.ipynb)): data2010 = final.query("year == 2010 & ages == 'total'") data2010.head() # Now let's compute the population density and display it in order. # We'll start by re-indexing our data on the state, and then compute the result: data2010.set_index('state', inplace=True) density = data2010['population'] / data2010['area (sq. mi)'] density.sort_values(ascending=False, inplace=True) density.head() # The result is a ranking of US states plus Washington, DC, and Puerto Rico in order of their 2010 population density, in residents per square mile. # We can see that by far the densest region in this dataset is Washington, DC (i.e., the District of Columbia); among states, the densest is New Jersey. # # We can also check the end of the list: density.tail() # We see that the least dense state, by far, is Alaska, averaging slightly over one resident per square mile. # # This type of messy data merging is a common task when trying to answer questions using real-world data sources. # I hope that this example has given you an idea of the ways you can combine tools we've covered in order to gain insight from your data!
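A closing aside on catching merge bugs like the ones debugged above: ``pd.merge`` also accepts a ``validate`` argument that asserts the expected cardinality of the join and raises a ``MergeError`` instead of silently duplicating rows. The miniature frames below are made up for illustration, not the real state data:

```python
import pandas as pd

# tiny stand-ins for the pop/abbrevs frames used in the example
pop = pd.DataFrame({'state/region': ['CA', 'CA', 'TX'],
                    'population': [100, 200, 300]})
abbrevs = pd.DataFrame({'abbreviation': ['CA', 'TX'],
                        'state': ['California', 'Texas']})

# validate='many_to_one' asserts the right-hand keys are unique;
# a duplicated abbreviation would raise a MergeError rather than
# silently multiplying population rows
merged = pd.merge(pop, abbrevs, left_on='state/region',
                  right_on='abbreviation', validate='many_to_one')
print(merged['state'].tolist())  # → ['California', 'California', 'Texas']
```

In the state-population example, passing ``validate='many_to_one'`` to the first merge would document (and enforce) the many-to-one assumption stated in the text.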
week4/day2/theory/pandas/7.Merge-and-Join.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from nltk.sentiment.vader import SentimentIntensityAnalyzer df = pd.read_pickle('../Data/Processed Data/Headphone_CleanText.pkl') df = df[~df['clean_text'].isna()] df = df.drop(df[(df['review_length'] > 150) & (df['rating_class'] == 'good') ].index) df.shape analyser = SentimentIntensityAnalyzer() def sentiment(x): return analyser.polarity_scores(x) df['sentiment'] = df['clean_text'].apply(sentiment) df2 = pd.concat([df.drop(['sentiment'], axis=1), df['sentiment'].apply(pd.Series)], axis=1) df2.head(2) # --- # Check the sentiments for a sample good and bad review: #Check the sentiments for a sample review df.loc[1140]['overall'] df.loc[1140]['sentiment'] df.loc[1140]['review_summary'] df.loc[147330]['overall'] df.loc[147330]['sentiment'] df.loc[147330]['review_summary'] # --- # Plot avg sentiments per rating: # + ratings = (df2.groupby('overall',as_index=False)['compound'] .mean() .sort_values('overall', ascending = False)) ratings.rename({'overall':'Rating', 'compound':'Sentiment' }, inplace=True, axis=1) ratings.index=[5,4,3,2,1] ratings # + plt.figure(figsize = (8,4)) mycolors =['peachpuff', 'orange', 'darkorange', 'chocolate', 'saddlebrown'] sns.barplot(x='Rating', y='Sentiment', data = ratings, palette= mycolors, order=[1,2,3,4,5]) sns.despine() plt.title('Average Sentiment Per Rating') plt.xlabel('Rating', fontsize = 13) plt.ylabel('Sentiment (compound)', fontsize = 13); plt.savefig('../Images/Average_Sentiment_Per_Rating.jpg') # - # --- # Plot sentiments over time: # + df2=df2[df2['year']>2003] plt.figure(figsize=(10,3)) sns.lineplot(x='year', y= 'compound', data = df2[df2['overall']==5], ci=0, label='rating=5', color='red') sns.lineplot(x='year', y= 
'compound', data = df2[df2['overall']==4], ci=0, label='rating=4', color='indianred') sns.lineplot(x='year', y= 'compound', data = df2[df2['overall']==3], ci=0, label='rating=3', color='saddlebrown') sns.lineplot(x='year', y= 'compound', data = df2[df2['overall']==2], ci=0, label='rating=2', color='orangered') sns.lineplot(x='year', y= 'compound', data = df2[df2['overall']==1], ci=0, label='rating=1', color='orange') plt.legend(bbox_to_anchor=(1.05, 1)) sns.despine() plt.title("Ratings' Average Sentiment Per Year") plt.xlabel('Year', fontsize = 13) plt.ylabel("Sentiment (compound)", fontsize = 13) plt.savefig("../Images/Ratings'_Average_Sentiment_Per_Year.jpg", bbox_inches='tight'); #mycolors =['saddlebrown', 'sienna','chocolate', 'darkorange', 'orange'] # + plt.figure(figsize=(10,3)) sns.lineplot(x='year', y= 'compound', data = df2, ci=0, color='saddlebrown') sns.despine() plt.title("Average Sentiment Per Year") plt.xlabel('Year', fontsize = 13) plt.ylabel("Sentiment (compound)", fontsize = 13) plt.savefig("../Images/Average_Sentiment_Per_Year.jpg", bbox_inches='tight');
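The ``compound`` score plotted above is VADER's normalization of the summed lexicon valences of a text into [-1, 1]. A minimal sketch of that normalization, assuming the ``x / sqrt(x^2 + alpha)`` form with ``alpha=15`` used by the reference vaderSentiment implementation (the function name here is our own):

```python
import math

def vader_like_normalize(score_sum, alpha=15):
    # VADER maps the raw sum of word valences into [-1, 1] with
    # x / sqrt(x^2 + alpha); alpha=15 in the reference implementation
    return score_sum / math.sqrt(score_sum ** 2 + alpha)

for s in (1.5, 4.0, -4.0):
    print(round(vader_like_normalize(s), 4))
```

This explains why ``compound`` saturates: large positive or negative valence sums are squashed toward ±1, which is why the per-rating averages above stay well inside the [-1, 1] range.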
Notebooks/3_Sentiment Analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <p style="z-index: 101;background: #fde073;text-align: center;line-height: 2.5;overflow: hidden;font-size:22px;">Please <a href="https://www.pycm.ir/doc/#Cite" target="_blank">cite us</a> if you use the software</p> # # Example-6 (Unbalanced data) # ## Binary classification for unbalanced data from pycm import ConfusionMatrix # ### Case1 (Both classes have a good result) # $$Case_1=\begin{bmatrix}26900 & 40 \\25 & 500 \end{bmatrix}$$ case1 = ConfusionMatrix(matrix={"Class1": {"Class1": 26900, "Class2":40}, "Class2": {"Class1": 25, "Class2": 500}}) case1.print_normalized_matrix() print('ACC:',case1.ACC) print('MCC:',case1.MCC) print('CEN:',case1.CEN) print('MCEN:',case1.MCEN) print('DP:',case1.DP) print('Kappa:',case1.Kappa) print('RCI:',case1.RCI) print('SOA1:',case1.SOA1) # ### Case2 (The first class has a good result) # $$Case_2=\begin{bmatrix}26900 & 40 \\500 & 25 \end{bmatrix}$$ case2 = ConfusionMatrix(matrix={"Class1": {"Class1": 26900, "Class2":40}, "Class2": {"Class1": 500, "Class2": 25}}) case2.print_normalized_matrix() print('ACC:',case2.ACC) print('MCC:',case2.MCC) print('CEN:',case2.CEN) print('MCEN:',case2.MCEN) print('DP:',case2.DP) print('Kappa:',case2.Kappa) print('RCI:',case2.RCI) print('SOA1:',case2.SOA1) # ### Case3 (The second class has a good result) # $$Case_3=\begin{bmatrix}40 & 26900 \\25 & 500 \end{bmatrix}$$ case3 = ConfusionMatrix(matrix={"Class1": {"Class1": 40, "Class2":26900}, "Class2": {"Class1": 25, "Class2": 500}}) case3.print_normalized_matrix() print('ACC:',case3.ACC) print('MCC:',case3.MCC) print('CEN:',case3.CEN) print('MCEN:',case3.MCEN) print('DP:',case3.DP) print('Kappa:',case3.Kappa) print('RCI:',case3.RCI) print('SOA1:',case3.SOA1) # ## Multi-class classification for unbalanced data # ### Case1 (All classes 
have a good result and are unbalanced) # $$Case_1=\begin{bmatrix}4 & 0 &0&1 \\0 & 4&1&0\\0&1&4&0\\0&0&1&40000 \end{bmatrix}$$ case1 = ConfusionMatrix(matrix={"Class1": {"Class1": 4, "Class2":0, "Class3":0, "Class4":1}, "Class2": {"Class1": 0, "Class2":4, "Class3":1, "Class4":0}, "Class3": {"Class1": 0, "Class2":1, "Class3":4, "Class4":0}, "Class4": {"Class1": 0, "Class2":0, "Class3":1, "Class4":40000}}) case1.print_normalized_matrix() print('ACC:',case1.ACC) print('MCC:',case1.MCC) print('CEN:',case1.CEN) print('MCEN:',case1.MCEN) print('DP:',case1.DP) print('Kappa:',case1.Kappa) print('RCI:',case1.RCI) print('SOA1:',case1.SOA1) # ### Case2 (All classes have the same result and are balanced) # $$Case_2=\begin{bmatrix}1 & 1 &1&1 \\1 & 1&1&1\\1&1&1&1\\1&1&1&1 \end{bmatrix}$$ case2 = ConfusionMatrix(matrix={"Class1": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class2": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class3": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class4": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}}) case2.print_normalized_matrix() print('ACC:',case2.ACC) print('MCC:',case2.MCC) print('CEN:',case2.CEN) print('MCEN:',case2.MCEN) print('DP:',case2.DP) print('Kappa:',case2.Kappa) print('RCI:',case2.RCI) print('SOA1:',case2.SOA1) # ### Case3 (A class has a bad result and is a bit unbalanced) # $$Case_3=\begin{bmatrix}1 & 1 &1&1 \\1 & 1&1&1\\1&1&1&1\\10&1&1&1 \end{bmatrix}$$ case3 = ConfusionMatrix(matrix={"Class1": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class2": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class3": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class4": {"Class1": 10, "Class2":1, "Class3":1, "Class4":1}}) case3.print_normalized_matrix() print('ACC:',case3.ACC) print('MCC:',case3.MCC) print('CEN:',case3.CEN) print('MCEN:',case3.MCEN) print('DP:',case3.DP) print('Kappa:',case3.Kappa) print('RCI:',case3.RCI) print('SOA1:',case3.SOA1) # ### Case4 (A class is very unbalanced and gets a bad result) # 
$$Case_4=\begin{bmatrix}1 & 1 &1&1 \\1 & 1&1&1\\1&1&1&1\\10000&1&1&1 \end{bmatrix}$$ case4 = ConfusionMatrix(matrix={"Class1": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class2": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class3": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class4": {"Class1": 10000, "Class2":1, "Class3":1, "Class4":1}}) case4.print_normalized_matrix() print('ACC:',case4.ACC) print('MCC:',case4.MCC) print('CEN:',case4.CEN) print('MCEN:',case4.MCEN) print('DP:',case4.DP) print('Kappa:',case4.Kappa) print('RCI:',case4.RCI) print('SOA1:',case4.SOA1) # ### Case5 (A class is very unbalanced and gets a bad result) # $$Case_5=\begin{bmatrix}1 & 1 &1&1 \\1 & 1&1&1\\1&1&1&1\\10&10&10&10 \end{bmatrix}$$ case5 = ConfusionMatrix(matrix={"Class1": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class2": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class3": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class4": {"Class1": 10, "Class2":10, "Class3":10, "Class4":10}}) case5.print_normalized_matrix() print('ACC:',case5.ACC) print('MCC:',case5.MCC) print('CEN:',case5.CEN) print('MCEN:',case5.MCEN) print('DP:',case5.DP) print('Kappa:',case5.Kappa) print('RCI:',case5.RCI) print('SOA1:',case5.SOA1) # ### Case6 (A class is very unbalanced and gets a bad result) # $$Case_6=\begin{bmatrix}1 & 1 &1&1 \\1 & 1&1&1\\1&1&1&1\\10000&10000&10000&10000 \end{bmatrix}$$ case6 = ConfusionMatrix(matrix={"Class1": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class2": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class3": {"Class1": 1, "Class2":1, "Class3":1, "Class4":1}, "Class4": {"Class1": 10000, "Class2":10000, "Class3":10000, "Class4":10000}}) case6.print_normalized_matrix() print('ACC:',case6.ACC) print('MCC:',case6.MCC) print('CEN:',case6.CEN) print('MCEN:',case6.MCEN) print('DP:',case6.DP) print('Kappa:',case6.Kappa) print('RCI:',case6.RCI) print('SOA1:',case6.SOA1)
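The binary Case1 numbers make it easy to verify by hand why MCC stays informative where accuracy saturates under class imbalance. A quick pure-Python check of the standard formulas (not using pycm):

```python
import math

# binary Case1 counts read off the confusion matrix above
tp, fn = 26900, 40   # Class1 row: true positives, false negatives
fp, tn = 25, 500     # Class2 row: false positives, true negatives

# accuracy is dominated by the majority class
acc = (tp + tn) / (tp + fn + fp + tn)

# Matthews correlation coefficient weighs all four cells
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(round(acc, 4), round(mcc, 4))
```

Accuracy lands near 0.998 while MCC is close to 0.94: both classes are genuinely well classified here, and MCC would drop sharply in Case2/Case3, where accuracy alone barely moves.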
Document/Example6.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.7.2 # language: julia # name: julia-1.7 # --- # # Options 1 # # This notebook shows payoff functions and pricing bounds for options. # ## Load Packages and Extra Functions # + using Printf include("jlFiles/printmat.jl") # + using Plots, LaTeXStrings #pyplot(size=(600,400)) gr(size=(480,320)) default(fmt = :png) # - # # Payoffs and Profits of Options # Let $K$ be the strike price and $S_m$ the price of the underlying at expiration ($m$ years ahead) of the option contract. # # The call and put profits (at expiration) are # # $\text{call profit}_{m}\ = \max\left( 0,S_{m}-K\right) - e^{my}C$ # # $\text{put profit}_{m}=\max\left( 0,K-S_{m}\right) - e^{my}P $, # # where $C$ and $P$ are the call and put option prices (paid today). In both cases the first term ($\max()$) represents the payoff at expiration, while the second term ($e^{my}C$ or $e^{my}P$) subtracts the capitalised value of the option price (premium) paid at inception of the contract. # # The profit of a straddle is the sum of those of a call and a put. # # ### A Remark on the Code # - `Sₘ .> K` creates a vector of false/true # - `ifelse.(Sₘ.>K,"yes","no")` creates a vector of "yes" or "no" depending on whether `Sₘ.>K` or not. 
# + Sₘ = [4.5,5.5] #possible values of underlying at expiration K = 5 #strike price C = 0.4 #call price (just a number that I made up) P = 0.4 #put price (y,m) = (0,1) #zero interest to keep it simple, 1 year to expiration CallPayoff = max.(0,Sₘ.-K) #payoff at expiration CallProfit = CallPayoff .- exp(m*y)*C #profit at expiration ExerciseIt = ifelse.(Sₘ.>K,"yes","no") #"yes"/"no" for exercise printblue("Payoff and profit of a call option with strike price $K and price (premium) of $C:\n") printmat([Sₘ ExerciseIt CallPayoff CallProfit],colNames=["Sₘ","Exercise","Payoff","Profit"]) # + Sₘ = 0:0.1:10 #more possible outcomes, for plotting CallProfit = max.(0,Sₘ.-K) .- exp(m*y)*C PutProfit = max.(0,K.-Sₘ) .- exp(m*y)*P p1 = plot( Sₘ,[CallProfit PutProfit], linecolor = [:red :green], linestyle = [:dash :dot], linewidth = 2, label = ["call" "put"], ylim = (-1,5), legend = :top, title = "Profits of call and put options, strike = $K", xlabel = "Asset price at expiration" ) display(p1) # + StraddleProfit = CallProfit + PutProfit #a straddle: 1 call and 1 put p1 = plot( Sₘ,StraddleProfit, linecolor = :blue, linewidth = 2, label = "call+put", ylim = (-1,5), legend = :top, title = "Profit of a straddle, strike = $K", xlabel = "Asset price at expiration" ) display(p1) # - # # Put-Call Parity for European Options # # A no-arbitrage condition says that # $ # C-P=e^{-my}(F-K) # $ # must hold, where $F$ is the forward price. This is the *put-call parity*. # # Also, when the underlying asset has no dividends (until expiration of the option), then the forward-spot parity says that $F=e^{my}S$. 
# + (S,K,m,y) = (42,38,0.5,0.05) #current price of underlying etc C = 5.5 #assume this is the price of a call option(K) F = exp(m*y)*S #forward-spot parity P = C - exp(-m*y)*(F-K) printblue("Put-Call parity when (C,S,y,m)=($C,$S,$y,$m):\n") printmat([C,exp(-m*y),F-K,P],rowNames=["C","exp(-m*y)","F-K","P"]) # - # # Pricing Bounds # The pricing bounds for (American and European) call options are # # $\begin{align} # C & \leq e^{-my}F\\ # C & \geq\max[0,e^{-my}(F-K)] # \end{align}$ # + (S,K,m,y) = (42,38,0.5,0.05) #current price of underlying etc F = exp(m*y)*S C_Upper = exp(-m*y)*F C_Lower = max.(0,exp(-m*y)*(F-K)) #pricing bounds for a (single) strike price printlnPs("Pricing bounds for European call option with strike $K: ",C_Lower,C_Upper) # + K_range = 30:0.5:50 #pricing bounds for many strike prices n = length(K_range) C_Upper = exp(-m*y)*F C_Lower = max.(0,exp(-m*y)*(F.-K_range)); # - p1 = plot( K_range,[C_Upper*ones(n) C_Lower], linecolor = [:green :blue], linewidth = 2, linestyle = [:dash :dot], label = [L"C \leq \exp(-my)F " L"C \geq \max[0,\exp(-my)(F-K)]"], ylim = (-1,S+1), legend = :right, title = "Price bounds on European call options", xlabel = "Strike price" ) display(p1) # The pricing bounds for (European) put options are # # $\begin{align} # P_{E} & \leq e^{-my}K\\ # e^{-my}(K-F) & \leq P_{E} # \end{align}$ # + P_Upper = exp(-m*y)*K_range P_Lower = max.(0,exp(-m*y)*(K_range.-F)) p1 = plot( K_range,[P_Upper P_Lower], linecolor = [:green :blue], linewidth = 2, linestyle = [:dash :dot], label = [L" P \leq \exp(-my)K " L" P \geq \max[0,\exp(-my)(K-F)] "], ylim = (-1,50), legend = :right, title = "Price bounds on European put options", xlabel = "Strike price" ) display(p1) # -
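The parity computation above can be cross-checked outside Julia as well; below is a minimal Python sketch of the same two formulas (forward-spot parity, then put-call parity solved for $P$), using the notebook's numbers:

```python
import math

S, K, m, y = 42, 38, 0.5, 0.05   # spot, strike, time to expiration (years), rate
C = 5.5                          # assumed call price, as in the notebook

F = math.exp(m * y) * S                # forward-spot parity (no dividends)
P = C - math.exp(-m * y) * (F - K)     # put-call parity solved for P

print(round(F, 4), round(P, 4))
```

The put price comes out just above 0.56, matching the Julia cell; since $F = e^{my}S$ here, the parity can equivalently be written $C - P = S - e^{-my}K$.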
Ch19_Options1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # 420-A52-SF - Supervised Learning Algorithms - Winter 2020 - Technical Specialization in Artificial Intelligence<br/> # MIT License - Copyright (c) 2020 <NAME> # <br/> # ![Practical Session - Bagging, Random Forests and Boosting](static/16-tp-banner.png) # <br/> # **Objective:** this practical session aims at putting the following techniques into practice: # * Bagging # * Random forests # * Gradient Boosting # * AdaBoost # * XGBoost # * LightGBM # # The dataset used will be **Heart** # %reload_ext autoreload # %autoreload 2 # %matplotlib inline # ## Exercise 1 - Loading and preparing the data import pandas as pd HRT = pd.read_csv('../../data/Heart.csv', index_col=[0]) HRT = HRT.dropna() HRT_onehot = pd.get_dummies(HRT, columns=['ChestPain','Thal'], prefix = ['cp','thal'], drop_first=True) X = HRT_onehot.drop(['AHD'], axis=1) y = (HRT['AHD'] == "Yes").astype(int) from sklearn.model_selection import train_test_split X_train, X_val, y_train, y_val = train_test_split(X, y, train_size=0.7, random_state=2020) # ## Exercise 2 - Classification trees (with pruning) from sklearn.tree import DecisionTreeClassifier # #### Model definition and training clf_tree = DecisionTreeClassifier(random_state=2020, ccp_alpha=0.05) clf_tree.fit(X_train, y_train) # #### Predictions (train and val) y_train_pred_proba_tree = clf_tree.predict_proba(X_train)[:,1] y_val_pred_proba_tree = clf_tree.predict_proba(X_val)[:,1] # #### Area under the curve from sklearn.metrics import roc_auc_score print(f'AUC Train = {roc_auc_score(y_train, y_train_pred_proba_tree)}') print(f'AUC Val = {roc_auc_score(y_val, y_val_pred_proba_tree)}') # ## Exercise 3 - Bagging from sklearn.ensemble import BaggingClassifier # [class 
sklearn.ensemble.BaggingClassifier(base_estimator=None, n_estimators=10, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=None, random_state=None, verbose=0)](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingClassifier.html) # #### Model definition and training base_tree = DecisionTreeClassifier(random_state=2020, ccp_alpha=0.01) clf_bag = BaggingClassifier(base_estimator=base_tree, n_estimators=1000, random_state=2020) clf_bag.fit(X_train, y_train) # #### Predictions (train and val) y_train_pred_proba_bag = clf_bag.predict_proba(X_train)[:,1] y_val_pred_proba_bag = clf_bag.predict_proba(X_val)[:,1] # #### Area under the curve print(f'AUC Train = {roc_auc_score(y_train, y_train_pred_proba_bag)}') print(f'AUC Val = {roc_auc_score(y_val, y_val_pred_proba_bag)}') # ## Exercise 4 - Random forests from sklearn.ensemble import RandomForestClassifier # [class sklearn.ensemble.RandomForestClassifier(n_estimators=100, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None)](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) # #### Model definition and training clf_rf = RandomForestClassifier(random_state=2020, ccp_alpha=0.001) clf_rf.fit(X_train, y_train) # #### Predictions (train and val) y_train_pred_proba_rf = clf_rf.predict_proba(X_train)[:,1] y_val_pred_proba_rf = clf_rf.predict_proba(X_val)[:,1] # #### Area under the curve print(f'AUC Train = {roc_auc_score(y_train, y_train_pred_proba_rf)}') print(f'AUC Val = {roc_auc_score(y_val, y_val_pred_proba_rf)}') # ## Exercise 5 - AdaBoost from sklearn.ensemble import AdaBoostClassifier # 
[class sklearn.ensemble.AdaBoostClassifier(base_estimator=None, n_estimators=50, learning_rate=1.0, algorithm='SAMME.R', random_state=None)](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html) # #### Model definition and training clf_ada = AdaBoostClassifier(n_estimators=100, random_state=2020) clf_ada.fit(X_train, y_train) # #### Predictions (train and val) y_train_pred_proba_ada = clf_ada.predict_proba(X_train)[:,1] y_val_pred_proba_ada = clf_ada.predict_proba(X_val)[:,1] # #### Area under the curve print(f'AUC Train = {roc_auc_score(y_train, y_train_pred_proba_ada)}') print(f'AUC Val = {roc_auc_score(y_val, y_val_pred_proba_ada)}') # ## Exercise 6 - Gradient Boosting from sklearn.ensemble import GradientBoostingClassifier # [class sklearn.ensemble.GradientBoostingClassifier(loss='deviance', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, min_impurity_split=None, init=None, random_state=None, max_features=None, verbose=0, max_leaf_nodes=None, warm_start=False, presort='deprecated', validation_fraction=0.1, n_iter_no_change=None, tol=0.0001, ccp_alpha=0.0)](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html) # #### Model definition and training clf_gb = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=3, random_state=2020) clf_gb.fit(X_train, y_train) # #### Predictions (train and val) y_train_pred_proba_gb = clf_gb.predict_proba(X_train)[:,1] y_val_pred_proba_gb = clf_gb.predict_proba(X_val)[:,1] # #### Area under the curve print(f'AUC Train = {roc_auc_score(y_train, y_train_pred_proba_gb)}') print(f'AUC Val = {roc_auc_score(y_val, y_val_pred_proba_gb)}') # ## Exercise 7 - XGBoost # #!pip install xgboost import xgboost as xgb # [XGBoost Scikit-learn 
API](https://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.sklearn) # #### Model definition and training clf_xgb = xgb.XGBClassifier(objective='binary:logistic', colsample_bytree=0.3, learning_rate=1.1, max_depth=5, reg_alpha=0.1, n_estimators=100) clf_xgb.fit(X_train, y_train) # #### Predictions (train and val) y_train_pred_proba_xgb = clf_xgb.predict_proba(X_train)[:,1] y_val_pred_proba_xgb = clf_xgb.predict_proba(X_val)[:,1] # #### Area under the curve print(f'AUC Train = {roc_auc_score(y_train, y_train_pred_proba_xgb)}') print(f'AUC Val = {roc_auc_score(y_val, y_val_pred_proba_xgb)}') # ## Exercise 8 - LightGBM # !pip install lightgbm import lightgbm as lgb # #### Model definition and training clf_lgbm = lgb.LGBMClassifier(num_leaves=6, learning_rate=0.1, n_estimators=200) clf_lgbm.fit(X_train, y_train) # #### Predictions (train and val) y_train_pred_proba_lgbm = clf_lgbm.predict_proba(X_train)[:,1] y_val_pred_proba_lgbm = clf_lgbm.predict_proba(X_val)[:,1] # #### Area under the curve print(f'AUC Train = {roc_auc_score(y_train, y_train_pred_proba_lgbm)}') print(f'AUC Val = {roc_auc_score(y_val, y_val_pred_proba_lgbm)}') # ## Exercise 9 - Model evaluation import matplotlib.pyplot as plt from sklearn.metrics import roc_curve # + fpr_tree, tpr_tree, thresholds = roc_curve(y_val, y_val_pred_proba_tree) fpr_bag, tpr_bag, thresholds = roc_curve(y_val, y_val_pred_proba_bag) fpr_rf, tpr_rf, thresholds = roc_curve(y_val, y_val_pred_proba_rf) fpr_ada, tpr_ada, thresholds = roc_curve(y_val, y_val_pred_proba_ada) fpr_gb, tpr_gb, thresholds = roc_curve(y_val, y_val_pred_proba_gb) fpr_xgb, tpr_xgb, thresholds = roc_curve(y_val, y_val_pred_proba_xgb) fpr_lgbm, tpr_lgbm, thresholds = roc_curve(y_val, y_val_pred_proba_lgbm) fig = plt.figure(1, figsize=(12, 12)) plt.plot([0, 1], [0, 1], 'k--') plt.plot(fpr_tree, tpr_tree, label='Decision Tree') plt.plot(fpr_bag, tpr_bag, label='Bagging') plt.plot(fpr_rf, tpr_rf, label='Random Forest') 
plt.plot(fpr_ada, tpr_ada, label='AdaBoost') plt.plot(fpr_gb, tpr_gb, label='Gradient Boosting') plt.plot(fpr_xgb, tpr_xgb, label='XGBoost') plt.plot(fpr_lgbm, tpr_lgbm, label='LightGBM') plt.xlabel('False positive rate') plt.ylabel('True positive rate') plt.title('ROC curve') plt.legend() # - # ## Exercise 10 - Feature importance imp = clf_xgb.feature_importances_ fig = plt.figure(2, figsize=(12, 12)) plt.barh(X.columns, imp)
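All the comparisons above lean on ``roc_auc_score``. A small NumPy sketch of what that number means — AUC is the probability that a randomly chosen positive is scored above a randomly chosen negative, with ties counted as one half — useful for sanity-checking results on tiny inputs without scikit-learn:

```python
import numpy as np

def auc_by_pairs(y_true, scores):
    # AUC as the Mann-Whitney statistic: fraction of (positive, negative)
    # pairs where the positive outscores the negative, ties counting 1/2
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(auc_by_pairs([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # prints 0.75
```

This pairwise definition agrees with ``roc_auc_score`` (which computes the same quantity via the trapezoidal area under the ROC curve), but the O(n²) pairing here is only practical for small checks.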
nbs/16-bagging-forets-aleatoires-boosting/16-tp-solution.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## USDE Project: A simple example of RNN # Suppose there is a pub in which only one type of food is served each day. There are three possible types of food: apple pie, burger and chicken. <br> # We want to build an algorithm capable of predicting the next day's course based on the time series of courses from the previous days. To do so, we will use a recurrent neural network. <br> # In particular, we invent a deterministic law that establishes the dish for the next day and then we check if our network is able to learn this rule. <br> # Suppose the rule is the following: <br> # 1) the courses alternate following this specific order: apple pie, burger and chicken <br> # 2) The weather can be sunny or rainy. If it is sunny, the course for the next day remains the same, otherwise it changes. <br> # The situation is summarized in the picture. 
# ![image.png](attachment:image.png) # First of all, I generate the time series # + # I import the required libraries import pandas as pd import random n = 800 # number of days in the time series seed = 10 # for reproducibility, I set the seed random.seed(seed) time_series = [['apple pie', 'sunny']] # I set the weather and the course for the first day for i in range(n): # I generate the data for the other days # if the previous day was rainy, I change the course following the order if time_series[i][1]=='rainy': if time_series[i][0]=='apple pie': course = 'burger' elif time_series[i][0]=='burger': course = 'chicken' else: course = 'apple pie' # otherwise, the course remains the same else: course = time_series[i][0] # I randomly set the weather for the day rand = random.randint(0,1) if rand==0: weather = 'sunny' else: weather = 'rainy' # I add the day to the time series time_series.append([course, weather]) # finally, I create a pandas dataframe df = pd.DataFrame(time_series, columns=['Course', 'Weather']) df # - # I have the time series in a Pandas dataframe in which the index represents the day. <br> # Now the goal is to train a neural network to forecast the course of the next day. This is a classification problem for time series, in which the possible labels are the three courses. <br> # In order to train the neural network, it is first necessary to pre-process the data with a proper encoding of the features. <br> # “Weather” is a binary variable, so it is sufficient to encode the labels “sunny” and “rainy” with “0” and “1”. <br> # Concerning “Course”, I can proceed with label encoding or one-hot encoding. Label encoding consists of mapping the labels to integer numbers and it is the right choice when dealing with ordered variables. In our example, we know that there is an order relation in the labels. However, this order is part of the “ground truth” that we want our algorithm to learn and we must consider it unknown a priori. 
<br> # For this reason, we will proceed with one-hot encoding, which is suitable for more general situations # # + df_encoded = pd.get_dummies(df.Course, prefix='Course') # one-hot encoding for variable "Course" df_encoded['Weather'] = df['Weather'].astype('category').cat.codes # I add to "df_encoded" the label encoding of the # binary variable "Weather" df_encoded # - # Now I proceed with the actual training of the neural network. <br> # First of all, I have to decide the number of time steps to be taken into account by the algorithm for its predictions. We know that the course of the day is deterministically determined by the course and the weather of the previous day, but, again, this is the ground truth and it is not known a priori. <br> # A reasonable choice for the problem would be taking into account all the days of the previous week. <br> # I put the pre-processed data in a NumPy array and the associated target labels in another one. Then, on these arrays, I will run a Keras function to create the various temporal windows to train the network. # + import numpy as np past_days = 7 # I set the number of days the algorithm will use for the predictions input_data = np.array(df_encoded)[:-past_days] # I remove from the dataset the last n=past_days days # (for them I don't have the labels to predict) targets = np.array(df_encoded.drop('Weather', axis=1))[past_days:] # I create the targets array # removing the first n=past_days days # (for them I don't have all the information required # to make predictions) # - # Now I can run the Keras pre-processing function on the arrays. <br> # I also split the data into a training and a validation set. I will use the validation set to decide when to stop the iterations of gradient descent and to see how the algorithm behaves with data that weren't used for training (since the rule is simple and deterministic, we would like to obtain 100% accuracy on both sets).
<br> # As is common practice, I will reserve 70% of the data for the training set and 30% for the validation set # + import tensorflow as tf valid_limit = round(n*0.70) dataset = tf.keras.preprocessing.timeseries_dataset_from_array(input_data[:valid_limit], targets[:valid_limit], sequence_length=past_days) dataset_valid = tf.keras.preprocessing.timeseries_dataset_from_array(input_data[valid_limit:], targets[valid_limit:], sequence_length=past_days) # - # Now I create the network. Being a simple task, I will use only two layers: <br> # - A SimpleRNN layer with 32 neurons, which is the simplest structure endowed with memory cells <br> # - A final layer with softmax activation function, since our task is a classification problem <br> # # This is an extremely simple network with only 1283 parameters, but it should be enough for our problem. # # + #Recurrent Neural Network num_classes = 3 num_features = 4 model = tf.keras.models.Sequential([ # Shape [batch, time, features] tf.keras.Input(shape=(past_days, num_features)), tf.keras.layers.SimpleRNN(32), tf.keras.layers.Dense(num_classes, activation='softmax') ]) # - # I set the final settings for the training: the learning rate and the metrics I will use to monitor the learning and stop the gradient descent before overfitting. # + # learning rate lr = 1e-3 optimizer = tf.keras.optimizers.Adam(learning_rate=lr) # Validation metrics metrics = ['accuracy'] # Compile Model model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=metrics) model.summary() # - # Finally, I proceed with the training # + early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5) model.fit(dataset, validation_data=dataset_valid, epochs=500, callbacks=[early_stopping]) # - # The algorithm reaches the maximum number of epochs (500).
<br> # At the end of the training, the network reaches 100% accuracy on both the training and validation sets: this Recurrent Neural Network was able to learn the rule even with a very simple structure and few parameters. #
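As an aside, the windowing that `timeseries_dataset_from_array` performed can be mimicked with plain NumPy; a sketch with toy shapes of my own (not a byte-for-byte reproduction of the Keras batching, which also shuffles and groups windows into batches):

```python
import numpy as np

# Toy encoded series: 10 days x 4 features (3 one-hot courses + weather)
rng = np.random.default_rng(0)
series = rng.integers(0, 2, size=(10, 4))
past_days = 7

# Each sample is a window of `past_days` consecutive rows; the target is
# the row that immediately follows the window.
windows = np.stack([series[i:i + past_days]
                    for i in range(len(series) - past_days)])
targets = series[past_days:]

print(windows.shape)  # (3, 7, 4): 3 windows of 7 days with 4 features each
print(targets.shape)  # (3, 4)
```

In the notebook the targets additionally drop the `Weather` column, so each target row has only the three course indicators.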
rnn_food-at-the-pub/rnn_food-at-the-pub.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] toc=true # <h1>Table of Contents<span class="tocSkip"></span></h1> # <div class="toc"><ul class="toc-item"><li><span><a href="#Objective-Function" data-toc-modified-id="Objective-Function-1">Objective Function</a></span></li><li><span><a href="#Quadratic-Unconstrained-Binary-Optimization-(QUBO)" data-toc-modified-id="Quadratic-Unconstrained-Binary-Optimization-(QUBO)-2">Quadratic Unconstrained Binary Optimization (QUBO)</a></span><ul class="toc-item"><li><span><a href="#Task" data-toc-modified-id="Task-2.1">Task</a></span></li></ul></li></ul></div> # - # In the previous section, we went through some combinatorial optimization problems and their significance. In order to find optimal solutions to those problems, the first step is to formulate them in terms of an objective function. # # # Objective Function # # An objective function gives a mathematical description of a problem. We should minimize this objective function to find an optimal solution to our problem. In most cases, the lower the value of the objective function, the better the solution we have obtained. # # An objective function can be formulated in two ways: # # - Quadratic Unconstrained Binary Optimization (QUBO) # - Ising Model # # Conversion between these two formulations is possible. We will go through that later on in the material. Now let's take a closer look at the QUBO formulation.
# # # Quadratic Unconstrained Binary Optimization (QUBO) # # A QUBO problem is defined using a square matrix $Q$ and a vector $x$ where, # # - $Q$ is assumed to be either symmetric or in upper-triangular form # - $x$ is a vector of binary decision variables $0$ and $1$ which correspond to the boolean values `False` and `True` respectively # # Our aim is to minimize the objective function defined as # # $$f(x) = \sum\limits_{i} {Q_{i, i} x_i} + \sum\limits_{i < j} {Q_{i, j} x_i x_j}$$ # # where, # - The diagonal terms $Q_{i, i}$ are the linear coefficients # - The non-zero off-diagonal terms $Q_{i, j}$ are the quadratic coefficients # # The above objective function can be expressed in matrix form as # # $$\min\limits_{x \in \{0, 1\}^n} {x^T Q x}$$ # ## Task # # Find out what assignment of $x_1$ and $x_2$ minimizes the objective function # # $$f(x_1, x_2) = 5x_1 + 7x_1 x_2 - 3x_2$$ # # <div class="alert alert-block alert-info">You can adjust the sliders to set different values for $x_1$ and $x_2$. The lower the value of the objective function, the better the solution.</div> # + # Run this cell to display the sliders from ipywidgets import interact def obj_fn(x1, x2): value = 5*x1 + 7*x1*x2 - 3*x2 return f"The value of the objective function is {value}." interact(obj_fn, x1=(0, 1), x2=(0, 1)); # - # [Click Here for the Solution](QUBO_Mathematical_Definition_Solution.ipynb)
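For tiny instances, the minimum can also be found by brute-force enumeration; a sketch (the upper-triangular matrix `Q` below encodes the task's coefficients — running it will print the optimal assignment, so try the sliders first):

```python
from itertools import product

# Upper-triangular Q for f(x1, x2) = 5*x1 + 7*x1*x2 - 3*x2:
# diagonal entries are the linear terms, the off-diagonal entry the quadratic one.
Q = [[5, 7],
     [0, -3]]

def qubo_value(Q, x):
    """Evaluate sum_i Q[i][i]*x_i + sum_{i<j} Q[i][j]*x_i*x_j for binary x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))

# Enumerate all 2^n binary assignments and keep the best one.
best = min(product([0, 1], repeat=2), key=lambda x: qubo_value(Q, x))
print(best, qubo_value(Q, best))
```

This exhaustive search is only feasible for small $n$; the point of the material that follows is precisely how to minimize such objectives when enumeration is out of reach.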
notebooks/.ipynb_checkpoints/QUBO_Mathematical_Definition-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd vc = "Token Engineering" name = "ygg_anderson" # !ls data/alex_hours_csv/ dfmay13 = pd.read_csv('data/alex_hours_csv/13_5_2021.csv') dfmay14 = pd.read_csv('data/alex_hours_csv/14_5_2021.csv') dfmay15 = pd.read_csv('data/alex_hours_csv/15_5_2021.csv') dfmay16 = pd.read_csv('data/alex_hours_csv/16_5_2021.csv') dfmay17 = pd.read_csv('data/alex_hours_csv/17_5_2021.csv') dfmay18 = pd.read_csv('data/alex_hours_csv/18_5_2021.csv') dfmay19 = pd.read_csv('data/alex_hours_csv/19_5_2021.csv') dfmay20 = pd.read_csv('data/alex_hours_csv/20_5_2021.csv') dfmay21 = pd.read_csv('data/alex_hours_csv/21_5_2021.csv') dfmay22 = pd.read_csv('data/alex_hours_csv/22_5_2021.csv') dfmay23 = pd.read_csv('data/alex_hours_csv/23_5_2021.csv') # Filtering out logs where the voice channel isn't Token Engineering dfmay13 = dfmay13[dfmay13['Voice Channel'] == vc] dfmay14 = dfmay14[dfmay14['Voice Channel'] == vc] dfmay15 = dfmay15[dfmay15['Voice Channel'] == vc] dfmay16 = dfmay16[dfmay16['Voice Channel'] == vc] dfmay17 = dfmay17[dfmay17['Voice Channel'] == vc] dfmay18 = dfmay18[dfmay18['Voice Channel'] == vc] dfmay19 = dfmay19[dfmay19['Voice Channel'] == vc] dfmay20 = dfmay20[dfmay20['Voice Channel'] == vc] dfmay21 = dfmay21[dfmay21['Voice Channel'] == vc] dfmay22 = dfmay22[dfmay22['Voice Channel'] == vc] dfmay23 = dfmay23[dfmay23['Voice Channel'] == vc] # Filtering out logs where the user isn't the one specified dfmay13 = dfmay13[dfmay13['Username'] == name] dfmay14 = dfmay14[dfmay14['Username'] == name] dfmay15 = dfmay15[dfmay15['Username'] == name] dfmay16 = dfmay16[dfmay16['Username'] == name] dfmay17 = dfmay17[dfmay17['Username'] == name] dfmay18 = dfmay18[dfmay18['Username'] == name] dfmay19 = dfmay19[dfmay19['Username'] == name] dfmay20 = 
dfmay20[dfmay20['Username'] == name] dfmay21 = dfmay21[dfmay21['Username'] == name] dfmay22 = dfmay22[dfmay22['Username'] == name] dfmay23 = dfmay23[dfmay23['Username'] == name] # Since the timesteps are 1.5 mins apart, we multiply the number of rows by 1.5 # to get the total minutes spent that day # Divide by 60 to get hours day13 = (len(dfmay13.index) * 1.5) / 60 day14 = (len(dfmay14.index) * 1.5) / 60 day15 = (len(dfmay15.index) * 1.5) / 60 day16 = (len(dfmay16.index) * 1.5) / 60 day17 = (len(dfmay17.index) * 1.5) / 60 day18 = (len(dfmay18.index) * 1.5) / 60 day19 = (len(dfmay19.index) * 1.5) / 60 day20 = (len(dfmay20.index) * 1.5) / 60 day21 = (len(dfmay21.index) * 1.5) / 60 day22 = (len(dfmay22.index) * 1.5) / 60 day23 = (len(dfmay23.index) * 1.5) / 60 sum_hours = day13 + day14 + day15 + day16 + day17 + day18 + day19 + day20 + day21 + day22 + day23 print("May 13: " + str(day13)) print("May 14: " + str(day14)) print("May 15: " + str(day15)) print("May 16: " + str(day16)) print("May 17: " + str(day17)) print("May 18: " + str(day18)) print("May 19: " + str(day19)) print("May 20: " + str(day20)) print("May 21: " + str(day21)) print("May 22: " + str(day22)) print("May 23: " + str(day23)) print("Total: " + str(sum_hours)) sum_hours dfmay17.to_csv('may17.csv') # !pwd
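The eleven near-identical blocks above can be collapsed into a loop; a sketch of the same computation (the file-naming pattern `{day}_5_2021.csv` is taken from the notebook, but the tiny in-memory frames here are stand-ins for the real CSVs):

```python
import pandas as pd

# Stand-in for pd.read_csv('data/alex_hours_csv/{day}_5_2021.csv'):
# a tiny frame in the same shape as the Discord voice logs.
def load_day(day):
    return pd.DataFrame({
        'Voice Channel': ['Token Engineering', 'Other', 'Token Engineering'],
        'Username': ['ygg_anderson', 'ygg_anderson', 'someone_else'],
    })

vc, name = 'Token Engineering', 'ygg_anderson'
days = range(13, 24)  # May 13 through May 23

hours = {}
for day in days:
    df = load_day(day)
    # keep only rows for the wanted channel AND the wanted user
    df = df[(df['Voice Channel'] == vc) & (df['Username'] == name)]
    hours[day] = (len(df.index) * 1.5) / 60  # 1.5-minute timesteps -> hours

sum_hours = sum(hours.values())
for day in days:
    print(f"May {day}: {hours[day]}")
print(f"Total: {sum_hours}")
```

In the real notebook, `load_day` would be the `pd.read_csv` call, and the dict keeps every per-day frame around (e.g. `dfs[17].to_csv('may17.csv')`).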
pandas_practice/alexandra_hours.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import gspread from oauth2client.service_account import ServiceAccountCredentials import pandas as pd import requests from bs4 import BeautifulSoup import json # pull data from google sheets def sheetsync(datasheet,tab): scope = ['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive'] credentials = ServiceAccountCredentials.from_json_keyfile_name( 'qd-cnov-database-2027ad01e8de.json', scope) gc = gspread.authorize(credentials) spreadsheetName = datasheet sheetName = tab # <--- please set the sheet name here. spreadsheet = gc.open(spreadsheetName) sheet = spreadsheet.worksheet(sheetName) data = sheet.get_all_values() headers = data.pop(0) df = pd.DataFrame(data, columns=headers) return df sheetsync('dataset','qd') # ### Crawl bulletin from the website # + def start_requests(url): print(url) r = requests.get(url) return r.content # first-level links: collect the bulletin page URLs from an index page def get_page(text): soup = BeautifulSoup(text, 'html.parser') links = soup.find_all('a', class_ = 'nrf') pages = [] for link in links: url = 'http://wsjsw.qingdao.gov.cn/'+link.get('href') pages.append(url) return pages # second-level pages: extract the date and paragraph text def parse_page(text): soup = BeautifulSoup(text, 'html.parser') mydict = {} time = soup.find('div', class_ = 'fbbt-wenz').text.strip() if time > '2020-01-21': mydict['time'] = soup.find('div', class_ = 'fbbt-wenz').text mydict['parag'] = soup.find('div', class_ = 'neirong-wz').text else:return mydict return mydict def write_json(result): s = json.dumps(result, indent = 4, ensure_ascii=False) with open('data/bulletin.json', 'w', encoding = 'utf-8') as f: f.write(s) def crawl(url): text = start_requests(url) pageurls = get_page(text) for pageurl in pageurls: page = start_requests(pageurl) mydict = parse_page(page) result_list.append(mydict) def main(): url = 'http://wsjsw.qingdao.gov.cn/n28356065/n32563060/n32563061/index.html' crawl(url) for i in range(2, 6): url = 'http://wsjsw.qingdao.gov.cn/n28356065/n32563060/n32563061/index_{}.html'.format(i) crawl(url) # text = start_requests(url) # pageurls = get_page(text) # for pageurl in pageurls: # # page = start_requests(pageurl) # mydict = parse_page(page) # result_list.append(mydict) write_json(result_list) if __name__ == '__main__': result_list = [] main() # - # ### Get diagnosed data df = pd.read_json('data/bulletin.json',encoding="utf8") df.head() # + def save_data(stype): """ stype: 'cured','diag' """ df = pd.read_json('data/bulletin.json',encoding="utf8") test = df.parag[0].split(':')[1].split('。')[0] if test: test = test.split('、') cured = {} for i,t in enumerate(test): if ('均已治愈出院') in t: cured[str(t[:3])]= t.split('例')[0][-1] test[i] = t.split('例')[0] elif('已治愈出院)') in t: cured[str(t[:3])]= t.split('例')[0][-1] test[i] = t.split('例')[0] elif('已治愈出院') in t: cured[str(t[:3])]= t.split('已治愈出院')[1][0] test[i] = t.split('例')[0] test = ''.join(test) #print(cured) # list of diagnosed data import re temp = re.findall(r'\d+', test) res = list(map(int, temp)) if stype == 'cured': save_cured(cured) elif stype == 'diag': save_diag(res) def save_cured(cured): with open('data/cured.json', 'w', encoding = 'utf-8') as f: json.dump(cured, f) def save_diag(res): district = ['市南区','市北区','李沧区','崂山区','城阳区','黄岛区','即墨市','胶州市','平度市','莱西市'] #diag = {district[i]: res[i] for i in range(len(district))} diag = [list(z) for z in zip(district,res)] #print(diag) with open('data/diagnosed.json', 'w', encoding = 'utf-8') as f: json.dump(diag, f) save_data('diag') save_data('cured') # - # ## get clinic location sheetsync('dataset','clinic') f = open("data/access.txt", "r") access = f.read() data = sheetsync('dataset','clinic') data.head() import json def get_lnglat(location): headers = {'User-Agent' : 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'} for i,loc in enumerate(data['机构地址']): pa = { 'address': loc, #'callback':'showLocation', 'output': 'json', 'ak': access } r = requests.get("http://api.map.baidu.com/geocoding/v3/", params=pa, headers= headers) rejs = json.loads(r.text) data['lng'][i]=rejs['result']['location']['lng'] data['lat'][i]=rejs['result']['location']['lat'] return data data # ## Designated hospitals: names, addresses, coordinates # + data = sheetsync('dataset','clinic') lng = data['lng'].astype(float).values.tolist() lat = data['lat'].astype(float).values.tolist() coord = [list(z) for z in zip(lng,lat)] result = {} for i,r in data.iterrows(): result[str(data['机构名称'][i])]= coord[i] # save to local with open('data/clinic-coord.json', 'w') as fp: json.dump(result, fp) # - # ## Save the datasheets to local data = sheetsync('dataset','qd') data.to_csv('data/dataset.csv') data.head()
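The bulletin-parsing step above splits on the `、` and `例` delimiters by hand; the same extraction can be done in one pass with a regex. A sketch (the `sample` sentence is invented for illustration, and the pattern assumes three-character district names like the ones in the notebook's `district` list):

```python
import re

# Hypothetical bulletin fragment in the "<district><count>例" style the
# notebook parses, with segments separated by 、
sample = '市南区3例、市北区12例、崂山区1例'

# Three CJK characters for the district name, then the case count before 例
counts = {m.group(1): int(m.group(2))
          for m in re.finditer(r'([\u4e00-\u9fff]{3})(\d+)例', sample)}
print(counts)  # {'市南区': 3, '市北区': 12, '崂山区': 1}
```

A single pattern like this is easier to audit than chained `split` calls, though real bulletins with trailing notes such as `(均已治愈出院)` would still need the extra handling the notebook does.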
ds/preparation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="0eiDYY8hPhh6" # # AlexNet # + [markdown] id="7FWcN0TTPhij" # ## Introduction # AlexNet is a network architecture that achieved strong results in a past competition. # # Like other architectures (VGG, GoogLeNet, ResidualNet), it is very well known. # + [markdown] id="H8cnW0FjPv0L" # ## Structure # # [CRPNCRPNCRCRCRPFRDFRDFS] # # - C: Convolution # - R: ReLU # - P: Pooling # - N: Normalization # - D: Dropout # - F: Fully Connected # - S: Softmax # + [markdown] id="3oiI-fxaQe9V" # ## Calculate Size # # Image(227, 227, 3) # # Conv size formula: # ( (Image size - Filter size + 2*pad) / stride ) + 1 # # Pooling formula: # ( (Image size - Filter size + 2*pad) / stride ) + 1 # + [markdown] id="F2Mz17g4P9d_" # # MyNet
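The size formula above can be checked with a short helper. The AlexNet numbers below (227×227 input, 11×11 filters, stride 4, no padding, then 3×3 max pooling with stride 2) are the commonly cited first-layer values, stated here as an assumption rather than taken from this notebook:

```python
def out_size(image, filt, pad, stride):
    """((image - filter + 2*pad) / stride) + 1, for conv and pooling alike."""
    return (image - filt + 2 * pad) // stride + 1

# First conv layer of AlexNet: 227x227 input, 11x11 filter, pad 0, stride 4
print(out_size(227, 11, 0, 4))  # 55

# Followed by 3x3 max pooling with stride 2
print(out_size(55, 3, 0, 2))    # 27
```

The same helper applies to every C and P stage in the [CRPN...] sequence, since conv and pooling share the formula.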
ipynb/model.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import re import webbrowser from bs4 import BeautifulSoup as bs from urllib.request import urlopen # + # creating key-value pairs for genres and word_lists word_lists = {'angry_popMusic' : ['a', 'Angry', 'angry' , 'ANGRY' , 'MAD', 'ANGRY!', 'mad', 'pop', 'Pop', 'p', 'POP'], 'angry_countryMusic' : ['a', 'Angry', 'angry' , 'ANGRY' , 'MAD', 'ANGRY!', 'mad', 'country', 'Country', 'c', 'C', 'COUNTRY'], 'angry_edmMusic' : ['a', 'Angry', 'angry' , 'ANGRY' , 'MAD', 'ANGRY!', 'mad', 'edm', 'EDM', 'Edm', 'E' , 'e'], 'angry_rockMusic' : ['a', 'Angry', 'angry' , 'ANGRY' , 'MAD', 'ANGRY!', 'mad', 'Rock', 'rock', 'ROCK', 'r', 'R'], 'happy_popMusic' : ['h', 'happy', 'HAPPY' , 'Happy' , 'H', 'Excited', 'Cheerful', 'pop', 'Pop', 'p', 'POP'], 'happy_countryMusic' : ['h', 'happy', 'HAPPY' , 'Happy' , 'H', 'Excited', 'Cheerful', 'country', 'Country', 'c', 'C', 'COUNTRY'], 'happy_edmMusic' : ['h', 'happy', 'HAPPY' , 'Happy' , 'H', 'Excited', 'Cheerful', 'edm', 'EDM', 'Edm', 'E' , 'e'], 'happy_rockMusic' : ['h', 'happy', 'HAPPY' , 'Happy' , 'H', 'Excited', 'Cheerful', 'Rock', 'rock', 'ROCK', 'r', 'R'], 'sad_popMusic' : ['s', 'sad', 'Sad' , 'SAD' , 'Upset', 'upset', 'pop', 'Pop', 'p', 'POP'], 'sad_countryMusic' : ['s', 'sad', 'Sad' , 'SAD' , 'Upset', 'upset', 'country', 'Country', 'c', 'C', 'COUNTRY'], 'sad_edmMusic' : ['s', 'sad', 'Sad' , 'SAD' , 'Upset', 'upset', 'edm', 'EDM', 'Edm', 'E' , 'e'], 'sad_rockMusic' : ['s', 'sad', 'Sad' , 'SAD' , 'Upset', 'upset', 'Rock', 'rock', 'ROCK', 'r', 'R'], 'romantic_popMusic' : ['r', 'romantic', 'Romantic' , 'R' , 'ROMANTIC', 'love', 'L', 'pop', 'Pop', 'p', 'POP'], 'romantic_countryMusic' : ['r', 'romantic', 'Romantic' , 'R' , 'ROMANTIC', 'love', 'L', 'country', 'Country', 'c', 'C', 'COUNTRY'], 
'romantic_edmMusic' : ['r', 'romantic', 'Romantic' , 'R' , 'ROMANTIC', 'love', 'L', 'edm', 'EDM', 'Edm', 'E' , 'e'], 'romantic_rockMusic' : ['r', 'romantic', 'Romantic' , 'R' , 'ROMANTIC', 'love', 'L', 'Rock', 'rock', 'ROCK', 'r', 'R']} # creating key-value pairs for genres and links playlist = {'angry_popMusic' : 'https://www.youtube.com/watch?v=p0BiGek2bKA&list=PLInXayGqDh8koPxaYfvYGF_U1BC_DHOC6&index=1', 'angry_countryMusic' : 'https://www.youtube.com/watch?v=WaSy8yy-mr8&list=RDQML0tO4Xq5Y0o&start_radio=1', 'angry_edmMusic' : 'https://www.youtube.com/watch?v=qMH0Xglh7GA&index=2&t=0s&list=PLInXayGqDh8m4IVtEKYBnJd-WCl9BdCXG', 'angry_rockMusic' : 'https://www.youtube.com/watch?v=xqds0B_meys&index=2&list=PLInXayGqDh8lgDOCHY2lhOzJOarft5H0V&t=0s', 'happy_popMusic' : 'https://www.youtube.com/watch?v=ZbZSe6N_BXs&index=2&list=PLInXayGqDh8l397QH20uVt4NEepCuiLEj&t=0s', 'happy_countryMusic' : 'https://www.youtube.com/watch?v=HgknAaKNaMM&t=0s&list=PLInXayGqDh8m9KpTmlUwuPvqO3zOptNT8&index=2', 'happy_edmMusic' : 'https://www.youtube.com/watch?v=MjEYCUJuh-g&t=0s&index=2&list=PLInXayGqDh8ne-J29qSOic-s97RdRVvmQ', 'happy_rockMusic' : 'https://www.youtube.com/watch?v=SAaO6XvUhd0&index=2&list=PLInXayGqDh8mLhLvdGH02EkuT5czERqz6&t=0s', 'sad_popMusic' : 'https://www.youtube.com/watch?v=ij_0p_6qTss&t=0s&index=2&list=PLInXayGqDh8nd9aY0xtfClhHH35qGyo5f', 'sad_countryMusic' : 'https://www.youtube.com/playlist?list=PLInXayGqDh8ki7V3vvz7KFnQUnTNCkEw_', 'sad_edmMusic' : 'https://www.youtube.com/playlist?list=PLInXayGqDh8n-TtrvpF9NcfQLn6wyi6lY', 'sad_rockMusic' : 'https://www.youtube.com/watch?v=4OjiOn5s8s8&list=PLInXayGqDh8mpkvc5w3Z5LapOYjsVs3ba&index=11&t=0s', 'romantic_popMusic' : 'https://www.youtube.com/watch?v=3AtDnEC4zak&index=2&list=PLInXayGqDh8kIJYRwpRiLI6VC9fcGPTrd&t=0s', 'romantic_countryMusic' : 'https://www.youtube.com/watch?v=7qaHdHpSjX8&index=16&t=0s&list=PLInXayGqDh8nPKAt_lBW7J4nGZ0OCCfyG', 'romantic_edmMusic' : 
'https://www.youtube.com/watch?v=ku16PsxpZGU&t=0s&list=PLInXayGqDh8lbZYbU11VkskM5X6ElQWmH&index=2', 'romantic_rockMusic' : 'https://www.youtube.com/watch?v=iAP9AF6DCu4&t=0s&list=PLInXayGqDh8l9jKsHEtZg1ytzyuSafW3A&index=2' } # + playlist_df = pd.DataFrame() playlist_df['moods'] = playlist.keys() playlist_df['links'] = playlist.values() playlist_df['word_lists'] = word_lists.values() playlist_df['word_lists'] = playlist_df['word_lists'].apply(lambda x: ','.join(map(str, x))) playlist_df.head() # - # display the title of the playlist's first song def link(http): url = http page = urlopen(url) soup = bs(page, 'html.parser') return soup.title.get_text() count = 0 while count < 2: mood = input("Enter how you are feeling (your mood)\n>>") music = input("Enter the type of music you want to hear, either pop, country, EDM, or rock\n>>") for x in range(0, len(playlist_df)): if re.search(r"\b%s\b" % mood, playlist_df.word_lists[x]) and re.search(r"\b%s\b" % music, playlist_df.word_lists[x]): webbrowser.open(playlist_df.links[x]) http = playlist_df.links[x] print(link(http)) # to display the name of the playlist count = 5 break else: pass count = count + 1 url = http page = urlopen(url) soup = bs(page, 'lxml') soup # + active="" # # code below also works to fetch all songs in playlist but has unwanted info at the end # h4 = soup.find_all("h4") # for h in h4: # print(h.text) # - for i, li in enumerate(soup.select('li.yt-uix-scroller-scroll-unit')): print('{}. {}'.format(i + 1, li['data-video-title'])) print('https://www.youtube.com' + li.a['href']) print('-' * 80) for i, tr in enumerate(soup.select('tr.yt-uix-tile')): print('{}. {}'.format(i + 1, tr['data-title'])) print('https://www.youtube.com' + tr.a['href']) print('-' * 80)
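The mood/genre lookup above boils down to finding the key whose comma-joined word list matches both inputs on word boundaries; a condensed sketch of that matching step (the two tiny word lists are my own, for illustration only):

```python
import re

word_lists = {
    'happy_popMusic': ['happy', 'h', 'pop', 'p'],
    'sad_rockMusic':  ['sad', 's', 'rock', 'r'],
}

def find_playlist(mood, music):
    """Return the first key whose word list matches both mood and music."""
    for key, words in word_lists.items():
        joined = ','.join(words)
        # \b...\b keeps 'p' from matching inside 'pop', mirroring the notebook
        if re.search(r"\b%s\b" % re.escape(mood), joined) and \
           re.search(r"\b%s\b" % re.escape(music), joined):
            return key
    return None

print(find_playlist('sad', 'rock'))  # prints 'sad_rockMusic'
```

`re.escape` is a small hardening the notebook omits: without it, a user typing a regex metacharacter would raise or mismatch.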
playlist_extraction_with_Beautiful_Soup_per_user_mood&music_choice (2).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import cv2 import matplotlib.pyplot as plt # + data_path = './Data' data_dir_list = os.listdir(data_path) print(data_dir_list) save_path='./PreImage' img_data_list=[] face_haar_cascade=cv2.CascadeClassifier('Cascade Classifier/haarcascade_frontalface_default.xml') for dataset in data_dir_list: img_list=os.listdir(data_path+'/'+ dataset) print ('Loaded the images of dataset-'+'{}\n'.format(dataset)) print (len(img_list)) for img in img_list: input_img=cv2.imread(data_path + '/'+ dataset + '/'+ img ) roi_gray = cv2.cvtColor(input_img, cv2.COLOR_BGR2GRAY) faces_detected=face_haar_cascade.detectMultiScale(roi_gray,scaleFactor=1.3,minNeighbors=5) for(x,y,w,h) in faces_detected: cv2.rectangle(roi_gray,(x,y),(x+w,y+h),(0,255,0),thickness=0) crop_image=roi_gray[y:y+h,x:x+w] cv2.imwrite(str(w) + str(h) + '_faces.jpg', crop_image) img_data_list.append(crop_image) plt.imshow(crop_image) plt.title(str(dataset)) plt.show() # -
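One detail worth being explicit about: `detectMultiScale` returns boxes as `(x, y, w, h)` in image coordinates, while NumPy indexes arrays as `[row, column]`, so the crop must be `img[y:y+h, x:x+w]` (mixing up `w` and `h` goes unnoticed only when faces are roughly square). A small sketch of the convention, using a synthetic array instead of an image file:

```python
import numpy as np

img = np.arange(100).reshape(10, 10)  # stand-in for a grayscale image

x, y, w, h = 2, 3, 4, 5  # a detection box: top-left (x, y), width w, height h

# Rows are selected by y (vertical extent h), columns by x (horizontal extent w)
crop = img[y:y + h, x:x + w]
print(crop.shape)  # (5, 4): h rows by w columns
```

The top-left pixel of the crop is `img[y, x]`, which is why the row slice starts at `y`, not `x`.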
preprocess.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline # %config InlineBackend.figure_format = 'retina' from matplotlib import pyplot as plt from lifelines import CoxPHFitter import numpy as np import pandas as pd from lifelines.datasets import load_rossi plt.style.use('bmh') # - # ## Assessing Cox model fit using residuals (work in progress) # # This tutorial is on some common use cases of the (many) residuals of the Cox model. We can use residuals to diagnose a model's poor fit to a dataset, and improve an existing model's fit. # + df = load_rossi() df['age_strata'] = pd.cut(df['age'], np.arange(0, 80, 5)) df = df.drop('age', axis=1) cph = CoxPHFitter() cph.fit(df, 'week', 'arrest', strata=['age_strata', 'wexp']) # - cph.print_summary() cph.plot(); # ### Martingale residuals # # Defined as: # # $$ \delta_i - \hat{\Lambda}(T_i) \\ = \delta_i - \hat{\Lambda}_0(T_i)\exp(\beta^T x_i)$$ # # where $T_i$ is the total observation time of subject $i$, $\hat{\Lambda}_0$ is the estimated cumulative baseline hazard, and $\delta_i$ denotes whether they died under observation or not (`event_observed` in _lifelines_). # # From [1]: # # > Martingale residuals take a value in $(-\infty, 1]$ for uncensored observations and $(-\infty, 0]$ for censored observations. Martingale residuals can be used to assess the true functional form of a particular covariate (Therneau et al. (1990)). It is often useful to overlay a LOESS curve over this plot as they can be noisy in plots with lots of observations. Martingale residuals can also be used to assess outliers in the data set whereby the survivor function predicts an event either too early or too late, however, it's often better to use the deviance residual for this.
# # From [2]: # # > Positive values mean that the patient died sooner than # expected (according to the model); negative values mean that # the patient lived longer than expected (or were censored). r = cph.compute_residuals(df, 'martingale') r.head() r.plot.scatter( x='week', y='martingale', c=np.where(r['arrest'], '#008fd5', '#fc4f30'), alpha=0.75 ) # ### Deviance residuals # # One problem with martingale residuals is that they are not symmetric around 0. Deviance residuals are a transform of martingale residuals that makes them symmetric. # # - Roughly symmetric around zero, with approximate standard deviation equal to 1. # - Positive values mean that the patient died sooner than expected. # - Negative values mean that the patient lived longer than expected (or were censored). # - Very large or small values are likely outliers. # r = cph.compute_residuals(df, 'deviance') r.head() r.plot.scatter( x='week', y='deviance', c=np.where(r['arrest'], '#008fd5', '#fc4f30'), alpha=0.75 ) r = r.join(df.drop(['week', 'arrest'], axis=1)) plt.scatter(r['prio'], r['deviance'], color=np.where(r['arrest'], '#008fd5', '#fc4f30')) r = cph.compute_residuals(df, 'delta_beta') r.head() r = r.join(df[['week', 'arrest']]) r.head() plt.scatter(r['week'], r['prio'], color=np.where(r['arrest'], '#008fd5', '#fc4f30')) # [1] https://stats.stackexchange.com/questions/297740/what-is-the-difference-between-the-different-residuals-in-survival-analysis-cox # # [2] http://myweb.uiowa.edu/pbreheny/7210/f15/notes/11-10.pdf
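The symmetrizing transform mentioned above is the standard deviance formula $d_i = \operatorname{sign}(m_i)\sqrt{-2\left[m_i + \delta_i \log(\delta_i - m_i)\right]}$ applied to martingale residuals $m_i$ with event indicators $\delta_i$. A sketch of that formula on its own (my own illustration, not the `lifelines` internals):

```python
import numpy as np

def deviance_from_martingale(m, delta):
    """Symmetrize martingale residuals m given event indicators delta."""
    m = np.asarray(m, dtype=float)
    delta = np.asarray(delta, dtype=float)
    # delta * log(delta - m) is taken as 0 for censored subjects (delta == 0);
    # the inner where avoids evaluating log on a possibly-zero argument
    inner = np.where(delta > 0,
                     delta * np.log(np.where(delta > 0, delta - m, 1.0)),
                     0.0)
    return np.sign(m) * np.sqrt(-2.0 * (m + inner))

m = np.array([0.5, -0.5, 0.0])      # example martingale residuals
delta = np.array([1, 0, 1])         # 1 = event observed, 0 = censored
print(deviance_from_martingale(m, delta))
```

Note how a censored subject with $m_i = -0.5$ maps to exactly $-1$, and an observed event with $m_i = 0$ stays at $0$: the transform stretches the long negative tail and compresses values near 1, which is what makes the result roughly symmetric.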
docs/jupyter_notebooks/Cox residuals.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/Sravani-Samsani/letsupgrade-python-b7/blob/master/day2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="0Qh73JhGN-xs" colab_type="text" # # List # + id="TeehHEGsOHtM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b061fa6b-375b-40e7-aa0b-05008431b093" lst=[10,20,"jyo",'a',20] print(lst) # + id="Z1v8Tr5MODRx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="48493de2-49f0-4a35-ff0a-133c40c01093" #append() lst.append("jai") print(lst) # + id="_r0m46MWOmLM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d4dd4e0d-4e7e-41e5-8a99-1e2abfdcf759" #clear() del lst[:] print(lst) # + id="9QId_4kePRgK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c0bca3d8-172d-4ee7-9c80-148b94a8cbfc" #copy() lst=[10,[20,30,8],"jyo"] a=lst print(a) # + id="-rnJIQrHPzBJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="219e87f1-c1f1-4d27-c26f-1a7ea0cd9dbc" #shallow copy() import copy a=[10,20,30] b=copy.copy(a) print(a) print(b) # + id="efR70NGMRD3k" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="1451d13c-b3a5-4fe5-9fe7-f233e7cfac20" #deep copy() import copy a=[10,20,'a',[1,9,7]] b=copy.deepcopy(a) a[3][1]=5 print(a) print(b) # + id="CQOEK5NrRzsD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="753e036f-83a9-4972-e453-aefd0b2dab29" #count() a=[9,5,8,9] count=a.count(9) print(count) # + id="kYnxtyzCq7zL" colab_type="code" 
colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d643f48e-7931-4151-e35c-8c60a73b00cb" #extend() a=["python","JAVA",23,5.78] b=["test",5,40.56] a.extend(b) print(a) # + id="Jc-YtdCB6xbg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="af34fb9b-b600-49fd-f272-6462ca97fc6a" #index() lst=[20,"srav",19,"jyo",60.78] index=lst.index("jyo") print("The index of jyo is:",index) # + id="PM5Vi0m0AHVT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9282dc95-20b2-46d4-b0cc-2d4bbf420fe1" #insert() lst=[10,[2,3,4],"srav"] lst.insert(2,"jyo") print(lst) # + id="-LcnIgPAC2P-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="20c63134-fb94-4de8-f4dd-655d9663d2a5" #POP() lst=[10,"ravi",78.90,[0,8]] return_value=lst.pop(3) print("returnedvalue:",return_value) print("updated List:",lst) # + id="Wah-1ejFEkly" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c7414e02-e95e-406c-b2c2-90a87726c641" #remove() lst=["srav",89,"jyo",56] lst.remove("srav") print("updated list:",lst) # + id="64hQbhO7HGlK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="adb7052d-8feb-4e50-914f-1c82c436c398" #reverse() lst=[10,"jyo",90.98] print("old list:",lst) a=lst[::-1] print("new list:",a) # + [markdown] id="PMgOjg9CMOtH" colab_type="text" # # dict # + id="JoK7OiL1MSVT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="8f0e0530-02c2-4ab2-efd5-03443f3dc803" my_dict = {'name': 'Jack', 'mob': 2687456} print(my_dict['name']) print(my_dict.get('mob')) print(my_dict) # + id="obFd8BpgQl5g" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="958da0f7-a547-49ba-f8c1-6a53d7b0a4e9" #clear() d={"bird":"pegion","colour":"white"} d.clear() print(d) # + id="tYy3rbCRRkG_" colab_type="code" colab={"base_uri": "https://localhost:8080/", 
"height": 51} outputId="7d9af66a-9341-4208-9bb7-6a1b6a9628bf" #copy() d={"bird":"pegion","colour":"white"} new=d.copy() print("original:",d) print("new:",new) # + id="cETQEYwtSlKk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="3f356990-8746-4d27-af0a-3e44a0c4c3ff" d={"bird":"pegion","colour":"white"} new=d d.clear() print("original:",d) print("new:",new) # + id="scdH3HrvSQ54" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="017bd49c-d4c2-4348-8846-3c78fbcadce6" d={"bird":"pegion","colour":"white"} new=d.copy() d.clear() print("original:",d) print("new:",new) # + id="5Kq94rZ0S5Tb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="21d001fd-66bd-4d8b-e0ce-13b0e1d6a5fe" #from keys() keys = {'a', 'e', 'i', 'o', 'u' } value = [1] vowels = dict.fromkeys(keys, value) print(vowels) # updating the value value.append(2) print(vowels) # + id="xwJeW5OabD6W" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="fc9a6a51-92b8-4bb0-fcd5-f056cfde82cf" #get() d={"bird":"pegion","colour":"white"} print('bird:', d.get('bird')) print('colour: ', d.get('colour')) print('breed: ', d.get('breed')) # + id="PbxNFI5Bc6Qo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="ca285dae-1209-4423-f85e-19bb30747bea" #items() sales = { 'apple': 2, 'orange': 3, 'grapes': 4 } items = sales.items() print('Original items:', items) # delete an item from dictionary del[sales['apple']] print('Updated items:', items) # + id="30DnTELsjWK8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="e9eee779-0bae-4126-f847-756caf91cd40" #keys() a = {'thing': 'chair', 'colour': "black" } print('Before dictionary is updated') keys = a.keys() print(keys) # adding an element to the dictionary a.update({'material': "wood"}) print('\nAfter dictionary is updated') print(keys) # + id="iWIQaGHHmHNV" 
colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="6cfc99d4-79b9-4b35-be31-d031efe64a39" #pop() sales = { 'apple': 2, 'orange': 3, 'grapes': 4 } element = sales.pop('apple') print('The popped element is:', element) print('The dictionary is:', sales) # + id="p-KYz29pkJS7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="a2fca2f2-1762-410c-b655-40adfb0bc966" #popitem() a = {'thing': 'chair', 'colour': "black",'material': "wood" } #material was inserted last, so it will be removed result = a.popitem() print('Return Value = ', result) print('a = ', a) # inserting a new element pair a['no.of chairs'] = 5 print(a) # + id="j1EytWugmpa2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="35e5e088-1a91-4961-dce0-f65f7a696a48" #setdefault() person = {'name': 'jyo'} # key is not in the dictionary salary = person.setdefault('salary') print('person = ',person) print('salary = ',salary) # key is not in the dictionary # default_value is provided age = person.setdefault('age', 40) print('person = ',person) print('age = ',age) # + id="vtAxnTkwnVfv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="1158ceab-66e7-43e8-c4b1-604aea83b835" #update() d = {1: "one", 2: "three"} d1 = {2: "two"} # updates the value of key 2 d.update(d1) print(d) d1 = {3: "three"} # adds element with key 3 d.update(d1) print(d) # + id="FC1iTki_nc5i" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="7bb56e01-e97a-4a4a-f1d8-6970ba7f967a" #values() sales = { 'apple': 2, 'orange': 3, 'grapes': 4 } values = sales.values() print('Original items:', values) # delete an item from the dictionary; the values view reflects the change del sales['apple'] print('Updated items:', values) # + [markdown] id="xmQF2MeIpOtk" colab_type="text" # # Set # + id="T-bar39HpYLD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34}
outputId="3118f70d-4840-424f-ac57-b9d3bca41c28" my_set = {1, 2, 3, 4, 3, 2} print(my_set) # + id="hxLybKzKp8Sn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="dc9fc00b-8346-4206-adc2-87ed431f2f37" #remove() language = {'English', 'French', 'German'} language.remove('German') print('Updated language set:', language) # + id="DzWM0jB2s3P9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="dbee1703-d9c6-4920-ba5c-19808276cc6b" #add() vowels = {'a', 'e', 'u'} # a tuple ('i', 'o') tup = ('i', 'o') # adding tuple vowels.add(tup) print('Vowels are:', vowels) # adding same tuple again vowels.add(tup) print('Vowels are:', vowels) # + id="cLc9-Muiut_H" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="ddc0083f-fd2d-4833-f5cb-74373708e5cc" #copy() numbers = {1, 2, 3, 4} new_numbers = numbers.copy() new_numbers.add(5) print('numbers: ', numbers) print('new_numbers: ', new_numbers) # + id="rreCm_XwvcfG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="24d3e416-0811-4120-d2d1-894343f5e681" #clear() numbers = {1, 2, 3, 4} new_numbers = numbers.copy() numbers.clear() print('numbers: ', numbers) print('new_numbers: ', new_numbers) # + id="1mZwLfsAwvIy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="91271a29-ffaf-41cc-bfb0-75906b8ec747" #difference() A = {'a', 'b', 'c', 'd'} B = {'c', 'f', 'g'} # Equivalent to A-B print(A.difference(B)) # Equivalent to B-A print(B.difference(A)) # + id="N3dnwyjLxJa-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="fa255074-e651-40dc-ffee-e0a74c37ab6b" #difference_update() A = {'a', 'c', 'g', 'd'} B = {'c', 'f', 'g'} result = A.difference_update(B) print('A = ', A) print('B = ', B) print('result = ', result) # + id="h9Fxi2XFx-GA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34}
outputId="2ccdd3ea-aac5-40cd-b7fa-f54aa381e475" #discard() numbers = {2, 3, 4, 5} numbers.discard(3) print('numbers = ', numbers) # + id="ZuxsbX3xyfw9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="75395ebd-1a79-4767-898d-6845efdd6a8c" #intersection() A = {2, 3, 5, 4} B = {2, 5, 100} C = {2, 3, 8, 9, 10} print(B.intersection(A)) print(B.intersection(C)) print(A.intersection(C)) print(C.intersection(A, B)) # + id="uy0bUIGjy1ib" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="20e39265-6d77-48c2-e425-65f262b3ea7f" #intersection_update() A = {1, 2, 3, 4} B = {2, 3, 4, 5, 6} C = {4, 5, 6, 9, 10} result = C.intersection_update(B, A) print('result =', result) print('C =', C) print('B =', B) print('A =', A) # + id="s_IV-gIOzIXH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="0ee7fffb-d6ff-46b4-a9eb-44a0d7cbe67f" #isdisjoint() A = {'a', 'b', 'c', 'd'} B = ['b', 'e', 'f'] C = '5de4' D ={1 : 'a', 2 : 'b'} E ={'a' : 1, 'b' : 2} print('Are A and B disjoint?', A.isdisjoint(B)) print('Are A and C disjoint?', A.isdisjoint(C)) print('Are A and D disjoint?', A.isdisjoint(D)) print('Are A and E disjoint?', A.isdisjoint(E)) # + id="0J3aqC-a0m4P" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="c6c4567e-ec82-4f54-ca13-d708d676fe17" #issubset() A = {1, 2, 3} B = {1, 2, 3, 4, 5} C = {1, 2, 4, 5} # Returns True print(A.issubset(B)) # Returns False # B is not subset of A print(B.issubset(A)) # Returns False print(A.issubset(C)) # Returns True print(C.issubset(B)) # + id="1JUqNXGl09DP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="f06a4f4a-de2e-4e99-98ee-5126f8eafe3b" #pop() A ={'a', 'b', 'c', 'd'} print('Return Value is', A.pop()) print('A = ', A) # + id="jtFLuQ3Z1pbF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} 
outputId="3cb22d91-1d6e-4b4a-c007-39f568ce163d" #symmetric_difference() A = {'a', 'b', 'c', 'd'} B = {'c', 'd', 'e' } C = set() # note: {} creates an empty dict, not an empty set print(A.symmetric_difference(B)) print(B.symmetric_difference(A)) print(A.symmetric_difference(C)) print(B.symmetric_difference(C)) # + id="0RlnqcmQ13t4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="81960381-87af-4897-9e99-2fe1b3b50352" #symmetric_difference_update() A = {'a', 'c', 'd'} B = {'c', 'd', 'e' } result = A.symmetric_difference_update(B) print('A =', A) print('B =', B) print('result =', result) # + id="42_xJz_Z2Dwx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="015ad046-5491-4578-d69d-f22fe66a4a2b" #union() A = {'a', 'c', 'd'} B = {'c', 'd', 2 } C = {1, 2, 3} print('A U B =', A.union(B)) print('B U C =', B.union(C)) print('A U B U C =', A.union(B, C)) print('A.union() =', A.union()) # + id="y-yJ2mge2ZJ7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="5cdca9a0-2ae0-415b-a752-0b3e9a8ab845" A = {'a', 'c', 'd'} B = {'c', 'd', 2 } C = {1, 2, 3} print('A U B =', A | B) print('B U C =', B | C) print('A U B U C =', A | B | C) # + id="cu0tSbhd2Gh_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="8f72f6c0-3dbd-4694-ee14-2b9d674aa9bc" #update() A = {'a', 'b'} B = {1, 2, 3} result = A.update(B) print('A =', A) print('result =', result) # + [markdown] id="WWnuDsj-3K-V" colab_type="text" # # Tuples # + id="etZL3EDU3QMo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="15df2013-7b78-4a57-e0a6-db26c683b18a" t=("srav",90.35,"jyo") print(t) # + id="__zV52eT4HM4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b6b56ecc-8cdb-4013-df14-1c7c33fad049" #nested indexing t=("srav",90.35,"jyo",[5,7]) print(t[3][1]) # + id="fYFMcrZB4-gs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34}
outputId="c9e10aa5-f69b-4155-cab6-2884c12f6f5e" #negative indexing t=("srav",90.35,"jyo",[5,7]) print(t[-1]) # + id="AdAgdmAV5N7d" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="ce51f3e6-de62-4e51-af17-628e3e5a45b2" #count() vowels = ('a', 'e', 'i', 'o', 'i', 'u') count = vowels.count('i') # print count print('The count of i is:', count) # count element 'p' count = vowels.count('p') # print count print('The count of p is:', count) # + [markdown] id="crJZrLuf6ijj" colab_type="text" # # Boolean # + id="Y8qstdFJ6mHz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="6c217e8c-fb08-4995-da00-86e0251e55c5" print(10 > 9) print(10 == 9) print(10 < 9) # + id="zWjTA9JM7Bgq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="f294446f-6309-4445-b7d0-eeebd62565d8" print(bool("Hello")) print(bool(15))
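The two `bool()` calls above both print `True`; as a complement (not part of the original notebook), the sketch below lists the standard falsy values — empty containers, numeric zero, the empty string, and `None` — which all convert to `False`, while everything else converts to `True`.

```python
# Sketch of Python's truthiness rules: falsy values are the empty
# containers, numeric zero, the empty string, and None.
falsy = ["", 0, 0.0, [], {}, set(), (), None]
truthy = ["Hello", 15, [0], {"k": 1}, (None,)]

results_falsy = [bool(v) for v in falsy]    # every entry is False
results_truthy = [bool(v) for v in truthy]  # every entry is True

print(results_falsy)
print(results_truthy)
```

This is why `if my_list:` is the idiomatic emptiness check in Python.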
day2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: python36 # --- # + # %reload_ext autoreload # %autoreload 2 import numpy as np import pandas as pd import matplotlib from dsutil.plotting import add_grid,add_value_labels pd.set_option('display.max_columns',1000) import os import matplotlib.pyplot as plt # %matplotlib inline # - print('pandas: {}, numpy: {}, matplotlib: {}'.format( pd.__version__, np.__version__, matplotlib.__version__)) df = pd.DataFrame({ 'user_id':[1,2,1,3,3,], 'content_id':[1,1,2,2,2], 'tag':['cool','nice','clever','clever','not-bad'] }) df df.groupby("content_id")['tag'].apply(lambda tags: ','.join(tags)).to_frame().reset_index() df.groupby("content_id")["user_id"].nunique().to_frame().reset_index().rename(columns={"user_id":'unique_users'}) # ### sort groupby groups df = pd.DataFrame({ 'value':[20.45,22.89,32.12,111.22,33.22,100.00,99.99], 'product':['table','chair','chair','mobile phone','table','mobile phone','table'] }) df df.groupby('product')['value'].sum().to_frame().reset_index() df.groupby('product')['value'].sum().to_frame().reset_index().sort_values(by='value') type(df.groupby('product')['value']) # ## plot group size plt.clf() df.groupby('product').size().plot(kind='bar') plt.xticks(rotation=0) plt.show() # ## plot sum by group plt.clf() df.groupby('product').sum().plot(kind='bar') plt.xticks(rotation=0) plt.show() # ## plot group average with error bars df = pd.DataFrame({ 'product':['table','table','table','mobile phone','mobile phone','mobile phone','chair','chair','chair'], 'purchase_price':[28.45, 25.89,32.12,99.99,120.00,170.00,12.22,28.22,5.00] }) df[['product','purchase_price']] df.groupby('product').agg([np.mean,np.std]) # + v=df.groupby('product').agg([np.mean,np.std]) v.columns = [col[-1].strip() for col in v.columns.values] for index,row in v.iterrows(): name = 
row.name mean = row['mean'] stddev = row['std'] v['mean'] = v['mean'].apply(lambda v: '{:.2f}'.format(v)) v['std'] = v['std'].apply(lambda v: '{:.2f}'.format(v)) v.reset_index() # + plt.clf() ax = plt.gca() # plot the means df.groupby('product').mean().plot(kind='bar',color='lightblue',ax=ax) # generate a dataframe with means and standard deviations grouped_df=df.groupby('product').agg([np.mean,np.std]) # flatten column names grouped_df.columns = [col[-1].strip() for col in grouped_df.columns.values] # iterrows is usually very slow but since this is a grouped # dataframe, there won't be many rows for i,(index,row) in enumerate(grouped_df.iterrows()): name = row.name mean = row['mean'] stddev = row['std'] # plot the vertical line ax.vlines(x=i,ymin=mean-stddev,ymax=mean+stddev) plt.xticks(rotation=0) add_grid() plt.ylabel('Average purchase price') plt.xlabel(None) plt.gca().legend_.remove() plt.show() # - # ### flattening df = pd.DataFrame({ 'value':[20.45,22.89,32.12,111.22,33.22,100.00,99.99], 'product':['table','chair','chair','mobile phone','table','mobile phone','table'] }) df grouped_df = df.groupby('product').agg({'value':['min','max','mean']}) grouped_df grouped_df.columns = ['_'.join(col).strip() for col in grouped_df.columns.values] grouped_df.reset_index() # ### iterate over groups df for key,group_df in df.groupby('product'): print("the group for product '{}' has {} rows".format(key,len(group_df))) # ## group by and change aggregation column name df df.groupby('product')['value'].sum().to_frame().reset_index() df.groupby('product')['value'].sum().reset_index(name='value_sum') # ## get group by key df # + # grouped_df is a DataFrameGroupBy containing each individual group as a dataframe grouped_df = df.groupby('product') # you can get a dataframe containing the values for a single group # using .get_group('group_key') grouped_df.get_group('chair') # - # ## group into list df.groupby('product')['value'].apply(lambda group_series:
sorted(group_series.tolist())).reset_index(name='values') # ## custom aggregation function # + df = pd.DataFrame({ 'value':[20,22,32,111,33,100,99], 'product':['table','chair','chair','mobile phone','table','mobile phone','table'] }) df # + def count_even_numbers(series): return len([elem for elem in series if elem % 2 == 0 ]) df.groupby('product')['value'].apply(count_even_numbers).reset_index(name='num_even_numbers') # - # ## stratified sampling # + df = pd.DataFrame({ 'price':[20,22,32,111,33,100,99], 'product':['table','chair','chair','table','table','chair','table'] }) df.sort_values(by='product') # - df.groupby("product").apply( lambda group_df: group_df.sample(2) ).reset_index(drop=True)
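The flattening tricks above (joining MultiIndex column tuples) can often be avoided entirely. As a complement to the post (assuming pandas >= 0.25), named aggregation lets you name the output columns directly in `agg`:

```python
import pandas as pd

df = pd.DataFrame({
    "price": [20, 22, 32, 111, 33, 100, 99],
    "product": ["table", "chair", "chair", "table", "table", "chair", "table"],
})

# Named aggregation: each keyword becomes a flat output column name,
# so no MultiIndex flattening step is needed afterwards.
summary = df.groupby("product").agg(
    price_min=("price", "min"),
    price_max=("price", "max"),
    price_mean=("price", "mean"),
).reset_index()

print(summary)
```

The resulting columns are simply `product`, `price_min`, `price_max`, `price_mean`.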
python3/notebooks/pandas-groupby-post/main.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="BFmLoRtQYFwy" colab_type="text" # | Name | Description | Date # | :- |-------------: | :-: # |<font color=red>__<NAME>__</font>| __Multi-variate LASSO regression with CV__. | __On 11th of August 2019__ # + [markdown] id="p37bE1xFYE2q" colab_type="text" # # Multi-variate Regression Metamodel with DOE based on random sampling # * The input variable space should be constructed using random sampling, not a classical factorial DOE # * A linear fit is often inadequate, but higher-order polynomial fits often lead to overfitting i.e. learning spurious, flawed relationships between input and output # * The R-square fit can often be a misleading measure in the case of high-dimensional regression # * A metamodel can be constructed by selectively discovering the features (or their combinations) which matter and shrinking the other high-order terms towards zero # # ** [LASSO](https://en.wikipedia.org/wiki/Lasso_(statistics)) is an effective regularization technique for this purpose** # # #### LASSO: Least Absolute Shrinkage and Selection Operator # $$ {\displaystyle \min _{\beta _{0},\beta }\left\{{\frac {1}{N}}\sum _{i=1}^{N}(y_{i}-\beta _{0}-x_{i}^{T}\beta )^{2}\right\}{\text{ subject to }}\sum _{j=1}^{p}|\beta _{j}|\leq t.} $$ # + [markdown] id="n_wzFNRRYE2r" colab_type="text" # ### Import libraries # + id="WJDlN0r5YE2r" colab_type="code" colab={} import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline # + [markdown] id="Os6CNmN1YE2u" colab_type="text" # ### Global variables # + id="BSz_IItUYE2u" colab_type="code" colab={} N_points = 20 # Number of sample points # start with small < 40 points and see how the regularized model makes a difference.
# Then increase the number and see the difference noise_mult = 50 # Multiplier for the noise term noise_mean = 10 # Mean for the Gaussian noise adder noise_sd = 10 # Std. Dev. for the Gaussian noise adder # + [markdown] id="YCsUyXWCYE2w" colab_type="text" # ### Generate feature vectors based on random sampling # + id="gfxRpDa0YE2x" colab_type="code" colab={} X=np.array(10*np.random.randn(N_points,5)) # + id="QdY1Uh3bYE2z" colab_type="code" colab={} df=pd.DataFrame(X,columns=['Feature'+str(l) for l in range(1,6)]) # + id="-N2gRJ6UYE21" colab_type="code" colab={} outputId="4bbf2c8a-280a-4e4e-e067-41cd3a88ac5a" df.head() # + [markdown] id="CjulX-MoYE24" colab_type="text" # ### Plot the random distributions of input features # + id="tu_PkWJTYE24" colab_type="code" colab={} outputId="8dedee2a-d69e-47e8-bb94-f29e94ede573" for i in df.columns: df.hist(i,bins=5,xlabelsize=15,ylabelsize=15,figsize=(8,6)) # + [markdown] id="lHQ0Mxa2YE26" colab_type="text" # ### Generate the output variable by analytic function + Gaussian noise (our goal will be to *'learn'* this function) # + [markdown] id="4EGoeTfdYE27" colab_type="text" # #### Let's construct the ground truth or originating function as follows: # # $ y=f(x_1,x_2,x_3,x_4,x_5)= 5x_1^2+13x_2+0.1x_1x_3^2+2x_4x_5+0.1x_5^3+0.8x_1x_4x_5+\psi(x)\ :\ \psi(x) = {\displaystyle f(x\;|\;\mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}\;e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}}$ # + id="xxhMmHXbYE27" colab_type="code" colab={} df['y']=5*df['Feature1']**2+13*df['Feature2']+0.1*df['Feature3']**2*df['Feature1'] \ +2*df['Feature4']*df['Feature5']+0.1*df['Feature5']**3+0.8*df['Feature1']*df['Feature4']*df['Feature5'] \ +noise_mult*np.random.normal(loc=noise_mean,scale=noise_sd,size=N_points) # size=N_points draws one noise value per sample, not a single shared scalar # + id="DJt-VV9mYE29" colab_type="code" colab={} outputId="45c9651a-0b1c-4886-8a91-e971fedce79f" df.head() # + [markdown] id="5HTprL2tYE2_" colab_type="text" # ### Plot single-variable scatterplots # ** It is clear that no clear pattern can be guessed with these
single-variable plots ** # + id="nUQD3L-3YE3A" colab_type="code" colab={} outputId="62079d0d-9717-43df-aba7-5e5c73586323" for i in df.columns: df.plot.scatter(i,'y', edgecolors=(0,0,0),s=50,c='g',grid=True) # + [markdown] id="sUczrdsWYE3C" colab_type="text" # ### Standard linear regression # + id="0_J0Lb8JYE3C" colab_type="code" colab={} from sklearn.linear_model import LinearRegression # + id="cYG4bnXDYE3E" colab_type="code" colab={} linear_model = LinearRegression(normalize=True) # + id="06tHly6xYE3G" colab_type="code" colab={} X_linear=df.drop('y',axis=1) y_linear=df['y'] # + id="ifkoIFlaYE3J" colab_type="code" colab={} outputId="9b583554-af36-4dfa-a3a8-9cb11042b405" linear_model.fit(X_linear,y_linear) # + id="v1XU7Z_aYE3L" colab_type="code" colab={} y_pred_linear = linear_model.predict(X_linear) # + [markdown] id="Q6rnEHFwYE3N" colab_type="text" # ### R-square of simple linear fit is very bad, coefficients have no meaning i.e. we did not 'learn' the function # + id="8NJUHi-IYE3O" colab_type="code" colab={} RMSE_linear = np.sqrt(np.mean(np.square(y_pred_linear-y_linear))) # mean, not sum, for a true root-mean-square error # + id="f3D8E4IMYE3Q" colab_type="code" colab={} outputId="27980f4f-0fae-443c-cd0e-e446fdbf7c1a" print("Root-mean-square error of linear model:",RMSE_linear) # + id="CgU_JxqCYE3T" colab_type="code" colab={} outputId="7314a830-9aaf-44d9-a66f-2401994a866a" coeff_linear = pd.DataFrame(linear_model.coef_,index=df.drop('y',axis=1).columns, columns=['Linear model coefficients']) coeff_linear # + id="ogiTlxlTYE3W" colab_type="code" colab={} outputId="f6ad8029-3654-422c-9f76-97b55ff7ee46" print ("R2 value of linear model:",linear_model.score(X_linear,y_linear)) # + id="RxD8zkb8YE3Z" colab_type="code" colab={} outputId="333be002-d97c-469e-f6b7-035f57cbd897" plt.figure(figsize=(12,8)) plt.xlabel("Predicted value with linear fit",fontsize=20) plt.ylabel("Actual y-values",fontsize=20) plt.grid(1) plt.scatter(y_pred_linear,y_linear,edgecolors=(0,0,0),lw=2,s=80) plt.plot(y_pred_linear,y_pred_linear, 'k--',
lw=2) # + [markdown] id="BSgaMY_4YE3c" colab_type="text" # ### Create polynomial features # + id="41c4SgwPYE3c" colab_type="code" colab={} from sklearn.preprocessing import PolynomialFeatures # + id="l7gH7LiGYE3d" colab_type="code" colab={} poly1 = PolynomialFeatures(3,include_bias=False) # + id="YqUlLA3yYE3i" colab_type="code" colab={} outputId="efae85bc-deab-488d-8501-857ec8a21759" X_poly = poly1.fit_transform(X) X_poly_feature_name = poly1.get_feature_names(['Feature'+str(l) for l in range(1,6)]) print("The feature vector list:\n",X_poly_feature_name) print("\nLength of the feature vector:",len(X_poly_feature_name)) # + id="7rUDMmNJYE3j" colab_type="code" colab={} outputId="3861b544-e31f-4587-fd11-db4c89dd8eeb" df_poly = pd.DataFrame(X_poly, columns=X_poly_feature_name) df_poly.head() # + id="V8TwIP0JYE3k" colab_type="code" colab={} outputId="e5ccf20c-9b6f-4ad0-fd2f-e7402233a0f9" df_poly['y']=df['y'] df_poly.head() # + id="QGR_ELKlYE3m" colab_type="code" colab={} X_train=df_poly.drop('y',axis=1) y_train=df_poly['y'] # + [markdown] id="hZgFlfVuYE3n" colab_type="text" # ### Polynomial model without regularization and cross-validation # + id="ayazUno5YE3o" colab_type="code" colab={} poly2 = LinearRegression(normalize=True) # + id="833WBkY1YE3q" colab_type="code" colab={} outputId="d4610f15-68f7-4dc7-f163-72228c3a5c20" model_poly=poly2.fit(X_train,y_train) y_poly = poly2.predict(X_train) RMSE_poly=np.sqrt(np.mean(np.square(y_poly-y_train))) print("Root-mean-square error of simple polynomial model:",RMSE_poly) # + [markdown] id="0thQAOeRYE3s" colab_type="text" # ### The non-regularized polynomial model (notice the coefficients are not learned properly) # ** Recall that the originating function is: ** # $ y= 5x_1^2+13x_2+0.1x_1x_3^2+2x_4x_5+0.1x_5^3+0.8x_1x_4x_5+noise $ # + id="yGz2IJaOYE3s" colab_type="code" colab={} outputId="aee673dc-10a8-4373-d784-c3bd5a4b58a5" coeff_poly = pd.DataFrame(model_poly.coef_,index=df_poly.drop('y',axis=1).columns, columns=['Coefficients
polynomial model']) coeff_poly # + [markdown] id="uN_sv9RjYE3u" colab_type="text" # #### R-square value of the simple polynomial model is perfect but the model is flawed as shown above i.e. it learned wrong coefficients and overfitted to the data # + id="qc3OI0BoYE3v" colab_type="code" colab={} outputId="23c35998-9fbc-469e-aa66-33820291f948" print ("R2 value of simple polynomial model:",model_poly.score(X_train,y_train)) # + [markdown] id="tqXwtxw_YE3w" colab_type="text" # ### Polynomial model with cross-validation and LASSO regularization # ** This is an advanced machine learning method which prevents over-fitting by penalizing high-valued coefficients i.e. keeping them bounded ** # + id="ES6SfkylYE3x" colab_type="code" colab={} from sklearn.linear_model import LassoCV # + id="FDXqeiKwYE3y" colab_type="code" colab={} model1 = LassoCV(cv=10,verbose=0,normalize=True,eps=0.001,n_alphas=100, tol=0.0001,max_iter=5000) # + id="8IHeTjSSYE30" colab_type="code" colab={} outputId="c8dc5b3b-6f5d-42cb-959f-b89dda9b1551" model1.fit(X_train,y_train) # + id="tftIFgavYE32" colab_type="code" colab={} y_pred1 = np.array(model1.predict(X_train)) # + id="IFnwUQklYE34" colab_type="code" colab={} outputId="3400bd38-31f1-4d4b-9c64-91d5f1c53f68" RMSE_1=np.sqrt(np.mean(np.square(y_pred1-y_train))) print("Root-mean-square error of Metamodel:",RMSE_1) # + id="PDlQ-uUsYE36" colab_type="code" colab={} outputId="9afe3674-a23b-41d0-c996-2c3d02c49fbc" coeff1 = pd.DataFrame(model1.coef_,index=df_poly.drop('y',axis=1).columns, columns=['Coefficients Metamodel']) coeff1 # + id="yfx7MJJ7YE37" colab_type="code" colab={} outputId="3c2b8e57-9e51-4d12-8807-bc1ec1f5279a" model1.score(X_train,y_train) # + id="WnsFD3_RYE39" colab_type="code" colab={} outputId="1b69da99-2141-4fce-999a-a5cbff86dbc2" model1.alpha_ # + [markdown] id="HVCZqyWgYE3_" colab_type="text" # ### Printing only the non-zero coefficients of the regularized model (notice the coefficients are learned well enough) # ** Recall that the
originating function is: ** # $ y= 5x_1^2+13x_2+0.1x_1x_3^2+2x_4x_5+0.1x_5^3+0.8x_1x_4x_5+noise $ # + id="-IWvuftpYE3_" colab_type="code" colab={} outputId="985704d9-6056-4d31-f2cf-d398a74afa35" coeff1[coeff1['Coefficients Metamodel']!=0] # + id="0oqmTssGYE4D" colab_type="code" colab={} outputId="962c8d85-2005-4ae1-b1da-e95cd8430367" plt.figure(figsize=(12,8)) plt.xlabel("Predicted value with Regularized Metamodel",fontsize=20) plt.ylabel("Actual y-values",fontsize=20) plt.grid(1) plt.scatter(y_pred1,y_train,edgecolors=(0,0,0),lw=2,s=80) plt.plot(y_pred1,y_pred1, 'k--', lw=2)
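LASSO's ability to set coefficients exactly to zero (unlike ridge regression, which only shrinks them) comes from the soft-thresholding proximal operator of the L1 penalty. The sketch below (function name is mine; this illustrates the mechanism, not the exact coordinate-descent solver `LassoCV` uses) shows how coefficients whose magnitude falls below the regularization strength are zeroed out:

```python
import numpy as np

def soft_threshold(z, alpha):
    """Proximal operator of alpha*||.||_1: shrinks each entry of z toward
    zero by alpha, and sets it exactly to zero when |z| <= alpha. This is
    the per-coordinate update at the heart of LASSO coordinate descent."""
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

z = np.array([-3.0, -0.5, 0.0, 0.4, 2.5])
# With alpha=1.0 the small entries (-0.5, 0.0, 0.4) are zeroed out,
# while the large ones are shrunk by 1.0 toward zero.
print(soft_threshold(z, 1.0))
```

This is the selection behavior seen above: only the polynomial terms with strong enough signal survive in the metamodel's non-zero coefficient list.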
Function Approximation by Neural Network/Multi-variate LASSO regression with CV.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Template Pattern # 1. Useful for encapsulating algorithms that follow similar steps, like a recipe # 1. Behavioral Pattern # 1. Defines the algorithm's skeleton # 1. Order of methods is fixed inside the Template Method # 1. Best Code Reuse # 1. Allows hooking into the algorithm # 1. But useless in case the algorithms differ # 1. **Comparable to Builder**, which is all about construction of objects, whereas Template is all about definition of objects' behavior # # > **Constitutes 3 Methods** # > - **Abstract** - Must be implemented by subclasses # > - **Concrete** - Common enough that a subclass will be okay using the default implementation; can be overridden # > - **Hooks** - Do nothing, but can be overridden to do something # # Travel from abc import ABCMeta, abstractmethod class ITransport(metaclass=ABCMeta): def __init__(self, destination): self._destination = destination def take_trip(self): # Template Method self.start_engine() self.leave_terminal() self.entertainment() self.travel_to_destination() self.arrive_at_destination() @abstractmethod def start_engine(self): # Abstract, must be implemented in subclass pass def leave_terminal(self): # Concrete, but overridable print("Leaving Terminal") @abstractmethod def travel_to_destination(self): # Abstract, must be implemented in subclass print("Travelling ...") def entertainment(self): # Hook, concrete but has no implementation pass def arrive_at_destination(self): # Concrete, but overridable print(f"Arriving at {self._destination}") class Airplane(ITransport): def start_engine(self): print("Starting the Rolls-Royce Gas-Turbine Engines") def leave_terminal(self): print("Leaving Terminal") print("Taxiing the Runway") def travel_to_destination(self): print("Flying ...") def entertainment(self):
print("Playing in-flight movie") def arrive_at_destination(self): print(f"Landing at {self._destination}") class Bus(ITransport): def start_engine(self): print("Starting the Cummins Diesel Engine") def travel_to_destination(self): print("Driving ...") def travel(destination, transport): print(f"\n=============== Taking the {transport.__name__} to {destination} ===============\n") transport(destination).take_trip() travel('New York', Bus) travel('Amsterdam', Airplane)
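The `Bus` class shows that a subclass only ever *has* to implement the abstract methods; everything else is inherited. The minimal re-sketch below (class and step names are mine, simplified from the travel example above) makes the hook behavior testable: the template returns the performed steps, the hook stays silent by default, and a subclass can opt in by overriding it.

```python
from abc import ABCMeta, abstractmethod

class ITransport(metaclass=ABCMeta):
    """Minimal re-sketch of the template class above (returns steps
    instead of printing, so behavior is easy to assert on)."""
    def __init__(self, destination):
        self._destination = destination

    def take_trip(self):  # Template Method: the order of steps is fixed here
        steps = [self.start_engine(),
                 self.entertainment(),
                 self.arrive_at_destination()]
        return [s for s in steps if s]  # the silent hook contributes nothing

    @abstractmethod
    def start_engine(self):  # Abstract: every subclass must implement this
        ...

    def entertainment(self):  # Hook: no-op unless a subclass overrides it
        return None

    def arrive_at_destination(self):  # Concrete default, overridable
        return f"Arriving at {self._destination}"

class Train(ITransport):  # hypothetical third transport, for illustration
    def start_engine(self):
        return "Starting the electric locomotive"

    def entertainment(self):  # opting into the hook
        return "Opening the dining car"

print(Train("Paris").take_trip())
```

A subclass that does not override `entertainment` simply gets a two-step trip, which is exactly the point of hooks: optional extension without touching the template.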
behavioral/template.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.7.4 64-bit # name: python374jvsc74a57bd07945e9a82d7512fbf96246d9bbc29cd2f106c1a4a9cf54c9563dadf10f2237d4 # --- # ### GOAL # # ![](_df_goal.png) # # To save it as a local file # ## Working on jsons import requests # fetch the file from the internet, or access some place on the internet import json # works with jsons import pandas as pd # library for data mining/data wrangling (data wrangling means pulling the data in from the internet; data mining means cleaning the data and making it presentable) # ## 1. # # Read an online json (not always needed). # # If it is a local json --> 'with open()' # + tags=[] r = requests.get(url='https://mdn.github.io/learning-area/javascript/oojs/json/superheroes.json') json_readed = r.json() print(type(json_readed)) # the keys of my dictionary will be my columns json_readed # + tags=[] print(json_readed) # - # ------------------ # ## 2. # # Save json in a local file called "data.json" with open('data.json', 'w+') as vr: # this is how we dump a file from the internet into a local folder; it is saved in the local folder we are in json.dump(json_readed, vr) # ------------------ # ## 3. # # Save json with indent with open('data_indented.json', 'w+') as outfile: # with indent=4 it is more readable; otherwise the created file (data_indented) reads very badly; "w+" json.dump(json_readed, outfile, indent=4) # ------------------ # ## 4. # # Read local json # + tags=[] with open('data_indented.json', 'r+') as outfile: # read permissions "r+" json_readed = json.load(outfile) print(type(json_readed)) json_readed # - # ## 5. # # Transform json to pandas DataFrame. Two ways: df = pd.DataFrame(json_readed) # From dict -> directly from the internet df df_json = pd.read_json("data_indented.json") # -> from a json file df_json # ------------------ # ## 6. # # ### Data Mining & Data Wrangling # As you can see, there are jsons inside the original json. For that, we have to modify the data to be able to use it correctly (data wrangling).
# # How do you solve this issue? Research about this and try a solution. # access the members column of the dataframe type(df_json["members"]) # there are 3 rows because we have 3 jsons inside the list, and since they share the same keys we get 3 rows with the keys name, age, secretIdentity, powers. df_members_json = pd.DataFrame(json_readed["members"]) df_members_json # -------------------------------------- # **Concatenate the dataframes column-wise** final_df = pd.concat([df_json, df_members_json], axis=1) final_df # #### Drop the members column final_df = final_df.drop(["members"], axis=1) final_df # -------------------- # # # EXTRA # # -------------------- # # To save to a local file final_df.to_json("json_name.json") # Note: # READING the docs is important, so we can keep working when a dictionary gives us trouble # # 1. With dumps, we load the content of a dictionary into string format # 2. With loads, we load the content of a string into json format. # https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_json.html # To save to a local file with indentation import json json_result = final_df.to_json(orient="records") # first convert to json parsed = json.loads(json_result) # then parse it with open("final_json.json", 'w+') as outfile: # and save it json.dump(parsed, outfile, indent=4) pd.read_json("final_json.json") # + # with pandas we can save files in many file types: excel, csv, etc. # - final_df.to_csv("datos_finales.csv") final_df.to_excel("datos_finales.xlsx")
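The note at the end about `dumps` and `loads` can be condensed into a single stdlib round trip (the sample dictionary below is made up for illustration; the keys mirror the superheroes data used in this notebook):

```python
import json

hero = {"name": "Molecule Man", "age": 29, "powers": ["Radiation resistance"]}

# dumps: dict -> JSON string (indent=4 makes it human-readable);
# loads: JSON string -> dict. Together they form a lossless round trip
# for JSON-serializable data.
as_text = json.dumps(hero, indent=4)
back = json.loads(as_text)

print(back == hero)  # True: the round trip preserves the data
```

`json.dump`/`json.load` (without the `s`) are the same operations working on file objects instead of strings, which is what sections 2-4 above use.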
week4_EDA_np_pd_json_apis_regex/day3_numpy_pandas_III_json/theory_paths_con_panda_y_normales/python/json/data_wrangling_json.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Instructions to run this notebook # # In this notebook, we present the comparisons for AC-MNIST: Anti-causal colored MNIST. # Run all the cells sequentially from top to bottom; we have commented the cells to help the reader. # ## Libraries import tensorflow as tf import numpy as np import argparse import IPython.display as display import matplotlib.pyplot as plt from tensorflow import keras from sklearn.model_selection import train_test_split from sklearn.preprocessing import OneHotEncoder from sklearn.utils import shuffle import pandas as pd tf.compat.v1.enable_eager_execution() import cProfile from sklearn.model_selection import train_test_split import copy as cp from sklearn.model_selection import KFold from datetime import date import time from data_construct import * ## contains functions for constructing data from IRM_methods import * ## contains IRM and ERM methods # ## Sample complexity on AC-CMNIST # + n_trial =10 n_tr_list = [1000, 5000, 10000, 30000, 60000] # list of training sample sizes k=0 K = len(n_tr_list) ERM_model_acc = np.zeros((K,n_trial)) ERM_model_acc_nb = np.zeros((K,n_trial)) IRM_model_acc = np.zeros((K,n_trial)) IRM_model_acc_v = np.zeros((K,n_trial)) ERM_model_acc1 = np.zeros((K,n_trial)) ERM_model_acc1_nb = np.zeros((K,n_trial)) IRM_model_acc1 = np.zeros((K,n_trial)) IRM_model_acc1_v = np.zeros((K,n_trial)) IRM_model_ind_v = np.zeros((K,n_trial)) ERM_model_acc_av = np.zeros(K) ERM_model_acc_av_nb = np.zeros(K) IRM_model_acc_av = np.zeros(K) IRM_model_acc_av_v = np.zeros(K) ERM_model_acc_av1 = np.zeros(K) ERM_model_acc_av1_nb = np.zeros(K) IRM_model_acc_av1 = np.zeros(K) IRM_model_acc_av1_v = np.zeros(K) list_params = [] for n_tr in n_tr_list: print ("tr" + str(n_tr)) # print ("start") t_start = time.time() for trial 
in range(n_trial): print ("trial " + str(trial)) n_e=2 p_color_list = [0.2, 0.1] p_label_list = [0.25]*n_e D = assemble_data_mnist_child(n_tr) # initialize mnist digits data object D.create_training_data(n_e, p_color_list, p_label_list) # creates the training environments p_label_test = 0.25 # probability of switching pre-label in test environment p_color_test = 0.9 # probability of switching the final label to obtain the color index in test environment D.create_testing_data(p_color_test, p_label_test, n_e) # sets up the testing environment (num_examples_environment,length, width, height) = D.data_tuple_list[0][0].shape # attributes of the data num_classes = len(np.unique(D.data_tuple_list[0][1])) # number of classes in the data model_erm = keras.Sequential([ keras.layers.Flatten(input_shape=(length,width,height)), keras.layers.Dense(390, activation = 'relu',kernel_regularizer=keras.regularizers.l2(0.0011)), keras.layers.Dense(390, activation='relu',kernel_regularizer=keras.regularizers.l2(0.0011)), keras.layers.Dense(2, activation='softmax') ]) num_epochs = 100 batch_size = 512 learning_rate = 4.9e-4 erm_model1 = standard_erm_model(model_erm, num_epochs, batch_size, learning_rate) erm_model1.fit(D.data_tuple_list) erm_model1.evaluate(D.data_tuple_test) print ("Training accuracy:" + str(erm_model1.train_acc)) print ("Testing accuracy:" + str(erm_model1.test_acc)) ERM_model_acc[k][trial] = erm_model1.test_acc ERM_model_acc1[k][trial] = erm_model1.train_acc gamma_list = [10000,33000, 66000, 100000.0] index=0 best_err = 1e6 train_list =[] val_list = [] test_list = [] for gamma_new in gamma_list: model_irm = keras.Sequential([ keras.layers.Flatten(input_shape=(length,width,height)), keras.layers.Dense(390, activation = 'relu',kernel_regularizer=keras.regularizers.l2(0.0011)), keras.layers.Dense(390, activation='relu',kernel_regularizer=keras.regularizers.l2(0.0011)), keras.layers.Dense(num_classes) ]) batch_size = 512 steps_max = 1000 steps_threshold = 190 ## threshold 
after which gamma_new is used learning_rate = 4.9e-4 irm_model1 = irm_model(model_irm, learning_rate, batch_size, steps_max, steps_threshold, gamma_new) irm_model1.fit(D.data_tuple_list) irm_model1.evaluate(D.data_tuple_test) error_val = 1-irm_model1.val_acc train_list.append(irm_model1.train_acc) val_list.append(irm_model1.val_acc) test_list.append(irm_model1.test_acc) if(error_val<best_err): index_best =index best_err = error_val index= index+1 print ("Training accuracy:" + str(train_list[index_best])) print ("Validation accuracy:" + str(val_list[index_best])) print ("Testing accuracy:" + str(test_list[index_best])) IRM_model_acc_v[k][trial] = test_list[index_best] IRM_model_acc1_v[k][trial] = train_list[index_best] IRM_model_ind_v[k][trial] = index_best IRM_model_acc_av_v[k] = np.mean(IRM_model_acc_v[k]) list_params.append([n_tr,"IRMv_test", np.mean(IRM_model_acc_v[k]),np.std(IRM_model_acc_v[k])]) ERM_model_acc_av[k] = np.mean(ERM_model_acc[k]) list_params.append([n_tr,"ERM_test", np.mean(ERM_model_acc[k]),np.std(ERM_model_acc[k])]) IRM_model_acc_av1_v[k] = np.mean(IRM_model_acc1_v[k]) list_params.append([n_tr,"IRMv_train", np.mean(IRM_model_acc1_v[k]),np.std(IRM_model_acc1_v[k])]) ERM_model_acc_av1[k] = np.mean(ERM_model_acc1[k]) list_params.append([n_tr, "ERM_train", np.mean(ERM_model_acc1[k]),np.std(ERM_model_acc1[k])]) k=k+1 t_end = time.time() print("total time: " + str(t_end-t_start)) results = pd.DataFrame(list_params, columns= ["Sample","Method", "Performance", "Sdev"]) ideal_error = np.ones(5)*0.25 print ("end") # - # ## plot the results plt.figure() plt.xlabel("Number of samples", fontsize=16) plt.ylabel("Test error", fontsize=16) plt.plot(n_tr_list, 1-ERM_model_acc_av, "-r", marker="+", label="ERM") plt.plot(n_tr_list, 1-IRM_model_acc_av_v, "-b", marker="s",label="IRMv1") plt.plot(n_tr_list, ideal_error, "-g", marker="x", label="Optimal invariant") plt.legend(loc="upper left", fontsize=18) plt.ylim(-0.01,0.8) results
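# The IRMv1 objective swept over `gamma_list` above penalizes, for each training environment, the squared gradient of that environment's risk with respect to a fixed scalar "dummy" classifier multiplying the logits. A minimal NumPy sketch of that penalty for softmax cross-entropy with one-hot labels — the helper name is ours, not from `IRM_methods`, and this is a sketch of the idea rather than the notebook's exact implementation:

```python
import numpy as np

def irmv1_penalty(logits, labels_onehot, w=1.0):
    """Squared gradient of the mean softmax cross-entropy risk with
    respect to a scalar dummy classifier w scaling the logits."""
    z = w * logits
    z = z - z.max(axis=1, keepdims=True)  # stabilize the softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # d/dw of mean_i CE(softmax(w * logits_i), y_i) = mean_i sum_k (p_ik - y_ik) * logits_ik
    grad = np.mean(np.sum((p - labels_onehot) * logits, axis=1))
    return grad ** 2
```

# At a predictor that is simultaneously optimal in every environment this gradient vanishes, which is why weighting the penalty by `gamma_new` after `steps_threshold` pushes training toward an invariant solution.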
ERM-IRM/IRM_AC_CMNIST_final.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="9_Igu3_P7fxV" # ## Fine-tune **[`German BERT`](https://www.deepset.ai/german-bert)** on German Legal Data # # --- # **Important**: To successfully execute this notebook, make sure you have access to a GPU. # # **Dataset**: https://github.com/elenanereiss/Legal-Entity-Recognition/tree/master/data # + [markdown] id="KsjC8sN-qUa4" # ### Install and import required packages # + colab={"base_uri": "https://localhost:8080/"} id="60uDEtc10LTN" outputId="44f043f1-c4af-426c-87ac-04cf586fb5db" # !pip install keras # !pip install scikit-learn # !pip install transformers # !pip install torch torchvision torchaudio # + id="6339b2cd-dc1d-4504-bb3d-11f2375cc674" import csv import pickle import pandas as pd import numpy as np import torch from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler from keras.preprocessing.sequence import pad_sequences import transformers from transformers import BertTokenizer, BertConfig from transformers import get_linear_schedule_with_warmup from transformers import BertForTokenClassification, AdamW from sklearn.model_selection import train_test_split from sklearn.metrics import multilabel_confusion_matrix from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="iJlsQfj3oLUW" outputId="89f9ae87-ce83-4af9-de48-92add8100838" torch.__version__ # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="921YJhIm0Tzp" outputId="2c169ccf-c30b-437f-9d7c-ce791b89bd5f" device = torch.device('cuda') n_gpu = torch.cuda.device_count() torch.cuda.get_device_name(0) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="d288dfb7-d29e-4ea2-b26d-4106ae2e36bf" 
outputId="a95c5a95-76bd-43e0-aa68-71622afd7e05" transformers.__version__ # + [markdown] id="HjypuaJMrB6e" # ### Set-up data # + colab={"base_uri": "https://localhost:8080/", "height": 201} id="090065c1-1595-4092-8448-d2bd38a41a1c" outputId="bdcd4176-cfd8-42dc-ef7e-e8c17d5c570d" data = pd.read_csv('court_data.csv', sep='|', quoting=csv.QUOTE_NONE).fillna(method='ffill') data.tail(5) # + [markdown] id="lEJs_lEQrxco" # ### Set-up data iterator # + [markdown] id="7BW8dHEZ4SF-" # The class **`GetSentence`** returns a list of tokenized sentence and its corresponding labels. # + id="d5659e33-701f-45fb-8a7b-34952c5126a2" class GetSentence(object): def __init__(self, data): self.n_sent = 1 self.data = data self.empty = False agg = lambda s: [(w, t) for w, t in zip(s['word'].values.tolist(), s['tag'].values.tolist())] self.grouped = self.data.groupby('sentence_number').apply(agg) self.sentences = [s for s in self.grouped] def get_next(self): try: s = self.grouped['{}'.format(self.n_sent)] self.n_sent += 1 return s except: return None # + id="4e4903ca-a764-4707-b0f9-9e7425c67c2f" getter = GetSentence(data) # + colab={"base_uri": "https://localhost:8080/"} id="816603d0-cc8d-4b72-a995-44a488056659" outputId="1e312398-870d-4e3a-eee2-aea271dfdee2" sentences = [[word[0] for word in sentence] for sentence in getter.sentences] sentences[0] # + colab={"base_uri": "https://localhost:8080/"} id="d22fdc36-f2c5-498c-bf52-add1dee8fe55" outputId="6bde0083-e04d-43c5-9d77-9c956738053e" labels = [[s[1] for s in sentence] for sentence in getter.sentences] print(labels[0]) # + [markdown] id="TbeLFeTOsXDj" # ### Set of unique tags and its indices # + id="c5b1d924-4e5d-4a4d-ae95-dd06a27ccc3b" tag_values = list(set(data['tag'].values)) tag_values.append('PAD') tag2idx = {t: i for i, t in enumerate(tag_values)} # + [markdown] id="S645qam-sit2" # Save **`tag_values`** as it will be required for later use. 
# + id="ysygt8pbu4SY" t_values = open("tag_values.pkl", "wb") pickle.dump(tag_values, t_values) t_values.close() # + [markdown] id="pQ1g-VO-s-Lf" # ### Set-up BERT tokenizer from pre-trained **`bert-base-german-cased`** # + colab={"base_uri": "https://localhost:8080/", "height": 162, "referenced_widgets": ["ea67678edff04c4bae2d16685019e388", "d8063e3b2ce74d88a6ee76e2eb374c08", "cf23bbaa269c4140a7c4b2f03b6fe129", "91f92d528872463faf922b67d4f04843", "b808334ea7074334ac496201fe7983b8", "db7404baf0df442ebd3d552299fc2c40", "0da9cfa444d9431f9e4af04ecda9eaf7", "a49f32a2a95041e8addc4cf61ee285d6", "d2c9b4da73e445c8a2a7cef1d1e0a6d6", "0c37edd5fa00484485f0b891275cc0cd", "17a4860b5c334a9a9816f16c9e540d47", "<KEY>", "0ea4e6c53c504a95bde6cf2aff60ce24", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "0a0ac7e623834325830806e9a780c3b0", "ca6504116ee849d098c7404eab27ba85", "<KEY>", "0ab85e4295174870a5541fe8fb0f6439"]} id="885ae799-c5f1-4edb-9020-b0106fa53eb4" outputId="9ac21cdb-eddb-4a11-96b2-62de2c40bd48" tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased', do_lower_case=False) # + [markdown] id="21XtumKDtKXp" # As with `tag_values`, we will also require **`tokenizer`** for later use. # + id="gZExDq5xYBFV" save_tokenizer = open("tokenizer.pkl", "wb") pickle.dump(tokenizer, save_tokenizer) save_tokenizer.close() # + [markdown] id="_Zfk56-zuOiw" # Since BERT uses **WordPiece**, we also have to make our sentences to similar format. # # The following function accepts **`sentences`** and **`labels`**, and iterates through every single one of them. # # Our **`tokenizer`** is applied to every single word from each sentence of **`sentences`**. While doing this, we have to make each sub-word from word has the same label. 
# + id="bb56f8db-617d-42d6-b738-49442b271af2" def tokenize_preserve_labels(sentence, text_labels): tokenized_sentence = [] labels = [] for word, label in zip(sentence, text_labels): tokenized_word = tokenizer.tokenize(word) n_subwords = len(tokenized_word) tokenized_sentence.extend(tokenized_word) labels.extend([label] * n_subwords) return tokenized_sentence, labels # + colab={"base_uri": "https://localhost:8080/"} id="d5c72c90-d085-4210-b7b3-615b7ba3a026" outputId="6f0c7cd0-3d9b-46c7-fc11-72f533ccee99" # %%time tokenized_texts_labels = [tokenize_preserve_labels(sent, labels) for sent, labels in zip(sentences, labels)] # + [markdown] id="6i8uiY4zvsdN" # Extract **tokens** and **labels** from **`tokenized_texts_labels`**. # + id="6fcb10a0-3573-4588-8c66-4877c3a4ac75" tokenized_texts = [token_label_pair[0] for token_label_pair in tokenized_texts_labels] labels = [token_label_pair[1] for token_label_pair in tokenized_texts_labels] # + [markdown] id="OG7bd0TAwJ3B" # ### Apply padding and generate **`attention_mask`** # + id="7d7822b5-c6d2-430f-af5e-8b0dde24ce67" MAX_LEN = 75 BATCH_SIZE = 64 # + id="b2ebe348-5d60-4b5b-91e4-1dbe3f990108" input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in tokenized_texts], maxlen=MAX_LEN, dtype='long', value=0.0, truncating='post', padding='post') # + id="a1a8800f-5260-4f48-b562-64448f1925e1" tags = pad_sequences([[tag2idx.get(l) for l in lab] for lab in labels], maxlen=MAX_LEN, value=tag2idx['PAD'], padding='post', dtype='long', truncating='post') # + id="c8a4196b-d5b9-4422-a304-cc42cd0303c8" attention_mask = [[float(i != 0.0) for i in ii] for ii in input_ids] # + [markdown] id="Lh1AVCfH0cRs" # ### Prepare training and testing data # + [markdown] id="H6JKcK_02Ln-" # Split data and attention mask. 
# + id="beb85d4d-fc6c-46c1-9528-cf6b510d2dde" X_train, X_test, y_train, y_test = train_test_split(input_ids, tags, random_state=42, test_size=0.1) tr_mask, val_mask, _, _ = train_test_split(attention_mask, input_ids, random_state=42, test_size=0.1) # + id="033526d0-ce2c-450b-a807-bf9112ff7444" X_train, X_test, y_train, y_test = torch.tensor(X_train), torch.tensor(X_test), torch.tensor(y_train), torch.tensor(y_test) tr_mask, val_mask = torch.tensor(tr_mask), torch.tensor(val_mask) # + [markdown] id="zikhaibD2OId" # Create data-loaders. # + id="94639695-e977-4e7a-820a-0135dfa74c8f" train_data = TensorDataset(X_train, tr_mask, y_train) train_sampler = RandomSampler(train_data) train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=BATCH_SIZE) valid_data = TensorDataset(X_test, val_mask, y_test) valid_sampler = SequentialSampler(valid_data) valid_dataloader = DataLoader(valid_data, sampler=valid_sampler, batch_size=BATCH_SIZE) # + [markdown] id="nZFHGBn71ndF" # ### Pull and fine-tune **`bert-base-german-cased`** model # + colab={"base_uri": "https://localhost:8080/", "height": 216, "referenced_widgets": ["ead333525fcc4534ab7ce6bb8a828add", "3ca198d81a814391b45672b3fbf5cb70", "<KEY>", "<KEY>", "242defaf6f9c42bfa548ec67c27efa3c", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "549ea47dba7343ceb80b52b64231906d", "240e9994af274a7c806360535c56f977", "<KEY>", "de3803d547a6450d8eeabf7bacab9e34", "<KEY>", "<KEY>", "d841d68e45764d85b3e7c29243421230"]} id="da724751-3626-43b3-a979-435040595400" outputId="c1b4591b-7500-435f-b5ba-1ee0f2dc2abf" model = BertForTokenClassification.from_pretrained('bert-base-german-cased', num_labels=len(tag2idx), output_attentions=False, output_hidden_states=False) # + id="PQEHRctF0xmz" model.cuda(); # + id="1db826e5-d274-41f0-864e-5d983479b9fe" FULL_FINETUNING = True if FULL_FINETUNING: param_optimizer = list(model.named_parameters()) no_decay = ['bias', 'gamma', 'beta'] optimizer_grouped_parameters = [ {'params': [p for n, p in 
param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.01}, {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.0} ] else: param_optimizer = list(model.classifier.named_parameters()) optimizer_grouped_parameters = [{'params': [p for n, p in param_optimizer]}] # + id="7ae22876-5c98-4a7e-b548-d8851b25e7e4" optimizer = AdamW(optimizer_grouped_parameters, lr=3e-5, eps=1e-8) # + [markdown] id="zuy0Po8W2cI-" # ### Training and evaluation # + id="c2d8bbff-7dd7-4f76-8e5b-333633ca9eff" EPOCHS = 3 MAX_GRAD_NORM = 1.0 total_steps = len(train_dataloader) * EPOCHS scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=total_steps) # + colab={"base_uri": "https://localhost:8080/"} id="9352258c-a388-4f0d-88f7-b1d38404e120" outputId="41d7fbf9-27e6-4e4c-ab73-323a935543e4" # %%time loss_values, validation_loss_values = [], [] for e in range(EPOCHS): print(f'- Epoch 0{e+1} -') model.train() total_loss = 0 for step, batch in enumerate(train_dataloader): batch = tuple(t.to(device) for t in batch) b_input_ids, b_input_mask, b_labels = batch model.zero_grad() outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) loss = outputs[0] loss.backward() total_loss += loss.item() torch.nn.utils.clip_grad_norm_(parameters=model.parameters(), max_norm=MAX_GRAD_NORM) optimizer.step() scheduler.step() avg_train_loss = total_loss / len(train_dataloader) print('Average train loss:\t{:.5f}'.format(avg_train_loss)) loss_values.append(avg_train_loss) model.eval() eval_loss, eval_accuracy = 0, 0 predictions, true_labels = [], [] for batch in valid_dataloader: batch = tuple(t.to(device) for t in batch) b_input_ids, b_input_mask, b_labels = batch with torch.no_grad(): outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) logits = outputs[1].detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() eval_loss +=
outputs[0].mean().item() predictions.extend([list(p) for p in np.argmax(logits, axis=2)]) true_labels.extend(label_ids) eval_loss = eval_loss / len(valid_dataloader) validation_loss_values.append(eval_loss) print('Validation loss:\t{:.5f}'.format(eval_loss)) pred_tags = [tag_values[p_i] for p, l in zip(predictions, true_labels) for p_i, l_i in zip(p, l) if tag_values[l_i] != 'PAD'] valid_tags = [tag_values[l_i] for l in true_labels for l_i in l if tag_values[l_i] != 'PAD'] print('Validation accuracy:\t{:.5f}'.format(accuracy_score(pred_tags, valid_tags))) print('Validation precision:\t{:.5f}'.format(precision_score(pred_tags, valid_tags, average='micro'))) print('Validation recall:\t{:.5f}'.format(recall_score(pred_tags, valid_tags, average='micro'))) print('Validation f1-score:\t{:.5f}\n'.format(f1_score(pred_tags, valid_tags, average='micro'))) # + [markdown] id="LARGmk0j2oZu" # Calculate confusion matrix to identify **TP**, **TN**, **FP**, and **FN**. This is required to calculate **Micro- precision**, **recall**, and **F1-Score**. # + id="qnT6goDTWV9J" tags = list(set(valid_tags)) # + id="znd6ZEEwAR4N" matrix = multilabel_confusion_matrix(valid_tags, pred_tags, labels=tags) # + id="shQoT-pHBHJq" tags_eval = {} for t, m in zip(tags, matrix): tag = t.split('-')[-1] if tag not in tags_eval: tags_eval[tag] = [[], [], [], []] # tp, tn, fp, fn tn, fp = m[0] fn, tp = m[1] tags_eval[tag][0].append(tp) tags_eval[tag][1].append(tn) tags_eval[tag][2].append(fp) tags_eval[tag][3].append(fn) # + [markdown] id="NVxotdCp3VPg" # Map fine-grained classes to actual classes. 
# + id="JGfL2S8HPyZ-" classes = {'Person': 'PER', 'Judge': 'RR', 'Lawyer': 'AN', 'Country': 'LD', 'City': 'ST', 'Street': 'STR', 'Landscape': 'LDS', 'Organization': 'ORG', 'Company': 'UN', 'Institution': 'INN', 'Court': 'GRT', 'Brand': 'MRK', 'Law': 'GS', 'Ordinance': 'VO', 'European legal norm': 'EUN', 'Regulation': 'VS', 'Contract': 'VT', 'Court decision': 'RS', 'Legal literature': 'LIT'} # + [markdown] id="9ArYsu9J3hfu" # Calculate Micro averaged performance metrics. # + id="fz0lulc3RErG" for c in classes: t = classes[c] v = tags_eval[t] precision = sum(v[0])/(sum(v[0]) + sum(v[2])) recall = sum(v[0])/(sum(v[0]) + sum(v[3])) f1 = 2 * ((precision * recall) / (precision + recall)) classes[c] = [round(precision*100, 2), round(recall*100, 2), round(f1*100, 2)] # + colab={"base_uri": "https://localhost:8080/"} id="OqWmU1tlRrOT" outputId="639c5da7-71bb-4a5f-d0ae-f924c8854812" classes # + [markdown] id="Fh7_VLa33ria" # Finally, save our model for later use. # + id="fzNxrAsHyVwI" torch.save(model.state_dict(), "model.pt")
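# The micro-averaged metrics computed per entity class above reduce to pooling TP/FP/FN counts before forming the ratios. A small sketch of that arithmetic (the helper name is ours):

```python
def micro_prf(tp, fp, fn):
    """Micro-averaged precision, recall, and F1 from pooled counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

# Because micro averaging pools the counts across classes before dividing, frequent classes dominate the score — unlike macro averaging, which weights every class equally.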
german_bert_ner.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- """ This Python notebook uses Yelp's API to gather data for cafes by latitude and longitude. This is used for scraping the Chicago Area. Problems with this method: Incredibly inefficient (requests are bringing in similar results) Slow. Unpredictable. """ # + from yelpapi import YelpAPI import json import pandas as pd from pandas.io.json import json_normalize import numpy api_key = '<KEY>' yelp_api = YelpAPI(api_key) # + # Ballparked coordinates of 4 vertices of Zip Code 60637. upper_left_coordinate = [41.800792, -87.628796] # [latitude, longitude] upper_right_coordinate = [41.800792, -87.574454] bottom_left_coordinate = [41.764310, -87.628796] bottom_right_coordinate = [41.764310, -87.574454] VDistance = abs(upper_left_coordinate[0] - bottom_left_coordinate[0]) HDistance = abs(upper_left_coordinate[1] - upper_right_coordinate[1]) # GPS distance to meters # Converted using website below # http://boulter.com/gps/distance/?from=41.800792+-87.628796&to=41.800792+-87.574454&units=k HDistance_m = 4520 # Distance from upperleft to upperright in meters. number_of_circles = 9 # Yelp only takes integer values for radius circle_radius = round(HDistance_m / number_of_circles) circle_radius # - # sample search search_result = yelp_api.search_query( latitude=upper_right_coordinate[0], longitude=upper_right_coordinate[1], radius=circle_radius, limit=50) df = pd.DataFrame.from_dict(search_result['businesses'], orient='columns') df.head() def MoveMap(upper_left_coordinate, upper_right_coordinate, bottom_left_coordinate, number_of_circles, circle_radius, category): ''' Start at the upper left corner of the grid. Get the information within a certain radius. Move to the right. Repeat until the upper right point is reached. 
Move back to the longitude of upper left point. Move downwards. Repeat. ''' distance_between_circles_h = abs( upper_right_coordinate[1] - upper_left_coordinate[1]) / number_of_circles distance_between_circles_v = abs( upper_left_coordinate[0] - bottom_left_coordinate[0]) / number_of_circles latitude = upper_left_coordinate[0] longitude = upper_left_coordinate[1] df = pd.DataFrame() for v_step in range(number_of_circles): for h_step in range(number_of_circles): search_result = yelp_api.search_query( term=category, latitude=latitude, longitude=longitude, radius=circle_radius, limit=50) normalize = pd.DataFrame.from_dict( json_normalize(search_result['businesses']), orient='columns') # df = df.append (pd.DataFrame.from_dict(dfadd, orient='columns')) df = df.append(normalize) longitude += distance_between_circles_h longitude = upper_left_coordinate[1] latitude -= distance_between_circles_v return df.drop_duplicates(['id']).reset_index().drop('index',axis=1) data_in_60637 = MoveMap(upper_left_coordinate, upper_right_coordinate, bottom_left_coordinate, number_of_circles=9, circle_radius=circle_radius, category='') data_in_60637.head() # + def get_yelp_data_by_location(location, number_of_calls): ''' Given a location (e.g. a zip code), use Yelp API to retrieve data. Repeat by number of calls. Returns a dataframe. 
''' df = pd.DataFrame() for call in range (number_of_calls): search_result = yelp_api.search_query( location=location, limit=50) normalize = pd.DataFrame.from_dict( json_normalize(search_result['businesses']), orient='columns') df = df.append(normalize) return df location_data = get_yelp_data_by_location ( location='60637', number_of_calls=2) location_data.drop_duplicates(['id']).head() # Calling any amount of times in the same location produces same results # - data_in_60637.to_csv ('Businesses in 60637.csv') cafe_in_60637 = MoveMap(upper_left_coordinate, upper_right_coordinate, bottom_left_coordinate, number_of_circles=9, circle_radius=circle_radius, category='cafe') cafe_in_60637.head() # + # All these numbers are overestimates. # Ballparked coordinates of 4 vertices of Chicago chi_u_l = [42.031355, -87.946627] # [latitude, longitude] chi_u_r = [42.031355, -87.512777] chi_b_l = [41.633678, -87.946627] chi_b_r = [41.633678, -87.512777] """ About 36 km from u_l to u_r About 45 km from u_l to b_l Area is about: 1620 km^2 (1.62e+9 m^2) Based on Yelp(https://www.yelp.com/search?find_desc=cafe&find_loc=Chicago%2C+IL&ns=1): There are about 2500 cafes in the Chicago area (possibly up to 7500 in the grid I chose because the area is less than 3 times bigger). That means about 1 cafe every 216000 m^2 (result seems off), or 50 cafes every 10800000 m^2. Then the radius I should pick is 1900 meters. 
45/1.9 = 24, so the number of circles should be 24 """ cafe_in_chicago = MoveMap(chi_u_l, chi_u_r, chi_b_l, number_of_circles=24, circle_radius=1900, category='cafe') # - cafe_in_chicago.to_csv ('cafe_in_chicago.csv') cafe_in_chicago.head() # + """ Cutting down coordinates, increasing number of circles, and decreasing circle size """ chi_u_l = [42.030116, -87.946627] # [latitude, longitude] chi_u_r = [42.030116, -87.512777] chi_b_l = [41.633678, -87.946627] chi_b_r = [41.633678, -87.512777] cafe_in_chicago = MoveMap(chi_u_l, chi_u_r, chi_b_l, number_of_circles=45, circle_radius=1000, category='cafe') # - cafe_in_chicago.to_csv ('cafe_in_chicago.csv') cafe_in_chicago.head() cafe_in_chicago.shape[0] # 1595. The one before that had 1300 cafes.
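# The GPS-to-meters conversion above leaned on an external website; the same figure can be reproduced locally with the haversine formula (the function name is ours):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```

# For the 60637 grid this gives roughly 4.5 km across the top edge, in line with the hard-coded `HDistance_m = 4520`.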
Yelp/Scraping/API/Yelp API.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Batch Normalization – Practice # Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now. # # This is **not** a good network for classfying MNIST digits. You could create a _much_ simpler network and get _better_ results. However, to give you hands-on experience with batch normalization, we had to make an example that was: # 1. Complicated enough that training would benefit from batch normalization. # 2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization. # 3. Simple enough that the architecture would be easy to understand without additional resources. # This notebook includes two versions of the network that you can edit. The first uses higher level functions from the `tf.layers` package. The second is the same network, but uses only lower level functions in the `tf.nn` package. # # 1. [Batch Normalization with `tf.layers.batch_normalization`](#example_1) # 2. [Batch Normalization with `tf.nn.batch_normalization`](#example_2) # The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named `mnist`. You'll need to run this cell before running anything else in the notebook. 
import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False) # # Batch Normalization using `tf.layers.batch_normalization`<a id="example_1"></a> # # This version of the network uses `tf.layers` for almost everything, and expects you to implement batch normalization using [`tf.layers.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization) # We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function. # # This version of the function does not include batch normalization. """ DO NOT MODIFY THIS CELL """ def fully_connected(prev_layer, num_units): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu) return layer # We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network. # # This version of the function does not include batch normalization. """ DO NOT MODIFY THIS CELL """ def conv_layer(prev_layer, layer_depth): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. 
This is *not* a good way to make a CNN, but it helps us create this example with very little code. :returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu) return conv_layer # **Run the following cell**, along with the earlier cells (to load the dataset and define the necessary functions). # # This cell builds the network **without** batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training. # + """ DO NOT MODIFY THIS CELL """ def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Feed the inputs into a series of 20 convolutional layers layer = inputs for layer_i in range(1, 20): layer = conv_layer(layer, layer_i) # Flatten the output from the convolutional layers orig_shape = layer.get_shape().as_list() layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]]) # Add one fully connected layer layer = fully_connected(layer, 100) # Create the output layer with 1 node for each logits = tf.layers.dense(layer, 10) # Define loss and training operations model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss) # Create operations to test accuracy correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train and test the network with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for batch_i in range(num_batches): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # train this batch sess.run(train_opt, {inputs: batch_xs, labels: batch_ys}) # Periodically check the validation or training 
loss and accuracy if batch_i % 100 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images, labels: mnist.validation.labels}) print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc)) elif batch_i % 25 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys}) print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc)) # At the end, score the final accuracy for both the validation and test sets acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels}) print('Final validation accuracy: {:>3.5f}'.format(acc)) acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels}) print('Final test accuracy: {:>3.5f}'.format(acc)) # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly. correct = 0 for i in range(100): correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]], labels: [mnist.test.labels[i]]}) print("Accuracy on 100 samples:", correct/100) num_batches = 800 batch_size = 64 learning_rate = 0.002 tf.reset_default_graph() with tf.Graph().as_default(): train(num_batches, batch_size, learning_rate) # - # With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.) # # Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches. # # # # Add batch normalization # # We've copied the previous three cells to get you started. **Edit these cells** to add batch normalization to the network. 
For this exercise, you should use [`tf.layers.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization) to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference. # # If you get stuck, you can check out the `Batch_Normalization_Solutions` notebook to see how we did things. # **TODO:** Modify `fully_connected` to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps. def fully_connected(prev_layer, num_units, activation_fn, is_training): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None) batch_normalized_output = tf.layers.batch_normalization(layer, training=is_training) return activation_fn(batch_normalized_output) # **TODO:** Modify `conv_layer` to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps. def conv_layer(prev_layer, layer_depth, activation_fn, is_training): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code. 
:returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias=False, activation=None) batch_normalized_output = tf.layers.batch_normalization(conv_layer, training=is_training) return activation_fn(batch_normalized_output) # **TODO:** Edit the `train` function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly. # + def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels is_training = tf.placeholder(tf.bool, name="is_training") inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Feed the inputs into a series of 20 convolutional layers layer = inputs for layer_i in range(1, 20): layer = conv_layer(layer, layer_i, tf.nn.relu, is_training) # Flatten the output from the convolutional layers orig_shape = layer.get_shape().as_list() layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]]) # Add one fully connected layer layer = fully_connected(layer, 100, tf.nn.relu, is_training) # Create the output layer with 1 node for each class logits = tf.layers.dense(layer, 10) # Define loss operation (softmax cross-entropy: the 10 classes are mutually exclusive) model_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)) # Create operations to test accuracy correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss) # Train and test the network with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for batch_i in range(num_batches): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # train this
batch sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True}) # Periodically check the validation or training loss and accuracy if batch_i % 100 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training: False}) print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc)) elif batch_i % 25 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False}) print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc)) # At the end, score the final accuracy for both the validation and test sets acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training: False}) print('Final validation accuracy: {:>3.5f}'.format(acc)) acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels, is_training: False}) print('Final test accuracy: {:>3.5f}'.format(acc)) # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly. correct = 0 for i in range(100): correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]], labels: [mnist.test.labels[i]], is_training: False}) print("Accuracy on 100 samples:", correct/100) num_batches = 800 batch_size = 64 learning_rate = 0.002 tf.reset_default_graph() with tf.Graph().as_default(): train(num_batches, batch_size, learning_rate) # - # With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: `Accuracy on 100 samples`. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference. 
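# That bookkeeping can be sketched in plain Python. The following is a toy illustration of the idea (not the notebook's TensorFlow implementation; the helper names `update_population` and `normalize` are made up): during training, each batch's mean and variance nudge a running population estimate via an exponential moving average, and inference normalizes with those frozen estimates, which is why a "batch" of a single image still works when the bookkeeping is correct.

```python
# Plain-Python sketch of the population statistics used by batch normalization.
# `decay` plays the same role as the decay/momentum in the TensorFlow versions.

def update_population(pop_mean, pop_var, batch, decay=0.99):
    """Nudge the running estimates toward this batch's statistics."""
    n = len(batch)
    batch_mean = sum(batch) / n
    batch_var = sum((x - batch_mean) ** 2 for x in batch) / n
    pop_mean = pop_mean * decay + batch_mean * (1 - decay)
    pop_var = pop_var * decay + batch_var * (1 - decay)
    return pop_mean, pop_var

def normalize(x, mean, var, gamma=1.0, beta=0.0, epsilon=1e-3):
    """The batch normalization transform for a single value."""
    return gamma * (x - mean) / (var + epsilon) ** 0.5 + beta

# Training: the population statistics drift toward the data's true statistics.
pop_mean, pop_var = 0.0, 1.0
for _ in range(500):
    batch = [5.0, 7.0, 9.0]          # a toy "batch" with mean 7, variance 8/3
    pop_mean, pop_var = update_population(pop_mean, pop_var, batch)

# Inference: even a single sample is normalized with the population statistics,
# so no batch statistics are needed at test time.
y = normalize(7.0, pop_mean, pop_var)
```

If the population statistics were never updated during training (or batch statistics were used at inference), the single-sample case above would normalize with meaningless values, which is exactly the failure mode the 100-sample check detects.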
# # # Batch Normalization using `tf.nn.batch_normalization`<a id="example_2"></a> # # Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things. # # This version of the network uses `tf.nn` for almost everything, and expects you to implement batch normalization using [`tf.nn.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization). # # **Optional TODO:** You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization. # # **TODO:** Modify `fully_connected` to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps. # # **Note:** For convenience, we continue to use `tf.layers.dense` for the `fully_connected` layer. By this point in the class, you should have no problem replacing that with matrix operations between the `prev_layer` and explicit weights and biases variables. def fully_connected(prev_layer, num_units, activation_fn, is_training): """ Create a fully connected layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None) gamma = tf.Variable(tf.ones([num_units])) beta = tf.Variable(tf.zeros([num_units])) pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False) pop_variance = tf.Variable(tf.ones([num_units]), trainable=False) epsilon = 1e-3 def batch_norm_training(): batch_mean, batch_variance = tf.nn.moments(layer, [0]) decay = 0.99 train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay)) with tf.control_dependencies([train_mean, train_variance]): return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon) def batch_norm_inference(): return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon) batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference) return activation_fn(batch_normalized_output) # **TODO:** Modify `conv_layer` to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps. # # **Note:** Unlike in the previous example that used `tf.layers`, adding batch normalization to these convolutional layers _does_ require some slight differences to what you did in `fully_connected`. def conv_layer(prev_layer, layer_depth, activation_fn, is_training): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 in_channels = prev_layer.get_shape().as_list()[3] out_channels = layer_depth*4 weights = tf.Variable( tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05)) bias = tf.Variable(tf.zeros(out_channels)) conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME') #conv_layer = tf.nn.bias_add(conv_layer, bias) gamma = tf.Variable(tf.ones([out_channels])) beta = tf.Variable(tf.zeros([out_channels])) pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False) pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False) epsilon = 1e-3 def batch_norm_training(): batch_mean, batch_variance = tf.nn.moments(conv_layer, [0,1,2], keep_dims=False) decay = 0.99 train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay)) with tf.control_dependencies([train_mean, train_variance]): return tf.nn.batch_normalization(conv_layer, batch_mean, batch_variance, beta, gamma, epsilon) def batch_norm_inference(): return tf.nn.batch_normalization(conv_layer, pop_mean, pop_variance, beta, gamma, epsilon) batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference) return activation_fn(batch_normalized_output) # **TODO:** Edit the `train` function to support batch normalization. You'll need to make sure the network knows whether or not it is training. 
# + def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels is_training = tf.placeholder(tf.bool, name="is_training") inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Feed the inputs into a series of 20 convolutional layers layer = inputs for layer_i in range(1, 20): layer = conv_layer(layer, layer_i, tf.nn.relu, is_training) # Flatten the output from the convolutional layers orig_shape = layer.get_shape().as_list() layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]]) # Add one fully connected layer layer = fully_connected(layer, 100, tf.nn.relu, is_training) # Create the output layer with 1 node for each class logits = tf.layers.dense(layer, 10) # Define loss and training operations (softmax cross-entropy: the classes are mutually exclusive) model_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)) train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss) # Create operations to test accuracy correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train and test the network with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for batch_i in range(num_batches): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # train this batch sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True}) # Periodically check the validation or training loss and accuracy if batch_i % 100 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training: False}) print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc)) elif batch_i % 25 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False}) print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: 
{:>3.5f}'.format(batch_i, loss, acc)) # At the end, score the final accuracy for both the validation and test sets acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training: False}) print('Final validation accuracy: {:>3.5f}'.format(acc)) acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels, is_training: False}) print('Final test accuracy: {:>3.5f}'.format(acc)) # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly. correct = 0 for i in range(100): correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]], labels: [mnist.test.labels[i]], is_training: False}) print("Accuracy on 100 samples:", correct/100) num_batches = 800 batch_size = 64 learning_rate = 0.002 tf.reset_default_graph() with tf.Graph().as_default(): train(num_batches, batch_size, learning_rate) # - # Once again, the model with batch normalization should reach an accuracy over 90%. There are plenty of details that can go wrong when implementing at this low level, so if you got it working - great job! If not, do not worry, just look at the `Batch_Normalization_Solutions` notebook to see what went wrong.
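# For reference, both the `tf.layers` and `tf.nn` versions compute the same transformation. With batch (or population) mean $\mu$ and variance $\sigma^2$, learned scale $\gamma$ and offset $\beta$, and a small $\epsilon$ for numerical stability (the code above uses $\epsilon = 10^{-3}$):

```latex
y = \gamma \, \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta
```

# During training $\mu$ and $\sigma^2$ come from the current batch; at inference they are the stored population estimates.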
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 15 minutes to QCoDeS # This short introduction is aimed mainly at beginners. Before you write your first code using QCoDeS, make sure you have properly set up the Python environment for QCoDeS as explained in [this document](http://qcodes.github.io/Qcodes/start/index.html#installation). # ## Introduction # An experimental setup comprises many instruments; we call such a setup a "station". A station is connected to many instruments or devices. QCoDeS provides a way to interact with all these instruments to help users perform # the measurements and store the data in a database. To interact (read, write, trigger, etc.) with the instruments, we have created a [library of drivers](http://qcodes.github.io/Qcodes/api/generated/qcodes.instrument_drivers.html) for commonly used ones. These drivers implement the most needed functionalities of the instruments. # # An "Instrument" can perform many functions. For example, on an oscilloscope instrument, we first set a correct trigger level and other parameters and then obtain a trace. In QCoDeS lingo, we call "trigger_level" and "trace" `parameters` of this `instrument`. An instrument at any moment will have many such parameters which together define the state of the instrument; hence, a parameter can be thought of as a state variable of the instrument. QCoDeS provides a method to set values of these parameters (set trigger level) and to get values from them (obtain a trace). In this way, we can interact with all the needed parameters of an instrument and are ready to set up a measurement. # # QCoDeS has a similar programmatic structure as well: it comprises a `Station` class, which is a bucket of objects from the `Instrument` class, each containing many objects from the `Parameter` class.
The values of these parameters are set and measured during a measurement. The `Measurement` class provides a context manager for registering the parameters and providing a link between different parameters. The measured data is stored in a database. # # Here, we will briefly discuss how you can set up your own experiment with the help of QCoDeS. # # ![SchematicOverviewQcodes](files/Schematic_Overview_Qcodes.png) # # # ## Imports # If you are using QCoDeS as your main data acquisition framework, a typical Python script at your disposal may look like: # + # %matplotlib inline import os from time import sleep import matplotlib.pyplot as plt import numpy as np import qcodes as qc from qcodes import ( Measurement, experiments, initialise_database, initialise_or_create_database_at, load_by_guid, load_by_run_spec, load_experiment, load_last_experiment, load_or_create_experiment, new_experiment, ) from qcodes.dataset.plotting import plot_dataset from qcodes.logger.logger import start_all_logging from qcodes.tests.instrument_mocks import DummyInstrument, DummyInstrumentWithMeasurement # - # We strongly recommend not importing unused packages, to keep your code readable. # ## Logging # In every measurement session, it is highly recommended to have QCoDeS logging turned on. This will allow you to have all the logs in case troubleshooting is required. To enable logging, we can either add the following single line of code at the beginning of our scripts after the imports: start_all_logging() # or we can configure qcodes to automatically start logging on every import of qcodes, by running the following code once. (This will persist the current configuration in `~\qcodesrc.json`) from qcodes import config config.logger.start_logging_on_import = 'always' config.save_to_home() # You can find the log files in the ".qcodes" directory, typically located in your home folder (e.g., see the corresponding path to the "Filename" key above).
This path contains two log files: # - command_history.log: contains the commands executed. # # And in this particular case # - 191113-13960-qcodes.log: contains Python logging information. The file is named as # \[date (YYMMDD)\]-\[process id\]-\[qcodes\].log. The display message from the `start_all_logging()` function shows that the `Qcodes Logfile` is saved at `C:\Users\a-halakh\.qcodes\logs\191113-13960-qcodes.log` # ## Station creation # A station is a collection of all the instruments and devices present in your experiment. As mentioned earlier, it can be thought of as a bucket where you can add your `instruments`, `parameters` and other `components`. Each of these terms has a definite meaning in QCoDeS and will be explained in later sections. Once a station is properly configured, you can use its instance to access these components. We refer to the tutorial on [Station](http://qcodes.github.io/Qcodes/examples/Station.html) for more details. # We start by instantiating the `Station` class, which at the moment does not contain any instruments or parameters. station = qc.Station() # ### Snapshot # We can look at all the instruments and the parameters inside this station bucket using the `snapshot` method. Since at the moment we have not added anything to our station, the snapshot will contain the names of the keys with no values: station.snapshot() # The [snapshot](http://qcodes.github.io/Qcodes/examples/DataSet/Working%20with%20snapshots.html) of the station is organized as a dictionary of all the `instruments`, `parameters`, `components` and the list of `default_measurement`. Once you have populated your station you may want to look at the snapshot again. # ## Instrument # # The `Instrument` class in QCoDeS is responsible for holding connections to hardware, creating a parameter or method for each piece of functionality of the instrument.
For more information on the `Instrument` class, we refer to the [detailed description here](http://qcodes.github.io/Qcodes/user/intro.html#instrument) or the corresponding [api documentation](http://qcodes.github.io/Qcodes/api/instrument/index.html). # Let us now create two dummy instruments and associate two parameters with each of them: # + # A dummy instrument dac with two parameters ch1 and ch2 dac = DummyInstrument('dac', gates=['ch1', 'ch2']) # A dummy instrument that generates some real-looking output depending # on the values set on the setter_instr, in this case the dac dmm = DummyInstrumentWithMeasurement('dmm', setter_instr=dac) # - # Aside from the bare ``snapshot``, which returns a Python dictionary, a more readable form can be obtained via: dac.print_readable_snapshot() dmm.print_readable_snapshot() # ### Add instruments into station # Every instrument that you are working with during an experiment should be added to the instance of the `Station` class. Here, we add the `dac` and `dmm` instruments using the ``add_component`` method: # #### Add components station.add_component(dac) station.add_component(dmm) # #### Remove component # We use the method `remove_component` to remove a component from the station. For example, you can remove `dac` as follows: # station.remove_component('dac') station.components # Let us add the `dac` instrument back: station.add_component(dac) # #### Station snapshot # As there are two instruments added to the station object, the snapshot will include all the properties associated with them: station.snapshot() # #### Station Configurator # The instantiation of the instruments, that is, setting up the proper initial values of the corresponding parameters, together with similar pre-specifications of a measurement, constitutes the initialization portion of the code. In general, this portion can be quite long and tedious to maintain. These (and more) concerns can be addressed with a YAML configuration file for the `Station` object.
We refer to the notebook on [station](http://qcodes.github.io/Qcodes/examples/Station.html#Default-Station) for more details. # ## Parameter # # A QCoDeS `Parameter` has the property that it is settable, gettable, or both. Let us clarify this with an example of a real instrument, say an oscilloscope. An oscilloscope contains settings such as trigger mode, trigger level, source, etc. Most of these settings can be set to a particular value in the instrument. For example, the trigger mode can be set to 'edge' and the trigger level to some floating-point number. Hence, these parameters are called settable. Similarly, parameters whose current values we can retrieve are called gettable. In this example notebook, we have a 'dac' instrument with 'ch1' and 'ch2' added as its `Parameter`s. Similarly, we have a 'dmm' instrument with 'v1' and 'v2' added as its `Parameter`s. We also note that, apart from its standard use as an instrument parameter, a `Parameter` can be used as a plain variable for storing and retrieving data. Furthermore, it can be subclassed in more complex design cases. # # QCoDeS provides the following built-in parameter classes: # # - `Parameter`: Represents a single value at a given time. Example: voltage. # - `ParameterWithSetpoints`: Represents an array of values, all of the same type, that are returned all at once. Example: a voltage-vs-time waveform. We refer to the [notebook](http://qcodes.github.io/Qcodes/examples/Parameters/Simple-Example-of-ParameterWithSetpoints.html) for more detailed examples of this parameter's use cases. # - `DelegateParameter`: Intended for proxying other parameters. The delegated parameter can use a different label, unit, etc. than the source parameter. # - `MultiParameter`: Represents a collection of values with different meanings and possibly different dimensions. Example: I and Q, or I vs time and Q vs time.
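# The settable/gettable convention these classes share can be sketched in plain Python (an illustration of the calling pattern only, not the actual QCoDeS implementation; `SketchParameter` is a made-up name): calling a parameter with an argument sets its value, and calling it with no argument returns the current value.

```python
# Illustrative sketch of the set/get calling convention of a QCoDeS parameter.
class SketchParameter:
    def __init__(self, name, initial_value=0.0):
        self.name = name
        self._value = initial_value

    def __call__(self, *args):
        if args:               # parameter(value) -> set
            self._value = args[0]
        else:                  # parameter() -> get
            return self._value

ch1 = SketchParameter('ch1')
ch1(1.1)           # set, in the spirit of dac.ch1(1.1)
reading = ch1()    # get, in the spirit of dmm.v1()
```

The real classes add validation, units, labels and snapshotting on top of this pattern.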
# # Most of the time you can use these classes directly and use the `get` and `set` functions to get or set the values of those parameters. But sometimes it may be useful to subclass the above classes; in that case you should define `get_raw` and `set_raw` methods rather than `get` or `set` methods. The `get_raw` and `set_raw` methods are automatically wrapped to provide `get` and `set` methods on the parameter instance. Overriding `get` in a subclass of the above parameters or of `_BaseParameter` is not allowed and will throw a runtime error. # # To learn more about parameters, consult the [notebook on Parameter](http://qcodes.github.io/Qcodes/examples/index.html#parameters). # In most cases, a settable parameter accepts its value as a function argument. Let us set a value of 1.1 for the 'ch1' parameter of the 'dac' instrument: dac.ch1(1.1) # Similarly, we obtain the current value of a gettable parameter with a simple function call. For example, the output voltage of the dmm can be read via dmm.v1() # Further information can be found in the [user guide](http://qcodes.github.io/Qcodes/user/intro.html#parameter) or [api documentation](http://qcodes.github.io/Qcodes/api/parameters/index.html) of parameter. # ## Initialise database and experiment # Before starting a measurement, we first initialise a database. The location of the database is specified by the configuration object of the QCoDeS installation. The database is created with the latest version supported by the QCoDeS version currently in use. If a database already exists but QCoDeS has since been upgraded, that database can continue to be used; it will be upgraded to the latest version automatically at first connection.
# The initialisation of the database is achieved via: initialise_database() # As a result, a database is created according to the current QCoDeS configuration; with the default configuration, this is a database called "experiments.db" in the user's home folder. Let's check the database location and name: qc.config.core.db_location # Alternatively, if you already have a QCoDeS database which you would like to use for your measurement, it is sufficient to use initialise_or_create_database_at("~/experiments.db") # Note that it is the user's responsibility to provide the correct path for the existing database. The path notation may differ between operating systems. The method ``initialise_or_create_database_at`` makes sure that your QCoDeS session is connected to the referenced database. If the database file does not exist, it will be created at the provided path: initialise_or_create_database_at("./my_data.db") # If we check the database location again, it should have changed to ``./my_data.db``, because under the hood, ``initialise_or_create_database_at`` connects to the database in the provided path by changing the `db_location` to that path: qc.config.core.db_location # ### Change location of database # In case you would like to change the location of the database directly, for example, to the current working directory, it is sufficient to assign the new path as the value of the corresponding key ``db_location``: cwd = os.getcwd() qc.config["core"]["db_location"] = os.path.join(cwd, 'testing.db') # Note that any change to the qcodes configuration in a Python kernel is temporary within that kernel (meaning it does not permanently change the configuration file unless it is explicitly saved). Users should be careful when changing the config file (refer to the end of the notebook to learn more about QCoDeS configuration). # ### Load or create experiment # After initialising the database, we create the `Experiment` object.
This object contains the name of the experiment and the sample, and the path of the database. You can use `load_or_create_experiment` to find and return an experiment with the given experiment and sample name if it already exists, or create one if not found. # # exp = load_or_create_experiment(experiment_name='dataset_context_manager', sample_name="no sample1") # The path of the database for `Experiment` is the path defined in the QCoDeS configuration. First, `Experiment` loads the database at that path (or creates one if there is no database at that path), and then saves the created experiment in that database. Although loading/creating the database via `Experiment` is a user-friendly feature, we recommend initialising the database, as shown earlier, before loading/creating the experiment, because it gives better control over the experiments and databases used for a measurement. # The method shown above to load or create the experiment is the most versatile one. However, for specific cases, the following alternative methods can be used to create or load experiments: # + # load_experiment_by_name(experiment_name='dataset_context_manager',sample_name="no sample") # load_last_experiment() # load_experiment(1) # new_experiment(experiment_name='dataset_context_manager',sample_name="no sample") # - # ## Measurement # The QCoDeS `Measurement` module provides a context manager for registering parameters to measure and store results. The measurement is first linked to the correct experiment and to the station by passing them as arguments. If no arguments are given, the latest experiment and station are taken as defaults. A keyword argument `name` can also be set to any string value for `Measurement`; it will be used as the name of the resulting dataset. # # QCoDeS is capable of storing relations between the parameters, i.e., which parameter is independent and which parameter depends on another one.
This capability is later used to make useful plots, where the knowledge of interdependencies is used to define the corresponding variables for the coordinate axes. The required (mandatory) parameters in the measurement are first registered. If there is an interdependency between any given two or more parameters, the independent one is declared as a 'setpoint'. In our example, ``dac.ch1`` is the independent parameter and ``dmm.v1`` is the dependent parameter whose setpoint is ``dac.ch1``. # + meas = Measurement(exp=exp, station=station, name='xyz_measurement') meas.register_parameter(dac.ch1) # register the first independent parameter meas.register_parameter(dmm.v1, setpoints=(dac.ch1,)) # now register the dependent one meas.write_period = 2 with meas.run() as datasaver: for set_v in np.linspace(0, 25, 10): dac.ch1.set(set_v) get_v = dmm.v1.get() datasaver.add_result((dac.ch1, set_v), (dmm.v1, get_v)) dataset = datasaver.dataset # convenient to have for plotting # - # ``meas.run()`` returns a context manager for the experiment run. Entering the context returns the ``DataSaver`` object to the `datasaver` variable. The ``DataSaver`` class handles the saving of data to the database using the method ``add_result``. The ``add_result`` method validates the sizes of all the data points and stores them intermittently in a private variable. Once per write period of the measurement, the data in the private variable is flushed to the database. # # ``meas.write_period`` defines the period after which the data is committed to the database. We do not commit individual data points to the database during the measurement; data is committed only after it has accumulated over the stipulated time period (in this case, 2 seconds). The default value of ``write_period`` is 5 seconds.
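# The buffering behaviour behind ``write_period`` can be sketched in plain Python (a simplified illustration, not the actual ``DataSaver`` internals; ``SketchSaver`` is a made-up name): results accumulate in memory and are flushed to storage only once the write period has elapsed.

```python
# Illustrative sketch of periodic flushing, in the spirit of meas.write_period.
import time

class SketchSaver:
    def __init__(self, write_period=2.0):
        self.write_period = write_period
        self._buffer = []            # results not yet committed
        self.committed = []          # stand-in for the database
        self._last_flush = time.monotonic()

    def add_result(self, *pairs):
        self._buffer.append(pairs)
        if time.monotonic() - self._last_flush >= self.write_period:
            self.flush()

    def flush(self):
        self.committed.extend(self._buffer)
        self._buffer.clear()
        self._last_flush = time.monotonic()

saver = SketchSaver(write_period=0.0)   # flush on every result, for demonstration
for set_v in (0.0, 0.5, 1.0):
    saver.add_result(('dac_ch1', set_v), ('dmm_v1', set_v * 2))
```

Batching writes this way keeps a fast measurement loop from being slowed down by one database transaction per data point.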
# ### Measurement without defining an Experiment # If we initialise a database but do not create/load an experiment before running a `Measurement`, one of the following two outcomes will occur: # 1. If the initialised database does not contain any `Experiment`, then the `Measurement` will not run and an error related to the `Experiment` will be thrown; # 2. If the database already contains one or more `Experiment`s, then creating a `Measurement` object will automatically pick up the latest `Experiment` from the database, and the measurement will be performed. # # Therefore, creating/loading an `Experiment` is a prerequisite for running a `Measurement`. # ## Data exploration # ### List all the experiments in the database # The list of experiments that are stored in the database can be retrieved as follows: experiments() # While our example database contains only a few experiments, in reality a database will contain many experiments, each containing many datasets. Sometimes you will want to load a dataset from a particular experiment for further analysis. Here we shall explore different ways to find and retrieve an already-measured dataset from the database. # ### List all the datasets in the database # Let us now retrieve the datasets stored within the current experiment via: exp.data_sets() # ### Load the data set using one or more specifications # The method ``load_by_run_spec`` can be used to load a run with given specifications such as 'experiment name' and 'sample name': dataset = load_by_run_spec(experiment_name='dataset_context_manager', captured_run_id=1) # While the arguments are optional, the function call will raise an error if more than one run matching the supplied specifications is found. If such an error occurs, the traceback will contain the specifications of the runs, as well.
Further information concerning 'Uniquely identifying and loading runs' can be found in [this example notebook](DataSet/Extracting-runs-from-one-DB-file-to-another.ipynb#Uniquely-identifying-and-loading-runs). # # For more information on the `DataSet` object that `load_by_run_spec` returns, refer to the [DataSet class walkthrough article](DataSet/DataSet-class-walkthrough.ipynb). # ### Plot dataset # We have arrived at a point where we can visualize our data. To this end, we use the ``plot_dataset`` function with ``dataset`` as its argument: plot_dataset(dataset) # For more detailed examples of plotting QCoDeS datasets, refer to the following articles: # # - [Offline plotting tutorial](DataSet/Offline%20Plotting%20Tutorial.ipynb) # - [Offline plotting with categorical data](DataSet/Offline%20plotting%20with%20categorical%20data.ipynb) # - [Offline plotting with complex data](DataSet/Offline%20plotting%20with%20complex%20data.ipynb) # ### Get data of a specific parameter of a dataset # If you are interested in the numerical values of a particular parameter within a given dataset, the corresponding data can be retrieved using the `get_parameter_data` method: dataset.get_parameter_data('dac_ch1') dataset.get_parameter_data('dmm_v1') # We refer the reader to the [exporting data section of the performing measurements using qcodes parameters and dataset](DataSet/Performing-measurements-using-qcodes-parameters-and-dataset.ipynb#Accessing-and-exporting-the-measured-data) notebook and the [Accessing data in DataSet notebook](DataSet/Accessing-data-in-DataSet.ipynb) for further information on the `get_parameter_data` method. # ### Export data to pandas dataframe # If desired, any data stored within a QCoDeS database can also be exported as pandas dataframes. This can be achieved via: df = dataset.to_pandas_dataframe_dict()['dmm_v1'] df.head() # ### Export data to xarray # It's also possible to export data stored within a QCoDeS database to an `xarray.DataArray`.
This can be achieved via: xarray = dataset.to_xarray_dataarray_dict()['dmm_v1'] xarray.head() # We refer to the [example notebook on working with pandas](DataSet/Working-With-Pandas-and-XArray.ipynb) and the [Accessing data in DataSet notebook](DataSet/Accessing-data-in-DataSet.ipynb) for further information. # ### Explore the data using an interactive widget # The experiments widget presents the most important information at a glance, has buttons to plot a dataset and easily explore its snapshot, and lets users add a note to a dataset. # # It is only available in the Jupyter notebook because it uses [`ipywidgets`](https://ipywidgets.readthedocs.io/) to display interactive elements. # # Use it in the following ways: # ```python # # import it first # from qcodes.interactive_widget import experiments_widget # # # and then just run it # experiments_widget() # # # you can pass a specific database path # experiments_widget(db="path_of_db.db") # # # you can also pass a specific list of DataSets: # # say, you're only interested in datasets of a particular experiment # experiments = qcodes.experiments() # data_sets = experiments[2].data_sets() # experiments_widget(data_sets=data_sets) # # # you can change the sorting of the datasets # # by passing None, "run_id", "timestamp" as the sort_by argument: # experiments_widget(sort_by="timestamp") # ``` # Here's a short video that summarizes the look and features: # # ![video demo about experiments widget should show here](../_static/experiments_widget.webp) # ## Things to remember # ### QCoDeS configuration # # QCoDeS uses a JSON-based configuration system. It is shipped with a default configuration. The default config file should not be overwritten. If you have any modifications, you should save the updated config file in your home directory or in the current working directory of your script/notebook.
The QCoDeS config system first looks for a config file in the current directory, then in the home directory, and only then, if no config files are found, does it fall back to using the default one. The default config is located in `qcodes.config`. To learn how to change and save the config, please refer to the [documentation on config](http://qcodes.github.io/Qcodes/user/configuration.html?). # ### QCoDeS instrument drivers # We support and provide drivers for most of the instruments currently in use at the Microsoft stations. However, if you need functionality beyond what a driver currently supports, you may update the driver or request the feature from the QCoDeS team. You are more than welcome to contribute, and if you would like a quick overview of how to write instrument drivers, please refer to the [example notebooks on writing drivers](http://qcodes.github.io/Qcodes/examples/index.html#writing-drivers). # ### QCoDeS measurements live plotting with Plottr # Plottr is the supported and recommended tool for live plotting of QCoDeS measurements. The [How to use plottr with QCoDeS for live plotting](plotting/How-to-use-Plottr-with-QCoDeS-for-live-plotting.ipynb) notebook contains more information.
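# The lookup order described above (current working directory, then home directory, then the shipped defaults) can be sketched in plain Python. This is purely illustrative: QCoDeS performs this search internally and you never call such a helper yourself; `find_config` is a hypothetical name, although `qcodesrc.json` is the file name QCoDeS looks for.

```python
import json
from pathlib import Path

def find_config(filename="qcodesrc.json", home=None, default=None):
    """Return the first config found, mirroring the search order
    described above: current directory, then home, then defaults.

    Hypothetical helper for illustration only -- not QCoDeS API.
    """
    candidates = [Path.cwd() / filename]
    if home is not None:
        candidates.append(Path(home) / filename)
    for path in candidates:
        if path.exists():
            return json.loads(path.read_text())
    return default  # fall back to the shipped default config
```

# Note that a config file found earlier in the chain completely shadows the later ones, which is why local per-project overrides work.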
docs/examples/15_minutes_to_QCoDeS.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Notebook written by [<NAME>](https://github.com/zhedongzheng) # + from tqdm import tqdm import tensorflow as tf import numpy as np import sklearn import pprint import itertools import os import sys sys.path.append(os.path.dirname(os.getcwd())) from data import WN18 # - class Config: n_epochs = 30 batch_size = 150 embed_dim = 200 # + """ e: entity s: subject p: predicate o: object """ def glance_dict(d, n=5): return dict(itertools.islice(d.items(), n)) def read_triples(path): triples = [] with open(path, 'rt') as f: for line in f.readlines(): s, p, o = line.split() triples += [(s.strip(), p.strip(), o.strip())] return triples def load_triple(): WN18.download() triples_tr = read_triples('../data/WN18/wn18/train.txt') triples_va = read_triples('../data/WN18/wn18/valid.txt') triples_te = read_triples('../data/WN18/wn18/test.txt') triples_all = triples_tr + triples_va + triples_te return triples_all, triples_tr, triples_va, triples_te def build_vocab(triples): params = {} e_set = {s for (s, p, o) in triples} | {o for (s, p, o) in triples} p_set = {p for (s, p, o) in triples} params['e_vocab_size'] = len(e_set) params['p_vocab_size'] = len(p_set) e2idx = {e: idx for idx, e in enumerate(sorted(e_set))} p2idx = {p: idx for idx, p in enumerate(sorted(p_set))} return e2idx, p2idx, params def build_multi_label(triples_tr): sp2o = {} for (_s, _p, _o) in triples_tr: s, p, o = e2idx[_s], p2idx[_p], e2idx[_o] if (s,p) not in sp2o: sp2o[(s,p)] = [o] else: if o not in sp2o[(s,p)]: sp2o[(s,p)].append(o) return sp2o def get_y(triples_tr, e2idx, p2idx, sp2o): y = [] for (_s, _p, _o) in triples_tr: s, p, o = e2idx[_s], p2idx[_p], e2idx[_o] temp = np.zeros([len(e2idx)]) temp[sp2o[(s,p)]] = 1. 
y.append(temp) y = np.asarray(y) return y def next_train_batch(triples_tr, e2idx, p2idx, sp2o): for i in range(0, len(triples_tr), Config.batch_size): _triples_tr = triples_tr[i: i+Config.batch_size] x_s = np.asarray([e2idx[s] for (s, p, o) in _triples_tr], dtype=np.int32) x_p = np.asarray([p2idx[p] for (s, p, o) in _triples_tr], dtype=np.int32) y = get_y(_triples_tr, e2idx, p2idx, sp2o) yield (x_s, x_p, y) def train_input_fn(triples_tr, e2idx, p2idx, s2p2o): dataset = tf.data.Dataset.from_generator( lambda: next_train_batch(sklearn.utils.shuffle(triples_tr), e2idx, p2idx, s2p2o), (tf.int32, tf.int32, tf.float32), (tf.TensorShape([None]), tf.TensorShape([None]), tf.TensorShape([None, len(e2idx)]))) iterator = dataset.make_one_shot_iterator() x_s, x_p, y = iterator.get_next() return {'s': x_s, 'p': x_p}, y # + def o_next_batch(eval_triples, e2idx, p2idx): for (s, p, o) in tqdm(eval_triples, total=len(eval_triples), ncols=70): s_idx, p_idx = e2idx[s], p2idx[p] yield np.atleast_1d(s_idx), np.atleast_1d(p_idx) def o_input_fn(eval_triples, e2idx, p2idx): dataset = tf.data.Dataset.from_generator( lambda: o_next_batch(eval_triples, e2idx, p2idx), (tf.int32, tf.int32), (tf.TensorShape([None,]), tf.TensorShape([None,]))) iterator = dataset.make_one_shot_iterator() s, p = iterator.get_next() return {'s': s, 'p': p} def evaluate_rank(model, valid_triples, test_triples, all_triples, e2idx, p2idx): for eval_name, eval_triples in [('test', test_triples)]: _scores_o = list(model.predict( lambda: o_input_fn(eval_triples, e2idx, p2idx))) ScoresO = np.reshape(_scores_o, [len(eval_triples), len(e2idx)]) ranks_o, filtered_ranks_o = [], [] for ((s, p, o), scores_o) in tqdm(zip(eval_triples, ScoresO), total=len(eval_triples), ncols=70): s_idx, p_idx, o_idx = e2idx[s], p2idx[p], e2idx[o] ranks_o += [1 + np.argsort(np.argsort(- scores_o))[o_idx]] filtered_scores_o = scores_o.copy() rm_idx_o = [e2idx[fo] for (fs, fp, fo) in all_triples if fs == s and fp == p and fo != o] 
filtered_scores_o[rm_idx_o] = - np.inf filtered_ranks_o += [1 + np.argsort(np.argsort(- filtered_scores_o))[o_idx]] for setting_name, setting_ranks in [('Raw', ranks_o), ('Filtered', filtered_ranks_o)]: mean_rank = np.mean(1 / np.asarray(setting_ranks)) print('[{}] {} MRR: {}'.format(eval_name, setting_name, mean_rank)) for k in [1, 3, 5, 10]: hits_at_k = np.mean(np.asarray(setting_ranks) <= k) * 100 print('[{}] {} Hits@{}: {}'.format(eval_name, setting_name, k, hits_at_k)) # + def forward(features, params): e_embed = tf.get_variable('e_embed', [params['e_vocab_size'], Config.embed_dim], initializer=tf.contrib.layers.xavier_initializer()) p_embed = tf.get_variable('p_embed', [params['p_vocab_size'], Config.embed_dim], initializer=tf.contrib.layers.xavier_initializer()) s = tf.nn.embedding_lookup(e_embed, features['s']) p = tf.nn.embedding_lookup(p_embed, features['p']) logits = tf.matmul(s*p, e_embed, transpose_b=True) return logits def model_fn(features, labels, mode, params): logits = forward(features, params) if mode == tf.estimator.ModeKeys.TRAIN: tf.logging.info('\n'+pprint.pformat(tf.trainable_variables())) tf.logging.info('params: %d'%count_train_params()) loss_op = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) train_op = tf.train.AdamOptimizer().minimize( loss_op, global_step=tf.train.get_global_step()) return tf.estimator.EstimatorSpec(mode = mode, loss = loss_op, train_op = train_op) if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec(mode, predictions = tf.sigmoid(logits)) def count_train_params(): return np.sum([np.prod([d.value for d in v.get_shape()]) for v in tf.trainable_variables()]) def prt_epoch(n_epoch): print() print("EPOCH %d"%(n_epoch+1)) print() # + triples_all, triples_tr, triples_va, triples_te = load_triple() e2idx, p2idx, params = build_vocab(triples_all) sp2o = build_multi_label(triples_tr) model = tf.estimator.Estimator(model_fn, params = params) for n_epoch in 
range(Config.n_epochs): prt_epoch(n_epoch) model.train(lambda: train_input_fn(triples_tr, e2idx, p2idx, sp2o)) evaluate_rank(model, triples_va, triples_te, triples_all, e2idx, p2idx,) # -
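# The scoring inside `forward` above is the DistMult trilinear product: a (subject, predicate) pair is scored against every entity at once via `tf.matmul(s*p, e_embed, transpose_b=True)`. A minimal NumPy sketch of the same idea, using random toy embeddings rather than the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 5, 2, 4
E = rng.normal(size=(n_entities, dim))   # entity embeddings
P = rng.normal(size=(n_relations, dim))  # predicate embeddings

def distmult_score(s, p, o):
    # trilinear product <e_s, w_p, e_o> for a single triple
    return float(np.sum(E[s] * P[p] * E[o]))

def distmult_1vN(s, p):
    # score (s, p, ?) against all entities in one matrix product,
    # mirroring tf.matmul(s*p, e_embed, transpose_b=True)
    return (E[s] * P[p]) @ E.T

scores = distmult_1vN(0, 1)
# the row of 1-vs-N scores agrees with the per-triple scores
assert all(np.isclose(scores[o], distmult_score(0, 1, o)) for o in range(n_entities))
```

# This 1-vs-N formulation is what makes the multi-label sigmoid loss above practical: one forward pass yields logits for every candidate object.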
src_kg/link_prediction/main/wn18_distmult_1vn.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # #### New to Plotly? # Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/). # <br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online). # <br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! # #### Version Check # Note: Trisurfs are available in version <b>1.11.0+</b><br> # Run `pip install plotly --upgrade` to update your Plotly version import plotly plotly.__version__ # #### Torus # + import plotly.plotly as py import plotly.figure_factory as FF import plotly.graph_objs as go import numpy as np from scipy.spatial import Delaunay u = np.linspace(0, 2*np.pi, 20) v = np.linspace(0, 2*np.pi, 20) u,v = np.meshgrid(u,v) u = u.flatten() v = v.flatten() x = (3 + (np.cos(v)))*np.cos(u) y = (3 + (np.cos(v)))*np.sin(u) z = np.sin(v) points2D = np.vstack([u,v]).T tri = Delaunay(points2D) simplices = tri.simplices fig1 = FF.create_trisurf(x=x, y=y, z=z, simplices=simplices, title="Torus", aspectratio=dict(x=1, y=1, z=0.3)) py.iplot(fig1, filename="3dFolder/Torus") # - # #### Mobius Band # + import plotly.plotly as py import plotly.figure_factory as FF import plotly.graph_objs as go import numpy as np from scipy.spatial import Delaunay u = np.linspace(0, 2*np.pi, 24) v = np.linspace(-1, 1, 8) u,v = np.meshgrid(u,v) u = u.flatten() v = v.flatten() tp = 1 + 
0.5*v*np.cos(u/2.) x = tp*np.cos(u) y = tp*np.sin(u) z = 0.5*v*np.sin(u/2.) points2D = np.vstack([u,v]).T tri = Delaunay(points2D) simplices = tri.simplices fig1 = FF.create_trisurf(x=x, y=y, z=z, colormap="Portland", simplices=simplices, title="Mobius Band") py.iplot(fig1, filename="Mobius-Band") # - # #### Boy's Surface # + import plotly.plotly as py import plotly.figure_factory as FF import plotly.graph_objs as go import numpy as np from scipy.spatial import Delaunay u=np.linspace(-np.pi/2, np.pi/2, 60) v=np.linspace(0, np.pi, 60) u,v=np.meshgrid(u,v) u=u.flatten() v=v.flatten() x = (np.sqrt(2)*(np.cos(v)*np.cos(v))*np.cos(2*u) + np.cos(u)*np.sin(2*v))/(2 - np.sqrt(2)*np.sin(3*u)*np.sin(2*v)) y = (np.sqrt(2)*(np.cos(v)*np.cos(v))*np.sin(2*u) - np.sin(u)*np.sin(2*v))/(2 - np.sqrt(2)*np.sin(3*u)*np.sin(2*v)) z = (3*(np.cos(v)*np.cos(v)))/(2 - np.sqrt(2)*np.sin(3*u)*np.sin(2*v)) points2D = np.vstack([u, v]).T tri = Delaunay(points2D) simplices = tri.simplices fig1 = FF.create_trisurf(x=x, y=y, z=z, colormap=['rgb(50, 0, 75)', 'rgb(200, 0, 200)', '#c8dcc8'], show_colorbar=True, simplices=simplices, title="Boy's Surface") py.iplot(fig1, filename="Boy's Surface") # - # #### Change Colorscale Variable # + import plotly.plotly as py import plotly.figure_factory as FF import plotly.graph_objs as go import numpy as np from scipy.spatial import Delaunay u = np.linspace(0, 2*np.pi, 20) v = np.linspace(0, 2*np.pi, 20) u,v = np.meshgrid(u,v) u = u.flatten() v = v.flatten() x = (3 + (np.cos(v)))*np.cos(u) y = (3 + (np.cos(v)))*np.sin(u) z = np.sin(v) points2D = np.vstack([u,v]).T tri = Delaunay(points2D) simplices = tri.simplices # define a function that calculates the distance # from the origin to use as the color variable def dist_origin(x, y, z): return np.sqrt((1.0 * x)**2 + (1.0 * y)**2 + (1.0 * z)**2) fig1 = FF.create_trisurf(x=x, y=y, z=z, color_func=dist_origin, colormap = [(0.4, 0.15, 0), (1, 0.65, 0.12)], show_colorbar=True, simplices=simplices, title="Torus - Origin 
Distance Coloring", aspectratio=dict(x=1, y=1, z=0.3)) py.iplot(fig1, filename="Torus - Origin Distance Coloring") # - # #### Diverging Colormap # + import plotly.plotly as py import plotly.figure_factory as FF import plotly.graph_objs as go import numpy as np from scipy.spatial import Delaunay u = np.linspace(-np.pi, np.pi, 30) v = np.linspace(-np.pi, np.pi, 30) u, v = np.meshgrid(u,v) u = u.flatten() v = v.flatten() x = u y = u*np.cos(v) z = u*np.sin(v) points2D = np.vstack([u,v]).T tri = Delaunay(points2D) simplices = tri.simplices # define a function for the color assignment def dist_from_x_axis(x, y, z): return x fig1 = FF.create_trisurf(x=x, y=y, z=z, colormap=['rgb(255, 155, 120)', 'rgb(255, 153, 255)', ], show_colorbar=True, simplices=simplices, title="Light Cone", showbackground=False, gridcolor='rgb(255, 20, 160)', plot_edges=False, aspectratio=dict(x=1, y=1, z=0.75)) py.iplot(fig1, filename="Light Cone") # - # #### Reference help(FF.create_trisurf) # + from IPython.display import display, HTML display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />')) display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">')) # ! pip install git+https://github.com/plotly/publisher.git --upgrade import publisher publisher.publish( 'trisurf.ipynb', 'python/trisurf/', 'Trisurf Plots', 'How to make tri-surf plots in Python with Plotly. Trisurfs are formed by replacing the boundaries of a compact surface by touching triangles.', title = 'Python Trisurf Plots | plotly', name = 'Trisurf Plots', has_thumbnail='true', thumbnail='thumbnail/tri-surf2.jpg', language='python', display_as='3d_charts', order=10, ipynb= '~notebook_demo/70') # -
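# Every surface above is built the same way: triangulate the flat (u, v) parameter grid with `scipy.spatial.Delaunay`, then reuse those simplices to index the mapped 3-D points. The shared step can be sketched without Plotly at all:

```python
import numpy as np
from scipy.spatial import Delaunay

# torus parameter grid, as in the first example above
u = np.linspace(0, 2 * np.pi, 20)
v = np.linspace(0, 2 * np.pi, 20)
u, v = np.meshgrid(u, v)
u, v = u.flatten(), v.flatten()

# triangulate in the 2-D parameter plane; each simplex is a triple
# of vertex indices that remains valid after mapping to the surface
simplices = Delaunay(np.vstack([u, v]).T).simplices

x = (3 + np.cos(v)) * np.cos(u)
y = (3 + np.cos(v)) * np.sin(u)
z = np.sin(v)
triangles = np.stack([x[simplices], y[simplices], z[simplices]], axis=-1)
print(triangles.shape)  # (number of triangles, 3 vertices, 3 coordinates)
```

# Triangulating in the parameter plane rather than in 3-D is the key trick: Delaunay in 2-D is cheap and the connectivity transfers directly to the surface.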
_posts/python-v3/3d/tri-surf/trisurf.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction # Here is a simple approach to prediction of annual water usage where I practiced time-series forecasting. # # **Box-Jenkins Method is used for data analysis using non-seasonal ARIMA (non-stationary) model.** # # The dataset provides the annual water usage in Baltimore from 1885 to 1963, or 79 years of data. # # The values are in the units of liters per capita per day, and there are 79 observations. # # The dataset is credited to Hipel and McLeod, 1994. # # # The RMSE performance measure and walk-forward validation are used for model evaluation. # # Literature: # # [Time Series Analysis and Forecasting by Example](https://books.google.si/books/about/Time_Series_Analysis_and_Forecasting_by.html?id=Bqm5kJC8hgMC&printsec=frontcover&source=kp_read_button&redir_esc=y#v=onepage&q&f=false) # # [How to Remove Trends and Seasonality with a Difference Transform in Python](https://machinelearningmastery.com/remove-trends-seasonality-difference-transform-python/) # # [Autocorrelation in Time Series Data](https://www.influxdata.com/blog/autocorrelation-in-time-series-data/) # ## Import libraries # + from matplotlib import pyplot import matplotlib.cm as cm # %matplotlib inline import numpy as np from pandas import read_csv, concat, Grouper, DataFrame, datetime, Series from pandas.plotting import lag_plot, autocorrelation_plot import warnings from statsmodels.tsa.arima_model import ARIMA, ARIMAResults from statsmodels.graphics.tsaplots import plot_acf, plot_pacf from statsmodels.tsa.stattools import adfuller from sklearn.metrics import mean_squared_error # - # # Data preparation # ### Import data and split to train/test and validation set # + series = read_csv('water.csv', header=0, index_col=0, parse_dates=True, squeeze=True) split_point = 
len(series) - 10 # how many points for validation dataset, validation = series[0:split_point], series[split_point:] print('Dataset: %d time points \nValidation: %d time points' % (len(dataset), len(validation))) dataset.to_csv('dataset.csv', header=False) validation.to_csv('validation.csv', header=False) # - # ### Summary statistics # + # summary statistics of time series series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) print(series.describe()) # line plot pyplot.figure(num=0, figsize=(5*2.35,5), dpi=80, facecolor='w', edgecolor='k') series.plot() pyplot.xlabel('time (y)') pyplot.ylabel('water usage (lpcd)') pyplot.title('water usage over time') pyplot.grid(True) pyplot.show() # histogram plot pyplot.figure(num=1, figsize=(5*2,5), dpi=80, facecolor='w', edgecolor='k') #pyplot.figure(1) pyplot.subplot(121) series.hist() pyplot.xlabel('water usage (lpcd)') pyplot.ylabel('') pyplot.title('histogram') pyplot.grid(True) # density plot pyplot.subplot(122) series.plot(kind='kde') pyplot.xlabel('water usage (lpcd)') pyplot.ylabel('') pyplot.title('density plot') pyplot.grid(True) pyplot.tight_layout() pyplot.show() # + # boxplots of time series, the last decade is omitted pyplot.figure(num=2, figsize=(5,5), dpi=80, facecolor='w', edgecolor='k') series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) groups = series.groupby(Grouper(freq='10YS')) decades = DataFrame() for name, group in groups: if len(group.values) == 10: # compare values with ==, not 'is' decades[name.year] = group.values decades.boxplot() pyplot.xlabel('time (decade)') pyplot.ylabel('water usage (lpcd)') pyplot.title('boxplot, grouped by decades') pyplot.show() # - # ## Persistence model - baseline RMSE # + # evaluate a persistence model # load data series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) # prepare data X = series.values X = X.astype('float32') train_size = int(len(X) * 0.50) train, test_baseline = X[0:train_size],
X[train_size:] # walk-forward / Rolling Window / Rolling Forecast validation history = [x for x in train] predictions = list() for i in range(len(test_baseline)): # predict yhat = history[-1] predictions.append(yhat) # observation obs = test_baseline[i] history.append(obs) #print('Predicted=%.3f, Expected=%3.f' % (yhat, obs)) # report performance rmse = np.sqrt(mean_squared_error(test_baseline, predictions)) print('Persistence RMSE: %.3f' % rmse) # plot pyplot.figure(num=2, figsize=(5,5), dpi=80, facecolor='w', edgecolor='k') pyplot.plot(test_baseline) pyplot.plot(predictions, color='red') pyplot.xlabel('time') pyplot.ylabel('water usage (lpcd)') pyplot.title('persistence model') pyplot.grid(True) pyplot.show() # - # # Manually configure ARIMA # ## Detrend data by differencing # + # create and summarize a stationary version of the time series # create a differenced series def difference(dataset): diff = list() for i in range(1, len(dataset)): value = dataset[i] - dataset[i - 1] diff.append(value) return Series(diff) series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) X = series.values X = X.astype('float32') # difference data for detrending stationary = difference(X) stationary.index = series.index[1:] # check if stationary result = adfuller(stationary) print('ADF Statistic: %f' % result[0]) print('p-value: %f' % result[1]) print('Critical Values:') for key, value in result[4].items(): print('\t%s: %.3f' % (key, value)) # plot differenced data stationary.plot() pyplot.title('differenced data') pyplot.xlabel('time (y)') pyplot.ylabel('d(water usage (lpcd)) / dt') pyplot.show() # save stationary.to_csv('stationary.csv', header=False) # - # One-step differencing (d=1) appears to be enough.
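# The differencing used above is invertible, which is what lets a model fitted on the differenced series produce forecasts on the original scale (ARIMA with d=1 performs this inversion internally). A minimal sketch of the round trip:

```python
import numpy as np

x = np.array([10.0, 12.0, 15.0, 14.0, 18.0])

# first-order differencing (d=1) removes a linear trend
d = np.diff(x)

# it is inverted by a cumulative sum anchored at the first value
x_back = np.concatenate([[x[0]], x[0] + np.cumsum(d)])
assert np.allclose(x_back, x)
```

# The anchor value is why a differenced model still needs the last observed level before it can forecast on the original scale.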
# ## Autocorrelation and partial autocorrelation # #### estimates for the AR order *p* and the MA order *q* # # p is the order (number of time lags) of the autoregressive model # # d is the degree of differencing (the number of times the data have had past values subtracted) # # q is the order of the moving-average model # + # ACF and PACF plots of the time series series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) pyplot.figure() pyplot.subplot(211) plot_acf(series, lags=20, ax=pyplot.gca()) pyplot.xlabel('lag (d)') pyplot.subplot(212) plot_pacf(series, lags=20, ax=pyplot.gca()) pyplot.xlabel('lag (d)') pyplot.tight_layout() pyplot.show() # - # A good starting point is p = 4 and q = 1. # ### Evaluate a manually configured ARIMA model # + # load data series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) # prepare data X = series.values X = X.astype('float32') train_size = int(len(X) * 0.50) train, test = X[0:train_size], X[train_size:] # walk-forward validation history = [x for x in train] predictions = list() for i in range(len(test)): # predict model = ARIMA(history, order=(4,1,1)) model_fit = model.fit(disp=0) yhat = model_fit.forecast()[0] predictions.append(yhat) # observation obs = test[i] history.append(obs) #print('>Predicted=%.3f, Expected=%3.f' % (yhat, obs)) # report performance rmse = np.sqrt(mean_squared_error(test, predictions)) print('RMSE: %.3f' % rmse) # - # Worse performance than baseline (persistence) model!
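# For reference, the values that `plot_acf` draws are just lagged sample correlations. A simplified sketch of one common estimator (lagged autocovariance divided by the full-sample variance; library implementations differ in small details such as the normalisation):

```python
import numpy as np

def sample_acf(x, lag):
    # autocorrelation at a given lag: lagged autocovariance
    # divided by the variance, both on the demeaned sample
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(x[:-lag] @ x[lag:] / (x @ x))

rng = np.random.default_rng(0)
noise = rng.normal(size=2000)         # white noise: ACF near zero
trend = np.arange(100, dtype=float)   # strong trend: ACF near one

assert abs(sample_acf(noise, 1)) < 0.1
assert sample_acf(trend, 1) > 0.9
```

# The trend example is exactly why the ACF of the raw water-usage series decays slowly, and why differencing is applied before reading off p and q.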
# # Grid search ARIMA parameters # ### Define functions # + # evaluate an ARIMA model for a given order (p,d,q) and return RMSE def evaluate_arima_model(X, arima_order): # prepare training dataset X = X.astype('float32') train_size = int(len(X) * 0.50) train, test = X[0:train_size], X[train_size:] history = [x for x in train] # make predictions predictions = list() for t in range(len(test)): model = ARIMA(history, order=arima_order) # model_fit = model.fit(disp=0) model_fit = model.fit(trend='nc', disp=0) # disable the automatic addition of a trend constant yhat = model_fit.forecast()[0] predictions.append(yhat) history.append(test[t]) # calculate out of sample error rmse = np.sqrt(mean_squared_error(test, predictions)) return rmse # evaluate combinations of p, d and q values for an ARIMA model def evaluate_models(dataset, p_values, d_values, q_values): dataset = dataset.astype('float32') best_score, best_order = float("inf"), None for p in p_values: for d in d_values: for q in q_values: order = (p,d,q) try: rmse = evaluate_arima_model(dataset, order) if rmse < best_score: best_score, best_order = rmse, order print('ARIMA%s - RMSE = %.3f' % (order, rmse)) except: continue print('\nBest: ARIMA%s - RMSE = %.3f' % (best_order, best_score)) return best_order # - # ### Run on dataset def grid_search(series): # evaluate parameters p_values = range(0, 3) d_values = range(0, 2) q_values = range(0, 3) warnings.filterwarnings("ignore") best_order = evaluate_models(series.values, p_values, d_values, q_values) return best_order # + # load dataset series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) # search best_order = grid_search(series) # - # ### Summarize residual errors - *walk-forward validation* def residual_stats(series, best_order, bias=0): print('-----------------------------') # prepare data X = series.values X = X.astype('float32') train_size = int(len(X) * 0.50) train, test = X[0:train_size], X[train_size:] # walk-forward 
validation history = [x for x in train] predictions = list() for i in range(len(test)): # predict model = ARIMA(history, order=best_order) model_fit = model.fit(trend='nc', disp=0) yhat = bias + float(model_fit.forecast()[0]) predictions.append(yhat) # observation obs = test[i] history.append(obs) #report performance rmse = np.sqrt(mean_squared_error(test, predictions)) print('RMSE: %.3f\n' % rmse) # errors residuals = [test[i]-predictions[i] for i in range(len(test))] residuals = DataFrame(residuals) residuals_mean = residuals.mean() print('RESIDUAL STATISTICS \n') print(residuals.describe()) # plot pyplot.figure(num=None, figsize=(5,5), dpi=80, facecolor='w', edgecolor='k') pyplot.subplot(211) residuals.hist(ax=pyplot.gca()) pyplot.xlabel('residual') pyplot.ylabel('') pyplot.title('histogram') pyplot.grid(True) pyplot.subplot(212) residuals.plot(kind='kde', ax=pyplot.gca()) pyplot.xlabel('residual') pyplot.ylabel('') pyplot.title('density plot') pyplot.grid(True) pyplot.tight_layout() pyplot.show() return residuals_mean # + series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) residuals_mean = residual_stats(series, best_order) # - # We can see that the # distribution has a right shift and that the mean is non-zero at around 1.0. This is perhaps a sign # that the predictions are biased. # # The distribution of residual errors is also plotted. The graphs suggest a Gaussian-like # distribution with a longer right tail, providing further evidence that perhaps a power transform # might be worth exploring. # # We could use this information to bias-correct predictions by adding the mean residual error # of 1.081624 to each forecast made. # ### Make bias corrected forecasts # _ = residual_stats(series, best_order, bias = residuals_mean) # Not much of an improvement after bias correction. 
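# The bias correction above is a constant shift by the mean residual. Shifting by the mean residual is the best possible constant correction in the least-squares sense, so it can never increase the RMSE on the data the bias was estimated from. A toy illustration with made-up numbers:

```python
import numpy as np

actual = np.array([100.0, 102.0, 101.0, 105.0])
pred = np.array([98.5, 100.8, 99.9, 103.6])  # systematically too low

bias = np.mean(actual - pred)   # mean residual error
corrected = pred + bias         # bias-corrected forecasts

def rmse(a, p):
    return float(np.sqrt(np.mean((a - p) ** 2)))

assert np.isclose(np.mean(actual - corrected), 0.0)   # residual mean removed
assert rmse(actual, corrected) <= rmse(actual, pred)  # never worse in-sample
```

# On held-out data the guarantee disappears, which is consistent with the modest effect observed above.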
# # Save finalized model to file # + # monkey patch around bug in ARIMA class def __getnewargs__(self): return ((self.endog),(self.k_lags, self.k_diff, self.k_ma)) ARIMA.__getnewargs__ = __getnewargs__ def save_model(best_order, model_name): # load data series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) # prepare data X = series.values X = X.astype('float32') # fit model model = ARIMA(X, order=best_order) model_fit = model.fit(trend='nc', disp=0) # bias constant bias = residuals_mean # save model model_fit.save(f'model_{model_name}.pkl') np.save(f'model_bias_{model_name}.npy', [bias]) # - save_model(best_order, model_name='simple') # # Load and evaluate the finalized model on the validation dataset def validate_models(model_names): # load train dataset dataset = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) X = dataset.values.astype('float32') # load validation dataset validation = read_csv('validation.csv', header=None, index_col=0, parse_dates=True, squeeze=True) y = validation.values.astype('float32') # plot pyplot.figure(num=None, figsize=(5,5), dpi=80, facecolor='w', edgecolor='k') pyplot.plot(y, color=cm.Set1(0), label='actual') for ind, model_name in enumerate(model_names): # reset history for each model so appends do not leak across models history = [x for x in X] # load model model_fit = ARIMAResults.load(f'model_{model_name}.pkl') bias = np.load(f'model_bias_{model_name}.npy') # make first prediction predictions = np.ones(len(y)) yhat = bias + float(model_fit.forecast()[0]) predictions[0] = yhat history.append(y[0]) #print('>Predicted=%.3f, Expected=%3.f' % (yhat, y[0])) # rolling forecasts for i in range(1, len(y)): # predict model = ARIMA(history, order=(2,1,0)) model_fit = model.fit(trend='nc', disp=0) yhat = bias + float(model_fit.forecast()[0]) predictions[i] = yhat # observation obs = y[i] history.append(obs) # print('>Predicted=%.3f, Expected=%3.f' % (yhat, obs)) rmse = np.sqrt(mean_squared_error(y, predictions)) print(f'RMSE
{model_name}: %.3f' % rmse) pyplot.plot(predictions, color=cm.Set1(ind+1), label=f'{model_name} predict') pyplot.xlabel('time (d)') pyplot.ylabel('water usage (lpcd)') pyplot.title('Validation') pyplot.legend() pyplot.grid(True) pyplot.show() validate_models(model_names=['simple']) # # Comparison of detrend approaches # ### Linear detrend & Box-Cox transform # + from statsmodels.tsa.tsatools import detrend from scipy.stats import boxcox figsize = (8,4) series = read_csv('dataset.csv', header=0, index_col=0, parse_dates=True, squeeze=True) #print(series.describe()) pyplot.figure(num=None, figsize=figsize, dpi=80, facecolor='w', edgecolor='k') pyplot.subplot(121) series.plot(color='k', label='data') pyplot.subplot(122) series.plot(kind='kde', color='k', label='data') pyplot.xlabel('') pyplot.ylabel('density') pyplot.title('density plot') pyplot.legend() pyplot.show() # --- linear detrend --- series_linear = detrend(series) #print(series_linear.describe()) pyplot.figure(num=None, figsize=figsize, dpi=80, facecolor='w', edgecolor='k') pyplot.subplot(121) series_linear.plot(color='k', label='linear detrend') pyplot.subplot(122) series_linear.plot(kind='kde', color='k', label='linear detrend') pyplot.xlabel('') pyplot.ylabel('density') pyplot.title('density plot') pyplot.legend() pyplot.show() # --- Box-Cox transform --- series_boxcox, lam = boxcox(series) series_boxcox = Series(series_boxcox, index=series.index) # plot pyplot.figure(num=None, figsize=figsize, dpi=80, facecolor='w', edgecolor='k') pyplot.subplot(121) series_boxcox.plot(color='k', label='Box-Cox') pyplot.subplot(122) series_boxcox.plot(kind='kde', color='k', label='Box-Cox') pyplot.xlabel('') pyplot.ylabel('density') pyplot.title('density plot') pyplot.legend() pyplot.show() # - best_order_simple = grid_search(series) best_order_linear = grid_search(series_linear) best_order_boxcox = grid_search(series_boxcox) _ = residual_stats(series, best_order_simple) _ = residual_stats(series_linear, best_order_linear) _ = residual_stats(series_boxcox,
best_order_boxcox) # + save_model(best_order_simple, model_name='simple') save_model(best_order_linear, model_name='linear') save_model(best_order_boxcox, model_name='boxcox') validate_models(model_names = ['simple', 'linear', 'boxcox']) # - # Predictions with the linear detrend and the Box-Cox transform have a lower RMSE, although the difference is probably not statistically significant.
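# One caveat with the Box-Cox variant: a model fitted on the transformed scale produces forecasts on that scale, so they have to be inverted with the fitted `lam` before an RMSE on the original scale is comparable. SciPy ships the exact inverse as `scipy.special.inv_boxcox`:

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

x = np.array([70.0, 80.0, 95.0, 120.0, 160.0])  # toy positive-valued series
x_t, lam = boxcox(x)  # lambda is chosen by maximum likelihood

# the transform is exactly invertible given the fitted lambda
assert np.allclose(inv_boxcox(x_t, lam), x)
```

# Keeping `lam` alongside the saved model, as the bias constant is kept above, is what makes such an inversion possible at validation time.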
Nonsesonal ARIMA - water usage in Boston.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Step 6: Analyze Model Performance # # Use the code below to run TensorFlow Model Analysis on the model in your pipeline. Start by importing and opening the metadata store. # + from __future__ import print_function import os import tfx_utils def _make_default_sqlite_uri(pipeline_name): return os.path.join(os.environ['HOME'], 'airflow/tfx/metadata', pipeline_name, 'metadata.db') def get_metadata_store(pipeline_name): return tfx_utils.TFXReadonlyMetadataStore.from_sqlite_db(_make_default_sqlite_uri(pipeline_name)) pipeline_name = 'taxi' # or taxi_solution pipeline_db_path = _make_default_sqlite_uri(pipeline_name) print('Pipeline DB:\n{}'.format(pipeline_db_path)) store = get_metadata_store(pipeline_name) # - # Now print out the model artifacts: store.get_artifacts_of_type_df(tfx_utils.TFXArtifactTypes.MODEL) # Now analyze the model performance: store.display_tfma_analysis(<insert model ID here>, slicing_column='trip_start_hour')
tfx/examples/airflow_workshop/notebooks/step6.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Python Practice Lecture 10 MATH 342W Queens College - Multivariate Linear Regression with the Hat Matrix # ## Author: <NAME> # ## Date: March 3, 2021 # # ## Agenda: # * Multivariate Linear Regression with the Hat Matrix # # ## Multivariate linear regression with the Hat Matrix # # First let's do the null model to examine what the null hat matrix looks like. In this exercise, we will see that $g_0 = \bar{y}$ is really the OLS solution in the case of no features, only an intercept, i.e. $b_0 = \bar{y}$. # # We'll load in the Boston Housing data. # + # Lines below are just to ignore warnings import warnings warnings.filterwarnings('ignore') # Importing dependencies from sklearn import datasets import pandas as pd import numpy as np # Load the Boston Housing dataset as bh bh = datasets.load_boston() # Initialize target variable y = bh.target # Create Boston Housing df df = pd.DataFrame(data = bh.data, columns = bh.feature_names) # Load the first 5 rows of df df.head() # - # Let's build a linear model of just the intercept column. # + # Importing dependencies from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score # RMSE, R^2 # Initialize model model_intercept = LinearRegression() # define 1-vector ones = [[1] for i in range(len(df))] # convert to a numpy array ones = np.asarray(ones, dtype=np.float64) # Fit y on the 1-vector model_intercept.fit(ones, y) # get yhat yhat = model_intercept.predict(ones) # print first 20 predictions yhat[0:20] # - # print the mean of y print(np.mean(y)) # Let's do a simple example of projection. Let's project $y$ onto the intercept column, the column of all 1's. What do you think will happen?
H = ones @ ones.transpose() / sum(ones**2) H[1:5, 1:5] # output shape of the Hat matrix H.shape # Output unique values of Hat matrix np.unique(H) # In fact print(1 / 506) # The whole matrix is just one single value for each element! What is this value? It's $\frac{1}{506}$ where 506 is $n$. So what's going to happen? # Getting y projected on ones y_proj_one = H @ y y_proj_one[0:20] # Projection onto the space of all ones makes the null model ($g = \bar{y}$). It's the same as the model of response = intercept + error, i.e. $y = \mu_y + \epsilon$. The OLS intercept estimate is clearly $\bar{y}$. Let's build a multivariate linear regression model with the `LinearRegression` sklearn module. # + # initialize model model = LinearRegression(fit_intercept = True) # fit model.fit(df, y) # print intercept b0 print(model.intercept_) # print coefficients print(model.coef_) # - # Now we'll do the same using our linear algebra. # # First we'll add the intercept column, and then we'll convert the df to a NumPy array so that we can more easily use NumPy's linear algebra functionality. # + # Let's get our b vec X = df # add intercept column X.insert(0, 'INTERCEPT', [1 for i in range(len(X))]) # Convert X to numpy array (matrix) X = X.to_numpy() # linear algebra Xt = X.transpose() XtXinv = np.linalg.inv(Xt @ X) b = XtXinv @ Xt @ y b # - # We calculated the same intercept and coefficient values. Let's use the Hat matrix to calculate all predictions. X.transpose() # + # get Hat matrix H = X @ XtXinv @ Xt # The `.dot()` method works fine #H = X.dot(XtXinv.dot(Xt)) print(H.shape) # - # Calculate your predictions yhat = H @ y yhat[0:10] # Can you tell this is projected onto a 14 dimensional space (the intercept plus 13 features) from a 506 dimensional space? Not really... but it is... # # Now let's project over and over... (H @ H @ H @ H @ H @ H @ H @ H @ H @ y)[0:10] # Same thing! Once you project, you're there, you can't project to another different space. That's the idempotency of $H$.
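The idempotency claim above is easy to verify numerically. Below is a minimal sketch on a small synthetic design matrix (not the Boston data used in this lecture), checking that $H$ is symmetric, idempotent, and has trace equal to the dimension of the column space:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 4
# synthetic design matrix: intercept column plus p features
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = rng.normal(size=n)

# Hat matrix: H = X (X'X)^{-1} X'
H = X @ np.linalg.inv(X.T @ X) @ X.T

# H is symmetric and idempotent: projecting twice is the same as projecting once
assert np.allclose(H, H.T)
assert np.allclose(H @ H, H)

# repeated projection of y never moves past the first projection
yhat = H @ y
assert np.allclose(H @ H @ H @ y, yhat)

# trace(H) equals the column-space dimension: p + 1 = 5 here
print(round(np.trace(H), 6))  # 5.0
```

The trace check is a handy sanity test on any full-column-rank design: it should always come out to the number of columns of X.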
# # Let's make sure that it really does represent the column space of $X$. Let's try to project different columns of $X$: # H @ intercept print(X[0:5, 0]) print((H @ X[:, 0])[0:5]) # H @ CRIM print(X[0:5, 1]) print((H @ X[:, 1])[0:5]) # H @ ZN print(X[0:5, 2]) print((H @ X[:, 2])[0:5]) # H @ INDUS print(X[0:5, 3]) print((H @ X[:, 3])[0:5]) # H @ CHAS print(X[0:5, 4]) print((H @ X[:, 4])[0:5]) # We can calculate the residual error using the Hat matrix as well. e = y - yhat e[0:5] I = np.identity(len(X)) e = (I - H) @ y e[0:5] # Let's do that projection over and over onto the complement of the column space of $X$: ((I - H) @ (I - H) @ (I - H) @ (I - H) @ (I - H) @ (I - H) @ y)[0:5]
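The residual calculation above has a clean geometric reading: $\hat{y} = Hy$ and $e = (I - H)y$ are orthogonal pieces that reassemble $y$ exactly, and $I - H$ is itself a projection. A minimal sketch with synthetic data (not the Boston data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T
I = np.eye(n)

# projection onto the column space, and the residual
yhat = H @ y
e = (I - H) @ y

# the two pieces are orthogonal and sum back to y exactly
assert np.allclose(yhat + e, y)
assert abs(yhat @ e) < 1e-8

# I - H projects onto the orthogonal complement, so it is idempotent too
assert np.allclose((I - H) @ (I - H), I - H)
```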
Practice Notes/10 - Python Practice Lecture 10 MATH 342W Queens College - Multivariate Linear Regression with the Hat Matrix.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Google trends plots import pandas as pd import seaborn as sns # %matplotlib inline # %config InlineBackend.figure_format = 'png' import matplotlib as mpl import datetime import time import matplotlib.pyplot as plt from dateutil.relativedelta import relativedelta mpl.rcParams['figure.dpi']= 300 conjuring = pd.read_csv("data/conjuring.csv") conjuring2 = pd.read_csv("data/conjuring2.csv") conjuring3 = pd.read_csv("data/conjuring3.csv") dunkirk = pd.read_csv("data/dunkirk.csv") conjuring.Week = pd.to_datetime(conjuring.Week) conjuring2.Month = pd.to_datetime(conjuring2.Month) conjuring3.Week = pd.to_datetime(conjuring3.Week) dunkirk.Week = pd.to_datetime(dunkirk.Week) releasedate = datetime.date(2013,7,19) dunkrelease = datetime.date(2017,7,21) conjuring["Youtube Searches"] = conjuring.Week.apply(lambda x: "Before Release" if x.date()<releasedate else "After Release") conjuring2["Youtube Searches"] = conjuring2.Month.apply(lambda x: "Before Release" if x.date()<releasedate else "After Release") conjuring3["Youtube Searches"] = conjuring3.Week.apply(lambda x: "Before Release" if x.date()<releasedate else "After Release") dunkirk["Youtube Searches"] = dunkirk.Week.apply(lambda x: "Before Release" if x.date()<dunkrelease else "After Release") sns.barplot(data=conjuring, x='Week', y='conjuring', hue='Youtube Searches', ci=None) sns.barplot(data=conjuring2, x='Month', y='conjuring', hue='Youtube Searches', ci=None) sns.barplot(data=conjuring3, x='Week', y='conjuring', hue='Youtube Searches', ci=None) releasedate - relativedelta(months=6) releasedate + relativedelta(months=12) dunkrelease - relativedelta(months=6) sns.barplot(data=dunkirk, x='Week', y='dunkirk', hue='Youtube Searches', ci=None) plt.title("dunkirk")
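The before/after labeling used for every series above follows one pattern: compare each timestamp's date to the release date inside an `apply`. A toy sketch with made-up weekly data (the column names and numbers here are illustrative, not the actual CSV schema):

```python
import datetime
import pandas as pd

# toy weekly search-interest series around a hypothetical release date
trend = pd.DataFrame({
    "Week": pd.to_datetime(["2013-07-05", "2013-07-12", "2013-07-19", "2013-07-26"]),
    "interest": [10, 25, 80, 60],
})
release = datetime.date(2013, 7, 19)

# same pattern as above: compare each Timestamp's date against the release date
trend["phase"] = trend.Week.apply(
    lambda x: "Before Release" if x.date() < release else "After Release")

print(trend["phase"].tolist())
# ['Before Release', 'Before Release', 'After Release', 'After Release']
```

Note the release week itself is labeled "After Release", since the comparison is a strict `<`.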
02-Movie_Opening_Gross_Prediction/Generating Example Google Trend Plots for Visualization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd # dataset is taken from Kaggle website (https://www.kaggle.com) df = pd.read_csv('./datasets/titanic.csv', index_col='PassengerId') df.head() df.head(10)[::-1] df.loc[::-1].head() df.tail() df.loc[::-1].reset_index(drop=True).head()
28-reverse-row-order.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # One Hot Vector Demo # - From Stanford Course: CS224D: Deep Learning for NLP # + # %matplotlib inline # Simple SVD word vectors in Python # Stanford Course: CS224D: Deep Learning for NLP # "I like deep learning. I like NLP. I enjoy flying" import numpy as np import matplotlib.pyplot as plt la = np.linalg # Building the word list and window-based co-occurrence matrix words = ["I", "like", "enjoy", "deep", "learning", "NLP", "flying", "."] X = np.array([[0,2,1,0,0,0,0,0], [2,0,0,1,0,1,0,0], [1,0,0,0,0,0,1,0], [0,1,0,0,1,0,0,0], [0,0,0,1,0,0,0,1], [0,1,0,0,0,0,0,1], [0,0,1,0,0,0,0,1], [0,0,0,0,1,1,1,0]]) U, s, Vh = la.svd(X, full_matrices=False) # Plotting first two columns of U # corresponding to the 2 biggest singular values for i in range(len(words)): plt.text(U[i,0], U[i,1], words[i]) # Widen the axes so the labels are visible (plt.text does not autoscale) plt.xlim(U[:,0].min() - 0.5, U[:,0].max() + 0.5) plt.ylim(U[:,1].min() - 0.5, U[:,1].max() + 0.5) # -
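As a sanity check on the decomposition, the factors returned by `la.svd` multiply back to the original co-occurrence matrix, and truncating to the top two singular values gives the best rank-2 approximation — its rows are the 2-D word vectors being plotted. A small numpy-only sketch:

```python
import numpy as np

# the same window-based co-occurrence counts as in the demo above
X = np.array([[0,2,1,0,0,0,0,0],
              [2,0,0,1,0,1,0,0],
              [1,0,0,0,0,0,1,0],
              [0,1,0,0,1,0,0,0],
              [0,0,0,1,0,0,0,1],
              [0,1,0,0,0,0,0,1],
              [0,0,1,0,0,0,0,1],
              [0,0,0,0,1,1,1,0]], dtype=float)

U, s, Vh = np.linalg.svd(X, full_matrices=False)

# the factorization reconstructs X exactly (up to floating-point error)
assert np.allclose(U @ np.diag(s) @ Vh, X)

# singular values come back sorted in decreasing order
assert all(s[i] >= s[i + 1] for i in range(len(s) - 1))

# keep only the top-2 singular values: the best rank-2 approximation
X2 = U[:, :2] @ np.diag(s[:2]) @ Vh[:2, :]
print(np.linalg.norm(X - X2))  # Frobenius error of the rank-2 approximation
```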
Terms-and-Abbreviations/One_Hot_Vector_Demo.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # # Feature agglomeration # # # These images show how similar features are merged together using # feature agglomeration. # # # + print(__doc__) # Code source: <NAME> # Modified for documentation by <NAME> # License: BSD 3 clause import numpy as np import matplotlib.pyplot as plt from sklearn import datasets, cluster from sklearn.feature_extraction.image import grid_to_graph digits = datasets.load_digits() images = digits.images X = np.reshape(images, (len(images), -1)) connectivity = grid_to_graph(*images[0].shape) agglo = cluster.FeatureAgglomeration(connectivity=connectivity, n_clusters=32) agglo.fit(X) X_reduced = agglo.transform(X) X_restored = agglo.inverse_transform(X_reduced) images_restored = np.reshape(X_restored, images.shape) plt.figure(1, figsize=(4, 3.5)) plt.clf() plt.subplots_adjust(left=.01, right=.99, bottom=.01, top=.91) for i in range(4): plt.subplot(3, 4, i + 1) plt.imshow(images[i], cmap=plt.cm.gray, vmax=16, interpolation='nearest') plt.xticks(()) plt.yticks(()) if i == 1: plt.title('Original data') plt.subplot(3, 4, 4 + i + 1) plt.imshow(images_restored[i], cmap=plt.cm.gray, vmax=16, interpolation='nearest') if i == 1: plt.title('Agglomerated data') plt.xticks(()) plt.yticks(()) plt.subplot(3, 4, 10) plt.imshow(np.reshape(agglo.labels_, images[0].shape), interpolation='nearest', cmap=plt.cm.nipy_spectral) plt.xticks(()) plt.yticks(()) plt.title('Labels') plt.show()
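The digits example can be reduced to a minimal sketch of what `FeatureAgglomeration` does to the feature count. The data below is synthetic (pairs of near-duplicate columns), not the digits images, and no connectivity graph is used:

```python
import numpy as np
from sklearn import cluster

rng = np.random.default_rng(0)
# 100 samples, 10 features; features come in near-duplicate pairs
base = rng.random((100, 5))
X = np.hstack([base, base + 0.01 * rng.random((100, 5))])

# merge the 10 features into 5 clusters; each cluster is pooled into one column
agglo = cluster.FeatureAgglomeration(n_clusters=5)
X_reduced = agglo.fit_transform(X)
print(X_reduced.shape)  # (100, 5)

# inverse_transform expands each cluster value back to the original feature layout
X_restored = agglo.inverse_transform(X_reduced)
print(X_restored.shape)  # (100, 10)
```

Because each near-duplicate pair is pooled into one value, the restored matrix is close to the original even though the representation is half the size.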
lab07/cluster/plot_digits_agglomeration.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: base # language: python # name: base # --- # + [markdown] colab_type="text" id="view-in-github" pycharm={"name": "#%% md\n"} # <a href="https://colab.research.google.com/github/bird-feeder/BirdFSD-YOLOv5/blob/main/notebooks/BirdFSDV1_YOLOv5_LS_Predict.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="5cuRcRCgNOmH" pycharm={"name": "#%% md\n"} # ## Setup # + id="v0Dga8pqNOmI" pycharm={"name": "#%%\n"} # ! rm -rf /root/.cache # ! git clone https://github.com/bird-feeder/BirdFSD-YOLOv5.git # %cd BirdFSD-YOLOv5 # ! python -m pip install -e . # ! pip install -r requirements.txt # ! mkdir ultralytics # ! git clone https://github.com/ultralytics/yolov5.git ultralytics/yolov5 # + [markdown] id="VEWpOVJrpEOa" pycharm={"name": "#%% md\n"} # **Send the `.env` file to the notebook:** # # - Using [`Croc`](https://github.com/schollz/croc): # # - Run the following command from the root folder where `BirdFSD-YOLOv5` is cloned: # # ```shell # croc send "BirdFSD-YOLOv5/.env" # ``` # # - Copy the passphrase part and paste it in the cell input below. # # - If not using `Croc`: # - Drag and drop the `.env` file inside the `BirdFSD-YOLOv5` directory in this notebook, and skip the next cell. # + cellView="form" id="jl_qyNpmp6Et" pycharm={"name": "#%%\n"} PASSPHRASE = '' #@param {type:"string"} if not PASSPHRASE: raise Exception('Paste the passphrase in the input bar!') if 'croc' in PASSPHRASE: PASSPHRASE = PASSPHRASE.split('croc ')[1] # ! curl https://getcroc.schollz.com | bash # ! croc --yes $PASSPHRASE print('\n>>>> Clear the passphrase input field!') # + [markdown] id="miY6XpOcNOmK" pycharm={"name": "#%% md\n"} # ## Download Model Weights # + id="COLfB1wvNOmK" pycharm={"is_executing": true, "name": "#%%\n"} # !
python birdfsd_yolov5/model_utils/download_weights.py --model-version latest --output 'best.pt' # + [markdown] id="sLNKu384NOmK" pycharm={"name": "#%% md\n"} # ## Predict # + id="zn46Z9VTNOmK" outputId="e19239f3-f388-4b22-ab5d-38b38c320ac2" pycharm={"name": "#%%\n"} # ! python birdfsd_yolov5/prediction/predict.py -h # + pycharm={"is_executing": true, "name": "#%%\n"} #@markdown `MODEL_VERSION` is required! MODEL_VERSION = '' #@param {'type': 'string'} #@markdown Leave `LS_PROJECT_IDS` empty to select all projects. LS_PROJECT_IDS = '' #@param {'type': 'string'} if not MODEL_VERSION: raise Exception('You need to specify a model version!') # ! python birdfsd_yolov5/prediction/predict.py \ # --weights 'best.pt' \ # --project-id "$LS_PROJECT_IDS" \ # --model-version "$MODEL_VERSION" \ # --if-empty-apply-label 'background' \ # --predict-all \ # --multithreading
notebooks/BirdFSDV1_YOLOv5_LS_Predict.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 2, "hidden": false, "row": 0, "width": 11}, "report_default": {"hidden": false}}}} nbpresent={"id": "7fbd286b-2306-43da-9e48-6d480907b81a"} # # Jupyter Notebooks # + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": false}}}} nbpresent={"id": "2f705832-cecf-4725-9380-703263507684"} # A Notebook is made up of a series of executable cells. Cells can contain plain text: # + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 14, "hidden": false, "row": 13, "width": 6}, "report_default": {"hidden": false}}}} # ## Today # - Cloudy with rain across Scotland and Northern Ireland, the rain heavy in the west with strong winds. # - Some early mist or fog across England and Wales, soon clearing to a mostly dry day with sunny spells; sunniest in the east. # # ## Tonight # - Further cloud and rain for Scotland and Northern Ireland, heavy in places. # - There'll be some clear spells further south, but despite this it will be very mild overnight. # # ## Tuesday # - Very warm and sunny across much of England, cloudier with possible heavy downpours in some western parts. # - Remaining unsettled across Scotland and Northern Ireland, sunny spells elsewhere. 
# + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 7, "height": 4, "hidden": true, "row": 33, "width": 4}, "report_default": {}}}} # or Latex: # + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 7, "height": 4, "hidden": true, "row": 29, "width": 4}, "report_default": {}}}} # $$c = \sqrt{a^2 + b^2}$$ # + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": false}}}} print('or Python.') # + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": false}}}} nbpresent={"id": "3ef7f792-1909-4ee5-980b-fab0b14407f3"} print('Each cell is separate from the others...') print('and can be re-executed on the fly') # + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 4, "height": 4, "hidden": true, "row": 26, "width": 4}, "report_default": {"hidden": false}}}} nbpresent={"id": "cd22d5e9-5043-4e59-bcea-cc7905942e5b"} # ## Multimedia # + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": false}}}} nbpresent={"id": "6207cc6f-e096-4d94-9ba7-796ab4c95f8f"} # You can include images, videos and charts in your notebook: # + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 6, "height": 19, "hidden": false, "row": 2, "width": 6}, "report_default": {"hidden": false}}}} nbpresent={"id": "1c277640-3432-4ab6-b08c-1991d0923d47"} # ![](https://metofficenews.files.wordpress.com/2013/05/satellite-and-rain-14-may-2013.jpg) # + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 11, "hidden": false, "row": 2, "width": 5}, "report_default": {"hidden": false}}}} nbpresent={"id": "0a13dba0-e113-466a-8b7e-33ae83284533"} from IPython.display import YouTubeVideo 
YouTubeVideo("lqZ-uJIRv2w") # + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 4, "hidden": true, "row": 13, "width": 12}, "report_default": {"hidden": false}}}} nbpresent={"id": "4b2df5a7-7ba2-455e-93ad-0db27f0cdd70"} import matplotlib import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 6*np.pi, 500) plt.plot(x, np.sin(x**2)) plt.title('A simple chirp') plt.show() # + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 4, "hidden": true, "row": 26, "width": 4}, "report_default": {"hidden": false}}}} nbpresent={"id": "08838b39-25a4-4627-b481-05542ad3c122"} # ## Data Access # + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": false}}}} nbpresent={"id": "ff89a29d-8123-4cec-9c43-c10da0575628"} # There is currently 4TB of Mogreps-G data available directly from these notebooks. Just talk to the Informatics Lab if you'd like to find out more. # + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": false}}}} nbpresent={"id": "e9936387-4b5e-4a8e-b32f-c4c7cd37e0b2"} # ## Sharing Notebooks # + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": false}}}} nbpresent={"id": "b80d70e3-0048-4255-9f9a-11f5e9de9861"} # Each Notebook can be saved and shared easily. 
You can: # # - Compute the output in advance and share the notebook as a static webpage # - File > Download As > (HTML, PDF) # - Share the notebook itself, allowing someone else to experiment # - File > Download as > (Notebook, Python) # - Use the notebook as a dashboard # - View > Dashboard Layout + Dashboard Preview # + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 4, "hidden": false, "row": 27, "width": 12}, "report_default": {}}}} # ## Further info # # There are a lot of great examples out there: # * You can learn more about the Python programming language in the [official tutorial](https://docs.python.org/3/tutorial/) # * You can see example notebooks at [NBViewer](https://nbviewer.jupyter.org) # * Or the [IPython examples](https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks)
1.0 Introduction/1.0.1 Notebook Intro.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="b134bf12-2617-4152-b68c-0ca4318974b6" # # ORBIT targeting oligo design # # # + id="qR-DaDmab2FX" outputId="9503760f-fad8-4162-e5fb-0630b996727a" colab={"base_uri": "https://localhost:8080/"} # !git clone https://github.com/scott-saunders/orbit.git # + id="_B7-rqe5b8te" outputId="81facbe0-f089-4f39-8417-ee3c789623bb" colab={"base_uri": "https://localhost:8080/"} # %cd orbit/targeting_oligo_design_app # + id="CdNVMPBGcB5W" outputId="b8d6f103-d272-4aa8-e2e2-eeabd02e4b23" colab={"base_uri": "https://localhost:8080/"} # !pip install -r requirements.txt # + id="c5cb4908-18b7-4e45-878d-ef535f657eed" outputId="2e9d0bbc-f374-4969-a4f3-72ebafdd6d60" colab={"base_uri": "https://localhost:8080/", "height": 17} import pandas as pd import numpy as np import Bio.SeqIO import panel as pn import orbit_tools as tools pn.extension(comms='colab') # + [markdown] id="cd99c3f5-c6ba-4a17-94d0-4ba6238fc774" # This notebook specifies a [panel](https://panel.holoviz.org/) app that makes it easy to design a targeting oligo for use with ORBIT genetics. To run this app you will need the following packages: # # * `numpy` # * `pandas` # * `Bio` # * `bokeh` # * `holoviews` # * `panel` # # Once these are installed, you should be able to run all cells in this notebook and start the app in a new window. See the [ORBIT website](https://github.com/scott-saunders/orbit) for more details and instructions about ORBIT itself. # # ------- # # First, let's import the *E. 
coli* K12 genome (GenBank accession number U00096.3) from a fasta file: # + id="43ab7dcb-fb80-4b1d-b203-5b3ef2fd5a39" outputId="33990894-8c1b-47bf-bac5-7c8aba3b20a8" colab={"base_uri": "https://localhost:8080/"} for record in Bio.SeqIO.parse('sequencev3.fasta', "fasta"): genome = str(record.seq) print("Length genome: {}".format(len(genome))) print("First 100 bases: {}".format(genome[:100])) # + [markdown] id="357e5516-668b-4ecc-9784-94dbb678de8c" # Next, let's import all of the annotated genes, downloaded from ecocyc. Here there are some simple transformations to make the dataframe nicer to work with. # + id="6c441a39-9e5e-4a1f-af44-eaa7ccefba11" outputId="c8583f07-90eb-4ba2-df6f-b05aeebf534a" colab={"base_uri": "https://localhost:8080/", "height": 419} df_genes = pd.read_csv("All_instances_of_Genes_in_Escherichia_coli_K-12_substr._MG1655.txt", sep = '\t') df_genes = df_genes.dropna() df_genes['Left-End-Position'] = df_genes['Left-End-Position'].astype(int) df_genes['Right-End-Position'] = df_genes['Right-End-Position'].astype(int) df_genes['left_pos'] = df_genes['Left-End-Position'] df_genes['right_pos'] = df_genes['Right-End-Position'] df_genes['center_pos'] = df_genes[['left_pos','right_pos']].apply(np.mean, axis = 1) df_genes = df_genes.drop(['Left-End-Position','Right-End-Position'],axis =1) df_genes['gene_label'] = df_genes.apply(lambda row: row.Genes + '\n', axis = 1) df_genes # + [markdown] id="c4702fee-3339-4c7a-9208-df258ca890d2" # You can see that for all 4,529 annotated genes we have the genomic coordinates, the name of the gene, and a brief description. # # Now we can declare the core of the panel app, which depends on 3 functions that exist in `orbit_tools.py`: # # * `plot_nearby()` simply takes some genomic coordinates and plots 1kb upstream and downstream from those positions, using holoviews. This plot is annotated with the gene information from df_genes. 
# * `get_target_oligo()` returns a targeting oligo that corresponds to the supplied genomic positions. This does a bit more than just return the correct region of the genome, because the targeting oligo needs to target the lagging strand, which must be properly found. # * `get_pos_details()` formats and returns some of the informative details that the more general get_target_oligo() function uses. # # The code below turns each of these functions into a "reactive" function that will respond to 4 interactive parameters: # # * `left_pos` - the left genomic coordinate # * `right_pos` - the right genomic coordinate # * `attB_dir` - the desired direction of the attB site # * `homology` - the total length of homology to use for the oligo (in nucleotides) # + id="ec7a0e24-f993-415a-acf<PASSWORD>84ae986<PASSWORD>" left_pos_widget = pn.widgets.TextInput(name = 'Left Position', value = '1000', width = 200) right_pos_widget = pn.widgets.TextInput(name = 'Right Position', value = '1000', width = 200) dir_widget = pn.widgets.Select(name = 'attB Direction', options = ['+','-'], value = '+', width = 100) homology_widget = pn.widgets.IntSlider(name = 'Homology', value = 52, start = 20, end = 200, width = 200) @pn.depends(left_pos_widget, right_pos_widget, homology_widget) def reactive_plot_nearby(left_pos_widget, right_pos_widget, homology_widget): return tools.plot_nearby(left_pos_widget, right_pos_widget, homology_widget, df_genes = df_genes) @pn.depends(left_pos_widget, right_pos_widget, dir_widget, homology_widget) def reactive_get_target_oligo(left_pos_widget, right_pos_widget, dir_widget, homology_widget): oligo = tools.get_target_oligo(left_pos_widget, right_pos_widget, genome, homology = homology_widget,attB_dir = dir_widget, verbose = False) copy_source_button = pn.widgets.Button(name="Copy targeting oligo", button_type="primary", width = 100) copy_source_code = "navigator.clipboard.writeText(source);" copy_source_button.js_on_click(args={"source": oligo}, 
code=copy_source_code) return pn.Column(str("5'_" + oligo + "_3'"), copy_source_button) @pn.depends(left_pos_widget, right_pos_widget, homology_widget, dir_widget) def reactive_get_pos_details(left_pos_widget, right_pos_widget, homology_widget, dir_widget): return tools.get_pos_details(left_pos_widget, right_pos_widget, homology_widget, dir_widget) # + [markdown] id="68f4f1d5-5724-4780-8b7b-3d4f21a929ab" # We can test how our parameter input widgets will look: # + id="09c5d570-c878-4a5f-b415-d8299db6b60b" outputId="9a42f8c3-197f-437b-f8b8-1f37bbc9b6dd" colab={"base_uri": "https://localhost:8080/", "height": 76} param_input = pn.Column( pn.Row(left_pos_widget, right_pos_widget, dir_widget, homology_widget) ) param_input # + [markdown] id="5daf1f48-c385-478d-8c33-a609479942b2" # Let's also write some instructions to go with the app. # + id="8ded74a4-e652-4d7d-8291-04b915c585f0" app_text = """This tool is currently implemented only for the *E. coli* K12 genome (GenBank accession number U00096.3). Please contact <NAME> for details or further questions (<EMAIL>). **Instructions:** 1. Find the genomic coordinates of the modification you would like. [Ecocyc](https://ecocyc.org/) is recommended for simple gene deletions. 2. Input these positions to the app as `Left Position` and `Right Position` below. Check that the intended locus shows up in the genome plot. 3. Choose which direction the attB sequence should go - either `+` or `-`. Typically this is the same direction as the gene of interest. 4. Decide how long your homology arms need to be and input with the `Homology` slider. Default is 52 bp total, which yields a 90 bp oligo (attB is 38 bp). 5. If the genome plot with the attB homology arms looks correct and the oligo sequence appears in the gray panel, then click `Copy targeting oligo` and order from IDT or an equivalent DNA supplier. 
-------- """ # + [markdown] id="f5c4a7a9-63d3-4b25-ac82-43e85eb1c678" # Finally, we can assemble the app as a few of these panel components stacked on top of each other: # # * `param_input` the parameter input widgets from above # * `reactive_plot_nearby` the reactive genomic plot # * `reactive_get_pos_details` the reactive function to get oligo details # * `reactive_get_target_oligo` the reactive function to get the actual oligo sequence # + id="ae940841-3dc5-4574-988c-1d379d9e3432" orbit_app = pn.Column( "# ORBIT targeting oligo design", app_text, param_input, reactive_plot_nearby, pn.Column(reactive_get_pos_details, reactive_get_target_oligo, background='WhiteSmoke') ) #orbit_app # + [markdown] id="b364951c-e971-48b2-8e1c-863706923dcb" # Then we can run the app. # # You can use the app in a full window by clicking "Mirror cell in tab" at the top right of the app's code cell. Then click "Change page layout" to make each tab full screen (button at top right of window). # + id="24914dfa-f1ad-4acb-b191-45223556efd9" outputId="179f8a1e-d417-4de5-fc34-d3b35d88d4ec" colab={"base_uri": "https://localhost:8080/", "height": 906} orbit_app # + id="0f7cd920-a89d-42c2-8f27-fb966ab9f625" # %load_ext watermark # + id="481b0dd4-4633-49b7-b0bc-0e1382018af6" outputId="c747dbff-d913-4749-af31-1e2b5d00cb28" # %watermark -v -p numpy,Bio,pandas,bokeh,holoviews,panel
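The lagging-strand logic mentioned for `get_target_oligo()` ultimately relies on reverse-complementing a genomic region. The app's real implementation does more (strand selection, attB insertion), but the core operation can be sketched in plain Python with hypothetical arm sequences:

```python
# complement lookup for the four DNA bases
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of an uppercase DNA string."""
    return "".join(COMP[base] for base in reversed(seq))

# hypothetical homology arms flanking a target locus
left_arm = "ATGCGT"
right_arm = "GGATCC"

# an oligo targeting the opposite strand uses the reverse complement
print(reverse_complement(left_arm))   # ACGCAT
print(reverse_complement(right_arm))  # GGATCC (a palindrome, so unchanged)
```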
targeting_oligo_design_app/ORBIT_targeting_oligo_app_gcolab.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="onC3nH3_uiA3" import pandas as pd import numpy as np import sklearn import matplotlib.pyplot as plt import seaborn as sns from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import StandardScaler from sklearn.metrics import classification_report from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split # + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 72} id="c0-SfMgJwg_l" outputId="cde34d86-c515-4639-934b-17a40f4b9d07" #load the data from google.colab import files # Use to load data on Google Colab uploaded = files.upload() # + colab={"base_uri": "https://localhost:8080/", "height": 405} id="2PS4hTw-zSo0" outputId="feef43d0-1a36-450f-8086-1cf5df24d268" #Load the data into the data frame df = pd.read_csv('WA_Fn-UseC_-Telco-Customer-moved-agitated.csv') df.head(7) # + colab={"base_uri": "https://localhost:8080/"} id="FVi1EmI8zjOh" outputId="c954d7d0-2629-485c-9441-66dc26986595" df.shape # + colab={"base_uri": "https://localhost:8080/"} id="nLEkFZpozn86" outputId="d20bb7b2-0361-4b75-8591-d780f2315e23" #Show all of the column names df.columns.values # + colab={"base_uri": "https://localhost:8080/"} id="yNjO_gGrz7zZ" outputId="d99bd84f-1b13-4e21-92fb-cfc2411d5ab3" #Check for na or missing data df.isna().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 297} id="nUlAliZf0F4r" outputId="dbc4f16e-bd5e-44c9-85bb-5bdd621b5ad8" #Show statistics on the current data df.describe() # + colab={"base_uri": "https://localhost:8080/"} id="QJvCNglb0POm" 
outputId="57a8431f-05de-4425-8457-de6b67614cb9" #Get the number of customers that moved df['Churn'].value_counts() # + colab={"base_uri": "https://localhost:8080/", "height": 350} id="YV_yG7dV0Yys" outputId="3219e74c-55c0-4cf9-d8d5-d9996c794a8c" #Visualize the count of customer churn sns.countplot(df['Churn']) # + colab={"base_uri": "https://localhost:8080/"} id="rGkHC0qS0dWK" outputId="ff4888d6-eb74-4982-9a7d-6da02ea9a7f0" #What percentage of customers are leaving ? retained = df[df.Churn == 'No'] churned = df[df.Churn == 'Yes'] num_retained = retained.shape[0] num_churned = churned.shape[0] #Print the percentage of customers that stayed and left print( num_retained / (num_retained + num_churned) * 100 , "% of customers stayed with the company.") #Print the percentage of customers that stayed and left print( num_churned / (num_retained + num_churned) * 100,"% of customers left the company.") # + colab={"base_uri": "https://localhost:8080/", "height": 296} id="zlc99K910fz5" outputId="13216240-e5ba-4f77-f2b5-7e5b7e30130a" #Visualize the churn count for both Males and Females sns.countplot(x='gender', hue='Churn',data = df) # + colab={"base_uri": "https://localhost:8080/", "height": 296} id="e2_ptLti0k_s" outputId="9fd151dc-2c33-47e1-a42f-7c2a464118d9" #Visualize the churn count for the internet service sns.countplot(x='InternetService', hue='Churn', data = df) # + colab={"base_uri": "https://localhost:8080/", "height": 442} id="1aiC6RSD0qnm" outputId="7237a459-6943-4549-fc55-f761eba80402" numerical_features = ['tenure', 'MonthlyCharges'] fig, ax = plt.subplots(1, 2, figsize=(28, 8)) df[df.Churn == 'No'][numerical_features].hist(bins=20, color="blue", alpha=0.5, ax=ax) df[df.Churn == 'Yes'][numerical_features].hist(bins=20, color="orange", alpha=0.5, ax=ax) # + id="kTajYFx10u_7" #Remove the unnecessary column customerID cleaned_df = df = df.drop('customerID', axis=1) # + colab={"base_uri": "https://localhost:8080/"} id="yj8kLTK60yot" 
outputId="ff9c928e-35f6-4ad6-ed79-fb26978dead0" #Look at the number of rows and cols in the new data set cleaned_df.shape # + colab={"base_uri": "https://localhost:8080/"} id="iPAtOmbV01o9" outputId="07721a77-2b10-489c-d9cd-af147422f866" #Convert all the non-numeric columns to numerical data types for column in cleaned_df.columns: if np.issubdtype(cleaned_df[column].dtype, np.number): continue cleaned_df[column] = LabelEncoder().fit_transform(cleaned_df[column]) # + colab={"base_uri": "https://localhost:8080/"} id="F7tugmgG05IN" outputId="eb98f124-304c-44eb-d6d7-6f6eb41eeca2" #Check the new data set data types cleaned_df.dtypes # + colab={"base_uri": "https://localhost:8080/", "height": 224} id="_aIy8JwF08du" outputId="f2e0ffc5-214d-44e0-dd82-a425c370641e" #Show the first 5 rows of the new data set cleaned_df.head() # + id="LylaRnsO5PEO" #Scale the cleaned data X = cleaned_df.drop('Churn', axis = 1) y = cleaned_df['Churn'] #Standardizing/scaling the features X = StandardScaler().fit_transform(X) # + id="nzWL_1745SML" #Split the data into 80% training and 20% testing x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) # + colab={"base_uri": "https://localhost:8080/"} id="aipX9WdJ5VUs" outputId="6540e158-9582-4974-c140-92e65d04401c" #Create the model model = LogisticRegression() #Train the model model.fit(x_train, y_train) # + colab={"base_uri": "https://localhost:8080/"} id="eQXM9QXJ5YHQ" outputId="a8fbcd0d-9da5-4414-89aa-56bf848d88ed" predictions = model.predict(x_test) #printing the predictions print(predictions) # + colab={"base_uri": "https://localhost:8080/"} id="_Ijg3lpP5bAf" outputId="4df09d57-594e-498d-ef0f-8bb331faa55a" #Check precision, recall, f1-score print( classification_report(y_test, predictions) )
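The label-encoding loop above maps each category string to an integer. The same mapping can be reproduced with numpy alone via `np.unique(..., return_inverse=True)` — a sketch of the idea, not sklearn's actual `LabelEncoder` internals (the toy column below is made up):

```python
import numpy as np

# a toy categorical column, loosely modeled on something like 'InternetService'
col = np.array(["DSL", "Fiber", "No", "DSL", "Fiber", "Fiber"])

# sorted unique labels, plus each value's index into that label list
labels, codes = np.unique(col, return_inverse=True)

print(labels.tolist())  # ['DSL', 'Fiber', 'No']
print(codes.tolist())   # [0, 1, 2, 0, 1, 1]
```

Like `LabelEncoder`, the codes follow the sorted order of the unique labels, so the encoding is deterministic for a given set of categories.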
telecomproject.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="HUA4-hBYgnre" colab_type="text" # # IST 718 LAB 6 # + id="l-zlrRc8gm4B" colab_type="code" colab={} # # !pip install fbprophet # + id="Hd1POa6qgWDF" colab_type="code" colab={} # %matplotlib inline import pandas as pd from fbprophet import Prophet import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') # + id="ePlljZB5gYA3" colab_type="code" colab={} df = pd.read_csv('https://raw.githubusercontent.com/danielcaraway/data/master/Zip_Zhvi_SingleFamilyResidence.csv', encoding='latin') # + [markdown] id="oU_I_nytg9IM" colab_type="text" # ## PART ONE: Arkansas Metro Areas # + id="2r-XBhtogYtn" colab_type="code" colab={} hs = df[df['Metro'].str.contains('Hot Springs', na=False)] lr = df[df['Metro'].str.contains('Little Rock', na=False)] f = df[df['Metro'].str.contains('Fayetteville', na=False)] s = df[df['Metro'].str.contains('Searcy', na=False)] # + id="l-1hGOkthUZM" colab_type="code" colab={} def graph_prices_for(df, location_name): df_t = df.loc[:, '1996-04'::].T df_t['avg'] = df_t.mean(numeric_only=True, axis=1) df_t.reset_index(inplace=True) columns = ['index', 'avg'] df = pd.DataFrame(df_t, columns = columns) df = df.rename(index=str, columns={"avg": "y", "index": "ds"}) ax = df.set_index('ds').plot(figsize=(12, 8)) ax.set_ylabel('Home Prices in ' + location_name) ax.set_xlabel('Date') plt.show() # + id="wI-_nxujhnEt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 537} outputId="b480456f-1c2e-48b0-a423-6c8628954318" graph_prices_for(hs, "Hot Springs") # + id="ZfdgI2abiNhn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 537} outputId="3b929efe-b2e2-48fc-f075-87fcbb9aa8bd" graph_prices_for(lr, "Little Rock") # + id="GHweEGKUiQg8" colab_type="code" colab={"base_uri": 
"https://localhost:8080/", "height": 537} outputId="31c4c952-4210-433b-e727-8102201608bb" graph_prices_for(f, "Fayetteville") # + id="0qvizNkqiVLv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 537} outputId="10730fe8-b321-45b1-8383-a28ee08ae749" graph_prices_for(s, "Searcy") # + id="BEZZ1JxOiYYu" colab_type="code" colab={} # + id="Cb4XGq0vmJwV" colab_type="code" colab={} def transform(df, location_name): df_t = df.loc[:, '1996-04'::].T df_t['avg'] = df_t.mean(numeric_only=True, axis=1) df_t.reset_index(inplace=True) columns = ['index', 'avg'] df = pd.DataFrame(df_t, columns = columns) df['place'] = location_name return df # + id="nTpJ51ovmKVy" colab_type="code" colab={} hs_t = transform(hs, 'Hot Springs') lr_t = transform(lr, 'Little Rock') f_t = transform(f, 'Fayetteville') s_t = transform(s, 'Searcy') # + id="p1Ybli6cmuzs" colab_type="code" colab={} big_df = hs_t.append([lr_t,f_t,s_t]) # + id="eid35BmjnWcV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 486} outputId="e1f6abf7-2cbb-4e0b-dba4-36e290c5ca89" import seaborn as sns sns.lineplot(x="index", y="avg", hue="place", data=big_df) # + id="0_h-9TW1oMMF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="bf83cfe7-fcc1-47c5-9bfa-1bab7366d39a" big_df['year'] = big_df.apply(lambda x: x['index'].split('-')[0], axis=1) big_df['month'] = big_df.apply(lambda x: x['index'].split('-')[1], axis=1) big_df['day'] = '01' big_df['Date'] = pd.to_datetime(big_df[['year','month','day']]) # + id="n9hJGfifqXNe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 394} outputId="9f2489a0-2b3c-4184-f794-14a0fa4824d0" by_place = pd.DataFrame(big_df.groupby(['place','year'])['avg'].mean()) by_place.reset_index(inplace=True) sns.tsplot(data=by_place, time="year", condition="place", unit="place", value="avg") # + id="QRCWQNWcsL5i" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 394}
outputId="807e2ee1-5bca-4737-97cc-6008984ec38b" sns.tsplot(data=big_df, time="Date", condition="place", unit="place", value="avg") # + id="qzdOtDGOsmIo" colab_type="code" colab={}
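`sns.tsplot` was deprecated in seaborn 0.9 and later removed, so these cells will not run on a current seaborn; `sns.lineplot` is the replacement. The yearly averaging itself is a plain groupby, sketched here with hypothetical numbers (the `lineplot` call is left as a comment so the sketch stays plotting-free):

```python
import pandas as pd

# Hypothetical miniature of big_df: two places, two years of monthly averages
big_df = pd.DataFrame({
    'place': ['Hot Springs'] * 4 + ['Searcy'] * 4,
    'year':  ['1996', '1996', '1997', '1997'] * 2,
    'avg':   [100.0, 110.0, 120.0, 130.0, 50.0, 60.0, 70.0, 80.0],
})

# Same aggregation as the by_place table above
by_place = big_df.groupby(['place', 'year'], as_index=False)['avg'].mean()
print(by_place)

# Modern equivalent of the removed sns.tsplot call:
# sns.lineplot(data=by_place, x='year', y='avg', hue='place')
```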
assets/ist718lab6/code/og/IST718_LAB6_i2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (base) # language: python # name: base # --- # + import matplotlib.pyplot as plt import pandas as pd import numpy as np import matplotlib import matplotlib.figure from matplotlib.patches import Polygon from mpl_toolkits.basemap import Basemap # Prepare confirmed-case counts by administrative region (province) last_day_corona_virus_of_china = pd.read_json('../data/last_day_corona_virus_of_china.json') current_confirmed_count = last_day_corona_virus_of_china.pivot_table(values='currentConfirmedCount', index='provinceName') data = current_confirmed_count.to_dict()['currentConfirmedCount'] # Plot the provincial confirmed-case distribution lat_min = 0 lat_max = 60 lon_min = 70 lon_max = 140 handles = [ matplotlib.patches.Patch(color='#ffaa85', alpha=1, linewidth=0), matplotlib.patches.Patch(color='#ff7b69', alpha=1, linewidth=0), matplotlib.patches.Patch(color='#bf2121', alpha=1, linewidth=0), matplotlib.patches.Patch(color='#7f1818', alpha=1, linewidth=0), ] labels = ['1-9人', '10-99人', '100-999人', '>1000人'] fig = plt.figure(figsize=(8, 10)) axes = fig.add_axes((0.1, 0.12, 0.8, 0.8)) # rect = l,b,w,h m = Basemap(llcrnrlon=lon_min, urcrnrlon=lon_max, llcrnrlat=lat_min, urcrnrlat=lat_max, resolution='l', ax=axes) m.readshapefile('china-shapefiles-master/china', 'province', drawbounds=True) m.readshapefile('china-shapefiles-master/china_nine_dotted_line', 'p', drawbounds=True) m.drawcoastlines(color='black') # coastlines m.drawcountries(color='black') # country borders m.drawparallels(np.arange(lat_min, lat_max, 10), labels=[1, 0, 0, 0]) # draw parallels (latitude lines) m.drawmeridians(np.arange(lon_min, lon_max, 10), labels=[0, 0, 0, 1]) # draw meridians (longitude lines) for info, shape in zip(m.province_info, m.province): pname = info['OWNER'].strip('\x00') fcname = info['FCNAME'].strip('\x00') if pname != fcname: # skip islands continue for key in data.keys(): if key in pname: if data[key] == 0: color = '#f0f0f0' elif data[key] < 10: color = '#ffaa85' elif
data[key] < 100: color = '#ff7b69' elif data[key] < 1000: color = '#bf2121' else: color = '#7f1818' break poly = Polygon(shape, facecolor=color, edgecolor=color) axes.add_patch(poly) axes.legend(handles, labels, bbox_to_anchor=(0.5, -0.11), loc='lower center', ncol=4) axes.set_title("中国新冠疫情地图") fig.savefig('中国新冠疫情地图.png') plt.show() # -
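The color thresholds above can be factored into a small helper so the fill logic and the legend stay in sync; a sketch assuming the same four bins (1-9, 10-99, 100-999, 1000+ current cases, grey for zero):

```python
def province_color(count):
    """Map a current confirmed count to the fill colors used in the legend."""
    if count == 0:
        return '#f0f0f0'   # no current cases
    elif count < 10:
        return '#ffaa85'   # 1-9
    elif count < 100:
        return '#ff7b69'   # 10-99
    elif count < 1000:
        return '#bf2121'   # 100-999
    return '#7f1818'       # 1000 and above

print([province_color(n) for n in (0, 5, 50, 500, 5000)])
```

Calling `province_color(data[key])` inside the shapefile loop would then replace the chained `if/elif` block.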
data_visualization/3_Coronavirus_map_china.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # LAB 4c: Create Keras Wide and Deep model. # # **Learning Objectives** # # 1. Set CSV Columns, label column, and column defaults # 1. Make dataset of features and label from CSV files # 1. Create input layers for raw features # 1. Create feature columns for inputs # 1. Create wide layer, deep dense hidden layers, and output layer # 1. Create custom evaluation metric # 1. Build wide and deep model tying all of the pieces together # 1. Train and evaluate # # # ## Introduction # In this notebook, we'll be using Keras to create a wide and deep model to predict the weight of a baby before it is born. # # We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create inputs layers for the raw features. Next, we'll set up feature columns for the model inputs and build a wide and deep neural network in Keras. We'll create a custom evaluation metric and build our wide and deep model. Finally, we'll train and evaluate our model. # # Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/4c_keras_wide_and_deep_babyweight.ipynb). 
# + [markdown] colab_type="text" id="hJ7ByvoXzpVI" # ## Load necessary libraries # - import datetime import os import shutil import matplotlib.pyplot as plt import numpy as np import tensorflow as tf print(tf.__version__) # Set your bucket: # + BUCKET = 'qwiklabs-gcp-02-15ad15b6da61'# REPLACE BY YOUR BUCKET os.environ['BUCKET'] = BUCKET # - # ## Verify CSV files exist # # In the seventh lab of this series [1b_prepare_data_babyweight](../solutions/1b_prepare_data_babyweight.ipynb), we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them. # + jupyter={"outputs_hidden": false} TRAIN_DATA_PATH = "gs://{bucket}/babyweight/data/train*.csv".format(bucket=BUCKET) EVAL_DATA_PATH = "gs://{bucket}/babyweight/data/eval*.csv".format(bucket=BUCKET) # - # !gsutil ls $TRAIN_DATA_PATH # !gsutil ls $EVAL_DATA_PATH # ## Create Keras model # ### Lab Task #1: Set CSV Columns, label column, and column defaults. # # Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function. # * `CSV_COLUMNS` are going to be our header names of our columns. Make sure that they are in the same order as in the CSV files # * `LABEL_COLUMN` is the header name of the column that is our label. We will need to know this to pop it from our features dictionary. # * `DEFAULTS` is a list with the same length as `CSV_COLUMNS`, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column. # + # Determine CSV, label, and key columns # TODO: Create list of string column headers, make sure order matches. CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"] # TODO: Add string name for label column LABEL_COLUMN = "weight_pounds" # Set default values for each CSV column as a list of lists. # Treat is_male and plurality as strings. 
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]] # - # ### Lab Task #2: Make dataset of features and label from CSV files. # # Next, we will write an input_fn to read the data. Since we are reading from CSV files we can save ourself from trying to recreate the wheel and can use `tf.data.experimental.make_csv_dataset`. This will create a CSV dataset object. However we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors. # + def features_and_labels(row_data): """Splits features and labels from feature dictionary. Args: row_data: Dictionary of CSV column names and tensor values. Returns: Dictionary of feature tensors and label tensor. """ label = row_data.pop(LABEL_COLUMN) return row_data, label # features, label def load_dataset(pattern, batch_size=1, mode='eval'): """Loads dataset using the tf.data API from CSV files. Args: pattern: str, file pattern to glob into list of files. batch_size: int, the number of examples per batch. mode: 'eval' | 'train' to determine if training or evaluating. Returns: `Dataset` object. """ # TODO: Make a CSV dataset dataset = tf.data.experimental.make_csv_dataset( file_pattern=pattern, batch_size=batch_size, column_names=CSV_COLUMNS, column_defaults=DEFAULTS) # TODO: Map dataset to features and label dataset = dataset.map(map_func=features_and_labels) # features, label # Shuffle and repeat for training if mode == 'train': dataset = dataset.shuffle(buffer_size=1000).repeat() # Take advantage of multi-threading; 1=AUTOTUNE dataset = dataset.prefetch(buffer_size=1) return dataset # - # ### Lab Task #3: Create input layers for raw features. # # We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? 
We can use Keras input layers [(tf.Keras.layers.Input)](https://www.tensorflow.org/api_docs/python/tf/keras/Input) by defining: # * shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known. # * name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided. # * dtype: The data type expected by the input, as a string (float32, float64, int32...) def create_input_layers(): """Creates dictionary of input layers for each feature. Returns: Dictionary of `tf.Keras.layers.Input` layers for each feature. """ # TODO: Create dictionary of tf.keras.layers.Input for each dense feature deep_inputs = { colname: tf.keras.layers.Input( name=colname, shape=(), dtype="float32") for colname in ["mother_age", "gestation_weeks"] } # TODO: Create dictionary of tf.keras.layers.Input for each sparse feature wide_inputs = { colname: tf.keras.layers.Input( name=colname, shape=(), dtype="string") for colname in ["is_male", "plurality"] } inputs = {**wide_inputs, **deep_inputs} return inputs # ### Lab Task #4: Create feature columns for inputs. # # Next, define the feature columns. `mother_age` and `gestation_weeks` should be numeric. The others, `is_male` and `plurality`, should be categorical. Remember, only dense feature columns can be inputs to a DNN. # + def categorical_fc(name, values): """Helper function to wrap categorical feature by indicator column. Args: name: str, name of feature. values: list, list of strings of categorical values. Returns: Categorical and indicator column of categorical feature. 
""" cat_column = tf.feature_column.categorical_column_with_vocabulary_list( key=name, vocabulary_list=values) ind_column = tf.feature_column.indicator_column( categorical_column=cat_column) return cat_column, ind_column def create_feature_columns(nembeds): """Creates wide and deep dictionaries of feature columns from inputs. Args: nembeds: int, number of dimensions to embed categorical column down to. Returns: Wide and deep dictionaries of feature columns. """ # TODO: Create deep feature columns for numeric features deep_fc = { colname: tf.feature_column.numeric_column(key=colname) for colname in ["mother_age", "gestation_weeks"] } # TODO: Create wide feature columns for categorical features wide_fc = {} is_male, wide_fc["is_male"] = categorical_fc( "is_male", ["True", "False", "Unknown"]) plurality, wide_fc["plurality"] = categorical_fc( "plurality", ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"]) # TODO: Bucketize the float fields. This makes them wide age_buckets = tf.feature_column.bucketized_column( source_column=deep_fc["mother_age"], boundaries=np.arange(15, 45, 1).tolist()) wide_fc["age_buckets"] = tf.feature_column.indicator_column( categorical_column=age_buckets) gestation_buckets = tf.feature_column.bucketized_column( source_column=deep_fc["gestation_weeks"], boundaries=np.arange(17, 47, 1).tolist()) wide_fc["gestation_buckets"] = tf.feature_column.indicator_column( categorical_column=gestation_buckets) # TODO: Cross all the wide cols, have to do the crossing before we one-hot crossed = tf.feature_column.crossed_column( keys=[age_buckets, gestation_buckets], hash_bucket_size=1000) # TODO: Embed cross and add to deep feature columns deep_fc["crossed_embeds"] = tf.feature_column.embedding_column( categorical_column=crossed, dimension=nembeds) return wide_fc, deep_fc # - # ### Lab Task #5: Create wide and deep model and output layer. 
# # So we've figured out how to get our inputs ready for machine learning but now we need to connect them to our desired output. Our model architecture is what links the two together. We need to create a wide and deep model now. The wide side will just be a linear regression or dense layer. For the deep side, let's create some hidden dense layers. All of this will end with a single dense output layer. This is regression so make sure the output layer activation is correct and that the shape is right. def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units): """Creates model architecture and returns outputs. Args: wide_inputs: Dense tensor used as inputs to wide side of model. deep_inputs: Dense tensor used as inputs to deep side of model. dnn_hidden_units: List of integers where length is number of hidden layers and ith element is the number of neurons at ith layer. Returns: Dense tensor output from the model. """ # Hidden layers for the deep side layers = [int(x) for x in dnn_hidden_units] deep = deep_inputs # TODO: Create DNN model for the deep side for layerno, numnodes in enumerate(layers): deep = tf.keras.layers.Dense( units=numnodes, activation="relu", name="dnn_{}".format(layerno+1))(deep) deep_out = deep # TODO: Create linear model for the wide side wide_out = tf.keras.layers.Dense( units=10, activation="relu", name="linear")(wide_inputs) # Concatenate the two sides both = tf.keras.layers.concatenate( inputs=[deep_out, wide_out], name="both") # TODO: Create final output layer output = tf.keras.layers.Dense( units=1, activation="linear", name="weight")(both) return output # ### Lab Task #6: Create custom evaluation metric. # # We want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels. 
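Since RMSE is just the square root of mean squared error, the custom metric defined next can be sanity-checked outside TensorFlow. A NumPy sketch of the same formula, with made-up weights in pounds:

```python
import numpy as np

def rmse_np(y_true, y_pred):
    # Same formula as the Keras metric: sqrt(mean((pred - true)^2))
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

# Hypothetical true baby weights vs. predictions
print(rmse_np([7.0, 8.0, 6.0], [7.5, 7.5, 6.0]))
```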
def rmse(y_true, y_pred): """Calculates RMSE evaluation metric. Args: y_true: tensor, true labels. y_pred: tensor, predicted labels. Returns: Tensor with value of RMSE between true and predicted labels. """ # TODO: Calculate RMSE from true and predicted labels return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2)) #pass # ### Lab Task #7: Build wide and deep model tying all of the pieces together. # # Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is NOT a simple feedforward model with no branching, side inputs, etc. so we can't use Keras' Sequential Model API. We're instead going to use Keras' Functional Model API. Here we will build the model using [tf.keras.models.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics. # + def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3): """Builds wide and deep model using Keras Functional API. Returns: `tf.keras.models.Model` object. 
""" # Create input layers inputs = create_input_layers() # Create feature columns wide_fc, deep_fc = create_feature_columns(nembeds) # The constructor for DenseFeatures takes a list of numeric columns # The Functional API in Keras requires: LayerConstructor()(inputs) # TODO: Add wide and deep feature colummns wide_inputs = tf.keras.layers.DenseFeatures( feature_columns=wide_fc.values(), name="wide_inputs")(inputs) #TODO, name="wide_inputs")(inputs) deep_inputs = tf.keras.layers.DenseFeatures( feature_columns=deep_fc.values(), name="deep_inputs")(inputs) #TODO, name="deep_inputs")(inputs) # Get output of model given inputs output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units) # Build model and compile it all together model = tf.keras.models.Model(inputs=inputs, outputs=output) # TODO: Add custom eval metrics to list model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"]) return model print("Here is our wide and deep architecture so far:\n") model = build_wide_deep_model() print(model.summary()) # - # We can visualize the wide and deep network using the Keras plot_model utility. tf.keras.utils.plot_model( model=model, to_file="wd_model.png", show_shapes=False, rankdir="LR") # ## Run and evaluate model # ### Lab Task #8: Train and evaluate. # # We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard. 
# + TRAIN_BATCH_SIZE = 100 NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around NUM_EVALS = 50 # how many times to evaluate # Enough to get a reasonable sample, but not so much that it slows down NUM_EVAL_EXAMPLES = 10000 # TODO: Load training dataset trainds = load_dataset( pattern=TRAIN_DATA_PATH, batch_size=TRAIN_BATCH_SIZE, mode='train') # TODO: Load evaluation dataset evalds = load_dataset( pattern=EVAL_DATA_PATH, batch_size=1000, mode='eval').take(count=NUM_EVAL_EXAMPLES // 1000) steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS) logdir = os.path.join( "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")) tensorboard_callback = tf.keras.callbacks.TensorBoard( log_dir=logdir, histogram_freq=1) # TODO: Fit model on training dataset and evaluate every so often history = model.fit( trainds, validation_data=evalds, epochs=NUM_EVALS, steps_per_epoch=steps_per_epoch, callbacks=[tensorboard_callback]) # - # ### Visualize loss curve # + # Plot nrows = 1 ncols = 2 fig = plt.figure(figsize=(10, 5)) for idx, key in enumerate(["loss", "rmse"]): ax = fig.add_subplot(nrows, ncols, idx+1) plt.plot(history.history[key]) plt.plot(history.history["val_{}".format(key)]) plt.title("model {}".format(key)) plt.ylabel(key) plt.xlabel("epoch") plt.legend(["train", "validation"], loc="upper left"); # - # ### Save the model # + jupyter={"outputs_hidden": false} OUTPUT_DIR = "babyweight_trained_wd" shutil.rmtree(OUTPUT_DIR, ignore_errors=True) EXPORT_PATH = os.path.join( OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S")) tf.saved_model.save( obj=model, export_dir=EXPORT_PATH) # with default serving function print("Exported trained model to {}".format(EXPORT_PATH)) # - # !ls $EXPORT_PATH # ## Lab Summary: # In this lab, we started by defining the CSV column names, label column, and column defaults for our data inputs. 
Then, we constructed a tf.data Dataset of features and the label from the CSV files and created input layers for the raw features. Next, we set up feature columns for the model inputs and built a wide and deep neural network in Keras. We created a custom evaluation metric and built our wide and deep model. Finally, we trained and evaluated our model. # Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
courses/machine_learning/deepdive2/structured/labs/4c_keras_wide_and_deep_babyweight.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from fit.datamodules.tomo_rec import Kanji_TRecFITDM from fit.utils.tomo_utils import get_polar_rfft_coords_2D, get_polar_rfft_coords_sinogram from fit.utils import denormalize, convert2DFT from fit.modules import TRecTransformerModule from matplotlib import pyplot as plt import torch import numpy as np from pytorch_lightning import Trainer, seed_everything from pytorch_lightning.callbacks import ModelCheckpoint import wget from os.path import exists # - seed_everything(22122020) # + dm = Kanji_TRecFITDM(root_dir='./datamodules/data/Kanji', batch_size=8, num_angles=33) # FIT: TRec + FBP vs FIT: TRec with_fbp = True dm.prepare_data() dm.setup() # - angles = dm.gt_ds.get_ray_trafo().geometry.angles det_len = dm.gt_ds.get_ray_trafo().geometry.detector.shape[0] img_shape = dm.gt_shape proj_r, proj_phi, src_flatten = get_polar_rfft_coords_sinogram(angles=angles, det_len=det_len) target_r, target_phi, dst_flatten, order = get_polar_rfft_coords_2D(img_shape=img_shape) n_heads = 8 d_query = 32 model = TRecTransformerModule(d_model=n_heads * d_query, sinogram_coords=(proj_r, proj_phi), target_coords=(target_r, target_phi), src_flatten_coords=src_flatten, dst_flatten_coords=dst_flatten, dst_order=order, angles=angles, img_shape=img_shape, detector_len=det_len, loss='prod', use_fbp=with_fbp, init_bin_factor=1, bin_factor_cd=5, lr=0.0001, weight_decay=0.01, attention_type='linear', n_layers=4, n_heads=n_heads, d_query=d_query, dropout=0.1, attention_dropout=0.1) trainer = Trainer(max_epochs=120, gpus=1, checkpoint_callback=ModelCheckpoint( dirpath=None, save_top_k=1, verbose=False, save_last=True, monitor='Train/avg_val_mse', mode='min', prefix='best_val_loss_' ), deterministic=True) # + # Uncomment the next line if you want to train your own model. 
# trainer.fit(model, datamodule=dm); # + if not exists('./models/trec_kanji/kanji_trec.ckpt'): wget.download('https://cloud.mpi-cbg.de/index.php/s/0ksUkIsWsQfzsXv/download', out='./models/trec_kanji/kanji_trec.ckpt') if not exists('./models/trec_kanji/kanji_trec_fbp.ckpt'): wget.download('https://cloud.mpi-cbg.de/index.php/s/GQWwEYarKse69W1/download', out='./models/trec_kanji/kanji_trec_fbp.ckpt') # - if with_fbp: path = './models/trec_kanji/kanji_trec_fbp.ckpt' else: path = './models/trec_kanji/kanji_trec.ckpt' model = TRecTransformerModule.load_from_checkpoint(path, sinogram_coords=(proj_r, proj_phi), target_coords=(target_r, target_phi), src_flatten_coords=src_flatten, dst_flatten_coords=dst_flatten, dst_order=order, angles=angles, strict=False) test_res = trainer.test(model, datamodule=dm) for x_fc, fbp_fc, y_fc, y_real, (amp_min, amp_max) in dm.test_dataloader(): break model.cpu(); pred_img, pred_img_before_conv = model.get_imgs(x_fc, fbp_fc, y_fc, amp_min, amp_max) # Before the projection we normalized the image, now we undo this for the visualization. 
# After denormalization we set all pixels outside of the projection-area to zero pred_img = denormalize(pred_img, dm.mean, dm.std) * dm.__get_circle__() y_real = denormalize(y_real, dm.mean, dm.std) * dm.__get_circle__() # + dft_fbp = convert2DFT(x=fbp_fc[:,model.dst_flatten_order], amp_min=amp_min, amp_max=amp_max, dst_flatten_order=model.dst_flatten_order, img_shape=model.hparams.img_shape) fbp_img = torch.roll(torch.fft.irfftn(model.mask * dft_fbp[0], s=2 * (model.hparams.img_shape,)), 2 * (model.hparams.img_shape // 2,), (0, 1)) fbp_img = (fbp_img - fbp_img.min())*255/(fbp_img.max() - fbp_img.min()) fbp_img = fbp_img * dm.__get_circle__() # - plt.figure(figsize=(15,5)) plt.subplot(1,3,1) plt.imshow(fbp_img, cmap='gray', vmin=y_real[0].min(), vmax=y_real[0].max()) plt.title('Filtered Backprojection'); plt.subplot(1,3,2) plt.imshow(pred_img[0].detach(), cmap='gray', vmin=y_real[0].min(), vmax=y_real[0].max()) plt.title('Prediction'); plt.subplot(1,3,3) plt.imshow(y_real[0], cmap='gray') plt.title('Ground Truth');
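The rescaling applied to `fbp_img` above is ordinary min-max normalization onto the [0, 255] display range; a minimal NumPy sketch:

```python
import numpy as np

def to_display_range(img):
    # Min-max scale to [0, 255], matching the fbp_img rescaling above
    img = np.asarray(img, dtype=float)
    return (img - img.min()) * 255 / (img.max() - img.min())

img = np.array([[0.2, 0.4], [0.6, 1.0]])
scaled = to_display_range(img)
print(scaled.min(), scaled.max())
```

Note this differs from the `denormalize` call used for `pred_img`, which reverses the dataset's mean/std normalization rather than stretching to a fixed range.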
examples/Tomographic Reconstruction - Kanji Example.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] hideCode=false hidePrompt=false # # Introduction # This Notebook contains my work for analyzing a 100km transect in the Southern Ocean. The goal is to identify Lee waves from the measurements and then estimate the energy fluxes and transport driven by these lee waves. For some of the more standard plots such as transect contours, I also experiment with my Ocean Toolbox package. # # # + hideCode=false hidePrompt=false # Load Data and relevant modules # %matplotlib inline import numpy as np import scipy.signal as sig from scipy import interpolate import matplotlib.pyplot as plt import data_load import gsw import oceans as oc import cmocean import pandas as pd import internal_waves_calculations as iwc import warnings import seaborn as sns import ray_tracing as rt # Suppress warnings (they clutter output, but re-enable them when testing new code) warnings.simplefilter("ignore") # Allow display of pandas data tables pd.options.display.max_columns = 22 # Load Data ladcp, ctd = data_load.load_data() strain = np.genfromtxt('strain.csv', delimiter=',') wl_max=1200 wl_min=500 ctd_bin_size=1500 ladcp_bin_size=1500 nfft = 2048 rho0 = 1025 # + [markdown] hideCode=false hidePrompt=false # # Extracting Wave Components # This section uses spectral analysis to estimate kinetic and potential energies of lee waves. Using a minimum and maximum vertical wavelength as integration limits, I estimate the energetics of target wavelengths. These limits were determined qualitatively by estimating the vertical size of features in profiles (how big are the wiggles). # # ## Internal Energy # The wave components are estimated through calculation of the internal wave energy components (Kinetic and Potential).
To do this, each profile is separated into mean and wave-induced perturbations $(X = \overline{X} + X')$. In order to do this a sliding vertical polynomial is fit to each profile and the resultant profile is subtracted out, leaving the perturbation profile. # # ### Kinetic Energy # The resultant velocity perturbation profiles (u and v) are binned into 1024 meter, half overlapping bins. The Power Spectral Density is calculated along each bin and integrated between a target wavelength band. This band is chosen by examining the perturbation profiles and identifying coherent wave features. The final values are input into the kinetic energy equation. $\langle \rangle$ denotes integrated power spectral density. # # $$ KE = \frac{1}{2}\big[ \langle u'^{2} \rangle + \langle v'^{2} \rangle \big] $$ # # ### Potential Energy # A similar process is followed for estimating potential energy. However, two methods were implemented here, with varying results. The first was to estimate the isopycnal displacement $\eta$ from density perturbations as follows: # # $$\eta = \frac{\rho - \rho_{ref}}{\frac{\Delta \rho_{ref}}{\Delta z}}$$ # # $\rho_{ref}$ is estimated as a transect mean density profile. However, this showed erratic and unrealistic values most likely stemming from how $\frac{d\rho_{ref}}{dz}$ was estimated. The second method utilized the relationship of strain $\xi$ as the vertical gradient of $\eta$. Strain was calculated as # # $$ \xi = \frac{N^{2} - {N_{ref}}^2}{{N_{ref}}^2}$$ # # where $N^2$ is estimated using the adiabatic leveling method derived by Bray and Fofonoff (1981). $N_{ref}^2$ is estimated as the mean $N^2$ profile of the transect.
$\xi$ is multiplied by the vertical grid spacing of the data, an estimate of $\Delta z$, to obtain $\eta$: # # $$ \eta = \xi * \Delta z $$ # # Once $\eta$ is obtained, the same processes for calculating Power Spectral Density used in kinetic energy calculations are used here with the potential energy equation: # # $$ PE = \frac{1}{2}N^2\langle \eta'^{2} \rangle $$ # # The energy components are combined for the total internal energy $(\frac{J}{m^{3}})$: # # $$ E = \rho(KE + PE) $$ # # Using the two energy components, the internal wave frequencies are estimated by: # # $$ \omega_{0} = f\sqrt{\frac{KE + PE}{KE - PE}}$$ # # Several other derivations are used and compared with similar results. An issue with this method is that when kinetic and potential energies are similar, error in energy density calculations may cause kinetic energy to be slightly less than potential, resulting in an invalid (NaN) value from the square root operation. # We now have the intrinsic frequency as well as the vertical wavenumber $m$, which is estimated as the mean of the integration limits.
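The frequency relation can be checked numerically. A sketch with a hypothetical Coriolis parameter and energy densities, which also reproduces the failure mode noted above (PE slightly exceeding KE yields NaN):

```python
import numpy as np

def intrinsic_frequency(f, KE, PE):
    # omega_0 = f * sqrt((KE + PE) / (KE - PE)); undefined when KE <= PE
    with np.errstate(invalid='ignore', divide='ignore'):
        return f * np.sqrt((KE + PE) / (KE - PE))

f = 1.2e-4  # hypothetical Coriolis parameter, rad/s
print(intrinsic_frequency(f, KE=2.0, PE=1.0))    # well-defined, omega > f
print(intrinsic_frequency(f, KE=1.0, PE=1.001))  # NaN: the error case in the text
```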
From here, $k_{h}$, the horizontal wave number is calculated from: # # $$ k_{h} = m\sqrt{\frac{f^2 - \omega^2}{\omega^2 - N^2}}$$ # # # + # Get Wave parameters using the methods above PE, KE, omega, m, kh, lambdaH, Etotal,\ khi, Uprime, Vprime, b_prime, ctd_bins,\ ladcp_bins, KE_grid, PE_grid, ke_peaks,\ pe_peaks, dist, depths, KE_psd, eta_psd, N2, N2mean = iwc.wave_components_with_strain(ctd,\ ladcp, strain, wl_min=wl_min, wl_max=wl_max, plots=False) # + [markdown] hideCode=false hidePrompt=false # # Plot and inspect some of the data # + hideCode=false hidePrompt=false m_plot = np.array([(1)/wl_max, (1)/wl_max, (1)/wl_min, (1)/wl_min]) plt.figure(figsize=[12,6]) plt.loglog(KE_grid, KE_psd.T, linewidth=.6, c='b', alpha=.1) plt.loglog(KE_grid, np.nanmean(KE_psd, axis=0).T, lw=1.5, c='k') ylims = plt.gca().get_ylim() ylim1 = np.array([ylims[0], ylims[1]]) plt.plot(m_plot[2:], ylim1, lw=1.5, c='k', alpha=.9, linestyle='dotted') plt.plot(m_plot[:2], ylim1, lw=1.5, c='k', alpha=.9, linestyle='dotted') plt.ylim(ylims) plt.ylabel('Kinetic Energy Density ($J/m^{3}$)') plt.xlabel('Vertical Wavenumber') plt.gca().grid(True, which="both", color='k', linestyle='dotted', linewidth=.2) plt.loglog(PE_grid, .5*np.nanmean(N2)*eta_psd.T, lw=.6, c='r', alpha=.1) plt.loglog(KE_grid, .5*np.nanmean(N2)*np.nanmean(eta_psd, axis=0).T, lw=1.5, c='k') plt.plot(m_plot[2:], ylim1, lw=1.5, c='k', alpha=.9, linestyle='dotted') plt.plot(m_plot[:2], ylim1, lw=1.5, c='k', alpha=.9, linestyle='dotted') plt.ylim(ylims) plt.gca().grid(True, which="both", color='k', linestyle='dotted', linewidth=.2) plt.ylabel('Energy Density ($J/m^{3}$)') plt.xlabel('Inverse wavelength :$1/\lambda$') plt.xlim(10**(-3.5), 10**(-1.1)) plt.title('Kinetic and Potential Energy Density') # + [markdown] hideCode=false hidePrompt=false # ## Decompose Horizontal Wave Vector # In order to properly run a ray tracing model, the horizontal wavenumber $k_h$ must be decomposed into its two components, $k$ and $l$. 
The horizontal azimuth, $\theta$, is the angle between the $k_h$ vector and the x-axis, found using the relationships: # $$ \tan(2\theta) = 2\mathrm{Re} \bigg [\frac{u'^{*}v'}{u'u'^{*} - v'v'^{*}} \bigg ]$$ # # $$ k = k_{h}\cos(\theta) $$ # $$ l = k_{h}\sin(\theta) $$ # # where $*$ represents the complex conjugate and $u'$ and $v'$ represent the Fourier transforms of the velocity anomalies. # Fourier transforms are done along the same bins used in the energy calculations. (I am not fully confident this step is correct.) # # # + hideCode=false hidePrompt=false # Horizontal wave vector decomposition theta = [] dz = 8 for i in ladcp_bins: theta.append(iwc.horizontal_azimuth(Uprime[i,:], Vprime[i,:], dz,\ wl_min=wl_min, wl_max=wl_max, nfft=1024)) theta = np.vstack(theta) k = kh*np.cos(theta) l = kh*np.sin(theta) # - # ## Inspect wavenumbers in tables - $k_h$ display_table = pd.DataFrame(kh, index=np.squeeze(depths), columns=np.arange(1,22)) cmap = sns.diverging_palette(250, 5, as_cmap=True) display_table.style.background_gradient(cmap=cmap, axis=1)\ .set_properties(**{'max-width': '300px', 'font-size': '12pt'})\ .set_caption("Horizontal Wavenumber")\ .set_precision(3) # + [markdown] hideCode=false hidePrompt=false # ## Inspect wavenumbers in tables - $k$ # - display_table = pd.DataFrame(k, index=np.squeeze(depths), columns=np.arange(1,22)) display_table.style.background_gradient(cmap=cmap, axis=1)\ .set_properties(**{'max-width': '300px', 'font-size': '12pt'})\ .set_caption("Horizontal Wavenumber $k$")\ .set_precision(3) # ## Inspect wavenumbers in tables - $l$ # + hideCode=false hidePrompt=false display_table = pd.DataFrame(l, index=np.squeeze(depths), columns=np.arange(1,22)) display_table.style.background_gradient(cmap=cmap, axis=1)\ .set_properties(**{'max-width': '300px', 'font-size': '12pt'})\ .set_caption("Horizontal Wavenumber $l$")\ .set_precision(3) # - # ## Inspect Frequency $\omega_0$ display_table = pd.DataFrame(omega, index=np.squeeze(depths), columns=np.arange(1,22))
display_table.style.background_gradient(cmap=cmap, axis=1)\ .set_properties(**{'max-width': '300px', 'font-size': '12pt'})\ .set_caption("Frequency $\omega_0$")\ .set_precision(3) # + [markdown] hideCode=false hidePrompt=false # # ## Ray Tracing # In order to assess whether the observed waves are lee waves, and to study their propagation through the region, a ray tracing model is utilized. Using the ray equations following Olbers (1981), this model solves the equations backwards in time to locate the origin of the wave. This model also allows for testing wave propagation in a range of stratification and shear conditions. # # ### Using the wave model # The wave model generates a wave with a set of given parameters: $k, l, m, \omega_0, z_0$ and mean stratification and velocity profiles. The mean profiles are transect-wide means. Using the ray_tracing module and a set of given parameters, a "wave object" is generated. Wave objects have the ray tracing model built into them so they can be called with a desired duration (in hours), time step (in seconds), and status update intervals. A bottom depth can be set which tells the model to end the run if the wave ray has reached this maximum depth. A run report is generated with the total distance (in x, y, and z) that has been traveled, the run duration, and the final vertical wavenumber. The first set of model experiments assumes a steady state, so that velocity and buoyancy frequency vary only in the vertical. It is therefore assumed that $k$ and $l$ do not vary. # + hideCode=false hidePrompt=false # Generate a wave l1 = 0.00012 k1 = 0.00012 m1 = -(2*np.pi)/1000 z0 = 1000 w0 = -0.000125 wave1 = rt.wave(k=k1, l=l1, m=m1, w0=w0, z0=z0) # check that the properties are loaded correctly by using the properties attribute wave1.properties() # - # ### Run model # The first run will be for 24 hours at 10-second timesteps. The model results are stored as attributes on the wave object in numpy arrays.
After running the model, the wave object's plotting attribute is used to observe the ray's propagation in the x and z directions, as well as how the vertical wavenumber changes as it moves through the velocity field. # duration = 24 tstep = 10 status = 6 # intervals to give run status wave1.back3d(duration=duration, tstep=tstep, status=status, print_run_report=True) wave1.x_m_plot(cmap='Reds', line_colorbar=True) plt.title('Test Run') # ### Model Experiments # The primary factor that seems to affect the direction of vertical propagation is the frequency. This makes sense given the vertical group speed equation. To test this, several sets of model runs are executed, each varying a single component while holding the others constant. Lee waves near the inertial frequency are expected to propagate vertically (I think), so why is it that most of the observed frequencies are near the inertial frequency? # # #### Frequency Variations # This set of experiments will vary the intrinsic frequency of a wave while holding the other parameters constant.
# # + # Frequency Variations f = -0.00011686983432556936 N = np.sqrt(np.nanmean(N2)) omegas = np.linspace(f, -N, num=50) waves = [rt.wave(k=k1, l=l1, m=m1, w0=omega1, z0=z0) for omega1 in omegas] duration = 48 tstep = 10 status = 6 seafloor = 4000 for wave in waves: wave.back3d(duration=duration, tstep=tstep, status=status, seafloor=seafloor, updates=False, print_run_report=False) # + # plot frequency variation wave_lines = [] plt.figure(figsize=[10,8]) for wave in waves: wave_lines.append(oc.colorline(wave.x_ray.flatten(), wave.z_ray.flatten(), wave.m_ray.flatten(), cmap=cmocean.cm.thermal, norm=None)) # Plot Rays plt.xlim(0,30) plt.ylim(500,4000) plt.gca().invert_yaxis() cb1 = plt.colorbar(wave_lines[0]) cb1.set_label('Vertical wavenumber ($m^{-1}$)') plt.title('Frequency Tests $\omega_0$ = $f$ to $N$ \n Run Duration: {} Hours'.format(duration)) plt.xlabel('Distance (km)') plt.ylabel('Depth (m)') # - # #### Constant velocity profile (non-shear-dominated) waves1 = [rt.wave(k=k1, l=l1, m=m1, w0=omega1, z0=z0) for omega1 in omegas] meanU = np.nanmean(waves[0].U) meandU = np.nanmean(waves[0].dudz) meanV = np.nanmean(waves[0].V) meandV = np.nanmean(waves[0].dvdz) for wave in waves1: wave.U = meanU*(wave.U/wave.U) wave.dudz = meandU*(wave.dudz/wave.dudz) wave.V = meanV*(wave.V/wave.V) wave.dvdz = meandV*(wave.dvdz/wave.dvdz) wave.back3d(duration=duration, tstep=tstep, status=status, seafloor=seafloor, print_run_report=False, updates=False) # + # Plot frequency variation with constant U wave_lines = [] plt.figure(figsize=[10,8]) for wave in waves1: wave_lines.append(oc.colorline(wave.x_ray.flatten(), wave.z_ray.flatten(), wave.m_ray.flatten(), cmap=cmocean.cm.thermal, norm=None)) # Plot Rays plt.xlim(0,30) plt.ylim(500,4000) plt.gca().invert_yaxis() cb1 = plt.colorbar(wave_lines[0]) cb1.set_label('Vertical wavenumber ($m^{-1}$)') plt.title('Frequency Tests $\omega_0$ = $f$ to $N$ \n Run Duration: {} Hours'.format(duration)) plt.xlabel('Distance (km)') plt.ylabel('Depth (m)') # + # Frequency Variation with constant N2 waves2 = [rt.wave(k=k1, l=l1, m=m1, w0=omega1, z0=z0) for omega1 in omegas] meanN2 = np.nanmean(waves[0].N2) for wave in waves2: wave.N2 = meanN2*(wave.N2/wave.N2) wave.back3d(duration=duration, tstep=tstep, status=status, seafloor=seafloor, print_run_report=False, updates=False) # + # Plot with constant buoyancy frequency wave_lines = [] plt.figure(figsize=[10,8]) for wave in waves2: wave_lines.append(oc.colorline(wave.x_ray.flatten(), wave.z_ray.flatten(), wave.m_ray.flatten(), cmap=cmocean.cm.thermal, norm=None)) # Plot Rays plt.xlim(0,30) plt.ylim(500,4000) plt.gca().invert_yaxis() cb1 = plt.colorbar(wave_lines[0]) cb1.set_label('Vertical wavenumber ($m^{-1}$)') plt.title('Frequency Tests $\omega_0$ = $f$ to $N$ \n Run Duration: {} Hours'.format(duration)) plt.xlabel('Distance (km)') plt.ylabel('Depth (m)') # -
Lee_waves_v2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pathlib import argparse import glob import sys import numpy as np import cv2 import matplotlib.pyplot as plt # - def separate_direct_global(*, max_min_output, extension, image_dir, whiteout, beta, mcolor): # Variables black_bias = False if not image_dir.endswith('/'): image_dir = image_dir + '/' if not extension.startswith('.'): extension = '.' + extension # Average the images for each pattern dirs = [str(f) for f in pathlib.Path(image_dir).iterdir() if f.is_dir()] for d in dirs: search_sequence = d + "/*" + extension files = glob.glob(search_sequence) img = np.zeros_like(cv2.imread(files[0], -1)).astype(float) for f in files: img = img + cv2.imread(f, -1) img = img / len(files) filename = d.split("/")[-1] cv2.imwrite(image_dir + filename + extension, img) # Get input filenames. search_sequence = image_dir + "*" + extension black_file = image_dir + "black" + extension files = glob.glob(search_sequence) if black_file in files: black_bias = True files.remove(black_file) for excp in ['white', 'direct', 'global', 'max', 'min']: filename = image_dir + excp + extension if filename in files: files.remove(filename) # If no files exist, exit the program. if len(files) == 0: print("No images...") sys.exit() # Load images img = cv2.imread(files[0], -1) max_img = img min_img = img for filename in files: img = cv2.imread(filename, -1) max_img = np.maximum(max_img, img) min_img = np.minimum(min_img, img) img_is_16bit = (max_img.itemsize != 1) # If all images are saturated, the direct image should be white?
if whiteout: if img_is_16bit: min_img[min_img==65535]=0 else: min_img[min_img==255]=0 # Separate into direct and global components if black_bias: # subtract black bias with underflow prevention black_img = cv2.imread(black_file, -1) max_img = np.maximum(max_img - black_img, 0) min_img = np.maximum(min_img - black_img, 0) direct_img = np.minimum((max_img - min_img) / (1 - beta), mcolor) # Prevent overflow global_img = np.minimum(2.0 * (min_img - beta * max_img) / (1 - beta**2), mcolor) if img_is_16bit: global_img = np.uint16(global_img) else: global_img = np.uint8(global_img) # Save images cv2.imwrite(image_dir + "direct" + extension, direct_img) cv2.imwrite(image_dir + "global" + extension, global_img) if max_min_output: cv2.imwrite(image_dir + 'max' + extension, max_img) cv2.imwrite(image_dir + 'min' + extension, min_img) if __name__ == "__main__": parser = argparse.ArgumentParser(description='Separate into direct and global components.') parser.add_argument('-v', dest='max_min_output', default=False, action='store_true', help="Outputs max and min images.") parser.add_argument('-e', '--extension', default=".png", help="File extension of all images. default is .png") parser.add_argument('-d', '--dir', default="./", help="Source images' directory. default is the current directory.") parser.add_argument('-w', '--whiteout', default=False, action='store_true', help="Processing mode of saturated pixels. If this flag is specified, the direct component becomes white, otherwise it becomes black.") parser.add_argument('-b', '--beta', type=float, default=0, help="Leakage to the non-illuminated fraction") parser.add_argument('-m', '--mcolor', type=int, nargs=3, default=(255, 255, 255), help="max color") #args = parser.parse_args() #separate_direct_global(max_min_output=args.max_min_output, # extension=args.extension, image_dir=args.dir, # whiteout=args.whiteout, beta=args.beta, mcolor=args.mcolor) separate_direct_global(max_min_output=False, extension=".png", image_dir="images/e45450/green/pat3shift2", whiteout=True, beta=0.3, mcolor=(255, 255, 255))
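# The separation formulas can be sanity-checked with a synthetic round trip. Assuming the usual direct/global image-formation model for a high-frequency pattern with about half the pixels lit, the max/min composites are built from known components and the inverse formulas used in the script recover them exactly; this is an illustrative sketch with made-up numbers, not part of the tool.

```python
import numpy as np

beta = 0.3
d_true = np.array([100.0, 40.0])   # direct component (two sample pixels)
g_true = np.array([60.0, 120.0])   # global component

# forward model: max = d + (1 + beta)/2 * g,  min = beta*d + (1 + beta)/2 * g
max_img = d_true + 0.5 * (1 + beta) * g_true
min_img = beta * d_true + 0.5 * (1 + beta) * g_true

# the script's inverse formulas recover the two components
direct_img = (max_img - min_img) / (1 - beta)
global_img = 2 * (min_img - beta * max_img) / (1 - beta**2)
```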
.ipynb_checkpoints/separate_DG-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- x=10 # x is the identifier and x=10 is a variable assignment print(x) type(x) x=10 y=20.5 z="hello" type(x) type(y) type(z) x=10 y=30 x+y x="India" y="Kerala" x+","+y x="10" y="20" x+y x=int(input("Enter the first value")) y=int(input("Enter the second value")) z=x+y print(z) x=int(input("Enter the first value")) y=int(input("Enter the second value")) z=input("Enter the operation") if z=="Add": d=x+y print(d) if z=="Sub": d=x-y print(d) if z=="Multiply": d=x*y print(d) if z=="Divide": d=x/y print(d) marks=int(input("Enter your Mark")) if marks>50: print("you have passed") if marks<50: print("Better luck Next Time") sal=int(input("Enter your Salary")) if sal>250000: print("you are taxable") if sal<250000: print("you are not taxable") sal=int(input("Enter your Salary")) if sal>250000: x=(sal-250000)*10/100 print(x,"you are taxable") if sal<250000: print("you are not taxable") sal=int(input("Enter your Salary")) if sal>250000 and sal<500000: x=(sal-250000)*10/100 print(x,"you are taxable") if sal>500000 and sal<1000000: x=(sal-250000)*20/100 print(x,"you are taxable") if sal<250000: print("you are not taxable") x=0 while x<100: print(x) x=x+1 # + x=0 paper=0 while x<100: paper=x+paper x=x+1 print(paper) # - x=0 paper=0 while x<100: paper=x+paper x=x+1 print(paper) x=0 while x<10: print(x) if x==5: break x=x+1 x=[5,6,89,"India",25] # list x x[0] x[-1] x[-2] x[-3] x[0:2] x[2:4] x[3]='usa' x x[-1]=35 x x.append(9) x x=(5,6,89,'India',25,9) # tuple: can't be edited, but can be indexed x[0] x={5,6,89,3,4,1,5} # set: duplicates are removed and order is not kept x x=["India","usa","Finland","sri lanka","UAE"] for a in x: print(a) # + ## x=[P=10000, 5yrs, 9%]: find the simple interest over 5yrs # - x=[10000,5,9] d=x[0]*x[1]*x[2]/100 print(d) for x in "india": print(x) for x in "india": if x=="d": break print(x) for x in "Aaron":
if x=="o": break print(x) for x in "Aaron": if x=="r": continue print(x) x={1:"India",2:"Usa",3:"Sri lanka"} ## 1, 2 and 3 are the keys; the whole structure is called a dictionary x[1] x[3] import math as aaron aaron.pi aaron.cos(aaron.pi/2) import matplotlib.pyplot as plt x=[1,2,3,4,5,6,7,8,9] y=[1,5,7,2,3,4,7,5,3] plt.plot(x,y) plt.scatter(x,y) plt.plot(x,y) plt.xlabel("Days") plt.ylabel("Sales") plt.plot(x,y,marker='*',ms=20,mec="r",mfc="g") plt.xlabel("Days") plt.ylabel("Sales") plt.title('Sales vs days') def add(x,y): ## this is called a function print(x+y) add(2,3) def add(x,y): z=x+y return z add(30,20) def add(x,y): z=x+y return z def sub(x,y): z=x-y return z def mul(x,y): z=x*y return z def div(x,y): z=x/y return z x=int(input("Enter the first value")) y=int(input("Enter the second value")) z=input("Enter the operation") if z=="add": print(add(x,y)) if z=="sub": print(sub(x,y)) if z=="mul": print(mul(x,y)) if z=="div": print(div(x,y))
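# A more compact version of the calculator above keeps the four functions in a dictionary and looks the operation up by name, so adding a new operation needs no extra `if` branch:

```python
def add(x, y): return x + y
def sub(x, y): return x - y
def mul(x, y): return x * y
def div(x, y): return x / y

# map operation names to functions
ops = {"add": add, "sub": sub, "mul": mul, "div": div}

def calculate(x, y, op):
    # look the operation up instead of chaining if statements
    if op not in ops:
        raise ValueError("unknown operation: " + op)
    return ops[op](x, y)

print(calculate(30, 20, "add"))   # 50
print(calculate(30, 20, "div"))   # 1.5
```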
Revision class 1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Conditional Probability Solution # First we'll modify the code to have some fixed purchase probability regardless of age, say 40%: # + from numpy import random random.seed(0) totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} totalPurchases = 0 for _ in range(100000): ageDecade = random.choice([20, 30, 40, 50, 60, 70]) purchaseProbability = 0.4 totals[ageDecade] += 1 if (random.random() < purchaseProbability): totalPurchases += 1 purchases[ageDecade] += 1 # - # Next we will compute P(E|F) for some age group; let's pick 30-year-olds again: PEF = float(purchases[30]) / float(totals[30]) print("P(purchase | 30s): " + str(PEF)) # Now we'll compute P(E) PE = float(totalPurchases) / 100000.0 print("P(Purchase): " + str(PE)) # P(E|F) is pretty darn close to P(E), so we can say that E and F are likely independent variables.
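# The eyeball comparison can also be made programmatic: re-run the simulation and assert that P(E|F) and P(E) agree within sampling error. The 0.02 tolerance below is an arbitrary illustrative choice, loose relative to the sampling noise at 100,000 trials.

```python
from numpy import random

random.seed(0)
ages = [20, 30, 40, 50, 60, 70]
totals = {a: 0 for a in ages}
purchases = {a: 0 for a in ages}
total_purchases = 0
n = 100000
for _ in range(n):
    age = random.choice(ages)
    totals[age] += 1
    if random.random() < 0.4:       # purchase probability independent of age
        total_purchases += 1
        purchases[age] += 1

p_ef = purchases[30] / totals[30]   # P(purchase | 30s)
p_e = total_purchases / n           # P(purchase)
assert abs(p_ef - p_e) < 0.02       # close together => consistent with independence
```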
ML Course/ConditionalProbabilitySolution.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 0.6.0 # language: julia # name: julia-0.6 # --- using Submodular # ### Set variables and set functions # A modular function # $$ # F(\alpha, A) = \sum\limits_{i \in A}\alpha_{i} # $$ # is easy to implement. n = 4 V = SetVariable(n) α = [1, 2, 3, 4] mod_func = modular(α, V) evaluate(mod_func, [1, 3]) get_elements(V) # ### Cardinality-based functions # A cardinality-based submodular function can be formed by applying any concave function to the cardinality of a set. # # For example, the following cardinality-based function is called the permutation function. (We will see why below.) # $$ # p(z) = -0.5z^2 + nz + 0.5z \\ # \text{perm\_func}(A) = p(|A|) # $$ p(z) = -0.5*z^2 + n*z + 0.5 * z perm_func = compose(p, card(V)) # ### Associated polyhedra # Many polyhedra are associated with any submodular function. For example, the submodular polyhedron is defined as # $$ # \text{SubmodPoly}(F) = \{x \in \mathbb{R}^{|V|},\ \forall A \subseteq V,\ x(A) \leq F(A)\} # $$ # We can form the submodular polyhedron of the permutation function as follows: P = SubmodPoly(perm_func) # The vertices of the submodular polytope associated with the permutation function are the set of permutation vectors. println("[1, 2, 5, 3] is in P: $([1, 2, 5, 3] in P)") println("[1, 2, 3, 3] is in P: $([1, 2, 3, 3] in P)") # ### Lovasz extensions # Define the Lovasz extension $f: \mathbb{R}^n \to \mathbb{R}$ of a submodular function $F$ by # $$ # f(x) = \text{lovasz}(F)(x) = \sum\limits_{i = 1}^{n}x_{j_i}(F(\{j_{1}, j_{2}, \dots, j_{i}\}) - F(\{j_{1}, j_{2}, \dots, j_{i-1}\})), # $$ # where the permutation $(j_1, \ldots, j_n)$ is defined such that # $$ # x_{j_{1}} \geq x_{j_{2}} \geq \dots \geq x_{j_{n}}.
# $$ x = Variable(n) f = lovasz(perm_func, x) evaluate(f, [3.5, 2.5, 1.5, 0.5]) # ### Writing down and solving problems centre = n * rand(n) g = norm(x - centre)^2 objective = g + f prob = SCOPEminimize(objective) solve!(prob)
examples/basic_usage.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Here we consider an example where $r_c^{\alpha\beta}$ is time dependent. # %%capture ## compile PyRoss for this notebook import os owd = os.getcwd() os.chdir('../../') # %run setup.py install os.chdir(owd) # + # %matplotlib inline import numpy as np import pyross import matplotlib.pyplot as plt from scipy.io import loadmat np.set_printoptions(precision=2) plt.rcParams.update({'font.size': 26}) # get population in 4 age-groups: 0-20,20-40,40-60,60-80 M0=16; Ni0 = pyross.utils.getPopulation("India", M0) M=4; Ni=np.zeros(M) for i in range(M): Ni[i] = np.sum(Ni0[i*4:(i+1)*4]) N = np.sum(Ni) # get contact matrix for M=4 CH0, CW0, CS0, CO0 = pyross.contactMatrix.India() CH, CW, CS, CO = pyross.utils.get_summed_CM(CH0, CW0, CS0, CO0, M, M0, Ni, Ni0) def get_data(contactMatrix, x0): M = 8 beta = 0.028 # probability of infection on contact gIa = 1./14 # removal rate of asymptomatic infectives gE = 1/4.72 # removal rate of exposeds gIs = 1./14 # removal rate of symptomatic infectives alpha = 0. 
# asymptomatic fraction fsa = 1 # Fraction by which symptomatic individuals do not self isolate parameters = {'alpha':alpha,'beta':beta, 'gIa':gIa,'gIs':gIs,'gE':gE,'fsa':fsa} model = pyross.deterministic.SEIR(parameters, M, Ni1) # start simulation Tf, Nf =300,300; data = model.simulator(x0, contactMatrix, Tf, Nf) return model.Is(data) # + # get new population for two kind of spreaders rN=0.2; brN=1-rN rC=0; M=8 Ni1 = np.zeros(M); Ni1[0:4] = rN*Ni; Ni1[4:8] = brN*Ni; # initial conditions Is_0 = np.zeros((M)); Is_0[0:4]=2; E_0 = np.zeros((M)); E_0[0:4]=4; x0 = np.concatenate(( Ni1-(Is_0 + E_0), E_0, Is_0*0, Is_0)) def contactMatrix(t): CMS = np.zeros((M, M)) rC = 0#np.exp(-0.4*(t-124)**2) CMS[0:4,0:4] = (CH + CW + CS + (1-rC/rN)*CO) CMS[4:8,0:4] = (CO)*rC/(rN) CMS[0:4,4:8] = (CO)*rC/(brN) CMS[4:8,4:8] = (CH + CW + CS + (1-rC/rN)*CO) return CMS Is1 = get_data(contactMatrix, x0) def contactMatrix(t): CMS = np.zeros((M, M)) rC = 0.1*np.exp(-(t-124)**2) CMS[0:4,0:4] = (CH + CW + CS + (1-rC/rN)*CO) CMS[4:8,0:4] = (CO)*rC/(rN) CMS[0:4,4:8] = (CO)*rC/(brN) CMS[4:8,4:8] = (CH + CW + CS + (1-rC/rN)*CO) return CMS Is2 = get_data(contactMatrix, x0) # + fig = plt.figure(num=None, figsize=(28, 8), dpi=80, facecolor='w', edgecolor='k') plt.plot(np.sum(Is1, axis=1)/N, '-', lw=4, color='#348ABD', label='Case 1', alpha=1); plt.plot(np.sum(Is2, axis=1)/N, '--', lw=4, color='#A60628', label='Case 2', alpha=0.8); plt.plot(0.1*np.exp(-(np.arange(300)-124)**2), '--', color='gray', label='event'); plt.legend();
examples/contactMatrix/ex11b-timeDependenSuperSpreaderEvent.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="35%" align="right" border="0"><br><br><br> # # Listed Volatility and Variance Derivatives # # **Wiley Finance (2017)** # # Dr. <NAME> | The Python Quants GmbH # # http://tpq.io | [@dyjh](http://twitter.com/dyjh) | http://books.tpq.io # # <img src="http://hilpisch.com/../images/lvvd_cover.png" alt="Derivatives Analytics with Python" width="30%" align="left" border="0"> # # Advanced Modeling of the VSTOXX Index # ## Introduction # This chapter is a bit different in style compared to the other chapters about the VSTOXX volatility index and related derivatives. It introduces another parsimonious model, called the square-root jump diffusion (SRJD), to model the VSTOXX volatility index. The model, which is essentially an extension of the Gruenbichler and Longstaff (1996) model as analyzed in the previous chapter, is capable of reproducing prices of European options written on the VSTOXX reasonably well. # # Two major enhancements characterize the SRJD model: # # * **term structure**: it allows one to capture the term structure as observed in the prices of futures on the VSTOXX index # * **jump component**: including a jump component allows one to better replicate option prices in the short term # # Adding these two components makes a market-consistent calibration of the model to a comprehensive set of European options on the VSTOXX index possible. For similar analyses and modeling approaches including jumps for the volatility index refer, for example, to Psychoyios (2005), Sepp (2008) or Psychoyios et al. (2010).
# ## Market Quotes for Call Options # Before we introduce the model, let us set the stage by revisiting market quotes for European call options written on the VSTOXX volatility index. First, we import the data. import numpy as np import pandas as pd np.set_printoptions(suppress=True) path = './data/' h5 = pd.HDFStore(path + 'vstoxx_data_31032014.h5', 'r') # We have both options data and futures data stored in this file with quotes from 31. March 2014. # + jupyter={"outputs_hidden": false} h5 # - # Option market quotes are what we are concerned with for the moment. # + jupyter={"outputs_hidden": false} option_quotes = h5['options_data'] option_quotes['DATE'] = pd.to_datetime(option_quotes['DATE']) option_quotes['MATURITY'] = pd.to_datetime(option_quotes['MATURITY']) option_quotes.info() # + jupyter={"outputs_hidden": false} option_quotes.head() # - # At any given point in time, there are options on the VSTOXX available for eight maturities. # + jupyter={"outputs_hidden": false} mats = sorted(set(option_quotes['MATURITY'])) mats # - # The spot level of the VSTOXX index on 31. March 2014 was 17.6639. v0 = 17.6639 # In what follows, we only want to plot call option market quotes that are neither too far in-the-money nor too far out-of-the-money. # + jupyter={"outputs_hidden": false} tol = 0.4 to_plot = option_quotes[(option_quotes['STRIKE'] > (1 - tol) * v0) & (option_quotes['STRIKE'] < (1 + tol) * v0)] # - # The following figure shows the VSTOXX European call option quotes which fulfill these requirements. The goal of this chapter is to replicate "all these option quotes" as well as possible.
# + jupyter={"outputs_hidden": false} from pylab import mpl, plt plt.style.use('seaborn') mpl.rcParams['font.family'] = 'serif' markers = ['.', 'o', '^', 'v', 'x', 'D', 'd', '>', '<'] plt.figure(figsize=(10, 6)); for i, mat in enumerate(mats): strikes = to_plot[(to_plot['MATURITY'] == mat)]['STRIKE'] prices = to_plot[(to_plot['MATURITY'] == mat)]['PRICE'] plt.plot(strikes, prices, 'b%s' % markers[i], label=str(mat)[:10]) plt.legend(); plt.xlabel('strike'); plt.ylabel('option quote'); # - # <p style="font-family: monospace;">VSTOXX European call option quotes on 31. March 2014. # ## The SRJD Model # Given is a filtered probability space $\{\Omega,\mathcal{F},\mathbb{F},P\}$ representing uncertainty in the model economy with final date $T$ where $0<T<\infty$. $\Omega$ denotes the continuous state space, $\mathcal{F}$ a $\sigma-$algebra, $\mathbb{F}$ a filtration &mdash; i.e. a family of non-decreasing $\sigma-$algebras $\mathbb{F}\equiv\{\mathcal{F}_{t\in [0,T]}\}$ with $\mathcal{F}_{0}\equiv\{\emptyset,\Omega\}$ and $\mathcal{F}_{T}\equiv \mathcal{F}$ &mdash; and $P$ the real or objective probability measure. # In the SRJD model, which is an affine jump diffusion (see Duffie et al. (2000)), the risk-neutral dynamics of the VSTOXX volatility index are given by the following stochastic differential equation (SDE) # # # $$\begin{equation*} # dv_{t}=\kappa (\theta -v_{t})dt+\sigma \sqrt{v_{t}}dZ_{t} + J_{t} v_{t} dN_{t} -r_{J}dt # \end{equation*}$$ # # The meaning of the variables and parameters is # # * $v_{t}$ volatility index level at date $t$ # * $\kappa$ speed of adjustment of $v_{t}$ to ... # * ... $\theta$, the long-term mean of the index # * $\sigma$ volatility coefficient of the index level # * $Z_{t}$ standard Brownian motion # * $J_{t}$ jump at date $t$ with distribution ... # * ... 
$\log(1+J_{t})\approx \mathbf{N}\left(\log(1+\mu)-\frac{\delta^{2}}{2},\delta^{2}\right)$ # * $\mathbf{N}(a, b)$ normal distribution with mean $a$ and variance $b$ # * $N_{t}$ Poisson process with intensity $\lambda$ # * $r_{J}\equiv \lambda \cdot \left(e^{\mu + \delta^{2}/2}-1\right)$ drift correction for the jump # The stochastic process for $v$ is adapted to the filtration $\mathbb{F}$. Moreover, $Z$ and $N$ are not correlated. The time $t$ value of a zero-coupon bond paying one unit of currency at $T,0\leq t < T,$ is $B_{t}(T) = e^{-r(T-t)}$ with $r \geq 0$ the constant risk-less short rate. # By the Fundamental Theorem of Asset Pricing, the time $t$ value of an attainable, $\mathcal{F}_{T}-$measurable contingent claim $V_{T}\equiv h_{T}(X_{T})\geq 0$ (satisfying suitable integrability conditions) is given by arbitrage as # # \begin{equation*} # V_{t}=\mathbf{E}^{Q}_{t}\left( B_{t}(T) V_{T} \right) # \end{equation*} # # with $V_{0}=\mathbf{E}^{Q}_{0}\left( B_{0}(T) V_{T} \right)$ as the important special case for valuation purposes. $Q$ is a $P-$equivalent martingale measure. The contingent claim could be a European call option maturing at $T$ with payoff $V_T = h_{T}(v_{T})\equiv\max[v_{T}-K,0]$. It could also be a European put with payoff $V_T = h_{T}(v_{T})\equiv\max[K-v_{T},0]$. In both cases, $K$ is the fixed strike price of the option. # To simulate the financial model, i.e. to generate numerical values for $v_{t}$, it has to be discretized. To this end, divide the given time interval $[0,T]$ into equidistant sub-intervals $\Delta t$ such that now $t \in \{0,\Delta t,2 \Delta t, ..., T\}$, i.e. there are $M+1$ points in time with $M\equiv T/\Delta t$.
With $s=t-\Delta t$, a discretization of the continuous time market model is given by # # $$\begin{eqnarray*} # \tilde{v}_{t} &=& \tilde{v}_{s} + \kappa(\theta-\tilde{v}_{s}^{+}) \Delta t + \sigma\sqrt{\tilde{v}_{s}^{+}} \sqrt{\Delta t}z^1_{t} \nonumber \\ # &+& \left(e^{\mu_{J}+\delta^{2}z^{2}_{t}}-1\right) \tilde{v}_{s}^+ y_{t} -r_J \Delta t \\ # v_{t} &=& \tilde{v}_{t}^{+} # \end{eqnarray*}$$ # # for $t \in \{\Delta t, ..., T\}$ with $x^+\equiv\max[x,0]$ and the $z^{n}_{t}$ being standard normally distributed and $y_t$ Poisson distributed. $z^{1}_{t}, z^{2}_{t}$ and $y_t$ are uncorrelated. This discretization scheme is an Euler discretization and is generally called the *full truncation* scheme. See Lord et al. (2008) for an analysis of this and other biased discretization schemes for the square-root diffusion process. # ## Term Structure Calibration # The first step in the calibration of the SRJD model concerns the futures term structure. # ### Futures Term Structure # It is difficult for parsimonious short rate models like Cox et al. (1985) to account for different term structures of the interest rate. A possible solution is the introduction of time-dependent parameters which, however, enlarges the number of parameters significantly, sacrificing at the same time the convenience of a limited number of economic parameters. Another solution is a *deterministic shift approach* according to Brigo and Mercurio (2001) which preserves the basic structure of the model with all its advantages and which nevertheless allows one to better account for different term structures of the short rate.
# # Before we present the theory, a look at the VSTOXX futures data first. # + jupyter={"outputs_hidden": false} futures_quotes = h5['futures_data'] futures_quotes['DATE'] = pd.to_datetime(futures_quotes['DATE']) futures_quotes['MATURITY'] = pd.to_datetime(futures_quotes['MATURITY']) futures_quotes.info() # - # The following figure presents the futures quotes for all eight maturities. # + jupyter={"outputs_hidden": false} ax = futures_quotes.plot(x='MATURITY', y='PRICE', figsize=(10, 6), legend=False) futures_quotes.plot(x='MATURITY', y='PRICE', style='ro', ax=ax); # - # <p style="font-family: monospace;">VSTOXX futures quotes on 31. March 2014. # Consider for now the square-root diffusion volatility model of Gruenbichler and Longstaff (1996) as presented in the previous chapter, which is formally the same as the short rate model of Cox et al. (1985) and which is a special case of the SRJD model # # \begin{equation*} # dv_t = \kappa (\theta -v_{t})dt + \sigma \sqrt{v_{t}}dZ_{t} # \end{equation*} # We want to calibrate this model to the observed volatility term structure, given by the above presented set of prices for futures on the VSTOXX index with different maturities. We have to minimize for all considered times $t$ and a parameter set $\alpha = (\kappa,\theta,\sigma,v_{0})$ simultaneously the single differences # # \begin{equation*} # \Delta f(0,t) \equiv f(0,t)-f^{GL96}(0,t;\alpha) # \end{equation*} # where $f(0,t)$ is the time $0$ market (instantaneous) forward volatility for time $t$ and the quantity $f^{GL96}(0,t;\alpha)$ is the model (instantaneous) forward volatility for time $t$ given parameter set $\alpha$. # Assume that there is a continuously differentiable volatility term structure function $F(0,t)$ available (i.e. there are infinitely many volatility futures prices). 
The forward volatility then is # # \begin{equation*} # f(0,t)=\frac{\partial F(0,t)}{\partial t} # \end{equation*} # # On the other hand, the model implied forward volatility is given as (see Brigo and Mercurio (2001)) # # \begin{eqnarray*} # f^{GL96}(0,t;\alpha) = \frac{\kappa \theta\left(e^{\gamma t}-1\right)}{2\gamma+(\kappa+\gamma)\left(e^{\gamma t}-1\right)} \nonumber \\ # +v_{0} \frac{4 \gamma ^{2} e^{\gamma t}}{\left(2\gamma+(\kappa+\gamma)\left(e^{\gamma t}-1\right)\right)^{2}} # \end{eqnarray*} # # with # # \begin{equation*} # \gamma \equiv \sqrt{\kappa^{2}+2\sigma^{2}} # \end{equation*} # The Python script ``srjd_fwd_calibration.py`` contains the Python code to calibrate the forward volatilities to the VSTOXX futures prices (see the appendix for the complete script). The beginning of the script is about library imports, importing the data sets and making some selections. # !sed -n 8,28p scripts/srjd_fwd_calibration.py # The function ``srd_forwards()`` implements the forward formula from above for a given parameter set. # + jupyter={"outputs_hidden": false} import sys sys.path.append('scripts') # - import srjd_fwd_calibration as srjdf # + # srjdf.srd_forwards?? # - # To operationalize the calibration, we use the mean squared error (MSE) as our yardstick # # \begin{equation*} # \min_{\alpha } \frac{1}{N} \sum_{n=1}^{N}\left( f_{n} - f_{n}^{GL96}(\alpha )\right)^{2} # \end{equation*} # which is to be minimized. Here, we assume that we have $N$ observations for the forward volatility. In Python this takes on the form of function ``srd_fwd_error()``. # + # srjdf.srd_fwd_error?? # - # Executing the script yields optimal parameters for the Gruenbichler and Longstaff (1996) model given the VSTOXX futures prices. # + jupyter={"outputs_hidden": false} # %run scripts/srjd_fwd_calibration.py # + jupyter={"outputs_hidden": false} opt.round(3) # - # These optimal values can be used to calculate the model forward volatilities using function ``srd_forwards()``. 
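# For orientation, the model forward volatility formula from above can be written out directly. The function below is a hypothetical stand-in for the script's `srd_forwards()` (the real function's signature may differ), and the parameter values used in the check are illustrative, not the calibrated ones.

```python
import numpy as np

def gl96_forward(t, kappa, theta, sigma, v0):
    # model-implied forward volatility f^GL96(0, t) of the square-root
    # diffusion, with gamma = sqrt(kappa^2 + 2 * sigma^2)
    gamma = np.sqrt(kappa**2 + 2 * sigma**2)
    e = np.exp(gamma * np.asarray(t, dtype=float))
    denom = 2 * gamma + (kappa + gamma) * (e - 1)
    return kappa * theta * (e - 1) / denom + v0 * 4 * gamma**2 * e / denom**2

# sanity check: at t = 0 the forward volatility equals the spot level v0,
# and for large t it approaches kappa*theta/(kappa + gamma)
f0 = gl96_forward(0.0, kappa=2.0, theta=20.0, sigma=1.0, v0=17.6639)
```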
# + jupyter={"outputs_hidden": false}
from srjd_fwd_calibration import *
srd_fwds = srd_forwards(opt)
srd_fwds
# -

# The numerical differences from the market futures prices are:

# + jupyter={"outputs_hidden": false}
srd_fwds - forwards
# -

# The following figure compares the model futures prices (forward volatilities) with the VSTOXX futures market quotes. For longer maturities the fit is quite good.

# + jupyter={"outputs_hidden": false}
plt.figure(figsize=(10, 6));
plt.plot(ttms, forwards, 'b', label='market prices');
plt.plot(ttms, srd_fwds, 'ro', label='model prices');
plt.legend(loc=0);
plt.xlabel('time-to-maturity (year fractions)');
plt.ylabel('forward volatilities');
# -

# <p style="font-family: monospace;">VSTOXX futures market quotes vs. model prices.

# Finally, we save the results from the term structure calibration for later use during the simulation of the model.

# + jupyter={"outputs_hidden": false}
import pickle
f = open('varphi', 'wb')  # open file on disk
## write ttms object and differences (varphi values) as dictionary
pickle.dump({'ttms': ttms, 'varphi': srd_fwds - forwards}, f)
f.close()  # close file
# -

# ### Shifted Volatility Process

# Assume that we are given a continuously differentiable futures price function (e.g. through spline interpolation of discrete futures prices for different maturities). We consider now the deterministically shifted volatility process (see Brigo and Mercurio (2001))
#
# \begin{equation*}
# \hat{v}_t \equiv v_t + \varphi(t,\alpha^*)
# \end{equation*}
#
# with $\varphi(t,\alpha^*) \equiv f(0,t)-f^{GL96}(0,t;\alpha^*)$, the difference at time $t$ between the market implied forward volatility and the model implied forward volatility after calibration, that is for the optimal parameter set $\alpha^*$. $\varphi(t,\alpha^*)$ corresponds to the differences (bars) in the following figure.
# + jupyter={"outputs_hidden": false} plt.figure(figsize=(10, 6)); plt.bar(ttms, srd_fwds - forwards, width=0.05, label='$\\varphi(t,\\alpha^*)$') plt.legend(loc=0) plt.xlabel('time-to-maturity (year fractions)') plt.ylabel('deterministic shift'); # - # <p style="font-family: monospace;">Deterministic shift values to account for VSTOXX futures term structure. # The SRJD model discretization can now be adjusted as follows: # # \begin{eqnarray} # \tilde{v}_{t} &=& \tilde{v}_{s} + \kappa(\theta-\tilde{v}_{s}^{+}) \Delta t + \sigma\sqrt{\tilde{v}_{s}^{+}} \sqrt{\Delta t}z^1_{t} \nonumber \\ # &+& \left(e^{\mu_{J}+\delta^{2}z^{2}_{t}}-1\right) \tilde{v}_{s}^+ y_{t} -r_J \Delta t \label{eq:disc3} \\ # \hat{v}_{t} &=& \tilde{v}_{t}^{+} + \varphi(t,\alpha^*) \label{eq:disc4} # \end{eqnarray} # # This is consistent since the diffusion and jump parts are not correlated and since the jump part is added in a way that the first moment of the stochastic volatility process does not change. # ## Option Valuation by Monte Carlo Simulation # This section implements Monte Carlo simulation procedures for the SRJD model. # ### Monte Carlo Valuation # In what follows, the model option values are computed by Monte Carlo simulation (MCS). Given the discrete version of the financial model, the value of a European call option on the volatility index is estimated by MCS as follows: # <img src="https://hilpisch.com/lvvd_algo.png" align="center"> # $V_0$ is the MCS estimator for the European call option value. # ### Technical Implementation # In view of using a numerical method like MCS for all valuation and calibration tasks, the parametrization and implementation of the MCS algorithm plays an important role. 
Some major features of our implementation are: # # * **discretization**: the algorithm uses the Euler discretization scheme which is an approximation only but which might bring performance benefits # * **random numbers**: for every single option valuation the seed can be held constant such that every option is valued with the same set of (pseudo-) random numbers # * **variance reduction**: both antithetic variates and moment matching (for the first two moments of the pseudo-random numbers) are used as generic variance reduction techniques # * **deterministic shift**: the deterministic shift values $\varphi$ are determined only once through a separate calibration and are held constant afterwards (even if model parameters change) # # $\varphi$ only has to be deterministic and integrable on closed intervals (see Brigo and Mercurio (2001)), which is of course the case. For the approach to be valid, it is not important how we came up with the $\varphi$ originally. # The Python code for simulating the SRJD model is found in the script ``srjd_simulation.py`` (see the appendix for the complete script). The beginning of the script shows several imports, the definition of example parameters and also the cubic splines interpolation to be used for the estimation of the deterministic shift parameters. # !sed -n 8,39p scripts/srjd_simulation.py # The Python function ``random_number_gen()`` generates arrays of standard normally distributed pseudo-random numbers using both antithetic variates and moment matching as generic variance reduction techniques. import srjd_simulation as srjds # + # srjds.random_number_gen?? # - # The major function of this script is ``srjd_simulation()`` which implements the Monte Carlo simulation for the SRJD model based on an Euler discretization scheme. The scheme used here is usually called *full truncation* scheme. # + # srjds.srjd_simulation?? 
# -

# Finally, the function ``srjd_call_valuation()`` estimates the value of a European call option given the simulated volatility paths from ``srjd_simulation()``.

# +
# srjds.srjd_call_valuation??
# -

# Executing the script yields an MCS estimator of about 1 currency unit for the European call option with the parameters as assumed in the script.

# + jupyter={"outputs_hidden": false}
# %run scripts/srjd_simulation.py
# -

# ## Model Calibration

# This section now calibrates the SRJD model to market quotes for European call options on VSTOXX futures. It considers calibrations to a single maturity as well as to multiple maturities.

# ### The Python Code

# The calibration of the SRJD model is similar to the procedure for the Gruenbichler and Longstaff (1996) square-root diffusion model as presented in the previous chapter. The major difference now is that we have to take into account more parameter values for the optimization. The Python code is contained in script ``srjd_model_calibration.py`` (see the appendix for the complete script). As usual, a few imports and parameter definitions first.

# !sed -n 10,21p scripts/srjd_model_calibration.py

# In what follows, we want to calibrate the model simultaneously to multiple maturities for the VSTOXX European call options. The valuation function ``srjd_valuation_function()`` therefore now calculates the differences between model and market values directly and returns an array with all differences (relative or absolute).

import srjd_model_calibration as srjdc

# +
# srjdc.srjd_valuation_function??
# -

# The error function ``srjd_error_function()`` for the SRJD model has to be enhanced compared to the SRD case to account for the additional parameters of the model.

# +
# srjdc.srjd_error_function??
# -

# The same holds true for the calibration function ``srjd_model_calibration()`` itself. This function allows selecting certain maturities for the calibration.

# +
# srjdc.srjd_model_calibration??
# -

# ### Short Maturity

# The addition of a jump component shall allow a better fit to short-term call option market quotes. Therefore, consider the following calibration to the shortest option maturity available.

from srjd_model_calibration import *

## read option data, allow for 30% moneyness tolerance
option_data = read_select_quotes(tol=0.3)
option_data

# + jupyter={"outputs_hidden": false}
# %%time
opt_1 = srjd_model_calibration(option_data, p0=None, rel=False,
                               mats=['2014-4-18'])
# -

# The optimal parameter values are:

# + jupyter={"outputs_hidden": false}
opt_1
# -

# Using these optimal parameter values, add the model prices to the ``DataFrame`` object containing the option data.

# + jupyter={"outputs_hidden": false}
values = []
kappa, theta, sigma, lamb, mu, delta = opt_1
for i, option in option_data.iterrows():
    value = srjd_call_valuation(v0, kappa, theta, sigma, lamb, mu, delta,
                                option['TTM'], r, option['STRIKE'],
                                M=M, I=I, fixed_seed=True)
    values.append(value)
option_data['MODEL'] = values
# -

# The following figure shows the calibration results graphically. Indeed, the fit seems to be quite good, reflecting an MSAE of only about 0.0015.

# + jupyter={"outputs_hidden": false}
## selecting the data for the shortest maturity
os = option_data[option_data.MATURITY == '2014-4-18']
## selecting corresponding strike prices
strikes = os.STRIKE.values
## comparing the model prices with the market quotes
fig, ax = plt.subplots(2, 1, sharex=True, figsize=(10, 6));
ax[0].plot(strikes, os.PRICE.values, label='market quotes');
ax[0].plot(strikes, os.MODEL.values, 'ro', label='model prices');
ax[0].legend()
ax[1].bar(strikes, os.MODEL.values - os.PRICE.values, width=0.3);
ax[1].set_xlim(12.5, 23);
# -

# <p style="font-family: monospace;">Calibration of SRJD model to European call options on the VSTOXX for shortest option maturity (April 2014).

# ### Two Maturities

# Let us proceed with the simultaneous calibration of the model to the May and July maturities.
The MSE is also pretty low in this case (i.e. below 0.01). # + jupyter={"outputs_hidden": false} ## read option data, allow for 17.5% moneyness tolerance option_data = read_select_quotes(tol=0.175) # option_data # + jupyter={"outputs_hidden": false} # %%time opt_2 = srjd_model_calibration(option_data, rel=False, mats=['2014-5-16', '2014-7-18']) # - # The optimal parameter values are: # + jupyter={"outputs_hidden": false} opt_2 # - # In what follows, we use the Python function ``plot_calibration_results()`` to generate the plots for the different valuation runs. This function allows for different numbers of sub-plots, i.e. when the number of option maturities is changed. It is mainly a generalization of the plotting code used above. # + # srjdc.plot_calibration_results?? # - # The following figure shows the results of the calibration graphically. It is obvious that the SRJD model is able to account for multiple maturities at the same time which is mainly due to the term structure component introduced by the deterministic shift approach. # + jupyter={"outputs_hidden": false} plot_calibration_results(option_data, opt_2, ['2014-5-16', '2014-7-18']) # - # <p style="font-family: monospace;">Calibration of SRJD model to European call options on the VSTOXX for May and July 2014 maturities. # ### Four Maturities # In a next step, we consider four maturities &mdash; before we try to calibrate the model to all eight maturities. # + jupyter={"outputs_hidden": false} mats = sorted(set(option_data['MATURITY'])) mats # - # The four maturities for this particular calibration run are: # + jupyter={"outputs_hidden": false} mats[::2] # - # For this calibration run, we use the optimal parameters from the previous calibration to two maturities. Obviously, the more options to calibrate the model to, the longer the procedure takes. 
# + jupyter={"outputs_hidden": false}
# %%time
opt_4 = srjd_model_calibration(option_data, p0=opt_2, rel=False,
                               mats=mats[::2])

# + jupyter={"outputs_hidden": false}
opt_4
# -

# Even calibrating the model to four maturities yields quite a good fit over these maturities, as the following figure illustrates.

# + jupyter={"outputs_hidden": false}
plot_calibration_results(option_data, opt_4, mats[::2])
# -

# <p style="font-family: monospace;">Calibration of SRJD model to European call options on the VSTOXX for four maturities.

# ### All Maturities

# Finally, let us attack the hardest calibration problem &mdash; the one involving all eight option maturities.

# + jupyter={"outputs_hidden": false}
# %%time
opt_8_MSAE = srjd_model_calibration(option_data, rel=False, mats=mats)

# + jupyter={"outputs_hidden": false}
opt_8_MSAE
# -

# The following figure shows that the fit is still reasonable for eight maturities and this many options.

# + jupyter={"outputs_hidden": false}
plot_calibration_results(option_data, opt_8_MSAE, mats)
# -

# <p style="font-family: monospace;">Calibration of SRJD model to European call options on the VSTOXX for all eight maturities (MSAE used).

# To check whether there is a (larger) difference when we calibrate the model using relative differences (i.e. the MSRE) as yardstick, consider the following calibration run.

# + jupyter={"outputs_hidden": false}
# %%time
opt_8_MSRE = srjd_model_calibration(option_data, p0=opt_8_MSAE, rel=True,
                                    mats=mats)
# -

# The following figure presents the results. They are not too dissimilar to the ones obtained using the MSAE as yardstick. The major difference is the weighting of the options in that now those options with lower market quotes (higher strikes) get more weight.

# + jupyter={"outputs_hidden": false}
plot_calibration_results(option_data, opt_8_MSRE, mats)
# -

# <p style="font-family: monospace;">Calibration of SRJD model to European call options on the VSTOXX for all eight maturities (MSRE used).
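# The two yardsticks used above &mdash; the MSAE on absolute differences and the MSRE on relative differences &mdash; are only echoed from the scripts via ``??``. As a rough, self-contained sketch (the function name and exact weighting here are assumptions, not the book's code), the distinction boils down to:

```python
import numpy as np

def mean_squared_error(model_values, market_quotes, rel=False):
    # rel=False -> MSAE: mean squared absolute differences
    # rel=True  -> MSRE: mean squared relative differences, which gives
    #              options with lower market quotes (higher strikes) more weight
    diffs = np.asarray(model_values) - np.asarray(market_quotes)
    if rel:
        diffs = diffs / np.asarray(market_quotes)
    return np.mean(diffs ** 2)

market = np.array([4.0, 2.0, 0.5])   # illustrative market quotes only
model = np.array([4.1, 1.9, 0.6])    # illustrative model values only
print(mean_squared_error(model, market))             # MSAE
print(mean_squared_error(model, market, rel=True))   # MSRE, dominated by the cheap option
```

# Dividing by the market quote before squaring is what shifts the weight toward the cheap, high-strike options in the MSRE run above.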
# ## Conclusions

# This chapter introduces a more sophisticated model, the so-called square-root jump diffusion model (SRJD), for the evolution of the VSTOXX volatility index over time. It enhances the Gruenbichler and Longstaff (1996) square-root diffusion model by two components: a log-normally distributed *jump component* and a *deterministic shift component*. While the former allows for a better calibration of the model to short-term option quotes, the latter makes it possible to take the volatility term structure &mdash; as embodied by the eight futures on the VSTOXX index &mdash; into account. All in all, the model yields good calibration results even in cases where all eight option maturities are accounted for.

# ## Python Scripts

# ### srjd_fwd_calibration.py

# !cat scripts/srjd_fwd_calibration.py

# ### srjd_simulation.py

# !cat scripts/srjd_simulation.py

# ### srjd_model_calibration.py

# !cat scripts/srjd_model_calibration.py

# <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="35%" align="right" border="0"><br>
#
# <a href="http://tpq.io" target="_blank">http://tpq.io</a> | <a href="http://twitter.com/dyjh" target="_blank">@dyjh</a> | <a href="mailto:<EMAIL>"><EMAIL></a>
code/07_advanced_modelling.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Transactions # #### 03.4 Writing Smart Contracts # ##### <NAME> (<EMAIL>) # 2021-12-19 # # See also: https://developer.algorand.org/tutorials/creating-python-transaction-purestake-api/ # # - Load credentials # - Create our own QR code for payments # - Interact with the blockchain and execute a payment from Python # + [markdown] tags=[] # ## Setup # Starting with this chapter 3.4, the lines below will always automatically load ... # * The accounts MyAlgo, Alice, Bob, Charlie, Dina # * The Purestake credentials # * The functions in `algo_util.py` # + # Loading shared code and credentials import sys, os codepath = '..'+os.path.sep+'..'+os.path.sep+'sharedCode' sys.path.append(codepath) from algo_util import * cred = load_credentials() # Shortcuts to directly access the main accounts MyAlgo = cred['MyAlgo'] Alice = cred['Alice'] Bob = cred['Bob'] Charlie = cred['Charlie'] Dina = cred['Dina'] # - from algosdk import account, mnemonic from algosdk.v2client import algod from algosdk.future.transaction import PaymentTxn, MultisigTransaction from algosdk.future.transaction import AssetConfigTxn, AssetTransferTxn, AssetFreezeTxn import algosdk.error import json print(MyAlgo['public']) print(Alice['public']) print(Bob['public']) print(Charlie['public']) print(Dina['public']) # ### Check the accounts on the blockchain # - Go to https://algoexplorer.io and insert address # - Go to https://testnet.algoexplorer.io for the testnet # Create a link to directly access your MyAlgo account print('https://algoexplorer.io/address/'+MyAlgo['public']) print('https://testnet.algoexplorer.io/address/'+MyAlgo['public']) # ### Fund with testnet Algos # - https://bank.testnet.algorand.network/ # - https://testnet.algoexplorer.io/dispenser # - Fund all three accounts. 
How many test ALGOs did you get?

# ## Connecting Python to the Algorand Blockchain
# Options:
# - Set up your own indexer
# - Set up your own virtual indexer using Docker
# - Use a third party API ... we use Purestake
#
# ### Purestake token for authentication
# - See 03.3_WSC_Credentials
# - API credentials stored in `cred['purestake_token']`
# - Note: this is already the pair `{'X-Api-key': '<your token>'}`
# - To obtain the token alone: `cred['purestake_token']['X-Api-key']`

# +
algod_token = ''                           # Only needed if we have our own server
algod_address = cred['algod_test']         # Or cred['algod_main']
purestake_token = cred['purestake_token']  # Authentication token pair {'X-Api-key': '<your token>'}

# Initialize the algod client
algod_client = algod.AlgodClient(algod_token=algod_token,
                                 algod_address=algod_address,
                                 headers=purestake_token)
# -

# #### Test the connection
# - Our first Python access of the blockchain
# - What's the last block?
# - Check on https://testnet.algoexplorer.io
# - Note that block count on testnet is larger (Why?)

algod_client.status()["last-round"]

# ### Obtain holdings

# Get holdings of testnet Algos
address = Alice["public"]
algod_client.account_info(address)["amount"]

# Holdings are in micro Algo ...
convert algo_precision = 1e6 algo_amount = algod_client.account_info(address)["amount"]/algo_precision print(f"Address {address} has {algo_amount} test algos") # #### Suggested parameters for a transaction (on the test network) sp = algod_client.suggested_params() print(json.dumps(vars(sp), indent=4)) # ## A first payment transaction # #### Step 1: prepare and create unsigned transaction # + # Parameters sp = algod_client.suggested_params() # suggested params amount = 0.1 algo_precision = 1e6 amt_microalgo = int(amount * algo_precision) # Create (unsigned) TX txn = PaymentTxn(sender = Alice['public'], # <--- From sp = sp, receiver = Bob['public'], # <---- To amt = amt_microalgo # <---- Amount in Micro-ALGOs ) print(txn) print(txn.get_txid()) # - # Is it already on the blockchain? No ... not yet sent print('https://testnet.algoexplorer.io/tx/'+txn.get_txid()) # #### Step 2: sign # + stxn = txn.sign(Alice['private']) # <---- Alice signs with private key # Transaction ID is the same, but still nothing on the blockchain print('https://testnet.algoexplorer.io/tx/'+stxn.get_txid()) # - # #### Step 3: send # + txid = algod_client.send_transaction(stxn) print("Send transaction with txID: "+txid) # The freshly submitted transaction on the blockchain txinfo = algod_client.pending_transaction_info(txid) print(txinfo) # - # #### Step 4: Wait for confirmation txinfo = wait_for_confirmation(algod_client, txid) # Note that txinfo has now a 'confirmed-round' print(txinfo) print('https://testnet.algoexplorer.io/tx/'+txid) # ### Add a note to a transaction # + # Step 1a: Prepare sp = algod_client.suggested_params() # suggested params amount = 0.1 algo_precision = 1e6 amt_microalgo = int(amount * algo_precision) # Step 1b: The note # Start with a Python dict, create JSON, byte-encode note_dict = {"Message":"Paying back for last dinner", "From":"Alice", "To":"Bob"} note_json = json.dumps(note_dict) note_byte = note_json.encode() # - # Step 1c: create (unsigned) TX txn = 
PaymentTxn(sender=Alice['public'],
                 sp=sp,
                 receiver=Bob['public'],
                 amt=amt_microalgo,
                 note=note_byte)
print(txn)

# Step 2+3: sign and send TX
stxn = txn.sign(Alice['private'])
txid = algod_client.send_transaction(stxn)
print("Send transaction with txID: {}".format(txid))

# Step 4: Wait for confirmation
txinfo = wait_for_confirmation(algod_client, txid)
print("https://testnet.algoexplorer.io/tx/"+txid)
print(txinfo)

# Convert message in txinfo to Python dict
import base64
message_base64 = txinfo['txn']['txn']['note']
print(message_base64)
message_byte = base64.b64decode(message_base64)
print(message_byte)
message_json = message_byte.decode()
print(message_json)
message = json.loads( message_json )
print( message['From'] )

# ## Useful functions
# The function `wait_for_confirmation` is actually not an official Algorand function.<br>
# Below is the source code.

def wait_for_confirmation(client, txid):
    # client = algosdk client
    # txid = transaction ID, for example from send_payment()
    txinfo = client.pending_transaction_info(txid)       # obtain transaction information
    current_round = algod_client.status()["last-round"]  # obtain last round number
    print("Current round is {}.".format(current_round))
    # Wait for confirmation
    while ( txinfo.get('confirmed-round') is None ):     # condition for waiting = 'confirmed-round' is empty
        print("Waiting for round {} to finish.".format(current_round))
        algod_client.status_after_block(current_round)   # this waits for the round to finish
        txinfo = algod_client.pending_transaction_info(txid)  # update transaction information
        current_round += 1
    print("Transaction {} confirmed in round {}.".format(txid, txinfo.get('confirmed-round')))
    return txinfo

# ## Also useful functions
# These functions are much more convenient:
# - `note_encode` encodes a note from a Python dict
# - `note_decode` decodes a note into a Python dict

# +
def note_encode(note_dict):
    # note_dict ... a Python dictionary
    note_json = json.dumps(note_dict)
    note_byte = note_json.encode()
    return(note_byte)

def note_decode(note_64):
    # note_64 = note in base64 encoding
    # returns a Python dict
    import base64
    message_byte = base64.b64decode(note_64)
    message_json = message_byte.decode()
    message_dict = json.loads( message_json )
    return(message_dict)
# -

# ## Exercise
# * Send 0.8 ALGO from Dina to Charlie with a thank you note

# +
# Your Python code goes here
# -

# ## Things that do not and will not work
# Let's produce some error messages. The following are a few things that don't work.

# Need to import this to be able to read error messages
import sys, algosdk.error

# ### Overspending
# Alice sends more than she owns

# +
# Step 1: prepare
sp = algod_client.suggested_params()
algo_precision = 1e6
sender = Alice['public']
receiver = Bob['public']
amount = 100                             # <----------------- way too much!
amount_microalgo = int(amount * algo_precision)

# Step 2: create unsigned TX
unsigned_txn = PaymentTxn(sender, sp, receiver, amount_microalgo)

# Step 3a: Sign
signed_txn = unsigned_txn.sign(Alice['private'])
# -

# Step 3b: Send
txid = algod_client.send_transaction(signed_txn)

# #### Can we *catch the error* and get a better structured error message?

# +
# Step 1: prepare
sp = algod_client.suggested_params()
algo_precision = 1e6
sender = Alice['public']
receiver = Bob['public']
amount = 100                             # <----------------- way too much!
amount_microalgo = int(amount * algo_precision)

# Step 2: create unsigned TX
unsigned_txn = PaymentTxn(sender, sp, receiver, amount_microalgo)

# Step 3a: Sign
signed_txn = unsigned_txn.sign(Alice['private'])
# -

# Step 3b: Send
try:
    txid = algod_client.send_transaction(signed_txn)
except algosdk.error.AlgodHTTPError as err:
    print(err)                                    # print entire error message
    if ("overspend" in str(err)):                 # check for specific type of error
        print("Overspend error")
    txid = None

# #### What happens if we wait for the failed transaction to complete?

# We fail at the first command
try:
    txinfo = algod_client.pending_transaction_info(txid)   # obtain transaction information
    print(txinfo)
except TypeError as err:                                   # obtain error message
    print(err)                                             # print entire error message
    print("txid is empty")                                 # check for specific type of error

# ### Wrong signature
# Bob tries to sign a transaction from Alice to Bob

# +
# Step 1: prepare
sp = algod_client.suggested_params()
algo_precision = 1e6
sender = Alice['public']
receiver = Bob['public']
amount = 0.1
amount_microalgo = int(amount * algo_precision)

# Step 2: create unsigned TX
unsigned_txn = PaymentTxn(sender, sp, receiver, amount_microalgo)

# Step 3a: Sign
signed_txn = unsigned_txn.sign(Bob['private'])       # <----------------- wrong person signs!

try:
    txid = algod_client.send_transaction(signed_txn)
except algosdk.error.AlgodHTTPError as err:
    print(err)                                       # print entire error message
    if ("should have been authorized" in str(err)):  # check for specific type of error
        print("Wrong signature error")
    txid = None
# -

# ### Sending the *identical* transaction twice
# * "Identical" means the same ...
#   * Sender
#   * Recipient
#   * Amount
#   * Parameters

# Step 1: prepare
sp = algod_client.suggested_params()
algo_precision = 1e6
sender = Alice['public']
receiver = Bob['public']
amount = 0.1
amount_microalgo = int(amount * algo_precision)

# +
# Step 2: create unsigned TX
unsigned_txn = PaymentTxn(sender, sp, receiver, amount_microalgo)

# Step 3: Sign and send
signed_txn = unsigned_txn.sign(Alice['private'])
try:
    txid = algod_client.send_transaction(signed_txn)
    print("Submitted with txID: {}".format(txid))
except algosdk.error.AlgodHTTPError as err:
    print(err)                                         # print entire error message
    if ("transaction already in ledger" in str(err)):  # check for specific type of error
        print("Identical transaction {} has been submitted twice.".format(signed_txn.get_txid()))
    txid = None
# -

# **REPEAT** only step 2-3 $\rightarrow$ error message<br>
# **REPEAT** only step 1-3 $\rightarrow$ no error <br>

# #### See how the `sp` change
# * Re-run this after 2-3 seconds

sp = algod_client.suggested_params()
print(json.dumps(vars(sp), indent=4))
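# The note helpers above can be exercised completely offline &mdash; no Purestake connection or algosdk needed. This is a minimal, standalone re-implementation of the same JSON/base64 round trip (the sample note content is made up):

```python
import base64
import json

def note_encode(note_dict):
    # Python dict -> JSON string -> bytes, ready to attach to a transaction
    return json.dumps(note_dict).encode()

def note_decode(note_64):
    # base64-encoded note (as found in txinfo['txn']['txn']['note']) -> Python dict
    return json.loads(base64.b64decode(note_64).decode())

note = {'Message': 'Paying back for last dinner', 'From': 'Alice', 'To': 'Bob'}
note_byte = note_encode(note)
note_64 = base64.b64encode(note_byte).decode()   # the chain returns notes base64-encoded
print(note_decode(note_64))
```

# Round-tripping like this is a quick way to verify your note will decode correctly before paying transaction fees on the testnet.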
ClassMaterial/03 - Wallets/03 code/03.4_WSC_Transactions.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] run_control={"frozen": false, "read_only": false}
# ## A Simple Pair Trading Strategy
#
# **_Please go through the "building strategies" notebook before looking at this notebook._**
#
# Let's build a simple pair trading strategy to show how you can trade multiple symbols in a strategy. We will trade 2 stocks, Coca-Cola (KO) and Pepsi (PEP):
#
# 1. We will buy KO and sell PEP when the price ratio KO / PEP is more than 1 standard deviation lower than its 5 day simple moving average.
# 2. We will buy PEP and sell KO when the price ratio KO / PEP is more than 1 standard deviation higher than its 5 day simple moving average.
# 3. We will exit when the price ratio is less than +/- 0.5 standard deviations away from its simple moving average.
# 4. We will size the trades in 1 and 2 by allocating 10% of our capital to each trade.
#
# First let's load some price data in fifteen-minute bars.
# + run_control={"frozen": false, "read_only": false}
import math
import datetime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats
import os

from types import SimpleNamespace

import pyqstrat as pq

pq.set_defaults()  # Set some display defaults to make dataframes and plots easier to look at

try:
    ko_file_path = os.path.dirname(os.path.realpath(__file__)) + '/support/coke_15_min_prices.csv.gz'
    pep_file_path = os.path.dirname(os.path.realpath(__file__)) + '/support/pepsi_15_min_prices.csv.gz'
except:  # If we are running from unit tests
    ko_file_path = '../notebooks/support/coke_15_min_prices.csv.gz'
    pep_file_path = '../notebooks/support/pepsi_15_min_prices.csv.gz'

ko_prices = pd.read_csv(ko_file_path)
pep_prices = pd.read_csv(pep_file_path)

ko_prices['timestamp'] = pd.to_datetime(ko_prices.date)
pep_prices['timestamp'] = pd.to_datetime(pep_prices.date)

timestamps = ko_prices.timestamp.values

ko_contract_group = pq.ContractGroup.create('KO')
pep_contract_group = pq.ContractGroup.create('PEP')
# -

# Let's compute the ratio of the two prices and add it to the market data. Since the two price series have the exact same timestamps, we can simply divide the two close price series.

# + run_control={"frozen": false, "read_only": false}
ratio = ko_prices.c / pep_prices.c
# -

# Next, let's create an indicator for the zscore, and plot it.
# + run_control={"frozen": false, "read_only": false}
def zscore_indicator(symbol, timestamps, indicators, strategy_context):
    # z-score of the ratio relative to its simple moving average
    ratio = indicators.ratio
    r = pd.Series(ratio).rolling(window = 130)
    mean = r.mean()
    std = r.std(ddof = 0)
    zscore = (ratio - mean) / std
    zscore = np.nan_to_num(zscore)
    return zscore

ko_zscore = zscore_indicator(None, None, SimpleNamespace(ratio = ratio), None)
ratio_subplot = pq.Subplot([pq.TimeSeries('ratio', timestamps, ratio)], ylabel = 'Ratio')
zscore_subplot = pq.Subplot([pq.TimeSeries('zscore', timestamps, ko_zscore)], ylabel = 'ZScore')
plot = pq.Plot([ratio_subplot, zscore_subplot], title = 'KO')
plot.draw();
# -

# Now let's create the signal that will tell us to get in when the zscore is +/-1 and get out when it's less than +/- 0.5. We use a signal value of 2 to figure out when to go long, and -2 to figure out when to go short. A value of 1 means get out of a long position, and -1 means get out of a short position. We also plot the signal to check it.

# + run_control={"frozen": false, "read_only": false}
def pair_strategy_signal(contract_group, timestamps, indicators, parent_signals, strategy_context):
    zscore = indicators.zscore
    signal = np.where(zscore > 1, 2, 0)
    signal = np.where(zscore < -1, -2, signal)
    signal = np.where((zscore > 0.5) & (zscore < 1), 1, signal)
    signal = np.where((zscore < -0.5) & (zscore > -1), -1, signal)
    if contract_group.name == 'PEP': signal = -1. * signal
    return signal

signal = pair_strategy_signal(ko_contract_group, timestamps, SimpleNamespace(zscore = ko_zscore), None, None)
signal_subplot = pq.Subplot([pq.TimeSeries('signal', timestamps, signal)], ylabel = 'Signal')
plot = pq.Plot([signal_subplot], title = 'KO', show_date_gaps = False)
plot.draw();
# -

# Finally, we create the trading rule and market simulator functions.

# + run_control={"frozen": false, "read_only": false}
def pair_trading_rule(contract_group, i, timestamps, indicators, signal, account, strategy_context):
    timestamp = timestamps[i]
    curr_pos = account.position(contract_group, timestamp)
    signal_value = signal[i]
    risk_percent = 0.1
    orders = []
    symbol = contract_group.name
    contract = contract_group.get_contract(symbol)
    if contract is None:
        contract = pq.Contract.create(symbol, contract_group = contract_group)
    # if we don't already have a position, check if we should enter a trade
    if math.isclose(curr_pos, 0):
        if signal_value == 2 or signal_value == -2:
            curr_equity = account.equity(timestamp)
            order_qty = np.round(curr_equity * risk_percent / indicators.c[i] * np.sign(signal_value))
            trigger_price = indicators.c[i]
            reason_code = pq.ReasonCode.ENTER_LONG if order_qty > 0 else pq.ReasonCode.ENTER_SHORT
            orders.append(pq.MarketOrder(contract, timestamp, order_qty, reason_code = reason_code))
    else:  # We have a current position, so check if we should exit
        if (curr_pos > 0 and signal_value == -1) or (curr_pos < 0 and signal_value == 1):
            order_qty = -curr_pos
            reason_code = pq.ReasonCode.EXIT_LONG if order_qty < 0 else pq.ReasonCode.EXIT_SHORT
            orders.append(pq.MarketOrder(contract, timestamp, order_qty, reason_code = reason_code))
    return orders

def market_simulator(orders, i, timestamps, indicators, signals, strategy_context):
    trades = []
    timestamp = timestamps[i]
    for order in orders:
        trade_price = np.nan
        contract_group = order.contract.contract_group
        ind = indicators[contract_group]
        o, h, l, c = ind.o[i], ind.h[i], ind.l[i], ind.c[i]
        if isinstance(order, pq.MarketOrder):
            trade_price = 0.5 * (o + h) if order.qty > 0 else 0.5 * (o + l)
        else:
            raise Exception(f'unexpected order type: {order}')
        if np.isnan(trade_price): continue
        trade = pq.Trade(order.contract, timestamp, order.qty, trade_price,
                         order = order, commission = 0, fee = 0)
        order.status = 'filled'
        trades.append(trade)
    return trades
# -

# Let's run the strategy, plot the results and look at the returns.

# + run_control={"frozen": false, "read_only": false}
def get_price(contract, timestamps, i, strategy_context):
    if contract.symbol == 'KO':
        return strategy_context.ko_price[i]
    elif contract.symbol == 'PEP':
        return strategy_context.pep_price[i]
    raise Exception(f'Unknown symbol: {contract.symbol}')

strategy_context = SimpleNamespace(ko_price = ko_prices.c.values, pep_price = pep_prices.c.values)
strategy = pq.Strategy(timestamps, [ko_contract_group, pep_contract_group], get_price,
                       trade_lag = 1, strategy_context = strategy_context)
for tup in [(ko_contract_group, ko_prices), (pep_contract_group, pep_prices)]:
    for column in ['o', 'h', 'l', 'c']:
        strategy.add_indicator(column, tup[1][column].values, contract_groups = [tup[0]])
strategy.add_indicator('ratio', ratio)
strategy.add_indicator('zscore', zscore_indicator, depends_on = ['ratio'])
strategy.add_signal('pair_strategy_signal', pair_strategy_signal, depends_on_indicators = ['zscore'])
# ask pyqstrat to call our trading rule when the signal has one of the values [-2, -1, 1, 2]
strategy.add_rule('pair_trading_rule', pair_trading_rule, signal_name = 'pair_strategy_signal',
                  sig_true_values = [-2, -1, 1, 2])
strategy.add_market_sim(market_simulator)

portfolio = pq.Portfolio()
portfolio.add_strategy('pair_strategy', strategy)
portfolio.run()

strategy.plot(primary_indicators = ['c'], secondary_indicators = ['zscore'])

# + run_control={"frozen": false, "read_only": false}
strategy.evaluate_returns();
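# The rolling z-score driving the signal can be sanity-checked in isolation on a toy series. This sketch mirrors the logic of ``zscore_indicator`` above (population standard deviation via ``ddof=0``, warm-up ``NaN``s mapped to zero), with a window of 3 instead of 130 purely for illustration:

```python
import numpy as np
import pandas as pd

def rolling_zscore(values, window):
    # (x - rolling mean) / rolling std, matching the indicator logic above
    r = pd.Series(values).rolling(window=window)
    mean = r.mean()
    std = r.std(ddof=0)
    return np.nan_to_num((values - mean) / std)

z = rolling_zscore(np.array([1.0, 2.0, 3.0, 4.0, 100.0]), window=3)
# z[2] = (3 - 2) / sqrt(2/3) ~ 1.2247; the jump at the end shows up as a large z[4]
print(z)
```

# The first ``window - 1`` entries are zero because the rolling statistics are undefined there, which is exactly why the live strategy generates no entry signals during the warm-up period.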
pyqstrat/notebooks/pair_trading_strategy.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Inspecting complicated models # # When models get complicated, inspecting the parameters and their relationships gets a bit clunky. This notebook offers a couple of prototypes to address this: # # - A LinearTied class that provides a handy repr showing the linear relationship. # - This lets `model.tied` show the relationship rather than just a pointer to a user-defined function # - A timing test shows no significant performance hit for this convenience # - A `tabulate_model` function to show the results and the constraints in tabular form by putting them in an astropy table. # # This is illustrated using the example from the 'Tied Constraints' section of the latest documentation https://docs.astropy.org/en/latest/modeling/example-fitting-constraints.html. import numpy as np from astropy.table import Table import matplotlib.pyplot as plt from astropy.utils.data import get_pkg_data_filename from astropy.io import ascii from astropy.modeling import models, fitting # %matplotlib inline # Get the data. 
#fname = get_pkg_data_filename('data/spec.txt', package='astropy.modeling.tests') fname=('spec.txt') spec = ascii.read(fname) wave = spec['lambda'] flux = spec['flux'] # ## Class to create a linear relationship between two parameters class LinearTied(object): """ Tie two parameters related by a linear relation: derived_param = factor * param + offset Parameters ---------- model : Astropy.modeling FittableModel instance The model param : string The name of the first parameter factor : float64 The multiplicative factor to apply offset : float64 The additive offset to apply """ def __init__(self, model,param,factor=1.,offset=0.,fmt=".3g"): self.param_name = param idx = model.param_names.index(param) self.param = model.parameters[idx] self.factor = factor self.offset = offset factor_fstring = "" if factor == 1. else f"{self.factor:{fmt}} * " offset_fstring = "" if offset == 0. else f" + {self.offset:{fmt}}" self.output_string = factor_fstring + f"{self.param_name}" + offset_fstring def __call__(self,model): return self.param*self.factor+self.offset def __repr__(self): return self.output_string # Use the rest wavelengths of known lines as initial values for the fit. Hbeta = 4862.721 OIII_1 = 4958.911 OIII_2 = 5008.239 # Create Gaussian1D models for each of the Hbeta and OIII lines. h_beta = models.Gaussian1D(amplitude=34, mean=Hbeta, stddev=5,name='Hbeta') o3_2 = models.Gaussian1D(amplitude=170, mean=OIII_2, stddev=5,name='[OIII]5007') o3_1 = models.Gaussian1D(amplitude=57, mean=OIII_1, stddev=5,name='[OIII]4959') # Tie the ratio of the intensity of the two OIII lines. # + def tie_ampl(model): return model.amplitude_2 / 3.1 o3_1.amplitude.tied = tie_ampl # - # Also tie the wavelength of the [OIII]4959 line to the Hbeta wavelength. # + def tie_wave(model): return model.mean_0 * OIII_1 / Hbeta o3_1.mean.tied = tie_wave # - # Create a Polynomial model to fit the continuum.
mean_flux = flux.mean() cont = np.where(flux > mean_flux, mean_flux, flux) linfitter = fitting.LinearLSQFitter() poly_cont = linfitter(models.Polynomial1D(1,name='continuum'), wave, cont) # Create a compound model for the three lines and the continuum. hbeta_combo = h_beta + o3_1 + o3_2 + poly_cont # Inspect the tied parameters. The information on how the parameters are tied together is buried. hbeta_combo.tied # Tie them with the `LinearTied` class. hbeta_combo.amplitude_1.tied = LinearTied(hbeta_combo,'amplitude_2',factor=1./3.1) hbeta_combo.mean_1.tied = LinearTied(hbeta_combo,'mean_0',factor=OIII_1 / Hbeta) # Inspect the tied parameters. The relationship is now visible. hbeta_combo.tied # Fit all lines simultaneously. fitter = fitting.LevMarLSQFitter() fitted_model = fitter(hbeta_combo, wave, flux) fitted_lines = fitted_model(wave) # Plot the results fig = plt.figure(figsize=(9, 6)) p = plt.plot(wave, flux, label="data") p = plt.plot(wave, fitted_lines, 'r', label="fit") p = plt.legend() p = plt.xlabel("Wavelength") p = plt.ylabel("Flux") t = plt.text(4800, 70, 'Hbeta', rotation=90) t = plt.text(4900, 100, 'OIII_1', rotation=90) t = plt.text(4950, 180, 'OIII_2', rotation=90) def tabulate_model(model): column_names = ['parameter','submodel','class','subparam','value', 'fixed?','lower','upper','tie'] parameter = model.param_names submodelnumber = [model._param_map[p][0] for p in parameter] submodelname = np.array(model.submodel_names)[submodelnumber] smclass = [model._get_submodels()[i] for i in submodelnumber] submodelclass = [sm.__class__.__name__ for sm in smclass] subparam = [model._param_map[p][1] for p in parameter] value = fitted_model.parameters fixed = [fitted_model.fixed[p] for p in parameter] lower = [fitted_model.bounds[p][0] for p in parameter] upper = [fitted_model.bounds[p][1] for p in parameter] tied = [fitted_model.tied[p] for p in parameter] t = Table([parameter,submodelname,submodelclass,subparam,value, 
fixed,lower,upper,tied],names=column_names) return t tabulate_model(fitted_model) # ## Timing test # # Revert to the original definition (ties are custom functions) and time the fitting hbeta_combo = h_beta + o3_1 + o3_2 + poly_cont # %%timeit fitted_model = fitter(hbeta_combo, wave, flux) hbeta_combo.amplitude_1.tied = LinearTied(hbeta_combo,'amplitude_2',1./3.1) hbeta_combo.mean_1.tied = LinearTied(hbeta_combo,'mean_0',OIII_1 / Hbeta) # %%timeit fitted_model = fitter(hbeta_combo, wave, flux)
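The `LinearTied` class above hinges on two dunder methods: `__call__` so the fitter can evaluate the tie, and `__repr__` so inspection shows the relationship instead of a function pointer. The same pattern works in any framework that stores tie callables; a framework-free sketch (the parameter names and dict-based lookup here are illustrative, not astropy's API):

```python
class LinearTie:
    """Callable encoding derived = factor * base + offset, with a readable repr."""

    def __init__(self, base_name, factor=1.0, offset=0.0, fmt=".3g"):
        self.base_name = base_name
        self.factor = factor
        self.offset = offset
        # Build the human-readable form once, omitting trivial factor/offset
        head = "" if factor == 1.0 else f"{factor:{fmt}} * "
        tail = "" if offset == 0.0 else f" + {offset:{fmt}}"
        self._repr = head + base_name + tail

    def __call__(self, params):
        # params: any mapping of parameter name -> current value
        return self.factor * params[self.base_name] + self.offset

    def __repr__(self):
        return self._repr

tie = LinearTie("amplitude_2", factor=1 / 3.1)
print(repr(tie))                      # shows the relationship, e.g. "0.323 * amplitude_2"
print(tie({"amplitude_2": 170.0}))
```

Because the repr is precomputed in `__init__`, inspecting a model full of ties costs nothing at fit time.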
model_inspection/inspecting_complicated_models.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # import the required libraries import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pandas as pd from sklearn.datasets import load_boston from sklearn.metrics import mean_squared_error,r2_score,accuracy_score from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler # load the dataset, which is already included in the sklearn library boston = load_boston() print(boston.keys()) # #### data: contains the information for various houses # #### target: prices of the house # #### feature_names: names of the features # #### DESCR: describes the dataset print(boston.feature_names) #13 features print(boston.DESCR) # ### The cell above shows that: # # 1-Number of Instances: 506(Rows) # # 2-Number of Attributes: 13 numeric/categorical predictive. # # 3-Median Value (attribute 14) is usually the target.
# # 4-**Missing Attribute Values: None** # make a dataframe of the features features=pd.DataFrame(boston.data,columns=boston.feature_names) features.head() # make a dataframe of the target # target is MEDV target=pd.DataFrame(boston.target,columns=['target']) target.head() # min and max in the target column print(max(target['target'])) print(min(target['target'])) # now concatenate features and target into a single dataframe # axis=1 concatenates column-wise df=pd.concat([features,target],axis=1) df.head() # statistical description of the dataset df.describe().round(2) corr = df.corr('pearson').round(2) # annot = True to print the values inside the squares plt.subplots(figsize=(15,10)) sns.heatmap(data=corr,vmin=0, vmax=1, annot=True) # ## observations # 1- looking at the correlation matrix we can see that RM has a strong positive # correlation with MEDV (0.7) whereas LSTAT has a high negative correlation with MEDV (-0.74). # # 2- The features RAD and TAX have a multicollinearity of 0.91. # # 3-The features DIS and AGE have a correlation of -0.75. # # 4-top features that can be used for prediction are RM, LSTAT and PTRATIO. corr # absolute correlation of each attribute with the target corrs=[abs(corr[attr]['target']) for attr in list(features)] corrs l=list(zip(corrs,list(features))) l # sort the list of pairs in decreasing order # with the correlation values as the key for sorting l.sort(key= lambda x:x[0], reverse=True) l # + '''zip(*l) takes a list of pairs that looks like [[a,b,c] [d,e,f] [g,h,i]] and returns [[a,d,g] [b,e,h] [c,f,i]]''' corrs,labels=list(zip(*l)) print(corrs) print(labels) # - # plot correlation with respect to the target as a bar graph index=np.arange(len(labels)) plt.figure(figsize=(15,5)) plt.bar(index,corrs,width=0.5) plt.xlabel('Attributes') plt.ylabel('correlation with the target value') plt.xticks(index,labels) plt.show() # ## Top features that can be used for prediction are RM, LSTAT and PTRATIO.
# ## These attributes have the highest correlation with the target lst=['CRIM','ZN','INDUS','CHAS','NOX','AGE','DIS','RAD','TAX','B','target'] X=df.drop(lst,axis=1) X.head() Y = target Y.head() boston_dataset_best_features=pd.concat([X,target],axis=1) sns.pairplot(boston_dataset_best_features) # ### Observation # 1-RM has a direct relation with the target # # 2-LSTAT has an inverse relation with the target # ## Normalization of data # now perform normalization so that each feature comes onto the same scale x_scaler=MinMaxScaler() X=x_scaler.fit_transform(X) X # + XX=pd.DataFrame(X,columns=['RM','PTRATIO','LSTAT']) XX.head() # - y_scaler=MinMaxScaler() Y=y_scaler.fit_transform(Y) print(Y[:5]) YY=pd.DataFrame(Y,columns=['TARGET']) YY.head() boston_dataset_best_features=pd.concat([XX,YY],axis=1) sns.pairplot(boston_dataset_best_features) # ## Model Building X_train, X_test, y_train, y_test = train_test_split(XX, YY, test_size=0.3, random_state = 30) # ### 1-Linear Regression Model # + from sklearn.linear_model import LinearRegression lr_model=LinearRegression() lr_model.fit(X_train, y_train) y_predict=lr_model.predict(X_test) r2score=r2_score(y_test,y_predict) mse=mean_squared_error(y_test,y_predict) print("R2 score of linear regression model = ",r2score) print("Mean_squared_error of linear regression model = ",mse) # - # ### 2-Ridge Regression # Ridge Regression; fit on the training split only so the test score is unbiased from sklearn.linear_model import Ridge ridge=Ridge() ridge.fit(X_train, y_train) prediction_ridge=ridge.predict(X_test) r2score=r2_score(y_test,prediction_ridge) mse=mean_squared_error(y_test,prediction_ridge) print("R2 score of ridge regression model = ",r2score) print("Mean_squared_error of ridge regression model = ",mse) # ### 3-Decision Tree Regressor #from sklearn.tree import DecisionTreeRegressor from sklearn import tree decision_regressor = tree.DecisionTreeRegressor(max_depth=4) decision_regressor.fit(X_train, y_train) y_pred_dt = decision_regressor.predict(X_test) r2score_dt=r2_score(y_test,y_pred_dt) mse=mean_squared_error(y_test,y_pred_dt)
print("R2 score of decision tree regressor = ",r2score_dt) print("Mean_squared_error of decision tree regressor model = ",mse) import matplotlib.pyplot as plt # %matplotlib inline plt.figure(figsize=(15,7)) decision_tree=tree.plot_tree(decision_regressor,filled=True) plt.savefig('decision_tree.png') plt.show() # ## Conclusion # - RM, LSTAT and PTRATIO have the highest correlation with the target, i.e. MEDV # - RM has a direct relation with the target. # - LSTAT has an inverse relation with the target. # - with the **Linear Regression model** we get an r2-score of 0.65 # - with **Ridge Regression** we get an r2-score of 0.66 # - with the **DecisionTree Regressor** we get an r2-score of 0.87 # ### So the DecisionTree regressor can be used for predicting the prices of houses in Boston.
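All three models above are ranked by `r2_score`. As a reminder of what that number means, the coefficient of determination is R² = 1 − SS_res/SS_tot and can be computed by hand with NumPy (the toy values below are illustrative, not the Boston results):

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)        # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2) # total sum of squares
    return 1.0 - ss_res / ss_tot

print(r2([3.0, -0.5, 2.0, 7.0], [2.5, 0.0, 2.0, 8.0]))  # ≈ 0.9486
```

A perfect fit gives 1.0; a model no better than predicting the mean gives 0.0, and worse-than-mean predictions go negative.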
BOSTON HOUSING DATASET.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from ceres_infer.session import workflow from ceres_infer.models import model_infer_iter_ens import logging logging.basicConfig(level=logging.INFO) params = { # directories 'outdir_run': '../out/20.0216 feat/reg_rf_boruta/', # output dir for the run 'outdir_modtmp': '../out/20.0216 feat/reg_rf_boruta/model_perf/', # intermediate files for each model 'indir_dmdata_Q3': '../out/20.0817 proc_data/gene_effect/dm_data.pkl', # pickled preprocessed DepMap Q3 data 'indir_dmdata_external': '../out/20.0817 proc_data/gene_effect/dm_data_Q4.pkl', # pickled preprocessed DepMap Q3 data 'indir_genesets': '../data/gene_sets/', 'indir_landmarks': None, # csv file of landmarks [default: None] # notes 'session_notes': 'regression model, with random forest (iterative) and boruta feature selection; \ run on selective dependent genes (CERES std > 0.25 and CERES range > 0.6)', # data 'external_data_name': 'p19q4', # name of external validation dataset 'opt_scale_data': True, # scale input data True/False 'opt_scale_data_types': '\[(?:RNA-seq|CN)\]', # data source types to scale; in regexp 'model_data_source': ['CERES','RNA-seq','CN','Mut','Lineage'], 'anlyz_set_topN': 10, # for analysis set how many of the top features to look at 'perm_null': 1000, # number of samples to get build the null distribution, for corr 'useGene_dependency': False, # whether to use CERES gene dependency (true) or gene effect (false) 'scope': 'differential', # scope for which target genes to run on; list of gene names, or 'all', 'differential' # model 'model_name': 'rf', 'model_params': {'n_estimators':1000,'max_depth':15,'min_samples_leaf':5,'max_features':'log2'}, 'model_paramsgrid': {}, 'model_pipeline': model_infer_iter_ens, 'pipeline_params': {}, # pipeline 'parallelize': False, # 
parallelize workflow 'processes': 1, # number of cpu processes to use # analysis 'metric_eval': 'score_test', # metric in model_results to evaluate, e.g. score_test, score_oob 'thresholds': {'score_rd10': 0.1, # score of reduced model - threshold for filtering 'recall_rd10': 0.95}, # recall of reduced model - threshold for filtering 'min_gs_size': 4 # minimum gene set size, to be derived } # + # # Run just the inference # wf = workflow(params) # pipeline = ['load_processed_data', 'infer'] # wf.create_pipe(pipeline) # wf.run_pipe() # - # Run analysis, based on pre-existing inference wf = workflow(params) pipeline = ['load_processed_data', 'load_model_results', 'analyze', 'analyze_filtered', 'derive_genesets', 'run_Rscripts'] wf.create_pipe(pipeline) wf.run_pipe()
notebooks/run02-model_rf_boruta.ipynb
# --- # jupyter: # jupytext: # formats: ipynb,jl:hydrogen # text_representation: # extension: .jl # format_name: hydrogen # format_version: '1.3' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.7.0 # language: julia # name: julia-1.7 # --- # %% [markdown] # https://twitter.com/dolphin7473/status/1466376598853148680 # %% module O using OffsetArrays mutable struct Variables u::OffsetArray{ComplexF64, 1} end function update_u!(var::Variables) rhs = OffsetArray{ComplexF64}(undef, -2:2) rhs[-2:2] = randn(ComplexF64, 5) var.u .+= rhs end function main() u = OffsetArray{ComplexF64}(undef, -2:2) var = Variables(u) var.u[-2:2] .= 10:10:50 @time update_u!(var) end end O.main() O.main() O.main() # %% @code_warntype O.main() # %% module P using OffsetArrays mutable struct Variables{U<:OffsetArray{ComplexF64, 1}} u::U end function update_u!(var::Variables) rhs = OffsetArray{ComplexF64}(undef, -2:2) rhs[-2:2] = randn(ComplexF64, 5) var.u .+= rhs end function main() u = OffsetArray{ComplexF64}(undef, -2:2) var = Variables(u) var.u[-2:2] .= 10:10:50 @time update_u!(var) end end P.main() P.main() P.main() # %% @code_warntype P.main() # %% module Q using OffsetArrays using Random struct Variables{U} u::U end function update_u!(var::Variables, rhs) randn!(rhs) var.u .+= rhs var end function main() u = OffsetVector{ComplexF64}(undef, -2:2) var = Variables(u) parent(var.u) .= 10:10:50 rhs = similar(u) @time update_u!(var, rhs) end end Q.main() Q.main() var = Q.main() @show var var.u # %% @code_warntype Q.main() # %% u = Q.OffsetVector{ComplexF64}(undef, -2:2) u # %% typeof(u) # %%
0025/OffsetArrays example.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Project 2 - GeoTweet # # @Author <NAME> (daddyjab) # @Date 3/27/19 # @File app.py # import necessary libraries import os from flask import Flask, render_template, jsonify, request, redirect # Import Flask_CORS extension to enable Cross Origin Resource Sharing (CORS) # when deployed on Heroku from flask_cors import CORS ################################################# # Flask Setup ################################################# app = Flask(__name__) # Enable Tracking of Flask-SQLAlchemy events for now (probably not needed) app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = True # Provide cross origin resource sharing CORS(app) ################################################# # Database Setup ################################################# from flask_sqlalchemy import SQLAlchemy from sqlalchemy.sql.expression import func #Probably don't need these from SQLAlchemy: asc, desc, between, distinct, func, null, nullsfirst, nullslast, or_, and_, not_ # + # #REVISED PATH HERE WITH JUPYTER NOTEBOOK RUNNING IN `resources` FOLDER: ****************************** # # db_path_flask_app = "sqlite:///data/twitter_trends.db" # db_path_flask_app = "sqlite:///../python/data/twitter_trends.db" # app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL', '') or db_path_flask_app # # Flask-SQLAlchemy database # db = SQLAlchemy(app) # + # # Import the schema for the Location and Trend tables needed for # # 'twitter_trends.sqlite' database tables 'locations' and 'trends' # #DIRECTLY ADD CODE HERE WITH JUPYTER NOTEBOOK: ***************************************** # # from .models import (Location, Trend) # # Database schema for Twitter 'locations' table # class Location(db.Model): # __tablename__ = 'locations' # # Defining the columns for the table 
'locations', # # which will hold all of the locations in the U.S. for which # # top trends data is available, as well as location specific # # info like latitude/longitude # id = db.Column(db.Integer, primary_key=True) # woeid = db.Column(db.Integer, unique=True) # twitter_country = db.Column(db.String(100)) # tritter_country_code = db.Column(db.String(10)) # twitter_name = db.Column(db.String(250)) # twitter_parentid = db.Column(db.Integer) # twitter_type = db.Column(db.String(50)) # country_name = db.Column(db.String(250)) # country_name_only = db.Column(db.String(250)) # country_woeid = db.Column(db.Integer) # county_name = db.Column(db.String(250)) # county_name_only = db.Column(db.String(250)) # county_woeid = db.Column(db.Integer) # latitude = db.Column(db.Float) # longitude = db.Column(db.Float) # name_full = db.Column(db.String(250)) # name_only = db.Column(db.String(250)) # name_woe = db.Column(db.String(250)) # place_type = db.Column(db.String(250)) # state_name = db.Column(db.String(250)) # state_name_only = db.Column(db.String(250)) # state_woeid = db.Column(db.Integer) # timezone = db.Column(db.String(250)) # my_trends = db.relationship('Trend', backref=db.backref('my_location', lazy=True)) # def __repr__(self): # return '<Location %r>' % (self.name) # # Database schema for Twitter 'trends' table # class Trend(db.Model): # __tablename__ = 'trends' # # Defining the columns for the table 'trends', # # which will hold all of the top trends associated with # # locations in the 'locations' table # id = db.Column(db.Integer, primary_key=True) # woeid = db.Column(db.Integer, db.ForeignKey('locations.woeid') ) # twitter_as_of = db.Column(db.String(100)) # twitter_created_at = db.Column(db.String(100)) # twitter_name = db.Column(db.String(250)) # twitter_tweet_name = db.Column(db.String(250)) # twitter_tweet_promoted_content = db.Column(db.String(250)) # twitter_tweet_query = db.Column(db.String(250)) # twitter_tweet_url = db.Column(db.String(250)) # 
twitter_tweet_volume = db.Column(db.Float) # # locations = db.relationship('Location', backref=db.backref('trends', lazy=True)) # def __repr__(self): # return '<Trend %r>' % (self.name) # + # Import database management functions needed to update the # 'twitter_trends.sqlite' database tables 'locations' and 'trends' #DIRECTLY ADD CODE HERE WITH JUPYTER NOTEBOOK: ***************************************** # from .db_management import ( # api_calls_remaining, api_time_before_reset, # update_db_locations_table, update_db_trends_table # ) # This file contains function which update the # 'tritter_trends.sqlite' database tables # 'locations' and 'trends' via API calls to Twitter and Flickr # The following dependencies are only required for update/mgmt of # 'locations' and 'trends' data, not for reading the data import json import time import os import pandas as pd from datetime import datetime from dateutil import tz import requests from pprint import pprint # Import a pointer to the Flask-SQLAlchemy database session # created in the main app.py file # from app import db, Location, Trend #DIRECTLY ADD CODE HERE WITH JUPYTER NOTEBOOK: ***************************************** # from .app import db, app # from .models import Location, Trend # Only perform import if this is being run locally. 
# If being run from Heroku the keys will be provided # via the app environment variables configured there try: # This will run if the keys are all set via Heroku environment # Twitter API key_twitter_tweetquestor_consumer_api_key = os.environ['key_twitter_tweetquestor_consumer_api_key'] key_twitter_tweetquestor_consumer_api_secret_key = os.environ['key_twitter_tweetquestor_consumer_api_secret_key'] key_twitter_tweetquestor_access_token = os.environ['key_twitter_tweetquestor_access_token'] key_twitter_tweetquestor_access_secret_token = os.environ['key_twitter_tweetquestor_access_secret_token'] # Flickr API key_flicker_infoquestor_key = os.environ['key_flicker_infoquestor_key'] key_flicker_infoquestor_secret = os.environ['key_flicker_infoquestor_secret'] except KeyError: # Keys have not been set in the environment # So need to import them locally try: from api_config import * # If the api_config file is not available, then all we can do is flag an error except ImportError: print("Error: At least one of the API Keys has not been populated on Heroku, and api_config not available!") # Setup Tweepy API Authentication to access Twitter import tweepy auth = tweepy.OAuthHandler(key_twitter_tweetquestor_consumer_api_key, key_twitter_tweetquestor_consumer_api_secret_key) auth.set_access_token(key_twitter_tweetquestor_access_token, key_twitter_tweetquestor_access_secret_token) api = tweepy.API(auth, parser=tweepy.parsers.JSONParser()) # # Function Definitions: Twitter API Rate Limit Management def api_calls_remaining( a_type = "place"): # Return the number of Twitter API calls remaining # for the specified API type: # 'place': Top 10 trending topics for a WOEID # 'closest': Locations near a specificed lat/long for which Twitter has trending topic info # 'available': Locations for which Twitter has topic info # # Global Variable: 'api': Tweepy API # # Get Twitter rate limit information using the Tweepy API rate_limits = api.rate_limit_status() # Focus on the rate limits for 
trends calls trends_limits = rate_limits['resources']['trends'] # Return the remaining requests available for the # requested type of trends query (or "" if not a valid type) try: remaining = trends_limits[ f"/trends/{a_type}" ]['remaining'] print(f"Twitter API 'trends/{a_type}' - API Calls Remaining: {remaining}") except: return "" return remaining def api_time_before_reset( a_type = "place"): # Return the number of minutes until the Twitter API is reset # for the specified API type: # 'place': Top 10 trending topics for a WOEID # 'closest': Locations near a specificed lat/long for which Twitter has trending topic info # 'available': Locations for which Twitter has topic info # # Global Variable: 'api': Tweepy API # # Get Twitter rate limit information using the Tweepy API rate_limits = api.rate_limit_status() # Focus on the rate limits for trends calls trends_limits = rate_limits['resources']['trends'] # Return the reset time for the # requested type of trends query (or "" if not a valid type) try: reset_ts = trends_limits[ f"/trends/{a_type}" ]['reset'] except: return -1 # Calculate the remaining time using datetime methods to # get the UTC time from the POSIX timestamp reset_utc = datetime.utcfromtimestamp(reset_ts) # Current the current time current_utc = datetime.utcnow() # Calculate the number of seconds remaining, # Assumption: reset time will be >= current time time_before_reset = (reset_utc - current_utc).total_seconds() / 60.0 # Tell the datetime object that it's in UTC time zone since # datetime objects are 'naive' by default reset_utc = reset_utc.replace(tzinfo = tz.tzutc() ) # Convert time zone reset_local = reset_utc.astimezone( tz.tzlocal() ) # Tell the datetime object that it's in UTC time zone since # datetime objects are 'naive' by default current_utc = current_utc.replace(tzinfo = tz.tzutc() ) # Convert time zone current_local = current_utc.astimezone( tz.tzlocal() ) print(f"Twitter API 'trends/{a_type}' - Time Before Rate Limit Reset: 
{time_before_reset:.1f}: Reset Time: {reset_local.strftime('%Y-%m-%d %H:%M:%S')}, Local Time: {current_local.strftime('%Y-%m-%d %H:%M:%S')}") # Return the time before reset (in minutes) return time_before_reset # # Function Definitions: Twitter Locations with Available Trends Info def get_loc_with_trends_available_to_df( ): # Get locations that have trends data from a api.trends_available() call, # flatten the data, and create a dataframe # Obtain the WOEID locations for which Twitter Trends info is available try: trends_avail = api.trends_available() except TweepError as e: # No top trends info available for this WOEID, return False print(f"Error obtaining top trends for WOEID {a_woeid}: ", e) return False # Import trend availability info into a dataframe trends_avail_df = pd.DataFrame.from_dict(trends_avail, orient='columns') # Retain only locations in the U.S. trends_avail_df = trends_avail_df[ (trends_avail_df['countryCode'] == "US") ] # Reset the index trends_avail_df.reset_index(drop=True, inplace=True) # Flatten the dataframe by unpacking the placeType column information into separate columns trends_avail_df['twitter_type'] = trends_avail_df['placeType'].map( lambda x: x['name']) # Remove unneeded fields trends_avail_df.drop(['placeType', 'url' ], axis='columns' , inplace = True) # Rename the fields trends_avail_df.rename(columns={ 'woeid': 'woeid', 'country': 'twitter_country', 'countryCode': 'tritter_country_code', 'name': 'twitter_name', 'parentid': 'twitter_parentid' }, inplace=True) return trends_avail_df def get_location_info( a_woeid ): # Use Flickr API call to get location information associated with a Yahoo! WOEID # Note: Yahoo! no longer supports this type of lookup! 
:( # Setup the Flickr API base URL flickr_api_base_url = f"https://api.flickr.com/services/rest/?method=flickr.places.getInfo&api_key={key_flicker_infoquestor_key}&format=json&nojsoncallback=1&woe_id=" # Populate the WOEID and convert to string format woeid_to_search = str(a_woeid) # Build the full URL for API REST request flickr_api_url = flickr_api_base_url + woeid_to_search try: # Get the REST response, which will be in JSON format response = requests.get(url=flickr_api_url) except requests.exceptions.RequestException as e: print("Error obtaining location information for WOEID {a_woeid}: ", e) return False # Parse the json location_data = response.json() # Check for failure to locate the information if (location_data['stat'] == 'fail'): print(f"Error finding location WOEID {a_woeid}: {location_data['message']}") #pprint(location_data) # Return just a useful subset of the location info as flattened dictionary key_location_info = {} # Basic information that should be present for any location try: key_location_info.update( { 'woeid': int(location_data['place']['woeid']), 'name_woe': location_data['place']['woe_name'], 'name_full': location_data['place']['name'], 'name_only': location_data['place']['name'].split(",")[0].strip(), 'place_type': location_data['place']['place_type'], 'latitude': float(location_data['place']['latitude']), 'longitude': float(location_data['place']['longitude']), }) except: print("Error - basic location information not returned for WOEID{a_woeid}: ", sys.exc_info()[0]) # Timezone associated with the location - if available try: key_location_info.update( { 'timezone': location_data['place']['timezone'] }) except: key_location_info.update( { 'timezone': None }) # County associated with the location - if available try: key_location_info.update( { 'county_name': location_data['place']['county']['_content'], 'county_name_only': location_data['place']['county']['_content'].split(",")[0].strip(), 'county_woeid': 
int(location_data['place']['county']['woeid']), }) except: key_location_info.update( { 'county_name': None, 'county_name_only': None, 'county_woeid': None, }) # State associated with the location - if available try: key_location_info.update( { 'state_name': location_data['place']['region']['_content'], 'state_name_only': location_data['place']['region']['_content'].split(",")[0].strip(), 'state_woeid': int(location_data['place']['region']['woeid']), }) except: key_location_info.update( { 'state_name': None, 'state_name_only': None, 'state_woeid': None, }) # Country associated with the location - if available try: key_location_info.update( { 'country_name': location_data['place']['country']['_content'], 'country_name_only': location_data['place']['country']['_content'].split(",")[0].strip(), 'country_woeid': int(location_data['place']['country']['woeid']), }) except: key_location_info.update( { 'country_name': None, 'country_name_only': None, 'country_woeid': None, }) return key_location_info def update_db_locations_table(): # Function to update the list of Twitter locations in the'locations' DB table. # This function uses a Twitter API to get the list of locations for which top trends # information is available. It then uses a Flickr API to obtain location details for # each of these Twitter specified locations. A merge is then performed of the two # DataFrames, resulting in a single dataframe that is used to update the 'locations' table. # NOTE: The Twitter 'trends/available' API call is not rate limited. # # This function assumes that the 'locations' table in the database has already been configured # and is ready for data. 
# Flatten the Twitter Trends results and populate in a Dataframe loc_with_trends_available_df = get_loc_with_trends_available_to_df( ) # Use the get_location_info() function to add location info (from Flickr) # for each location (Twitter WOEID) that has trend info loc_info_list = list( loc_with_trends_available_df['woeid'].apply( get_location_info ) ) # Create a DataFrame from the location info list loc_info_df = pd.DataFrame.from_dict(loc_info_list) # Merge the Twitter trend location available dataframe with the # location info dataframe to create a master list of all # Twitter Trend locations and associated location information twitter_trend_locations_df = loc_with_trends_available_df.merge(loc_info_df, how='inner', on='woeid') # Delete all location information currently in the database 'locations' table try: db.session.query(Location).delete() except (UndefinedTable, ProgrammingError, InternalError, InFailedSqlTransaction): # The locations table does not yet exist - ignore this error pass db.session.commit() # Write this table of location data to the database 'locations' table twitter_trend_locations_df.to_sql( 'locations', con=db.engine, if_exists='append', index=False) db.session.commit() # Print an informative message regarding the update just performed num_loc = db.session.query(Location).count() #q_results = db.session.query(Location).all() #print(f"Updated {len(q_results)} locations") #for row in q_results: # print(row.woeid, row.name_full) return num_loc # # Function Definitions: Twitter Top Trends for Twitter Locations def get_trends_for_loc( a_woeid ): # Get top Twitter trending tweets for a location specified by a WOEID, # flatten the data, and return it as a list of dictionaries # Import trend availability info into a dataframe try: top_trends = api.trends_place( a_woeid )[0] except TweepError as e: # No top trends info available for this WOEID, return False print(f"Error obtaining top trends for WOEID {a_woeid}: ", e) return False #pprint(top_trends) 
    # Repeat some information that is common for all elements in the trends list
    common_info = {}

    # Basic information that should be present for any location
    # 'as_of': '2019-03-26T21:22:42Z',
    # 'created_at': '2019-03-26T21:17:18Z',
    # 'locations': [{'name': 'Atlanta', 'woeid': 2357024}]
    try:
        common_info.update( {
            'woeid': int(top_trends['locations'][0]['woeid']),
            'twitter_name': top_trends['locations'][0]['name'],
            'twitter_created_at': top_trends['created_at'],
            'twitter_as_of': top_trends['as_of']
            })
    except (KeyError, IndexError, TypeError, ValueError):
        print(f"Error - basic location information not returned for WOEID {a_woeid}: ",
              sys.exc_info()[0])

    # Loop through all of the trends and store them in a list of dictionary elements
    # 'name': '<NAME>'
    # 'promoted_content': None
    # 'query': '%22Jussie+Smollett%22'
    # 'tweet_volume': 581331
    # 'url': 'http://twitter.com/search?q=%22Jussie+Smollett%22'

    # Return the trends as a list of flattened dictionaries
    trend_info = []
    for ti in top_trends['trends']:
        # Put the trend info into a dictionary, starting with the common info
        this_trend = common_info.copy()
        # Trend details for this location - if available
        try:
            this_trend.update( {
                'twitter_tweet_name': ti['name'],
                'twitter_tweet_promoted_content': ti['promoted_content'],
                'twitter_tweet_query': ti['query'],
                'twitter_tweet_volume': ti['tweet_volume'],
                'twitter_tweet_url': ti['url']
                })
        except KeyError:
            this_trend.update( {
                'twitter_tweet_name': None,
                'twitter_tweet_promoted_content': None,
                'twitter_tweet_query': None,
                'twitter_tweet_volume': None,
                'twitter_tweet_url': None
                })
        # Append this trend to the list
        trend_info.append( this_trend )

    return trend_info


def update_db_trends_table():
    # Function to obtain the list of Twitter locations from the 'locations' DB table.
    # The function then loops through each location,
    # obtains the Twitter top trends info, and then appends that data to the 'trends' table.
# The function uses rate limit check functions to see if the Twitter API call rate limit # is about to be reached, and if so, delays the next relevant API call until the rate limit # is scheduled to be reset (a period of up to 15minutes) before continuing. # # This function assumes that the 'trends' table in the database has already been configured # and is ready for data. # Obtain the list of Twitter locations from the 'locations' DB table loc_list = [ x[0] for x in db.session.query(Location.woeid).all()] print(f"Retrieved {len(loc_list)} locations for processing") # Keep track of the actual number of locations # where trend info was written to the 'trends' table num_location_trends_written_to_db = 0 for tw_woeid in loc_list: print(f">> Updating trends for location {tw_woeid}") # Make sure we haven't hit the rate limit yet calls_remaining = api_calls_remaining( "place" ) time_before_reset = api_time_before_reset( "place") # If we're close to hitting the rate limit for the trends/place API, # then wait until the next reset = # 'time_before_reset' minutes + 1 minute buffer if (calls_remaining < 2): print (f">> Waiting {time_before_reset} minutes due to rate limit") time.sleep( (time_before_reset+1) * 60) # Get trend info for a WOEID location t_info = get_trends_for_loc(tw_woeid) try: # Create a DataFrame t_info_df = pd.DataFrame.from_dict(t_info) # Delete any trends associated with this WOEID # before appending new trends to the 'trends' table for this WOEID try: db.session.query(Trend).filter(Trend.woeid == tw_woeid).delete() except (UndefinedTable, ProgrammingError, InternalError, InFailedSqlTransaction): # The trends table does not yet exist - ignore this error pass db.session.commit() # Append trends for this WOEID to the 'trends' database table t_info_df.to_sql( 'trends', con=db.engine, if_exists='append', index=False) db.session.commit() # Increment the count num_location_trends_written_to_db += 1 except: print(f">> Error occurred with location {tw_woeid} 
while attempting to prepare and write trends data") return num_location_trends_written_to_db # + # #******************************************************************************** # # Default route - display the main page # # NOTE: Flask expects rendered templates to be in the ./templates folder # @app.route("/") # def home(): # return render_template("index.html") # #******************************************************************************** # # Return information relevant to update # # of the 'locations' and 'trends' database tables # @app.route("/update") # def update_info(): # # Obtain remaining number of API calls for trends/place # api_calls_remaining_place = api_calls_remaining( "place") # # Obtain time before rate limits are reset for trends/available # api_time_before_reset_place = api_time_before_reset( "place") # # Obtain remaining number of API calls for trends/place # api_calls_remaining_available = api_calls_remaining( "available") # # Obtain time before rate limits are reset for trends/available # api_time_before_reset_available = api_time_before_reset( "available") # # Count the number of locations in the 'locations' table # n_locations = db.session.query(Location).count() # # Count the number of total trends in the 'trends' table # n_trends = db.session.query(Trend).count() # # Provide the average number of Twitter Trends provided per location # # Use try/except to catch divide by zero # try: # n_trends_per_location_avg = n_trends / n_locations # except ZeroDivisionError: # n_trends_per_location_avg = None # api_info = { # 'api_calls_remaining_place': api_calls_remaining_place, # 'api_time_before_reset_place': api_time_before_reset_place, # 'api_calls_remaining_available': api_calls_remaining_available, # 'api_time_before_reset_available': api_time_before_reset_available, # 'n_locations': n_locations, # 'n_trends': n_trends, # 'n_trends_per_location_avg' : n_trends_per_location_avg # } # return jsonify(api_info) # 
#******************************************************************************** # # Update the 'locations' table via API calls # # Note: Typically requires less than 1 minute # @app.route("/update/locations") # def update_locations_table(): # # Update the locations table through API calls # n_locations = update_db_locations_table() # api_info = { # 'n_locations': n_locations # } # return jsonify(api_info) # #******************************************************************************** # # Update the 'trends' table via API calls # # Note: Typically requires less than 1 minute if no rate limits # # But require up to 15 minutes if rate limits are in effect # @app.route("/update/trends") # def update_trends_table(): # # Update the trends table through API calls # n_location_trends = update_db_trends_table() # api_info = { # 'n_location_trends': n_location_trends # } # return jsonify(api_info) # #******************************************************************************** # # Return a list of all locations with Twitter Top Trend info # @app.route("/locations") # def get_all_locations(): # results = db.session.query(Location).all() # loc_list = [] # for r in results: # loc_info = { # 'woeid': r.woeid, # 'latitude': r.latitude, # 'longitude': r.longitude, # 'name_full': r.name_full, # 'name_only': r.name_only, # 'name_woe': r.name_woe, # 'county_name': r.county_name, # 'county_name_only': r.county_name_only, # 'county_woeid': r.county_woeid, # 'state_name': r.state_name, # 'state_name_only': r.state_name_only, # 'state_woeid': r.state_woeid, # 'country_name': r.country_name, # 'country_name_only': r.country_name_only, # 'country_woeid': r.country_woeid, # 'place_type': r.place_type, # 'timezone': r.timezone, # 'twitter_type': r.twitter_type, # 'twitter_country': r.twitter_country, # 'tritter_country_code': r.tritter_country_code, # 'twitter_name': r.twitter_name, # 'twitter_parentid': r.twitter_parentid # } # # loc_info = { # # 'woeid': r.Location.woeid, # # 
'latitude': r.Location.latitude, # # 'longitude': r.Location.longitude, # # 'name_full': r.Location.name_full, # # 'name_only': r.Location.name_only, # # 'name_woe': r.Location.name_woe, # # 'county_name': r.Location.county_name, # # 'county_name_only': r.Location.county_name_only, # # 'county_woeid': r.Location.county_woeid, # # 'state_name': r.Location.state_name, # # 'state_name_only': r.Location.state_name_only, # # 'state_woeid': r.Location.state_woeid, # # 'country_name': r.Location.country_name, # # 'country_name_only': r.Location.country_name_only, # # 'country_woeid': r.Location.country_woeid, # # 'place_type': r.Location.place_type, # # 'timezone': r.Location.timezone, # # 'twitter_type': r.Location.twitter_type, # # 'twitter_country': r.Location.twitter_country, # # 'tritter_country_code': r.Location.tritter_country_code, # # 'twitter_parentid': r.Location.twitter_parentid, # # 'twitter_as_of': r.Trend.twitter_as_of, # # 'twitter_created_at': r.Trend.twitter_created_at, # # 'twitter_name': r.Trend.twitter_name, # # 'twitter_tweet_name': r.Trend.twitter_tweet_name, # # 'twitter_tweet_promoted_content': r.Trend.twitter_tweet_promoted_content, # # 'twitter_tweet_query': r.Trend.twitter_tweet_query, # # 'twitter_tweet_url': r.Trend.twitter_tweet_url, # # 'twitter_tweet_volume': r.Trend.twitter_tweet_volume # # } # loc_list.append(loc_info) # return jsonify(loc_list) # #******************************************************************************** # # Return a list of one location with Twitter Top Trend info with teh specified WOEID # @app.route("/locations/<a_woeid>") # def get_info_for_location(a_woeid): # results = db.session.query(Location) \ # .filter(Location.woeid == a_woeid) \ # .all() # loc_list = [] # for r in results: # loc_info = { # 'woeid': r.woeid, # 'latitude': r.latitude, # 'longitude': r.longitude, # 'name_full': r.name_full, # 'name_only': r.name_only, # 'name_woe': r.name_woe, # 'county_name': r.county_name, # 'county_name_only': 
r.county_name_only, # 'county_woeid': r.county_woeid, # 'state_name': r.state_name, # 'state_name_only': r.state_name_only, # 'state_woeid': r.state_woeid, # 'country_name': r.country_name, # 'country_name_only': r.country_name_only, # 'country_woeid': r.country_woeid, # 'place_type': r.place_type, # 'timezone': r.timezone, # 'twitter_type': r.twitter_type, # 'twitter_country': r.twitter_country, # 'tritter_country_code': r.tritter_country_code, # 'twitter_name': r.twitter_name, # 'twitter_parentid': r.twitter_parentid # } # loc_list.append(loc_info) # return jsonify(loc_list) # #******************************************************************************** # # Return a list of all locations that have the specified tweet in its top trends # # and then sort the results by tweet volume in descending order # @app.route("/locations/tweet/<a_tweet>") # def get_locations_with_tweet(a_tweet): # results = db.session.query(Trend, Location).join(Location) \ # .filter(Trend.twitter_tweet_name == a_tweet ) \ # .order_by( Trend.twitter_tweet_volume.desc() ).all() # loc_list = [] # for r in results: # #print(f"Trend Information for {r.Trend.woeid} {r.Location.name_full}: {r.Trend.twitter_tweet_name} {r.Trend.twitter_tweet_volume}") # loc_info = { # 'woeid': r.Location.woeid, # 'latitude': r.Location.latitude, # 'longitude': r.Location.longitude, # 'name_full': r.Location.name_full, # 'name_only': r.Location.name_only, # 'name_woe': r.Location.name_woe, # 'county_name': r.Location.county_name, # 'county_name_only': r.Location.county_name_only, # 'county_woeid': r.Location.county_woeid, # 'state_name': r.Location.state_name, # 'state_name_only': r.Location.state_name_only, # 'state_woeid': r.Location.state_woeid, # 'country_name': r.Location.country_name, # 'country_name_only': r.Location.country_name_only, # 'country_woeid': r.Location.country_woeid, # 'place_type': r.Location.place_type, # 'timezone': r.Location.timezone, # 'twitter_type': r.Location.twitter_type, # 
'twitter_country': r.Location.twitter_country, # 'tritter_country_code': r.Location.tritter_country_code, # 'twitter_parentid': r.Location.twitter_parentid, # 'twitter_as_of': r.Trend.twitter_as_of, # 'twitter_created_at': r.Trend.twitter_created_at, # 'twitter_name': r.Trend.twitter_name, # 'twitter_tweet_name': r.Trend.twitter_tweet_name, # 'twitter_tweet_promoted_content': r.Trend.twitter_tweet_promoted_content, # 'twitter_tweet_query': r.Trend.twitter_tweet_query, # 'twitter_tweet_url': r.Trend.twitter_tweet_url, # 'twitter_tweet_volume': r.Trend.twitter_tweet_volume # } # loc_list.append(loc_info) # return jsonify(loc_list) # #******************************************************************************** # # Return the full list of all trends with Twitter Top Trend info # @app.route("/trends") # def get_all_trends(): # results = db.session.query(Trend).all() # trend_list = [] # for r in results: # trend_info = { # 'woeid': r.woeid, # 'twitter_as_of': r.twitter_as_of, # 'twitter_created_at': r.twitter_created_at, # 'twitter_name': r.twitter_name, # 'twitter_tweet_name': r.twitter_tweet_name, # 'twitter_tweet_promoted_content': r.twitter_tweet_promoted_content, # 'twitter_tweet_query': r.twitter_tweet_query, # 'twitter_tweet_url': r.twitter_tweet_url, # 'twitter_tweet_volume': r.twitter_tweet_volume # } # trend_list.append(trend_info) # return jsonify(trend_list) # #******************************************************************************** # # Return the full list of Twitter Top Trends for a specific location # @app.route("/trends/<a_woeid>") # def get_trends_for_location(a_woeid): # results = db.session.query(Trend).filter(Trend.woeid == a_woeid).all() # trend_list = [] # for r in results: # trend_info = { # 'woeid': r.woeid, # 'twitter_as_of': r.twitter_as_of, # 'twitter_created_at': r.twitter_created_at, # 'twitter_name': r.twitter_name, # 'twitter_tweet_name': r.twitter_tweet_name, # 'twitter_tweet_promoted_content': r.twitter_tweet_promoted_content, 
# 'twitter_tweet_query': r.twitter_tweet_query, # 'twitter_tweet_url': r.twitter_tweet_url, # 'twitter_tweet_volume': r.twitter_tweet_volume # } # trend_list.append(trend_info) # return jsonify(trend_list) # #******************************************************************************** # # Return the top 5 list of Twitter Top Trends for a specific location # @app.route("/trends/top/<a_woeid>") # def get_top_trends_for_location(a_woeid): # results = db.session.query(Trend) \ # .filter(Trend.woeid == a_woeid) \ # .order_by(Trend.twitter_tweet_volume.desc()) \ # .limit(5).all() # trend_list = [] # for r in results: # trend_info = { # 'woeid': r.woeid, # 'twitter_as_of': r.twitter_as_of, # 'twitter_created_at': r.twitter_created_at, # 'twitter_name': r.twitter_name, # 'twitter_tweet_name': r.twitter_tweet_name, # 'twitter_tweet_promoted_content': r.twitter_tweet_promoted_content, # 'twitter_tweet_query': r.twitter_tweet_query, # 'twitter_tweet_url': r.twitter_tweet_url, # 'twitter_tweet_volume': r.twitter_tweet_volume # } # trend_list.append(trend_info) # return jsonify(trend_list) # if __name__ == "__main__": # app.run() # - # # Verify Basic DB functions using SQLite # + # Read from locations table results = db.session.query(Location).all() loc_list = [] for r in results: loc_info = { 'woeid': r.woeid, 'latitude': r.latitude, 'longitude': r.longitude, 'name_full': r.name_full, 'name_only': r.name_only, 'name_woe': r.name_woe, 'county_name': r.county_name, 'county_name_only': r.county_name_only, 'county_woeid': r.county_woeid, 'state_name': r.state_name, 'state_name_only': r.state_name_only, 'state_woeid': r.state_woeid, 'country_name': r.country_name, 'country_name_only': r.country_name_only, 'country_woeid': r.country_woeid, 'place_type': r.place_type, 'timezone': r.timezone, 'twitter_type': r.twitter_type, 'twitter_country': r.twitter_country, 'tritter_country_code': r.tritter_country_code, 'twitter_name': r.twitter_name, 'twitter_parentid': r.twitter_parentid } 
loc_list.append(loc_info) # - print(len(loc_list)) pprint(loc_list) # + # Read from trends table results = db.session.query(Trend).all() trend_list = [] for r in results: trend_info = { 'woeid': r.woeid, 'twitter_as_of': r.twitter_as_of, 'twitter_created_at': r.twitter_created_at, 'twitter_name': r.twitter_name, 'twitter_tweet_name': r.twitter_tweet_name, 'twitter_tweet_promoted_content': r.twitter_tweet_promoted_content, 'twitter_tweet_query': r.twitter_tweet_query, 'twitter_tweet_url': r.twitter_tweet_url, 'twitter_tweet_volume': r.twitter_tweet_volume } trend_list.append(trend_info) # - print(len(trend_list)) pprint(trend_list) # + # Check API rate limits information # Obtain remaining number of API calls for trends/place api_calls_remaining_place = api_calls_remaining( "place") # Obtain time before rate limits are reset for trends/available api_time_before_reset_place = api_time_before_reset( "place") # Obtain remaining number of API calls for trends/place api_calls_remaining_available = api_calls_remaining( "available") # Obtain time before rate limits are reset for trends/available api_time_before_reset_available = api_time_before_reset( "available") # Count the number of locations in the 'locations' table n_locations = db.session.query(Location).count() # Count the number of total trends in the 'trends' table n_trends = db.session.query(Trend).count() # Provide the average number of Twitter Trends provided per location # Use try/except to catch divide by zero try: n_trends_per_location_avg = n_trends / n_locations except ZeroDivisionError: n_trends_per_location_avg = None api_info = { 'api_calls_remaining_place': api_calls_remaining_place, 'api_time_before_reset_place': api_time_before_reset_place, 'api_calls_remaining_available': api_calls_remaining_available, 'api_time_before_reset_available': api_time_before_reset_available, 'n_locations': n_locations, 'n_trends': n_trends, 'n_trends_per_location_avg' : n_trends_per_location_avg } pprint(api_info) # + # 
Update locations table # n_locations = update_db_locations_table() # print(n_locations) # + # Update trends table # n_location_trends = update_db_trends_table() # print(n_location_trends) # - # # Shift to PostgreSQL # Import keys and other info # postgres_geotweetapp_login # postgres_geotweetapp_password from api_config import * # + #REVISED PATH HERE WITH JUPYTER NOTEBOOK RUNNING IN `resources` FOLDER: ****************************** # db_path_flask_app = "sqlite:///data/twitter_trends.db" db_path_flask_app = f"postgresql://{postgres_geotweetapp_login}:{postgres_geotweetapp_password}@localhost/twitter_trends" app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL', '') or db_path_flask_app # Flask-SQLAlchemy database db = SQLAlchemy(app) # + # Import the schema for the Location and Trend tables needed for # 'twitter_trends.sqlite' database tables 'locations' and 'trends' #DIRECTLY ADD CODE HERE WITH JUPYTER NOTEBOOK: ***************************************** # from .models import (Location, Trend) # Database schema for Twitter 'locations' table class Location(db.Model): __tablename__ = 'locations' # Defining the columns for the table 'locations', # which will hold all of the locations in the U.S. 
for which # top trends data is available, as well as location specific # info like latitude/longitude id = db.Column(db.Integer, primary_key=True) woeid = db.Column(db.Integer, unique=True, nullable=False) twitter_country = db.Column(db.String(100)) tritter_country_code = db.Column(db.String(10)) twitter_name = db.Column(db.String(250)) twitter_parentid = db.Column(db.Integer) twitter_type = db.Column(db.String(50)) country_name = db.Column(db.String(250)) country_name_only = db.Column(db.String(250)) country_woeid = db.Column(db.Integer) county_name = db.Column(db.String(250)) county_name_only = db.Column(db.String(250)) county_woeid = db.Column(db.Integer) latitude = db.Column(db.Float) longitude = db.Column(db.Float) name_full = db.Column(db.String(250)) name_only = db.Column(db.String(250)) name_woe = db.Column(db.String(250)) place_type = db.Column(db.String(250)) state_name = db.Column(db.String(250)) state_name_only = db.Column(db.String(250)) state_woeid = db.Column(db.Integer) timezone = db.Column(db.String(250)) my_trends = db.relationship('Trend', backref=db.backref('my_location', lazy=True)) def __repr__(self): return '<Location %r>' % (self.name) # Database schema for Twitter 'trends' table class Trend(db.Model): __tablename__ = 'trends' # Defining the columns for the table 'trends', # which will hold all of the top trends associated with # locations in the 'locations' table id = db.Column(db.Integer, primary_key=True) woeid = db.Column(db.Integer, db.ForeignKey('locations.woeid') ) twitter_as_of = db.Column(db.String(100)) twitter_created_at = db.Column(db.String(100)) twitter_name = db.Column(db.String(250)) twitter_tweet_name = db.Column(db.String(250)) twitter_tweet_promoted_content = db.Column(db.String(250)) twitter_tweet_query = db.Column(db.String(250)) twitter_tweet_url = db.Column(db.String(250)) twitter_tweet_volume = db.Column(db.Float) # locations = db.relationship('Location', backref=db.backref('trends', lazy=True)) def __repr__(self): 
        return '<Trend %r>' % (self.twitter_tweet_name)
# -

#DIRECTLY ADD CODE HERE WITH JUPYTER NOTEBOOK: *****************************************
# Initialize the database on Heroku start-up
# from python.app import db
db.create_all()

# Update locations table
n_locations = update_db_locations_table()
print(n_locations)
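# The update functions above refresh the database by deleting a location's old
# rows and then appending the new ones with `DataFrame.to_sql(..., if_exists='append')`.
# A minimal, self-contained sketch of that delete-then-append pattern is shown below;
# it uses an in-memory SQLite table rather than the app's database, and the column
# set is a reduced, illustrative subset of the real 'trends' schema.

```python
# Delete-then-append refresh pattern, sketched against in-memory SQLite
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")

# Seed a 'trends' table with stale rows for WOEID 1 and a current row for WOEID 2
pd.DataFrame({"woeid": [1, 1, 2],
              "twitter_tweet_name": ["old_a", "old_b", "keep"]}) \
    .to_sql("trends", conn, index=False)

# New trend rows for WOEID 1, as produced by a flattener like get_trends_for_loc()
new_rows = pd.DataFrame({"woeid": [1, 1],
                         "twitter_tweet_name": ["new_a", "new_b"]})

# Delete only this location's rows, then append the replacements;
# rows belonging to other locations are left untouched
conn.execute("DELETE FROM trends WHERE woeid = ?", (1,))
new_rows.to_sql("trends", conn, if_exists="append", index=False)

result = pd.read_sql("SELECT * FROM trends ORDER BY woeid, twitter_tweet_name", conn)
print(result)
```

# The same sequence (filtered delete, commit, append, commit) is what the
# SQLAlchemy-based code above does per WOEID, so a failed API call for one
# location never wipes the trends of the others.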
# # Hartree-Fock methods # # # ## Why Hartree-Fock? Derivation of Hartree-Fock equations in coordinate space # # Hartree-Fock (HF) theory is an algorithm for finding an approximative expression for the ground state of a given Hamiltonian. The basic ingredients are # * Define a single-particle basis $\{\psi_{\alpha}\}$ so that # $$ # \hat{h}^{\mathrm{HF}}\psi_{\alpha} = \varepsilon_{\alpha}\psi_{\alpha} # $$ # with the Hartree-Fock Hamiltonian defined as # $$ # \hat{h}^{\mathrm{HF}}=\hat{t}+\hat{u}_{\mathrm{ext}}+\hat{u}^{\mathrm{HF}} # $$ # * The term $\hat{u}^{\mathrm{HF}}$ is a single-particle potential to be determined by the HF algorithm. # # * The HF algorithm means to choose $\hat{u}^{\mathrm{HF}}$ in order to have # $$ # \langle \hat{H} \rangle = E^{\mathrm{HF}}= \langle \Phi_0 | \hat{H}|\Phi_0 \rangle # $$ # that is to find a local minimum with a Slater determinant $\Phi_0$ being the ansatz for the ground state. # * The variational principle ensures that $E^{\mathrm{HF}} \ge E_0$, with $E_0$ the exact ground state energy. # # We will show that the Hartree-Fock Hamiltonian $\hat{h}^{\mathrm{HF}}$ equals our definition of the operator $\hat{f}$ discussed in connection with the new definition of the normal-ordered Hamiltonian (see later lectures), that is we have, for a specific matrix element # $$ # \langle p |\hat{h}^{\mathrm{HF}}| q \rangle =\langle p |\hat{f}| q \rangle=\langle p|\hat{t}+\hat{u}_{\mathrm{ext}}|q \rangle +\sum_{i\le F} \langle pi | \hat{V} | qi\rangle_{AS}, # $$ # meaning that # $$ # \langle p|\hat{u}^{\mathrm{HF}}|q\rangle = \sum_{i\le F} \langle pi | \hat{V} | qi\rangle_{AS}. # $$ # The so-called Hartree-Fock potential $\hat{u}^{\mathrm{HF}}$ brings an explicit medium dependence due to the summation over all single-particle states below the Fermi level $F$. It brings also in an explicit dependence on the two-body interaction (in nuclear physics we can also have complicated three- or higher-body forces). 
The two-body interaction, with its contribution from the other bystanding fermions, creates an effective mean field in which a given fermion moves, in addition to the external potential $\hat{u}_{\mathrm{ext}}$ which confines the motion of the fermion. For systems like nuclei, there is no external confining potential. Nuclei are examples of self-bound systems, where the binding arises due to the intrinsic nature of the strong force. For nuclear systems thus, there would be no external one-body potential in the Hartree-Fock Hamiltonian. # # ## Variational Calculus and Lagrangian Multipliers # # The calculus of variations involves # problems where the quantity to be minimized or maximized is an integral. # # In the general case we have an integral of the type # $$ # E[\Phi]= \int_a^b f(\Phi(x),\frac{\partial \Phi}{\partial x},x)dx, # $$ # where $E$ is the quantity which is sought minimized or maximized. # The problem is that although $f$ is a function of the variables $\Phi$, $\partial \Phi/\partial x$ and $x$, the exact dependence of # $\Phi$ on $x$ is not known. This means again that even though the integral has fixed limits $a$ and $b$, the path of integration is # not known. In our case the unknown quantities are the single-particle wave functions and we wish to choose an integration path which makes # the functional $E[\Phi]$ stationary. This means that we want to find minima, or maxima or saddle points. In physics we search normally for minima. # Our task is therefore to find the minimum of $E[\Phi]$ so that its variation $\delta E$ is zero subject to specific # constraints. In our case the constraints appear as the integral which expresses the orthogonality of the single-particle wave functions. # The constraints can be treated via the technique of Lagrangian multipliers # # Let us specialize to the expectation value of the energy for one particle in three-dimensions. 
# This expectation value reads # $$ # E=\int dxdydz \psi^*(x,y,z) \hat{H} \psi(x,y,z), # $$ # with the constraint # $$ # \int dxdydz \psi^*(x,y,z) \psi(x,y,z)=1, # $$ # and a Hamiltonian # $$ # \hat{H}=-\frac{1}{2}\nabla^2+V(x,y,z). # $$ # We will, for the sake of notational convenience, skip the variables $x,y,z$ below, and write for example $V(x,y,z)=V$. # # The integral involving the kinetic energy can be written as, with the function $\psi$ vanishing # strongly for large values of $x,y,z$ (given here by the limits $a$ and $b$), # $$ # \int_a^b dxdydz \psi^* \left(-\frac{1}{2}\nabla^2\right) \psi dxdydz = \psi^*\nabla\psi|_a^b+\int_a^b dxdydz\frac{1}{2}\nabla\psi^*\nabla\psi. # $$ # We will drop the limits $a$ and $b$ in the remaining discussion. # Inserting this expression into the expectation value for the energy and taking the variational minimum we obtain # $$ # \delta E = \delta \left\{\int dxdydz\left( \frac{1}{2}\nabla\psi^*\nabla\psi+V\psi^*\psi\right)\right\} = 0. # $$ # The constraint appears in integral form as # $$ # \int dxdydz \psi^* \psi=\mathrm{constant}, # $$ # and multiplying with a Lagrangian multiplier $\lambda$ and taking the variational minimum we obtain the final variational equation # $$ # \delta \left\{\int dxdydz\left( \frac{1}{2}\nabla\psi^*\nabla\psi+V\psi^*\psi-\lambda\psi^*\psi\right)\right\} = 0. # $$ # We introduce the function $f$ # $$ # f = \frac{1}{2}\nabla\psi^*\nabla\psi+V\psi^*\psi-\lambda\psi^*\psi= # \frac{1}{2}(\psi^*_x\psi_x+\psi^*_y\psi_y+\psi^*_z\psi_z)+V\psi^*\psi-\lambda\psi^*\psi, # $$ # where we have skipped the dependence on $x,y,z$ and introduced the shorthand $\psi_x$, $\psi_y$ and $\psi_z$ for the various derivatives. 
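# The constrained minimization above can be checked numerically: discretizing
# $\psi$ on a grid turns the variational problem, with the normalization constraint
# handled by the Lagrange multiplier $\lambda$, into a matrix eigenvalue problem
# $H\psi = \lambda\psi$, and the smallest eigenvalue is the ground state energy.
# A minimal sketch (not part of the derivation) for the one-dimensional harmonic
# oscillator $V(x)=x^2/2$, whose exact ground state energy is $0.5$:

```python
# Discretized variational problem: lowest eigenvalue = Lagrange multiplier = E_0
import numpy as np

n, L = 801, 10.0                 # number of grid points, box half-width
x = np.linspace(-L, L, n)
h = x[1] - x[0]

# Second derivative via central finite differences (Dirichlet boundaries)
d2 = (np.diag(np.full(n - 1, 1.0), -1)
      - 2.0 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), 1)) / h**2

# Hamiltonian H = -(1/2) d^2/dx^2 + V(x), with V(x) = x^2/2
H = -0.5 * d2 + np.diag(0.5 * x**2)

eigvals = np.linalg.eigvalsh(H)
print(f"lowest eigenvalue (Lagrange multiplier): {eigvals[0]:.4f}")
```

# The lowest eigenvalue reproduces the exact value $0.5$ to the accuracy of the
# finite-difference grid, illustrating that the Lagrange multiplier is indeed the
# energy of the system.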
# # For $\psi^*$ the Euler-Lagrange equations yield # $$ # \frac{\partial f}{\partial \psi^*}- \frac{\partial }{\partial x}\frac{\partial f}{\partial \psi^*_x}-\frac{\partial }{\partial y}\frac{\partial f}{\partial \psi^*_y}-\frac{\partial }{\partial z}\frac{\partial f}{\partial \psi^*_z}=0, # $$ # which results in # $$ # -\frac{1}{2}(\psi_{xx}+\psi_{yy}+\psi_{zz})+V\psi=\lambda \psi. # $$ # We can then identify the Lagrangian multiplier as the energy of the system. The last equation is # nothing but the standard # Schroedinger equation and the variational approach discussed here provides # a powerful method for obtaining approximate solutions of the wave function. # # # # ## Derivation of Hartree-Fock equations in coordinate space # # Let us denote the ground state energy by $E_0$. According to the # variational principle we have # $$ # E_0 \le E[\Phi] = \int \Phi^*\hat{H}\Phi d\mathbf{\tau} # $$ # where $\Phi$ is a trial function which we assume to be normalized # $$ # \int \Phi^*\Phi d\mathbf{\tau} = 1, # $$ # where we have used the shorthand $d\mathbf{\tau}=dx_1dx_2\dots dx_A$. # # # # # In the Hartree-Fock method the trial function is a Slater # determinant which can be rewritten as # $$ # \Psi(x_1,x_2,\dots,x_A,\alpha,\beta,\dots,\nu) = \frac{1}{\sqrt{A!}}\sum_{P} (-)^PP\psi_{\alpha}(x_1) # \psi_{\beta}(x_2)\dots\psi_{\nu}(x_A)=\sqrt{A!}\hat{A}\Phi_H, # $$ # where we have introduced the anti-symmetrization operator $\hat{A}$ defined by the # summation over all possible permutations *p* of two fermions. # It is defined as # $$ # \hat{A} = \frac{1}{A!}\sum_{p} (-)^p\hat{P}, # $$ # with the the Hartree-function given by the simple product of all possible single-particle function # $$ # \Phi_H(x_1,x_2,\dots,x_A,\alpha,\beta,\dots,\nu) = # \psi_{\alpha}(x_1) # \psi_{\beta}(x_2)\dots\psi_{\nu}(x_A). 
# $$ # Our functional is written as # $$ # E[\Phi] = \sum_{\mu=1}^A \int \psi_{\mu}^*(x_i)\hat{h}_0(x_i)\psi_{\mu}(x_i) dx_i # + \frac{1}{2}\sum_{\mu=1}^A\sum_{\nu=1}^A # \left[ \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j)\hat{v}(r_{ij})\psi_{\mu}(x_i)\psi_{\nu}(x_j)dx_idx_j- \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j) # \hat{v}(r_{ij})\psi_{\nu}(x_i)\psi_{\mu}(x_j)dx_idx_j\right] # $$ # The more compact version reads # $$ # E[\Phi] # = \sum_{\mu}^A \langle \mu | \hat{h}_0 | \mu\rangle+ \frac{1}{2}\sum_{\mu\nu}^A\left[\langle \mu\nu |\hat{v}|\mu\nu\rangle-\langle \nu\mu |\hat{v}|\mu\nu\rangle\right]. # $$ # Since the interaction is invariant under the interchange of two particles it means for example that we have # $$ # \langle \mu\nu|\hat{v}|\mu\nu\rangle = \langle \nu\mu|\hat{v}|\nu\mu\rangle, # $$ # or in the more general case # $$ # \langle \mu\nu|\hat{v}|\sigma\tau\rangle = \langle \nu\mu|\hat{v}|\tau\sigma\rangle. # $$ # The direct and exchange matrix elements can be brought together if we define the antisymmetrized matrix element # $$ # \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}= \langle \mu\nu|\hat{v}|\mu\nu\rangle-\langle \mu\nu|\hat{v}|\nu\mu\rangle, # $$ # or for a general matrix element # $$ # \langle \mu\nu|\hat{v}|\sigma\tau\rangle_{AS}= \langle \mu\nu|\hat{v}|\sigma\tau\rangle-\langle \mu\nu|\hat{v}|\tau\sigma\rangle. # $$ # It has the symmetry property # $$ # \langle \mu\nu|\hat{v}|\sigma\tau\rangle_{AS}= -\langle \mu\nu|\hat{v}|\tau\sigma\rangle_{AS}=-\langle \nu\mu|\hat{v}|\sigma\tau\rangle_{AS}. # $$ # The antisymmetric matrix element is also hermitian, implying # $$ # \langle \mu\nu|\hat{v}|\sigma\tau\rangle_{AS}= \langle \sigma\tau|\hat{v}|\mu\nu\rangle_{AS}. 
# $$
# With these notations we rewrite the Hartree-Fock functional as
# <!-- Equation labels as ordinary links -->
# <div id="H2Expectation2"></div>
#
# $$
# \begin{equation}
# \int \Phi^*\hat{H_I}\Phi d\mathbf{\tau}
# = \frac{1}{2}\sum_{\mu=1}^A\sum_{\nu=1}^A \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}. \label{H2Expectation2} \tag{1}
# \end{equation}
# $$
# Adding the contribution from the one-body operator $\hat{H}_0$ to
# ([1](#H2Expectation2)) we obtain the energy functional
# <!-- Equation labels as ordinary links -->
# <div id="FunctionalEPhi"></div>
#
# $$
# \begin{equation}
# E[\Phi]
# = \sum_{\mu=1}^A \langle \mu | h | \mu \rangle +
# \frac{1}{2}\sum_{{\mu}=1}^A\sum_{{\nu}=1}^A \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}. \label{FunctionalEPhi} \tag{2}
# \end{equation}
# $$
# In our coordinate space derivations below we will spell out the Hartree-Fock equations in terms of their integrals.
#
#
#
#
# If we generalize the Euler-Lagrange equations to more variables
# and introduce $N^2$ Lagrange multipliers which we denote by
# $\epsilon_{\mu\nu}$, we can write the variational equation for the functional of $E$
# $$
# \delta E - \sum_{\mu\nu}^A \epsilon_{\mu\nu} \delta
# \int \psi_{\mu}^* \psi_{\nu} = 0.
# $$
# For the orthogonal wave functions $\psi_{i}$ this reduces to
# $$
# \delta E - \sum_{\mu=1}^A \epsilon_{\mu} \delta
# \int \psi_{\mu}^* \psi_{\mu} = 0.
# $$
# Variation with respect to the single-particle wave functions $\psi_{\mu}$ yields then
# $$
# \sum_{\mu=1}^A \int \delta\psi_{\mu}^*\hat{h_0}(x_i)\psi_{\mu}
# dx_i
# + \frac{1}{2}\sum_{{\mu}=1}^A\sum_{{\nu}=1}^A \left[ \int
# \delta\psi_{\mu}^*\psi_{\nu}^*\hat{v}(r_{ij})\psi_{\mu}\psi_{\nu} dx_idx_j- \int
# \delta\psi_{\mu}^*\psi_{\nu}^*\hat{v}(r_{ij})\psi_{\nu}\psi_{\mu}
# dx_idx_j \right] +
# $$
# $$
# \sum_{\mu=1}^A \int \psi_{\mu}^*\hat{h_0}(x_i)\delta\psi_{\mu}
# dx_i
# + \frac{1}{2}\sum_{{\mu}=1}^A\sum_{{\nu}=1}^A \left[ \int
# \psi_{\mu}^*\psi_{\nu}^*\hat{v}(r_{ij})\delta\psi_{\mu}\psi_{\nu} dx_idx_j- \int
# \psi_{\mu}^*\psi_{\nu}^*\hat{v}(r_{ij})\psi_{\nu}\delta\psi_{\mu}
# dx_idx_j \right]- \sum_{{\mu}=1}^A E_{\mu} \int \delta\psi_{\mu}^*
# \psi_{\mu}dx_i
# - \sum_{{\mu}=1}^A E_{\mu} \int \psi_{\mu}^*
# \delta\psi_{\mu}dx_i = 0.
# $$
# Although the variations $\delta\psi$ and $\delta\psi^*$ are not
# independent, they may in fact be treated as such, so that the
# terms dependent on either $\delta\psi$ or $\delta\psi^*$ individually
# may be set equal to zero. To see this, simply
# replace the arbitrary variation $\delta\psi$ by $i\delta\psi$, so that
# $\delta\psi^*$ is replaced by $-i\delta\psi^*$, and combine the two
# equations. We thus arrive at the Hartree-Fock equations
# <!-- Equation labels as ordinary links -->
# <div id="eq:hartreefockcoordinatespace"></div>
#
# $$
# \begin{equation}
# \left[ -\frac{1}{2}\nabla_i^2+ \sum_{\nu=1}^A\int \psi_{\nu}^*(x_j)\hat{v}(r_{ij})\psi_{\nu}(x_j)dx_j \right]\psi_{\mu}(x_i) - \sum_{{\nu}=1}^A \left[ \int\psi_{\nu}^*(x_j)\hat{v}(r_{ij})\psi_{\mu}(x_j) dx_j\right] \psi_{\nu}(x_i) = \epsilon_{\mu} \psi_{\mu}(x_i). \label{eq:hartreefockcoordinatespace} \tag{3}
# \end{equation}
# $$
# Notice that the integration $\int dx_j$ implies an
# integration over the spatial coordinates $\mathbf{r_j}$ and a summation
# over the spin-coordinate of fermion $j$. We note that the factor of $1/2$ in front of the sum involving the two-body interaction has been removed. This is due to the fact that we need to vary both $\delta\psi_{\mu}^*$ and
# $\delta\psi_{\nu}^*$. Using the symmetry properties of the two-body interaction and interchanging $\mu$ and $\nu$
# as summation indices, we obtain two identical terms.
#
#
#
#
# The first two terms in the last equation are the one-body kinetic energy and the
# electron-nucleus potential. The third or *direct* term is the averaged electronic repulsion of the other
# electrons. As written, the
# term includes the *self-interaction* of
# electrons when $\mu=\nu$. The self-interaction is cancelled in the fourth
# term, or the *exchange* term. The exchange term results from our
# inclusion of the Pauli principle and the assumed determinantal form of
# the wave function.
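#
# As a purely illustrative numerical aside, the direct term can be made concrete on a grid. The sketch below evaluates $V^{d}(x)=\int \vert\psi(x')\vert^2 v(x,x')dx'$ in one dimension for a Gaussian orbital and a soft-Coulomb interaction; both choices are assumptions made for the illustration, not part of the derivation above.

```python
import numpy as np

# Toy 1D illustration of the direct (Hartree) potential
#   V^d(x) = integral over x' of |psi(x')|^2 v(x, x')
# for a Gaussian orbital and a soft-Coulomb interaction (illustrative choices).
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]

psi = np.pi**-0.25 * np.exp(-0.5 * x**2)                # normalized orbital
v = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)   # soft-Coulomb v(x, x')

# Rectangle-rule quadrature for the integral over x'
V_direct = (np.abs(psi)**2 * v).sum(axis=1) * dx

# The orbital carries unit norm, so far from the charge V^d(x) ~ 1/|x|
assert abs((np.abs(psi)**2).sum() * dx - 1.0) < 1e-6
assert abs(V_direct[-1] - 1.0 / abs(x[-1])) < 0.02
```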
# Equation ([3](#eq:hartreefockcoordinatespace)), in addition to the kinetic energy and the attraction from the atomic nucleus that confines the motion of a single electron, now represents the motion of a single particle modified by the two-body interaction. The additional contribution to the Schroedinger equation due to the two-body interaction represents a mean field set up by all the other surrounding electrons, the latter given by the sum over all single-particle states occupied by $N$ electrons.
#
# The Hartree-Fock equation is an example of an integro-differential equation. These equations involve repeated calculations of integrals, in addition to the solution of a set of coupled differential equations.
# The Hartree-Fock equations can also be rewritten in terms of an eigenvalue problem. The solution of an eigenvalue problem often represents a more practical algorithm than the solution of coupled integro-differential equations.
# This alternative derivation of the Hartree-Fock equations is given below.
#
#
#
#
# ## Analysis of Hartree-Fock equations in coordinate space
#
# A theoretically convenient form of the
# Hartree-Fock equation is to regard the direct and exchange operators
# defined through
# $$
# V_{\mu}^{d}(x_i) = \int \psi_{\mu}^*(x_j)
# \hat{v}(r_{ij})\psi_{\mu}(x_j) dx_j
# $$
# and
# $$
# V_{\mu}^{ex}(x_i) g(x_i)
# = \left(\int \psi_{\mu}^*(x_j)
# \hat{v}(r_{ij})g(x_j) dx_j
# \right)\psi_{\mu}(x_i),
# $$
# respectively.
#
#
#
#
# The function $g(x_i)$ is an arbitrary function,
# and by the substitution $g(x_i) = \psi_{\nu}(x_i)$
# we get
# $$
# V_{\mu}^{ex}(x_i) \psi_{\nu}(x_i)
# = \left(\int \psi_{\mu}^*(x_j)
# \hat{v}(r_{ij})\psi_{\nu}(x_j)
# dx_j\right)\psi_{\mu}(x_i).
# $$
# We may then rewrite the Hartree-Fock equations as
# $$
# \hat{h}^{HF}(x_i) \psi_{\nu}(x_i) = \epsilon_{\nu}\psi_{\nu}(x_i),
# $$
# with
# $$
# \hat{h}^{HF}(x_i)= \hat{h}_0(x_i) + \sum_{\mu=1}^AV_{\mu}^{d}(x_i) -
# \sum_{\mu=1}^AV_{\mu}^{ex}(x_i),
# $$
# and where $\hat{h}_0(i)$ is the one-body part. The latter is normally chosen as a part which yields solutions in closed form. The harmonic oscillator is a classic example.
# We normally rewrite the last equation as
# $$
# \hat{h}^{HF}(x_i)= \hat{h}_0(x_i) + \hat{u}^{HF}(x_i).
# $$
# ## Hartree-Fock by varying the coefficients of a wave function expansion
#
# Another possibility is to expand the single-particle functions in a known basis and vary the coefficients,
# that is, the new single-particle wave function is written as a linear expansion
# in terms of a fixed chosen orthogonal basis (for example the well-known harmonic oscillator functions or the hydrogen-like functions).
# We define our new Hartree-Fock single-particle basis by performing a unitary transformation
# on our previous basis (labelled with Greek indices) as
# <!-- Equation labels as ordinary links -->
# <div id="eq:newbasis"></div>
#
# $$
# \begin{equation}
# \psi_p^{HF} = \sum_{\lambda} C_{p\lambda}\phi_{\lambda}. \label{eq:newbasis} \tag{4}
# \end{equation}
# $$
# In this case we vary the coefficients $C_{p\lambda}$. If the basis has infinitely many solutions, we need
# to truncate the above sum. We assume that the basis $\phi_{\lambda}$ is orthogonal.
#
#
#
#
# It is normal to choose a single-particle basis defined as the eigenfunctions
# of parts of the full Hamiltonian. The typical situation consists of the solutions of the one-body part of the Hamiltonian, that is we have
# $$
# \hat{h}_0\phi_{\lambda}=\epsilon_{\lambda}\phi_{\lambda}.
# $$
# The single-particle wave functions $\phi_{\lambda}(\mathbf{r})$, defined by the quantum numbers $\lambda$ and $\mathbf{r}$,
# are defined as the overlap
# $$
# \phi_{\lambda}(\mathbf{r}) = \langle \mathbf{r} | \lambda \rangle .
# $$
# In deriving the Hartree-Fock equations, we will expand the single-particle functions in a known basis and vary the coefficients,
# that is, the new single-particle wave function is written as a linear expansion
# in terms of a fixed chosen orthogonal basis (for example the well-known harmonic oscillator functions or the hydrogen-like functions).
#
# We stated that a unitary transformation preserves orthogonality. To see this, consider first a basis of vectors $\mathbf{v}_i$,
# $$
# \mathbf{v}_i = \begin{bmatrix} v_{i1} \\ \dots \\ \dots \\v_{in} \end{bmatrix}.
# $$
# We assume that the basis is orthogonal, that is
# $$
# \mathbf{v}_j^T\mathbf{v}_i = \delta_{ij}.
# $$
# An orthogonal or unitary transformation
# $$
# \mathbf{w}_i=\mathbf{U}\mathbf{v}_i,
# $$
# preserves the dot product and orthogonality since
# $$
# \mathbf{w}_j^T\mathbf{w}_i=(\mathbf{U}\mathbf{v}_j)^T\mathbf{U}\mathbf{v}_i=\mathbf{v}_j^T\mathbf{U}^T\mathbf{U}\mathbf{v}_i= \mathbf{v}_j^T\mathbf{v}_i = \delta_{ij}.
# $$
# This means that if the coefficients $C_{p\lambda}$ belong to a unitary or orthogonal transformation (using the Dirac bra-ket notation)
# $$
# \vert p\rangle = \sum_{\lambda} C_{p\lambda}\vert\lambda\rangle,
# $$
# orthogonality is preserved, that is $\langle \alpha \vert \beta\rangle = \delta_{\alpha\beta}$
# and $\langle p \vert q\rangle = \delta_{pq}$.
#
# This property is extremely useful when we build up a basis of many-body Slater determinant based states.
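#
# The statement that a unitary transformation preserves orthonormality is easy to check numerically. A minimal sketch, using a random real orthogonal matrix obtained from a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# A random real orthogonal matrix U from a QR decomposition
U, _ = np.linalg.qr(rng.normal(size=(n, n)))

# Start from the orthonormal basis vectors v_i (the columns of the identity)
V = np.eye(n)

# Transformed basis w_i = U v_i, stored column by column
W = U @ V

# w_j^T w_i = delta_ij: orthonormality is preserved
assert np.allclose(W.T @ W, np.eye(n))
```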
#
# **Note also that although a basis $\vert \alpha\rangle$ contains an infinity of states, for practical calculations we always have to make some truncations.**
#
#
#
#
#
# Before we develop the Hartree-Fock equations, there is another very useful property of determinants that we will use both in connection with Hartree-Fock calculations and later shell-model calculations.
#
# Consider the following determinant
# $$
# \left| \begin{array}{cc} \alpha_1b_{11}+\alpha_2b_{12}& a_{12}\\
# \alpha_1b_{21}+\alpha_2b_{22}&a_{22}\end{array} \right|=\alpha_1\left|\begin{array}{cc} b_{11}& a_{12}\\
# b_{21}&a_{22}\end{array} \right|+\alpha_2\left| \begin{array}{cc} b_{12}& a_{12}\\b_{22}&a_{22}\end{array} \right|
# $$
# We can generalize this to an $n\times n$ matrix and have
# $$
# \left| \begin{array}{cccccc} a_{11}& a_{12} & \dots & \sum_{k=1}^n c_k b_{1k} &\dots & a_{1n}\\
# a_{21}& a_{22} & \dots & \sum_{k=1}^n c_k b_{2k} &\dots & a_{2n}\\
# \dots & \dots & \dots & \dots & \dots & \dots \\
# \dots & \dots & \dots & \dots & \dots & \dots \\
# a_{n1}& a_{n2} & \dots & \sum_{k=1}^n c_k b_{nk} &\dots & a_{nn}\end{array} \right|=
# \sum_{k=1}^n c_k\left| \begin{array}{cccccc} a_{11}& a_{12} & \dots & b_{1k} &\dots & a_{1n}\\
# a_{21}& a_{22} & \dots & b_{2k} &\dots & a_{2n}\\
# \dots & \dots & \dots & \dots & \dots & \dots\\
# \dots & \dots & \dots & \dots & \dots & \dots\\
# a_{n1}& a_{n2} & \dots & b_{nk} &\dots & a_{nn}\end{array} \right| .
# $$
# This is a property we will use in our Hartree-Fock discussions.
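#
# The $2\times 2$ case of this linearity property can be verified directly; the entries below are random numbers chosen only for the check:

```python
import numpy as np

rng = np.random.default_rng(1)
a12, a22 = rng.normal(size=2)
b11, b12, b21, b22 = rng.normal(size=4)
alpha1, alpha2 = rng.normal(size=2)

# Determinant with a linear combination in the first column ...
lhs = np.linalg.det(np.array([[alpha1*b11 + alpha2*b12, a12],
                              [alpha1*b21 + alpha2*b22, a22]]))

# ... equals the same linear combination of two determinants
rhs = (alpha1*np.linalg.det(np.array([[b11, a12], [b21, a22]]))
       + alpha2*np.linalg.det(np.array([[b12, a12], [b22, a22]])))

assert np.isclose(lhs, rhs)
```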
# # # # # We can generalize the previous results, now # with all elements $a_{ij}$ being given as functions of # linear combinations of various coefficients $c$ and elements $b_{ij}$, # $$ # \left| \begin{array}{cccccc} \sum_{k=1}^n b_{1k}c_{k1}& \sum_{k=1}^n b_{1k}c_{k2} & \dots & \sum_{k=1}^n b_{1k}c_{kj} &\dots & \sum_{k=1}^n b_{1k}c_{kn}\\ # \sum_{k=1}^n b_{2k}c_{k1}& \sum_{k=1}^n b_{2k}c_{k2} & \dots & \sum_{k=1}^n b_{2k}c_{kj} &\dots & \sum_{k=1}^n b_{2k}c_{kn}\\ # \dots & \dots & \dots & \dots & \dots & \dots \\ # \dots & \dots & \dots & \dots & \dots &\dots \\ # \sum_{k=1}^n b_{nk}c_{k1}& \sum_{k=1}^n b_{nk}c_{k2} & \dots & \sum_{k=1}^n b_{nk}c_{kj} &\dots & \sum_{k=1}^n b_{nk}c_{kn}\end{array} \right|=det(\mathbf{C})det(\mathbf{B}), # $$ # where $det(\mathbf{C})$ and $det(\mathbf{B})$ are the determinants of $n\times n$ matrices # with elements $c_{ij}$ and $b_{ij}$ respectively. # This is a property we will use in our Hartree-Fock discussions. Convince yourself about the correctness of the above expression by setting $n=2$. # # # # # # # With our definition of the new basis in terms of an orthogonal basis we have # $$ # \psi_p(x) = \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x). # $$ # If the coefficients $C_{p\lambda}$ belong to an orthogonal or unitary matrix, the new basis # is also orthogonal. 
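#
# To convince oneself of this numerically for $n=2$, note that the matrix with elements $\sum_k b_{ik}c_{kj}$ is simply the product $\mathbf{B}\mathbf{C}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
B = rng.normal(size=(n, n))
C = rng.normal(size=(n, n))

# The matrix with elements sum_k b_ik c_kj is just the product B C
M = B @ C

assert np.isclose(np.linalg.det(M), np.linalg.det(B) * np.linalg.det(C))
```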
# Our Slater determinant in the new basis $\psi_p(x)$ is written as # $$ # \frac{1}{\sqrt{A!}} # \left| \begin{array}{ccccc} \psi_{p}(x_1)& \psi_{p}(x_2)& \dots & \dots & \psi_{p}(x_A)\\ # \psi_{q}(x_1)&\psi_{q}(x_2)& \dots & \dots & \psi_{q}(x_A)\\ # \dots & \dots & \dots & \dots & \dots \\ # \dots & \dots & \dots & \dots & \dots \\ # \psi_{t}(x_1)&\psi_{t}(x_2)& \dots & \dots & \psi_{t}(x_A)\end{array} \right|=\frac{1}{\sqrt{A!}} # \left| \begin{array}{ccccc} \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_1)& \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_A)\\ # \sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_1)&\sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_A)\\ # \dots & \dots & \dots & \dots & \dots \\ # \dots & \dots & \dots & \dots & \dots \\ # \sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_1)&\sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_A)\end{array} \right|, # $$ # which is nothing but $det(\mathbf{C})det(\Phi)$, with $det(\Phi)$ being the determinant given by the basis functions $\phi_{\lambda}(x)$. # # # # In our discussions hereafter we will use our definitions of single-particle states above and below the Fermi ($F$) level given by the labels # $ijkl\dots \le F$ for so-called single-hole states and $abcd\dots > F$ for so-called particle states. # For general single-particle states we employ the labels $pqrs\dots$. # # # # # In Eq. ([2](#FunctionalEPhi)), restated here # $$ # E[\Phi] # = \sum_{\mu=1}^A \langle \mu | h | \mu \rangle + # \frac{1}{2}\sum_{{\mu}=1}^A\sum_{{\nu}=1}^A \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}, # $$ # we found the expression for the energy functional in terms of the basis function $\phi_{\lambda}(\mathbf{r})$. We then varied the above energy functional with respect to the basis functions $|\mu \rangle$. 
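#
# Once the one- and two-body matrix elements are tabulated, the energy functional of Eq. ([2](#FunctionalEPhi)) is a finite sum over the occupied states. A toy evaluation with random (hypothetical) matrix elements, where the antisymmetrization is imposed by hand:

```python
import numpy as np

rng = np.random.default_rng(3)
A, n = 2, 4  # number of occupied states and toy basis size

h = rng.normal(size=(n, n))
h = 0.5 * (h + h.T)                    # one-body matrix <mu|h|nu>, real symmetric
v = rng.normal(size=(n, n, n, n))
v_as = v - v.transpose(0, 1, 3, 2)     # <mu nu|v|sigma tau>_AS from random elements

# E[Phi] = sum_mu <mu|h|mu> + (1/2) sum_{mu,nu} <mu nu|v|mu nu>_AS
E = (sum(h[mu, mu] for mu in range(A))
     + 0.5 * sum(v_as[mu, nu, mu, nu] for mu in range(A) for nu in range(A)))

# The antisymmetrized diagonal element with mu = nu vanishes (no self-interaction)
assert np.isclose(v_as[0, 0, 0, 0], 0.0)
```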
# Now we are interested in defining a new basis defined in terms of # a chosen basis as defined in Eq. ([4](#eq:newbasis)). We can then rewrite the energy functional as # <!-- Equation labels as ordinary links --> # <div id="FunctionalEPhi2"></div> # # $$ # \begin{equation} # E[\Phi^{HF}] # = \sum_{i=1}^A \langle i | h | i \rangle + # \frac{1}{2}\sum_{ij=1}^A\langle ij|\hat{v}|ij\rangle_{AS}, \label{FunctionalEPhi2} \tag{5} # \end{equation} # $$ # where $\Phi^{HF}$ is the new Slater determinant defined by the new basis of Eq. ([4](#eq:newbasis)). # # # # # # Using Eq. ([4](#eq:newbasis)) we can rewrite Eq. ([5](#FunctionalEPhi2)) as # <!-- Equation labels as ordinary links --> # <div id="FunctionalEPhi3"></div> # # $$ # \begin{equation} # E[\Psi] # = \sum_{i=1}^A \sum_{\alpha\beta} C^*_{i\alpha}C_{i\beta}\langle \alpha | h | \beta \rangle + # \frac{1}{2}\sum_{ij=1}^A\sum_{{\alpha\beta\gamma\delta}} C^*_{i\alpha}C^*_{j\beta}C_{i\gamma}C_{j\delta}\langle \alpha\beta|\hat{v}|\gamma\delta\rangle_{AS}. \label{FunctionalEPhi3} \tag{6} # \end{equation} # $$ # We wish now to minimize the above functional. We introduce again a set of Lagrange multipliers, noting that # since $\langle i | j \rangle = \delta_{i,j}$ and $\langle \alpha | \beta \rangle = \delta_{\alpha,\beta}$, # the coefficients $C_{i\gamma}$ obey the relation # $$ # \langle i | j \rangle=\delta_{i,j}=\sum_{\alpha\beta} C^*_{i\alpha}C_{i\beta}\langle \alpha | \beta \rangle= # \sum_{\alpha} C^*_{i\alpha}C_{i\alpha}, # $$ # which allows us to define a functional to be minimized that reads # <!-- Equation labels as ordinary links --> # <div id="_auto1"></div> # # $$ # \begin{equation} # F[\Phi^{HF}]=E[\Phi^{HF}] - \sum_{i=1}^A\epsilon_i\sum_{\alpha} C^*_{i\alpha}C_{i\alpha}. 
# \label{_auto1} \tag{7} # \end{equation} # $$ # Minimizing with respect to $C^*_{i\alpha}$, remembering that the equations for $C^*_{i\alpha}$ and $C_{i\alpha}$ # can be written as two independent equations, we obtain # $$ # \frac{d}{dC^*_{i\alpha}}\left[ E[\Phi^{HF}] - \sum_{j}\epsilon_j\sum_{\alpha} C^*_{j\alpha}C_{j\alpha}\right]=0, # $$ # which yields for every single-particle state $i$ and index $\alpha$ (recalling that the coefficients $C_{i\alpha}$ are matrix elements of a unitary (or orthogonal for a real symmetric matrix) matrix) # the following Hartree-Fock equations # $$ # \sum_{\beta} C_{i\beta}\langle \alpha | h | \beta \rangle+ # \sum_{j=1}^A\sum_{\beta\gamma\delta} C^*_{j\beta}C_{j\delta}C_{i\gamma}\langle \alpha\beta|\hat{v}|\gamma\delta\rangle_{AS}=\epsilon_i^{HF}C_{i\alpha}. # $$ # We can rewrite this equation as (changing dummy variables) # $$ # \sum_{\beta} \left\{\langle \alpha | h | \beta \rangle+ # \sum_{j}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}\right\}C_{i\beta}=\epsilon_i^{HF}C_{i\alpha}. # $$ # Note that the sums over greek indices run over the number of basis set functions (in principle an infinite number). # # # # # # Defining # $$ # h_{\alpha\beta}^{HF}=\langle \alpha | h | \beta \rangle+ # \sum_{j=1}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}, # $$ # we can rewrite the new equations as # <!-- Equation labels as ordinary links --> # <div id="eq:newhf"></div> # # $$ # \begin{equation} # \sum_{\beta}h_{\alpha\beta}^{HF}C_{i\beta}=\epsilon_i^{HF}C_{i\alpha}. \label{eq:newhf} \tag{8} # \end{equation} # $$ # The latter is nothing but a standard eigenvalue problem. Compared with Eq. ([3](#eq:hartreefockcoordinatespace)), # we see that we do not need to compute any integrals in an iterative procedure for solving the equations. 
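#
# A single step of Eq. ([8](#eq:newhf)) is thus one matrix diagonalization. A minimal sketch with a random (hypothetical) real symmetric Hartree-Fock matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
h_hf = rng.normal(size=(n, n))
h_hf = 0.5 * (h_hf + h_hf.T)   # a hypothetical real symmetric Hartree-Fock matrix

# Eq. (8): sum_beta h^HF_{alpha beta} C_{i beta} = eps_i C_{i alpha};
# eigh returns the eigenvectors as columns, so column i holds C_{i beta}
eps, C = np.linalg.eigh(h_hf)

# Verify the eigenvalue relation for the lowest eigenpair
assert np.allclose(h_hf @ C[:, 0], eps[0] * C[:, 0])
```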
# It suffices to tabulate the matrix elements $\langle \alpha | h | \beta \rangle$ and $\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}$ once and for all. Successive iterations thus require only a look-up in tables of one-body and two-body matrix elements. These details will be discussed below when we solve the Hartree-Fock equations numerically.
#
#
#
# ## Hartree-Fock algorithm
#
# Our Hartree-Fock matrix is thus
# $$
# \hat{h}_{\alpha\beta}^{HF}=\langle \alpha | \hat{h}_0 | \beta \rangle+
# \sum_{j=1}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}.
# $$
# The Hartree-Fock equations are solved in an iterative way, starting with a guess for the coefficients $C_{j\gamma}=\delta_{j,\gamma}$ and solving the equations by diagonalization until the new single-particle energies
# $\epsilon_i^{\mathrm{HF}}$ change by less than a prefixed quantity.
#
#
#
#
# Normally we assume that the single-particle basis $|\beta\rangle$ forms an eigenbasis for the operator
# $\hat{h}_0$, meaning that the Hartree-Fock matrix becomes
# $$
# \hat{h}_{\alpha\beta}^{HF}=\epsilon_{\alpha}\delta_{\alpha,\beta}+
# \sum_{j=1}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}.
# $$
# The Hartree-Fock eigenvalue problem
# $$
# \sum_{\beta}\hat{h}_{\alpha\beta}^{HF}C_{i\beta}=\epsilon_i^{\mathrm{HF}}C_{i\alpha},
# $$
# can be written out in a more compact form as
# $$
# \hat{h}^{HF}\hat{C}=\epsilon^{\mathrm{HF}}\hat{C}.
# $$
# The Hartree-Fock equations are, in their simplest form, solved in an iterative way, starting with a guess for the
# coefficients $C_{i\alpha}$. We label the coefficients as $C_{i\alpha}^{(n)}$, where the superscript $n$ stands for iteration $n$.
# To set up the algorithm we can proceed as follows:
#
# * We start with a guess $C_{i\alpha}^{(0)}=\delta_{i,\alpha}$. Alternatively, we could have used random starting values as long as the vectors are normalized.
# Another possibility is to give states below the Fermi level a larger weight.
#
# * The Hartree-Fock matrix simplifies then to (assuming that the coefficients $C_{i\alpha}$ are real)
# $$
# \hat{h}_{\alpha\beta}^{HF}=\epsilon_{\alpha}\delta_{\alpha,\beta}+
# \sum_{j = 1}^A\sum_{\gamma\delta} C_{j\gamma}^{(0)}C_{j\delta}^{(0)}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}.
# $$
# Solving the Hartree-Fock eigenvalue problem yields then new eigenvectors $C_{i\alpha}^{(1)}$ and eigenvalues
# $\epsilon_i^{HF(1)}$.
# * With the new eigenvalues we can set up a new Hartree-Fock potential
# $$
# \sum_{j = 1}^A\sum_{\gamma\delta} C_{j\gamma}^{(1)}C_{j\delta}^{(1)}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}.
# $$
# The diagonalization with the new Hartree-Fock potential yields new eigenvectors and eigenvalues.
# This process is continued until for example
# $$
# \frac{\sum_{i} |\epsilon_i^{(n)}-\epsilon_i^{(n-1)}|}{m} \le \lambda,
# $$
# where $\lambda$ is a user prefixed quantity ($\lambda \sim 10^{-8}$ or smaller), $i$ runs over all calculated single-particle
# energies and $m$ is the number of single-particle states.
#
#
#
# ## Analysis of Hartree-Fock equations and Koopman's theorem
#
# We can rewrite the ground state energy by adding and subtracting $\hat{u}^{HF}(x_i)$
# $$
# E_0^{HF} =\langle \Phi_0 | \hat{H} | \Phi_0\rangle =
# \sum_{i\le F}^A \langle i | \hat{h}_0 +\hat{u}^{HF}| i\rangle+ \frac{1}{2}\sum_{i\le F}^A\sum_{j \le F}^A\left[\langle ij |\hat{v}|ij \rangle-\langle ij|\hat{v}|ji\rangle\right]-\sum_{i\le F}^A \langle i |\hat{u}^{HF}| i\rangle,
# $$
# which results in
# $$
# E_0^{HF}
# = \sum_{i\le F}^A \varepsilon_i^{HF} + \frac{1}{2}\sum_{i\le F}^A\sum_{j \le F}^A\left[\langle ij |\hat{v}|ij \rangle-\langle ij|\hat{v}|ji\rangle\right]-\sum_{i\le F}^A \langle i |\hat{u}^{HF}| i\rangle.
# $$
# Our single-particle states $ijk\dots$ are now single-particle states obtained from the solution of the Hartree-Fock equations.
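#
# The rewriting of the ground-state energy above can be checked numerically. In the toy model below, with random (hypothetical) antisymmetrized matrix elements and a diagonal one-body part, the direct evaluation of $E_0^{HF}$ agrees with the form expressed through the Hartree-Fock single-particle energies:

```python
import numpy as np

rng = np.random.default_rng(5)
A, n = 3, 5  # occupied states i <= F and toy basis size

h0 = rng.normal(size=n)              # diagonal one-body energies <i|h0|i>
v = rng.normal(size=(n, n, n, n))
v_as = v - v.transpose(0, 1, 3, 2)   # stand-in antisymmetrized elements

occ = range(A)
# eps_i^HF = <i|h0|i> + sum_{j<=F} <ij|v|ij>_AS
eps = [h0[i] + sum(v_as[i, j, i, j] for j in occ) for i in occ]

E0_direct = (sum(h0[i] for i in occ)
             + 0.5 * sum(v_as[i, j, i, j] for i in occ for j in occ))
E0_from_eps = (sum(eps)
               - 0.5 * sum(v_as[i, j, i, j] for i in occ for j in occ))

assert np.isclose(E0_direct, E0_from_eps)
```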
#
#
#
# Using our definition of the Hartree-Fock single-particle energies we obtain then the following expression for the total ground-state energy
# $$
# E_0^{HF}
# = \sum_{i\le F}^A \varepsilon_i - \frac{1}{2}\sum_{i\le F}^A\sum_{j \le F}^A\left[\langle ij |\hat{v}|ij \rangle-\langle ij|\hat{v}|ji\rangle\right].
# $$
# This form will be used in our discussion of Koopman's theorem.
#
#
#
# In the atomic physics case we have
# $$
# E[\Phi^{\mathrm{HF}}(N)]
# = \sum_{i=1}^N \langle i | \hat{h}_0 | i \rangle +
# \frac{1}{2}\sum_{ij=1}^N\langle ij|\hat{v}|ij\rangle_{AS},
# $$
# where $\Phi^{\mathrm{HF}}(N)$ is the new Slater determinant defined by the new basis of Eq. ([4](#eq:newbasis))
# for $N$ electrons (same $Z$). If we assume that the single-particle wave functions in the new basis do not change
# when we remove one electron or add one electron, we can then define the corresponding energy for the $N-1$ system as
# $$
# E[\Phi^{\mathrm{HF}}(N-1)]
# = \sum_{i=1; i\ne k}^N \langle i | \hat{h}_0 | i \rangle +
# \frac{1}{2}\sum_{ij=1;i,j\ne k}^N\langle ij|\hat{v}|ij\rangle_{AS},
# $$
# where we have removed a single-particle state $k\le F$, that is a state below the Fermi level.
#
#
#
# Calculating the difference
# $$
# E[\Phi^{\mathrm{HF}}(N)]- E[\Phi^{\mathrm{HF}}(N-1)] = \langle k | \hat{h}_0 | k \rangle +
# \frac{1}{2}\sum_{i=1;i\ne k}^N\langle ik|\hat{v}|ik\rangle_{AS} + \frac{1}{2}\sum_{j=1;j\ne k}^N\langle kj|\hat{v}|kj\rangle_{AS},
# $$
# we obtain
# $$
# E[\Phi^{\mathrm{HF}}(N)]- E[\Phi^{\mathrm{HF}}(N-1)] = \langle k | \hat{h}_0 | k \rangle +\sum_{j=1}^N\langle kj|\hat{v}|kj\rangle_{AS},
# $$
# which is just our definition of the Hartree-Fock single-particle energy
# $$
# E[\Phi^{\mathrm{HF}}(N)]- E[\Phi^{\mathrm{HF}}(N-1)] = \epsilon_k^{\mathrm{HF}}.
# $$
# Similarly, we can now compute the difference (we label the single-particle states above the Fermi level as $abcd > F$)
# $$
# E[\Phi^{\mathrm{HF}}(N+1)]- E[\Phi^{\mathrm{HF}}(N)]= \epsilon_a^{\mathrm{HF}}.
# $$
# These two equations can thus be used to compute the ionization energy or the electron affinity, respectively.
# Koopman's theorem states that for example the ionization energy of a closed-shell system is given by the energy of the highest occupied single-particle state. If we assume that changing the number of electrons from $N$ to $N+1$ does not change the Hartree-Fock single-particle energies and eigenfunctions, then Koopman's theorem simply states that the ionization energy of an atom is given by the single-particle energy of the last bound state. In a similar way, we can also define the electron affinities.
#
#
#
#
# As an example, consider a simple model for atomic sodium, Na. Neutral sodium has eleven electrons,
# with the weakest bound one occupying the $3s$ single-particle state. The energy needed to remove an electron from neutral sodium is rather small, 5.1391 eV, a feature which pertains to all alkali metals.
# Having performed a Hartree-Fock calculation for neutral sodium then allows us to compute the
# ionization energy by using the single-particle energy for the $3s$ states, namely $\epsilon_{3s}^{\mathrm{HF}}$.
#
# From these considerations, we see that Hartree-Fock theory allows us to make a connection between experimental
# observables (here ionization and affinity energies) and the underlying interactions between particles.
# In this sense, we are now linking the dynamics and structure of a many-body system with the laws of motion which govern the system. Our approach is a reductionistic one, meaning that we want to understand the laws of motion
# in terms of the particles or degrees of freedom which we believe are the fundamental ones. Our Slater determinant, being constructed as the product of various single-particle functions, follows this philosophy.
#
#
#
#
# With similar arguments as in atomic physics, we can now use Hartree-Fock theory to make a link
# between nuclear forces and separation energies. Changing to the nuclear case, we define
# $$
# E[\Phi^{\mathrm{HF}}(A)]
# = \sum_{i=1}^A \langle i | \hat{h}_0 | i \rangle +
# \frac{1}{2}\sum_{ij=1}^A\langle ij|\hat{v}|ij\rangle_{AS},
# $$
# where $\Phi^{\mathrm{HF}}(A)$ is the new Slater determinant defined by the new basis of Eq. ([4](#eq:newbasis))
# for $A$ nucleons, where $A=N+Z$, with $N$ now being the number of neutrons and $Z$ the number of protons. If we assume again that the single-particle wave functions in the new basis do not change from a nucleus with $A$ nucleons to a nucleus with $A-1$ nucleons, we can then define the corresponding energy for the $A-1$ system as
# $$
# E[\Phi^{\mathrm{HF}}(A-1)]
# = \sum_{i=1; i\ne k}^A \langle i | \hat{h}_0 | i \rangle +
# \frac{1}{2}\sum_{ij=1;i,j\ne k}^A\langle ij|\hat{v}|ij\rangle_{AS},
# $$
# where we have removed a single-particle state $k\le F$, that is a state below the Fermi level.
# # # # # Calculating the difference # $$ # E[\Phi^{\mathrm{HF}}(A)]- E[\Phi^{\mathrm{HF}}(A-1)] # = \langle k | \hat{h}_0 | k \rangle + # \frac{1}{2}\sum_{i=1;i\ne k}^A\langle ik|\hat{v}|ik\rangle_{AS} + \frac{1}{2}\sum_{j=1;j\ne k}^A\langle kj|\hat{v}|kj\rangle_{AS}, # $$ # which becomes # $$ # E[\Phi^{\mathrm{HF}}(A)]- E[\Phi^{\mathrm{HF}}(A-1)] # = \langle k | \hat{h}_0 | k \rangle +\sum_{j=1}^A\langle kj|\hat{v}|kj\rangle_{AS} # $$ # which is just our definition of the Hartree-Fock single-particle energy # $$ # E[\Phi^{\mathrm{HF}}(A)]- E[\Phi^{\mathrm{HF}}(A-1)] # = \epsilon_k^{\mathrm{HF}} # $$ # Similarly, we can now compute the difference (recall that the single-particle states $abcd > F$) # $$ # E[\Phi^{\mathrm{HF}}(A+1)]- E[\Phi^{\mathrm{HF}}(A)]= \epsilon_a^{\mathrm{HF}}. # $$ # If we then recall that the binding energy differences # $$ # BE(A)-BE(A-1) \hspace{0.5cm} \mathrm{and} \hspace{0.5cm} BE(A+1)-BE(A), # $$ # define the separation energies, we see that the Hartree-Fock single-particle energies can be used to # define separation energies. We have thus our first link between nuclear forces (included in the potential energy term) and an observable quantity defined by differences in binding energies. # # # # # We have thus the following interpretations (if the single-particle fields do not change) # $$ # BE(A)-BE(A-1)\approx E[\Phi^{\mathrm{HF}}(A)]- E[\Phi^{\mathrm{HF}}(A-1)] # = \epsilon_k^{\mathrm{HF}}, # $$ # and # $$ # BE(A+1)-BE(A)\approx E[\Phi^{\mathrm{HF}}(A+1)]- E[\Phi^{\mathrm{HF}}(A)] = \epsilon_a^{\mathrm{HF}}. # $$ # If we use $^{16}\mbox{O}$ as our closed-shell nucleus, we could then interpret the separation energy # $$ # BE(^{16}\mathrm{O})-BE(^{15}\mathrm{O})\approx \epsilon_{0p^{\nu}_{1/2}}^{\mathrm{HF}}, # $$ # and # $$ # BE(^{16}\mathrm{O})-BE(^{15}\mathrm{N})\approx \epsilon_{0p^{\pi}_{1/2}}^{\mathrm{HF}}. 
# $$
# Similarly, we could interpret
# $$
# BE(^{17}\mathrm{O})-BE(^{16}\mathrm{O})\approx \epsilon_{0d^{\nu}_{5/2}}^{\mathrm{HF}},
# $$
# and
# $$
# BE(^{17}\mathrm{F})-BE(^{16}\mathrm{O})\approx\epsilon_{0d^{\pi}_{5/2}}^{\mathrm{HF}}.
# $$
# We can continue like this for all $A\pm 1$ nuclei where $A$ is a good closed-shell (or subshell closure)
# nucleus. Examples are $^{22}\mbox{O}$, $^{24}\mbox{O}$, $^{40}\mbox{Ca}$, $^{48}\mbox{Ca}$, $^{52}\mbox{Ca}$, $^{54}\mbox{Ca}$, $^{56}\mbox{Ni}$,
# $^{68}\mbox{Ni}$, $^{78}\mbox{Ni}$, $^{90}\mbox{Zr}$, $^{88}\mbox{Sr}$, $^{100}\mbox{Sn}$, $^{132}\mbox{Sn}$ and $^{208}\mbox{Pb}$, to mention some possible cases.
#
#
#
#
# We can thus make our first interpretation of the separation energies in terms of the simplest
# possible many-body theory.
# If we also recall that the so-called energy gap for neutrons (or protons) is defined as
# $$
# \Delta S_n= 2BE(N,Z)-BE(N-1,Z)-BE(N+1,Z),
# $$
# for neutrons and the corresponding gap for protons
# $$
# \Delta S_p= 2BE(N,Z)-BE(N,Z-1)-BE(N,Z+1),
# $$
# we can define the neutron and proton energy gaps for $^{16}\mbox{O}$ as
# $$
# \Delta S_{\nu}=\epsilon_{0d^{\nu}_{5/2}}^{\mathrm{HF}}-\epsilon_{0p^{\nu}_{1/2}}^{\mathrm{HF}},
# $$
# and
# $$
# \Delta S_{\pi}=\epsilon_{0d^{\pi}_{5/2}}^{\mathrm{HF}}-\epsilon_{0p^{\pi}_{1/2}}^{\mathrm{HF}}.
# $$
# <!-- --- begin exercise --- -->
#
# ## Exercise 1: Derivation of Hartree-Fock equations
#
# Consider a Slater determinant built up of single-particle orbitals $\psi_{\lambda}$,
# with $\lambda = 1,2,\dots,N$.
#
# The unitary transformation
# $$
# \psi_a = \sum_{\lambda} C_{a\lambda}\phi_{\lambda},
# $$
# brings us into the new basis.
# The new basis has quantum numbers $a=1,2,\dots,N$.
#
#
# **a)**
# Show that the new basis is orthonormal.
#
# **b)**
# Show that the new Slater determinant constructed from the new single-particle wave functions can be
# written as the determinant based on the previous basis and the determinant of the matrix $C$.
# # **c)** # Show that the old and the new Slater determinants are equal up to a complex constant with absolute value unity. # # <!-- --- begin hint in exercise --- --> # # **Hint.** # Use the fact that $C$ is a unitary matrix. # # <!-- --- end hint in exercise --- --> # # # # # <!-- --- end exercise --- --> # # # # # <!-- --- begin exercise --- --> # # ## Exercise 2: Derivation of Hartree-Fock equations # # Consider the Slater determinant # $$ # \Phi_{0}=\frac{1}{\sqrt{n!}}\sum_{p}(-)^{p}P # \prod_{i=1}^{n}\psi_{\alpha_{i}}(x_{i}). # $$ # A small variation in this function is given by # $$ # \delta\Phi_{0}=\frac{1}{\sqrt{n!}}\sum_{p}(-)^{p}P # \psi_{\alpha_{1}}(x_{1})\psi_{\alpha_{2}}(x_{2})\dots # \psi_{\alpha_{i-1}}(x_{i-1})(\delta\psi_{\alpha_{i}}(x_{i})) # \psi_{\alpha_{i+1}}(x_{i+1})\dots\psi_{\alpha_{n}}(x_{n}). # $$ # **a)** # Show that # $$ # \langle \delta\Phi_{0}|\sum_{i=1}^{n}\left\{t(x_{i})+u(x_{i}) # \right\}+\frac{1}{2} # \sum_{i\neq j=1}^{n}v(x_{i},x_{j})|\Phi_{0}\rangle=\sum_{i=1}^{n}\langle \delta\psi_{\alpha_{i}}|\hat{t}+\hat{u} # |\phi_{\alpha_{i}}\rangle # +\sum_{i\neq j=1}^{n}\left\{\langle\delta\psi_{\alpha_{i}} # \psi_{\alpha_{j}}|\hat{v}|\psi_{\alpha_{i}}\psi_{\alpha_{j}}\rangle- # \langle\delta\psi_{\alpha_{i}}\psi_{\alpha_{j}}|\hat{v} # |\psi_{\alpha_{j}}\psi_{\alpha_{i}}\rangle\right\} # $$ # <!-- --- end exercise --- --> # # # # # <!-- --- begin exercise --- --> # # ## Exercise 3: Developing a Hartree-Fock program # # Neutron drops are a powerful theoretical laboratory for testing, # validating and improving nuclear structure models. Indeed, all # approaches to nuclear structure, from ab initio theory to shell model # to density functional theory are applicable in such systems. We will, # therefore, use neutron drops as a test system for setting up a # Hartree-Fock code. This program can later be extended to studies of # the binding energy of nuclei like $^{16}$O or $^{40}$Ca. 
# The
# single-particle energies obtained by solving the Hartree-Fock
# equations can then be directly related to experimental separation
# energies.
# Since Hartree-Fock theory is the starting point for
# several many-body techniques (density functional theory, random-phase
# approximation, shell-model etc), the aim here is to develop a computer
# program to solve the Hartree-Fock equations in a given single-particle basis,
# here the harmonic oscillator.
#
# The Hamiltonian for a neutron drop of $N$ neutrons confined in a
# harmonic potential reads
# $$
# \hat{H} = \sum_{i=1}^{N} \frac{\hat{p}_{i}^{2}}{2m}+\sum_{i=1}^{N} \frac{1}{2} m\omega^2 {r}_{i}^{2}+\sum_{i<j} \hat{V}_{ij},
# $$
# with $\hbar^{2}/2m = 20.73$ fm$^{2}$, $mc^{2} = 938.90590$ MeV, and
# $\hat{V}_{ij}$ is the two-body interaction potential whose
# matrix elements are precalculated
# and to be read in by you.
#
# The Hartree-Fock algorithm can be broken down as follows. We recall that our Hartree-Fock matrix is
# $$
# \hat{h}_{\alpha\beta}^{HF}=\langle \alpha \vert\hat{h}_0 \vert \beta \rangle+
# \sum_{j=1}^N\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|V|\beta\delta\rangle_{AS}.
# $$
# Normally we assume that the single-particle basis $\vert\beta\rangle$
# forms an eigenbasis for the operator $\hat{h}_0$ (this is our case), meaning that the
# Hartree-Fock matrix becomes
# $$
# \hat{h}_{\alpha\beta}^{HF}=\epsilon_{\alpha}\delta_{\alpha,\beta}+
# \sum_{j=1}^N\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|V|\beta\delta\rangle_{AS}.
# $$
# The Hartree-Fock eigenvalue problem
# $$
# \sum_{\beta}\hat{h}_{\alpha\beta}^{HF}C_{i\beta}=\epsilon_i^{\mathrm{HF}}C_{i\alpha},
# $$
# can be written out in a more compact form as
# $$
# \hat{h}^{HF}\hat{C}=\epsilon^{\mathrm{HF}}\hat{C}.
# $$
# The equations are often rewritten in terms of a so-called density matrix,
# which is defined as
# <!-- Equation labels as ordinary links -->
# <div id="_auto2"></div>
#
# $$
# \begin{equation}
# \rho_{\gamma\delta}=\sum_{i=1}^{N}\langle\gamma|i\rangle\langle i|\delta\rangle = \sum_{i=1}^{N}C_{i\gamma}C^*_{i\delta}.
# \label{_auto2} \tag{9}
# \end{equation}
# $$
# It means that we can rewrite the Hartree-Fock Hamiltonian as
# $$
# \hat{h}_{\alpha\beta}^{HF}=\epsilon_{\alpha}\delta_{\alpha,\beta}+
# \sum_{\gamma\delta} \rho_{\gamma\delta}\langle \alpha\gamma|V|\beta\delta\rangle_{AS}.
# $$
# It is convenient to use the density matrix since the products of the eigenvector components $C$ can then be precalculated once in every iteration.
#
#
# Note that $\langle \alpha\vert\hat{h}_0\vert\beta \rangle$ denotes the
# matrix elements of the one-body part of the starting Hamiltonian. For
# self-bound nuclei $\langle \alpha\vert\hat{h}_0\vert\beta \rangle$ is the
# kinetic energy, whereas for neutron drops, $\langle \alpha \vert \hat{h}_0 \vert \beta \rangle$ represents the harmonic oscillator Hamiltonian since
# the system is confined in a harmonic trap. If we are working in a
# harmonic oscillator basis with the same $\omega$ as the trapping
# potential, then $\langle \alpha\vert\hat{h}_0 \vert \beta \rangle$ is
# diagonal.
#
#
# The Python
# [program](https://github.com/CompPhysics/ManyBodyMethods/tree/master/doc/src/hfock/Code)
# shows how one can, in a brute-force way, read in matrix elements in
# $m$-scheme and compute the Hartree-Fock single-particle energies for
# four major shells. The interaction which has been used is the
# so-called N3LO interaction of [Machleidt and
# Entem](http://journals.aps.org/prc/abstract/10.1103/PhysRevC.68.041001),
# renormalized with the [Similarity Renormalization
# Group](http://journals.aps.org/prc/abstract/10.1103/PhysRevC.75.061001)
# approach, using an oscillator
# energy $\hbar\omega=10$ MeV.
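# As a side note, the density-matrix construction above (and its update in every
# iteration) is just a matrix product over the occupied orbitals. A minimal NumPy
# sketch (assuming the coefficients $C$ are stored column-wise, as returned by
# `np.linalg.eigh`; the sizes are illustrative):

```python
import numpy as np

def density_matrix(C, n_particles):
    # rho_{gamma,delta} = sum over occupied i of C_{gamma,i} * conj(C_{delta,i});
    # eigenvectors are the columns of C (np.linalg.eigh convention)
    C_occ = C[:, :n_particles]
    return C_occ @ C_occ.conj().T

# first HF iteration: C is the identity, so rho is diagonal with ones
# for the occupied states
rho = density_matrix(np.eye(6), 2)
```

# For a single Slater determinant the density matrix is idempotent, $\rho^2=\rho$,
# which is a handy consistency check during the iterations.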
#
# The nucleon-nucleon two-body matrix elements are in $m$-scheme and are fully anti-symmetrized. The Hartree-Fock program uses the density matrix discussed above in order to compute the Hartree-Fock matrix.
# Here we display the Hartree-Fock part only, assuming that single-particle data and two-body matrix elements have already been read in.

# + editable=true
import numpy as np
from decimal import Decimal

# expectation value for the one-body part, harmonic oscillator in three dimensions
def onebody(i, n, l):
    homega = 10.0
    return homega*(2*n[i] + l[i] + 1.5)

if __name__ == '__main__':

    Nparticles = 16
    """ Read quantum numbers from file """
    index = []
    n = []
    l = []
    j = []
    mj = []
    tz = []
    spOrbitals = 0
    with open("nucleispnumbers.dat", "r") as qnumfile:
        for line in qnumfile:
            nums = line.split()
            if len(nums) != 0:
                index.append(int(nums[0]))
                n.append(int(nums[1]))
                l.append(int(nums[2]))
                j.append(int(nums[3]))
                mj.append(int(nums[4]))
                tz.append(int(nums[5]))
                spOrbitals += 1

    """ Read two-nucleon interaction elements (integrals) from file, brute force 4-dim array """
    nninteraction = np.zeros([spOrbitals, spOrbitals, spOrbitals, spOrbitals])
    with open("nucleitwobody.dat", "r") as infile:
        for line in infile:
            number = line.split()
            a = int(number[0]) - 1
            b = int(number[1]) - 1
            c = int(number[2]) - 1
            d = int(number[3]) - 1
            nninteraction[a][b][c][d] = float(number[4])
    """ Set up single-particle integral """
    singleparticleH = np.zeros(spOrbitals)
    for i in range(spOrbitals):
        singleparticleH[i] = onebody(i, n, l)

    """ Start HF iterations, preparing variables and density matrix """
    """ Coefficients for setting up density matrix, assuming only one along the diagonals """
    C = np.eye(spOrbitals)  # HF coefficients
    DensityMatrix = np.zeros([spOrbitals, spOrbitals])
    for gamma in range(spOrbitals):
        for delta in range(spOrbitals):
            rhosum = 0.0
            for i in range(Nparticles):
                rhosum += C[gamma][i]*C[delta][i]
            DensityMatrix[gamma][delta] = rhosum
    maxHFiter = 100
    epsilon = 1.0e-5
    difference = 1.0
    hf_count = 0
    oldenergies = np.zeros(spOrbitals)
    newenergies = np.zeros(spOrbitals)
    while hf_count < maxHFiter and difference > epsilon:
        print("############### Iteration %i ###############" % hf_count)
        HFmatrix = np.zeros([spOrbitals, spOrbitals])
        for alpha in range(spOrbitals):
            for beta in range(spOrbitals):
                """ Selection rules for three-dimensional systems, including isospin
                    conservation: the matrix element vanishes unless l, j, mj and tz all match """
                if l[alpha] != l[beta] or j[alpha] != j[beta] or mj[alpha] != mj[beta] or tz[alpha] != tz[beta]:
                    continue
                """ Setting up the Fock matrix using the density matrix and antisymmetrized NN interaction in m-scheme """
                sumFockTerm = 0.0
                for gamma in range(spOrbitals):
                    for delta in range(spOrbitals):
                        # total mj and total tz are conserved by the interaction
                        if (mj[alpha]+mj[gamma]) != (mj[beta]+mj[delta]) or (tz[alpha]+tz[gamma]) != (tz[beta]+tz[delta]):
                            continue
                        sumFockTerm += DensityMatrix[gamma][delta]*nninteraction[alpha][gamma][beta][delta]
                HFmatrix[alpha][beta] = sumFockTerm
                """ Adding the one-body term, here plain harmonic oscillator """
                if beta == alpha:
                    HFmatrix[alpha][alpha] += singleparticleH[alpha]
        spenergies, C = np.linalg.eigh(HFmatrix)
        """ Setting up new density matrix in m-scheme """
        DensityMatrix = np.zeros([spOrbitals, spOrbitals])
        for gamma in range(spOrbitals):
            for delta in range(spOrbitals):
                rhosum = 0.0
                for i in range(Nparticles):
                    rhosum += C[gamma][i]*C[delta][i]
                DensityMatrix[gamma][delta] = rhosum
        newenergies = spenergies
        """ Brute force computation of difference between previous and new sp HF energies """
        diffsum = 0.0
        for i in range(spOrbitals):
            diffsum += abs(newenergies[i]-oldenergies[i])/spOrbitals
        difference = diffsum
        oldenergies = newenergies
        print("Single-particle energies, ordering may have changed ")
        for i in range(spOrbitals):
            print('{0:4d} {1:.4f}'.format(i, oldenergies[i]))
        hf_count += 1
# -

# Running the program for a nucleus like $^{16}\mbox{O}$, we see that the nucleon-nucleon force brings a natural spin-orbit splitting for the $0p$ states
(or other states except the $s$-states).
# Since we are using the $m$-scheme for our calculations, we observe that there are several states with the same
# eigenvalues. The number of degenerate eigenvalues corresponds to the degeneracy $2j+1$, which is well respected in our calculations, as seen from the table below.
#
# The values of the lowest-lying states are ($\pi$ for protons and $\nu$ for neutrons)
# <table border="1">
# <thead>
# <tr><th align="center">Quantum numbers </th> <th align="center">Energy [MeV]</th> </tr>
# </thead>
# <tbody>
# <tr><td align="center"> $0s_{1/2}^{\pi}$ </td> <td align="center"> -40.4602 </td> </tr>
# <tr><td align="center"> $0s_{1/2}^{\pi}$ </td> <td align="center"> -40.4602 </td> </tr>
# <tr><td align="center"> $0s_{1/2}^{\nu}$ </td> <td align="center"> -40.6426 </td> </tr>
# <tr><td align="center"> $0s_{1/2}^{\nu}$ </td> <td align="center"> -40.6426 </td> </tr>
# <tr><td align="center"> $0p_{1/2}^{\pi}$ </td> <td align="center"> -6.7133 </td> </tr>
# <tr><td align="center"> $0p_{1/2}^{\pi}$ </td> <td align="center"> -6.7133 </td> </tr>
# <tr><td align="center"> $0p_{1/2}^{\nu}$ </td> <td align="center"> -6.8403 </td> </tr>
# <tr><td align="center"> $0p_{1/2}^{\nu}$ </td> <td align="center"> -6.8403 </td> </tr>
# <tr><td align="center"> $0p_{3/2}^{\pi}$ </td> <td align="center"> -11.5886 </td> </tr>
# <tr><td align="center"> $0p_{3/2}^{\pi}$ </td> <td align="center"> -11.5886 </td> </tr>
# <tr><td align="center"> $0p_{3/2}^{\pi}$ </td> <td align="center"> -11.5886 </td> </tr>
# <tr><td align="center"> $0p_{3/2}^{\pi}$ </td> <td align="center"> -11.5886 </td> </tr>
# <tr><td align="center"> $0p_{3/2}^{\nu}$ </td> <td align="center"> -11.7201 </td> </tr>
# <tr><td align="center"> $0p_{3/2}^{\nu}$ </td> <td align="center"> -11.7201 </td> </tr>
# <tr><td align="center"> $0p_{3/2}^{\nu}$ </td> <td align="center"> -11.7201 </td> </tr>
# <tr><td align="center"> $0p_{3/2}^{\nu}$ </td> <td align="center"> -11.7201 </td> </tr>
# <tr><td align="center">
$0d_{5/2}^{\pi}$ </td> <td align="center"> 18.7589 </td> </tr> # <tr><td align="center"> $0d_{5/2}^{\nu}$ </td> <td align="center"> 18.8082 </td> </tr> # </tbody> # </table> # We can use these results to attempt our first link with experimental data, namely to compute the shell gap or the separation energies. The shell gap for neutrons is given by # $$ # \Delta S_n= 2BE(N,Z)-BE(N-1,Z)-BE(N+1,Z). # $$ # For $^{16}\mbox{O}$ we have an experimental value for the shell gap of $11.51$ MeV for neutrons, while our Hartree-Fock calculations result in $25.65$ MeV. This means that correlations beyond a simple Hartree-Fock calculation with a two-body force play an important role in nuclear physics. # The splitting between the $0p_{3/2}^{\nu}$ and the $0p_{1/2}^{\nu}$ state is 4.88 MeV, while the experimental value for the gap between the ground state $1/2^{-}$ and the first excited $3/2^{-}$ states is 6.08 MeV. The two-nucleon spin-orbit force plays a central role here. In our discussion of nuclear forces we will see how the spin-orbit force comes into play here. # # <!-- --- end exercise --- --> # # # ## Hartree-Fock in second quantization and stability of HF solution # # We wish now to derive the Hartree-Fock equations using our second-quantized formalism and study the stability of the equations. # Our ansatz for the ground state of the system is approximated as (this is our representation of a Slater determinant in second quantization) # $$ # |\Phi_0\rangle = |c\rangle = a^{\dagger}_i a^{\dagger}_j \dots a^{\dagger}_l|0\rangle. # $$ # We wish to determine $\hat{u}^{HF}$ so that # $E_0^{HF}= \langle c|\hat{H}| c\rangle$ becomes a local minimum. 
#
# In our analysis here we will need Thouless' theorem, which states that
# an arbitrary Slater determinant $|c'\rangle$ which is not orthogonal to a determinant
# $| c\rangle ={\displaystyle\prod_{i=1}^{n}}
# a_{\alpha_{i}}^{\dagger}|0\rangle$, can be written as
# $$
# |c'\rangle=\exp\left\{\sum_{a>F}\sum_{i\le F}C_{ai}a_{a}^{\dagger}a_{i}\right\}| c\rangle.
# $$
# Let us give a simple proof of Thouless' theorem. The theorem states that we can make a linear combination of particle-hole excitations with respect to a given reference state $\vert c\rangle$. With this linear combination, we can make a new Slater determinant $\vert c'\rangle $ which is not orthogonal to
# $\vert c\rangle$, that is
# $$
# \langle c|c'\rangle \ne 0.
# $$
# To show this we need some intermediate steps. The exponential product of two operators $\exp{\hat{A}}\times\exp{\hat{B}}$ is equal to $\exp{(\hat{A}+\hat{B})}$ only if the two operators commute, that is
# $$
# [\hat{A},\hat{B}] = 0.
# $$
# ## Thouless' theorem
#
#
# If the operators do not commute, we need to resort to the [Baker-Campbell-Hausdorff formula](http://www.encyclopediaofmath.org/index.php/Campbell%E2%80%93Hausdorff_formula). It states that
# $$
# \exp{\hat{C}}=\exp{\hat{A}}\exp{\hat{B}},
# $$
# with
# $$
# \hat{C}=\hat{A}+\hat{B}+\frac{1}{2}[\hat{A},\hat{B}]+\frac{1}{12}[[\hat{A},\hat{B}],\hat{B}]-\frac{1}{12}[[\hat{A},\hat{B}],\hat{A}]+\dots
# $$
# From these relations, we note that
# in our expression for $|c'\rangle$ we have commutators of the type
# $$
# [a_{a}^{\dagger}a_{i},a_{b}^{\dagger}a_{j}],
# $$
# and it is easy to convince oneself that these commutators, or higher powers thereof, are all zero.
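# These vanishing commutators can also be checked numerically with a small
# Jordan-Wigner matrix representation of the fermionic operators (an illustration
# added here, not part of the original derivation; modes 0, 1 play the role of hole
# states below $F$ and modes 2, 3 of particle states above $F$):

```python
import numpy as np

def annihilation(mode, n_modes):
    # Jordan-Wigner representation: a_mode = Z x ... x Z x a x I x ... x I,
    # where the Z string enforces fermionic antisymmetry
    a = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-mode annihilation operator
    Z = np.diag([1.0, -1.0])                # parity operator
    I = np.eye(2)
    op = np.eye(1)
    for k in range(n_modes):
        op = np.kron(op, Z if k < mode else (a if k == mode else I))
    return op

n_modes = 4
a_ops = [annihilation(m, n_modes) for m in range(n_modes)]

def ph(a_idx, i_idx):
    # particle-hole excitation operator a^dagger_a a_i
    return a_ops[a_idx].conj().T @ a_ops[i_idx]

# [a^dagger_a a_i, a^dagger_b a_j] for particle indices a,b and hole indices i,j
comm = ph(2, 0) @ ph(3, 1) - ph(3, 1) @ ph(2, 0)
```

# The commutator matrix `comm` is identically zero, as claimed above.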
This means that we can write out our new representation of a Slater determinant as
# $$
# |c'\rangle=\exp\left\{\sum_{a>F}\sum_{i\le F}C_{ai}a_{a}^{\dagger}a_{i}\right\}| c\rangle=\prod_{i}\left\{1+\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}+\left(\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}\right)^2+\dots\right\}| c\rangle.
# $$
# We note that
# $$
# \prod_{i}\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}\sum_{b>F}C_{bi}a_{b}^{\dagger}a_{i}| c\rangle =0,
# $$
# and all higher-order powers of these combinations of creation and annihilation operators disappear
# due to the fact that $(a_i)^n| c\rangle =0$ when $n > 1$. This allows us to rewrite the expression for $|c'\rangle $ as
# $$
# |c'\rangle=\prod_{i}\left\{1+\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}\right\}| c\rangle,
# $$
# which we can rewrite as
# $$
# |c'\rangle=\prod_{i}\left\{1+\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}\right\} a^{\dagger}_{i_1} a^{\dagger}_{i_2} \dots a^{\dagger}_{i_n}|0\rangle.
# $$
# The last equation can be written as
# <!-- Equation labels as ordinary links -->
# <div id="_auto3"></div>
#
# $$
# \begin{equation}
# |c'\rangle=\prod_{i}\left\{1+\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}\right\} a^{\dagger}_{i_1} a^{\dagger}_{i_2} \dots a^{\dagger}_{i_n}|0\rangle=\left(1+\sum_{a>F}C_{ai_1}a_{a}^{\dagger}a_{i_1}\right)a^{\dagger}_{i_1}
# \label{_auto3} \tag{10}
# \end{equation}
# $$
# <!-- Equation labels as ordinary links -->
# <div id="_auto4"></div>
#
# $$
# \begin{equation}
# \times\left(1+\sum_{a>F}C_{ai_2}a_{a}^{\dagger}a_{i_2}\right)a^{\dagger}_{i_2} \dots |0\rangle=\prod_{i}\left(a^{\dagger}_{i}+\sum_{a>F}C_{ai}a_{a}^{\dagger}\right)|0\rangle.
# \label{_auto4} \tag{11} # \end{equation} # $$ # ## New operators # # # If we define a new creation operator # <!-- Equation labels as ordinary links --> # <div id="eq:newb"></div> # # $$ # \begin{equation} # b^{\dagger}_{i}=a^{\dagger}_{i}+\sum_{a>F}C_{ai}a_{a}^{\dagger}, \label{eq:newb} \tag{12} # \end{equation} # $$ # we have # $$ # |c'\rangle=\prod_{i}b^{\dagger}_{i}|0\rangle=\prod_{i}\left(a^{\dagger}_{i}+\sum_{a>F}C_{ai}a_{a}^{\dagger}\right)|0\rangle, # $$ # meaning that the new representation of the Slater determinant in second quantization, $|c'\rangle$, looks like our previous ones. However, this representation is not general enough since we have a restriction on the sum over single-particle states in Eq. ([12](#eq:newb)). The single-particle states have all to be above the Fermi level. # The question then is whether we can construct a general representation of a Slater determinant with a creation operator # $$ # \tilde{b}^{\dagger}_{i}=\sum_{p}f_{ip}a_{p}^{\dagger}, # $$ # where $f_{ip}$ is a matrix element of a unitary matrix which transforms our creation and annihilation operators # $a^{\dagger}$ and $a$ to $\tilde{b}^{\dagger}$ and $\tilde{b}$. These new operators define a new representation of a Slater determinant as # $$ # |\tilde{c}\rangle=\prod_{i}\tilde{b}^{\dagger}_{i}|0\rangle. # $$ # ## Showing that $|\tilde{c}\rangle= |c'\rangle$ # # # # We need to show that $|\tilde{c}\rangle= |c'\rangle$. We need also to assume that the new state # is not orthogonal to $|c\rangle$, that is $\langle c| \tilde{c}\rangle \ne 0$. 
From this it follows that # $$ # \langle c| \tilde{c}\rangle=\langle 0| a_{i_n}\dots a_{i_1}\left(\sum_{p=i_1}^{i_n}f_{i_1p}a_{p}^{\dagger} \right)\left(\sum_{q=i_1}^{i_n}f_{i_2q}a_{q}^{\dagger} \right)\dots \left(\sum_{t=i_1}^{i_n}f_{i_nt}a_{t}^{\dagger} \right)|0\rangle, # $$ # which is nothing but the determinant $det(f_{ip})$ which we can, using the intermediate normalization condition, # normalize to one, that is # $$ # det(f_{ip})=1, # $$ # meaning that $f$ has an inverse defined as (since we are dealing with orthogonal, and in our case unitary as well, transformations) # $$ # \sum_{k} f_{ik}f^{-1}_{kj} = \delta_{ij}, # $$ # and # $$ # \sum_{j} f^{-1}_{ij}f_{jk} = \delta_{ik}. # $$ # Using these relations we can then define the linear combination of creation (and annihilation as well) # operators as # $$ # \sum_{i}f^{-1}_{ki}\tilde{b}^{\dagger}_{i}=\sum_{i}f^{-1}_{ki}\sum_{p=i_1}^{\infty}f_{ip}a_{p}^{\dagger}=a_{k}^{\dagger}+\sum_{i}\sum_{p=i_{n+1}}^{\infty}f^{-1}_{ki}f_{ip}a_{p}^{\dagger}. # $$ # Defining # $$ # c_{kp}=\sum_{i \le F}f^{-1}_{ki}f_{ip}, # $$ # we can redefine # $$ # a_{k}^{\dagger}+\sum_{i}\sum_{p=i_{n+1}}^{\infty}f^{-1}_{ki}f_{ip}a_{p}^{\dagger}=a_{k}^{\dagger}+\sum_{p=i_{n+1}}^{\infty}c_{kp}a_{p}^{\dagger}=b_k^{\dagger}, # $$ # our starting point. We have shown that our general representation of a Slater determinant # $$ # |\tilde{c}\rangle=\prod_{i}\tilde{b}^{\dagger}_{i}|0\rangle=|c'\rangle=\prod_{i}b^{\dagger}_{i}|0\rangle, # $$ # with # $$ # b_k^{\dagger}=a_{k}^{\dagger}+\sum_{p=i_{n+1}}^{\infty}c_{kp}a_{p}^{\dagger}. # $$ # This means that we can actually write an ansatz for the ground state of the system as a linear combination of # terms which contain the ansatz itself $|c\rangle$ with an admixture from an infinity of one-particle-one-hole states. The latter has important consequences when we wish to interpret the Hartree-Fock equations and their stability. 
We can rewrite the new representation as
# $$
# |c'\rangle = |c\rangle+|\delta c\rangle,
# $$
# where $|\delta c\rangle$ can now be interpreted as a small variation. If we approximate this term with
# contributions from one-particle-one-hole (*1p-1h*) states only, we arrive at
# $$
# |c'\rangle = \left(1+\sum_{ai}\delta C_{ai}a_{a}^{\dagger}a_i\right)|c\rangle.
# $$
# In our derivation of the Hartree-Fock equations we have shown that
# $$
# \langle \delta c| \hat{H} | c\rangle =0,
# $$
# which means that we have to satisfy
# $$
# \langle c|\sum_{ai}\delta C_{ai}\left\{a_{a}^{\dagger}a_i\right\} \hat{H} | c\rangle =0.
# $$
# With this as a background, we are now ready to study the stability of the Hartree-Fock equations.
#
#
#
# ## Hartree-Fock in second quantization and stability of HF solution
#
# The variational condition for deriving the Hartree-Fock equations guarantees only that the expectation value $\langle c | \hat{H} | c \rangle$ has an extreme value, not necessarily a minimum. To figure out whether the extreme value we have found is a minimum, we can use second quantization to analyze our results and find a criterion
# for the above expectation value to be a local minimum. We will use Thouless' theorem and show that
# $$
# \frac{\langle c' |\hat{H} | c'\rangle}{\langle c' |c'\rangle} \ge \langle c |\hat{H} | c\rangle= E_0,
# $$
# with
# $$
# {|c'\rangle} = {|c\rangle + |\delta c\rangle}.
# $$ # Using Thouless' theorem we can write out $|c'\rangle$ as # <!-- Equation labels as ordinary links --> # <div id="_auto5"></div> # # $$ # \begin{equation} # {|c'\rangle}=\exp\left\{\sum_{a > F}\sum_{i \le F}\delta C_{ai}a_{a}^{\dagger}a_{i}\right\}| c\rangle # \label{_auto5} \tag{13} # \end{equation} # $$ # <!-- Equation labels as ordinary links --> # <div id="_auto6"></div> # # $$ # \begin{equation} # =\left\{1+\sum_{a > F}\sum_{i \le F}\delta C_{ai}a_{a}^{\dagger} # a_{i}+\frac{1}{2!}\sum_{ab > F}\sum_{ij \le F}\delta C_{ai}\delta C_{bj}a_{a}^{\dagger}a_{i}a_{b}^{\dagger}a_{j}+\dots\right\} # \label{_auto6} \tag{14} # \end{equation} # $$ # where the amplitudes $\delta C$ are small. # # # The norm of $|c'\rangle$ is given by (using the intermediate normalization condition $\langle c' |c\rangle=1$) # $$ # \langle c' | c'\rangle = 1+\sum_{a>F} # \sum_{i\le F}|\delta C_{ai}|^2+O(\delta C_{ai}^3). # $$ # The expectation value for the energy is now given by (using the Hartree-Fock condition) # 1 # 4 # 5 # # < # < # < # ! # ! 
# M # A # T # H # _ # B # L # O # C # K # $$ # \frac{1}{2!}\sum_{ab>F} # \sum_{ij\le F}\delta C_{ai}\delta C_{bj}\langle c |\hat{H}a_{a}^{\dagger}a_{i}a_{b}^{\dagger}a_{j}|c\rangle+\frac{1}{2!}\sum_{ab>F} # \sum_{ij\le F}\delta C_{ai}^*\delta C_{bj}^*\langle c|a_{j}^{\dagger}a_{b}a_{i}^{\dagger}a_{a}\hat{H}|c\rangle # +\dots # $$ # We have already calculated the second term on the right-hand side of the previous equation # <!-- Equation labels as ordinary links --> # <div id="_auto7"></div> # # $$ # \begin{equation} # \langle c | \left(\{a^\dagger_i a_a\} \hat{H} \{a^\dagger_b a_j\} \right) | c\rangle=\sum_{pq} \sum_{ijab}\delta C_{ai}^*\delta C_{bj} \langle p|\hat{h}_0 |q\rangle # \langle c | \left(\{a^{\dagger}_i a_a\}\{a^{\dagger}_pa_q\} # \{a^{\dagger}_b a_j\} \right)| c\rangle # \label{_auto7} \tag{15} # \end{equation} # $$ # <!-- Equation labels as ordinary links --> # <div id="_auto8"></div> # # $$ # \begin{equation} # +\frac{1}{4} \sum_{pqrs} \sum_{ijab}\delta C_{ai}^*\delta C_{bj} \langle pq| \hat{v}|rs\rangle # \langle c | \left(\{a^\dagger_i a_a\}\{a^{\dagger}_p a^{\dagger}_q a_s a_r\} \{a^{\dagger}_b a_j\} \right)| c\rangle , # \label{_auto8} \tag{16} # \end{equation} # $$ # resulting in # $$ # E_0\sum_{ai}|\delta C_{ai}|^2+\sum_{ai}|\delta C_{ai}|^2(\varepsilon_a-\varepsilon_i)-\sum_{ijab} \langle aj|\hat{v}| bi\rangle \delta C_{ai}^*\delta C_{bj}. 
# $$
# The remaining quadratic terms involve only the two-body interaction, and we can rewrite
# $$
# \frac{1}{2!}\langle c |\left(\{a^\dagger_j a_b\} \{a^\dagger_i a_a\} \hat{V}_N \right) | c\rangle =
# \frac{1}{2!}\langle c |\left( \hat{V}_N \{a^\dagger_a a_i\} \{a^\dagger_b a_j\} \right)^{\dagger} | c\rangle,
# $$
# which is nothing but
# $$
# \frac{1}{2!}\langle c | \left( \hat{V}_N \{a^\dagger_a a_i\} \{a^\dagger_b a_j\} \right) | c\rangle^*
# =\frac{1}{2} \sum_{ijab} (\langle ij|\hat{v}|ab\rangle)^*\delta C_{ai}^*\delta C_{bj}^*
# $$
# or
# $$
# \frac{1}{2} \sum_{ijab} (\langle ab|\hat{v}|ij\rangle)\delta C_{ai}^*\delta C_{bj}^*,
# $$
# where we have used the relation
# $$
# \langle a |\hat{A} | b\rangle = (\langle b |\hat{A}^{\dagger} | a\rangle)^*,
# $$
# due to the hermiticity of $\hat{H}$ and $\hat{V}$.
#
#
# We define two matrix elements
# $$
# A_{ai,bj}=-\langle aj|\hat{v}|bi\rangle
# $$
# and
# $$
# B_{ai,bj}=\langle ab|\hat{v}|ij\rangle,
# $$
# both being anti-symmetrized.
#
#
#
# With these definitions we write out the energy as
# <!-- Equation labels as ordinary links -->
# <div id="_auto9"></div>
#
# $$
# \begin{equation}
# \langle c'|H|c'\rangle = \left(1+\sum_{ai}|\delta C_{ai}|^2\right)\langle c |H|c\rangle+\sum_{ai}|\delta C_{ai}|^2(\varepsilon_a^{HF}-\varepsilon_i^{HF})+\sum_{ijab}A_{ai,bj}\delta C_{ai}^*\delta C_{bj}+
# \label{_auto9} \tag{17}
# \end{equation}
# $$
# <!-- Equation labels as ordinary links -->
# <div id="_auto10"></div>
#
# $$
# \begin{equation}
# \frac{1}{2} \sum_{ijab} B_{ai,bj}^*\delta C_{ai}\delta C_{bj}+\frac{1}{2} \sum_{ijab} B_{ai,bj}\delta C_{ai}^*\delta C_{bj}^*
# +O(\delta C_{ai}^3),
# \label{_auto10} \tag{18}
# \end{equation}
# $$
# which can be rewritten as
# $$
# \langle c'|H|c'\rangle = \left(1+\sum_{ai}|\delta C_{ai}|^2\right)\langle c |H|c\rangle+\Delta E+O(\delta C_{ai}^3),
# $$
# and, skipping higher-order terms, we arrive at
# $$
# \frac{\langle c' |\hat{H} | c'\rangle}{\langle c' |c'\rangle} =E_0+\frac{\Delta E}{\left(1+\sum_{ai}|\delta C_{ai}|^2\right)}.
# $$
# We have defined
# $$
# \Delta E = \frac{1}{2} \langle \chi | \hat{M}| \chi \rangle
# $$
# with the vectors
# $$
# \chi = \left[ \delta C\hspace{0.2cm} \delta C^*\right]^T
# $$
# and the matrix
# $$
# \hat{M}=\left(\begin{array}{cc} \Delta + A & B \\ B^* & \Delta + A^*\end{array}\right),
# $$
# with $\Delta_{ai,bj} = (\varepsilon_a-\varepsilon_i)\delta_{ab}\delta_{ij}$.
#
#
#
# The condition
# $$
# \Delta E = \frac{1}{2} \langle \chi | \hat{M}| \chi \rangle \ge 0
# $$
# for an arbitrary vector
# $$
# \chi = \left[ \delta C\hspace{0.2cm} \delta C^*\right]^T
# $$
# means that all eigenvalues of the matrix have to be larger than or equal to zero.
# A necessary (but not sufficient) condition is that the matrix elements (for all $ai$)
# $$
# (\varepsilon_a-\varepsilon_i)\delta_{ab}\delta_{ij}+A_{ai,bj} \ge 0.
# $$
# This equation can be used as a first test of the stability of the Hartree-Fock equation.
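# As a toy illustration of the stability test (with made-up Hermitian blocks, not
# from an actual Hartree-Fock calculation), checking the criterion amounts to
# verifying that the matrix $\hat{M}$ is positive semi-definite:

```python
import numpy as np

# Made-up example blocks: Delta_{ai,bj} = (eps_a - eps_i) on the diagonal,
# with small symmetric couplings A and B (illustrative values only)
Delta = np.diag([2.0, 3.0])
A = np.array([[0.5, 0.1],
              [0.1, 0.4]])
B = np.array([[0.2, 0.0],
              [0.0, 0.2]])

# stability matrix M = [[Delta + A, B], [B*, Delta + A*]]
M = np.block([[Delta + A, B],
              [B.conj(), Delta + A.conj()]])

eigenvalues = np.linalg.eigvalsh(M)
stable = bool(np.all(eigenvalues >= 0))  # the HF extremum is a (local) minimum
```

# In a real calculation `Delta`, `A` and `B` would be built from the converged
# single-particle energies and antisymmetrized interaction matrix elements.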
doc/LectureNotes/hartreefocktheory.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ### Brain-hacking 101 # # Author: [**<NAME>**](http://arokem.org), [**The University of Washington eScience Institute**](http://escience.washington.edu) # ### Hack 4: interact with the data # To get a sense of your data, one of the best things you can do is to interactively explore the patterns in your data. # While building full-fledged interactive applications that do more than one thing is rather hard, it is possible to build small interactive data-exploration tools, that do just one thing, with only a few lines of code. Here, we'll show how to do that using `IPython`'s interactive widget system. We will demonstrate this below import numpy as np import nibabel as nib import matplotlib as mpl import matplotlib.pyplot as plt # %matplotlib inline mpl.style.use('bmh') img = nib.load('./data/run1.nii.gz') data = img.get_data() import scipy.signal as sps tsnr = np.mean(data, -1) / np.std(data, -1) def plot_tsnr(x=data.shape[0]/2, y=data.shape[1]/2, z=data.shape[2]/2): fig, axes = plt.subplots(2, 2) ax = axes[0, 0] ax.axis('off') ax.matshow(tsnr[:, :, z], cmap=mpl.cm.hot) ax = axes[0, 1] ax.axis('off') ax.matshow(np.rot90(tsnr[:, y, :]), cmap=mpl.cm.hot) ax = axes[1, 0] ax.axis('off') ax.matshow(np.rot90(tsnr[x, :, :]), cmap=mpl.cm.hot) ax = axes[1, 1] ax.plot(data[x, y, z]) ax.set_xlabel('Time') ax.set_ylabel('FMRI signal (a.u.)') fig.set_size_inches(10, 10) return fig import IPython.html.widgets as wdg import IPython.display as display pb_widget = wdg.interactive(plot_tsnr, x=wdg.IntSliderWidget(min=1, max=data.shape[0], value=data.shape[0]//2), y=wdg.IntSliderWidget(min=1, max=data.shape[1], value=data.shape[1]//2), z=wdg.IntSliderWidget(min=1, max=data.shape[2], value=data.shape[2]//2) ) display.display(pb_widget)
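# The tSNR map computed above is just the voxel-wise temporal mean divided by the
# temporal standard deviation. On synthetic data with a known signal level and
# noise scale (illustrative values only, not real fMRI data) it behaves as
# expected:

```python
import numpy as np

rng = np.random.RandomState(0)
# synthetic 4D "fMRI" data: baseline 100, unit-variance noise, 200 time points
data = 100.0 + rng.randn(4, 4, 4, 200)

# same computation as above: mean over time divided by sd over time
tsnr = np.mean(data, -1) / np.std(data, -1)
```

# With a mean of ~100 and a standard deviation of ~1, the tSNR should come out
# close to 100 in every voxel.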
beginner-python/004-interactions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Tower of Hanoi using Recursion # ### Based on Can YOU DO RECURSION? Towers of Hanoi: SOLVED in PYTHON. COMPLETE EXPLANATION. # ### https://www.youtube.com/watch?v=buWXDMbY3Ww # Which itself is based on https://en.wikipedia.org/wiki/Tower_of_Hanoi # # I have added code to put the name of the pile as the first entry in each list and extra print statements, so you can see the process and steps involved. # + def hanoi(n, source, target, spare): global count if n>0: print("hanoi({},{},{},{})".format(n, source[0], target[0], spare[0])) hanoi(n-1, source, spare, target) top_disk = source.pop() target.append(top_disk) print("Move {} from {} to {}".format(top_disk, source[0], target[0])) count += 1 hanoi(n-1, spare, target, source) for size in [3]: # You can make this a list to do more than one # each tower has an identifying letter as the first element, then a list of numbers A = ['A'] A.extend(range(size,0,-1)) B = ['B'] C = ['C'] count = 0 hanoi(size, A, B, C) print("Moving a tower of {} took {} moves".format(size, count))
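# The recursion above solves the $n-1$ disk problem twice plus one extra move, so
# the total move count satisfies moves(n) = 2*moves(n-1) + 1, which solves to
# 2**n - 1. A tiny stand-alone helper (hypothetical, separate from the printing
# version above) confirms this:

```python
def hanoi_moves(n):
    # recurrence: solve n-1 disks, move the largest disk, solve n-1 disks again
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1

counts = [hanoi_moves(n) for n in range(1, 6)]  # → [1, 3, 7, 15, 31]
```

# This matches the `count` printed by the program above for a tower of 3
# (7 moves), and grows exponentially with the tower height.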
TowersOfHanoi.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ### Dask Arrays
#
# In the following exercises, you'll be working with the code snippet below:
#
# ```python
# # %%timeit
# x = da.random.random((10000, 10000), chunks=(1000, 1000))
# y = x + x.T
# z = y[::2, 5000:].mean(axis=1)
# z.compute()
# ```

import dask.array as da

# ### Change the code above by setting chunks=(250, 250). How long does it take to run?

# %%timeit
x = da.random.random((10000, 10000), chunks=(250, 250))
y = x + x.T
z = y[::2, 5000:].mean(axis=1)
z.compute()

# ### Now, set the parameter to chunks=(500, 500). How long does it take to run? Which of the two runs more quickly, and why?

# %%timeit
x = da.random.random((10000, 10000), chunks=(500, 500))
y = x + x.T
z = y[::2, 5000:].mean(axis=1)
z.compute()

# <span style="color:blue">The larger chunk size is ~500 ms faster per iteration than the smaller chunk size. This is likely due to the fact that, with smaller chunks, Dask requires many more processing operations on a fixed number of threads, so the processor is the limiting factor.</span>
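# One way to see why very small chunks hurt is to count them: the task graph grows
# roughly with the number of chunks, which for a square array scales as
# (shape/chunk)**2. A quick back-of-the-envelope count (plain Python, no Dask
# needed):

```python
shape = 10000  # array is (shape, shape), as in the exercises above

def n_chunks(chunk):
    # number of (chunk x chunk) blocks needed to tile a (shape x shape) array
    per_axis = -(-shape // chunk)  # ceiling division
    return per_axis ** 2

counts = {chunk: n_chunks(chunk) for chunk in (250, 500, 1000)}
# → {250: 1600, 500: 400, 1000: 100}: 16x more tasks at (250, 250) than (1000, 1000)
```

# Each chunk turns into one or more tasks for the scheduler, so the 1600-chunk
# version pays far more scheduling overhead for the same amount of arithmetic.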
Assignment-4.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Time-frequency analysis on sensors # # The objective is to calculate power and phase locking value on sensors space data. # # For this lecture we will use the MNE somatosensory dataset that contains # so called event related synchronizations (ERS) / desynchronizations (ERD) # in the beta band. # # ` # Authors: <NAME> <<EMAIL>> # <NAME> <<EMAIL>> # ` # # License: BSD (3-clause) # # From Raw to Epochs # + import numpy as np import mne from mne import io from mne.time_frequency import tfr_morlet from mne.datasets import somato # reduce verbosity mne.set_log_level('WARNING') # - # Set parameters data_path = somato.data_path() raw_fname = data_path + '/MEG/somato/sef_raw_sss.fif' event_id, tmin, tmax = 1, -1., 3. # Setup for reading the raw data raw = io.read_raw_fif(raw_fname) baseline = (None, 0) events = mne.find_events(raw, stim_channel='STI 014') mne.viz.plot_events(events, raw.info['sfreq']) _, times = raw[:, :] (times.max() - times.min()) / 60. # %matplotlib qt raw.plot(events=events) raw.info['sfreq'] raw.plot_psd(fmax=45) import matplotlib.pyplot as plt plt.close('all') # + # picks MEG gradiometers picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=baseline, reject=dict(grad=4000e-13, eog=350e-6)) # - epochs.drop_bad() epochs epochs.plot_drop_log() epochs.drop_bad(reject=dict(eog=200e-6, grad=4e-10)) epochs.plot_drop_log() # %matplotlib qt epochs.plot(n_epochs=5) epochs.average().plot(spatial_colors=True, gfp=True) # Calculate power and intertrial coherence (between epochs phase locking value) epochs_no_evoked = epochs.copy().subtract_evoked() # + freqs = np.arange(6, 30, 3) # define frequencies of interest n_cycles = freqs / 2. 
# a different number of cycles per frequency
power, itc = tfr_morlet(epochs_no_evoked, freqs=freqs, n_cycles=n_cycles,
                        use_fft=False, return_itc=True, decim=3, n_jobs=1)

# Baseline correction can be applied to power or done in plots
# To illustrate the baseline correction in plots the next line is commented
# power.apply_baseline(baseline=(-0.5, 0), mode='logratio')
# -

power

# Visualize results

# +
# %matplotlib qt
# Use qt to have interactive visualization options

# Inspect power
power.plot_topo(baseline=(-0.5, 0), mode='logratio', title='Average power');
# -

plt.close('all')

# +
# mne.baseline.rescale??

# +
# %matplotlib inline
power.plot([82], baseline=(-0.5, 0), mode='percent');

# +
# %matplotlib inline
import matplotlib.pyplot as plt

power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=8, fmax=12,
                   baseline=(-0.5, 0), mode='logratio', title='Alpha',
                   vmin=0., vmax=0.45);
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=13, fmax=25,
                   baseline=(-0.5, 0), mode='logratio', title='Beta',
                   vmin=0., vmax=0.45);
# -

# ### See also
#
#
# [tfr_multitaper](http://martinos.org/mne/dev/generated/mne.time_frequency.tfr_multitaper.html)
#
# [tfr_stockwell](http://martinos.org/mne/dev/generated/mne.time_frequency.tfr_stockwell.html)
#
# for multitaper analysis like in Fieldtrip, or the use of the Stockwell transform (S-transform).

# +
from mne.time_frequency import tfr_multitaper, tfr_stockwell

# ...
# -

# Exercise 1: visualize the phase locking value (itc) that was also returned
# ==========================================================================

# Exercise 2: Compute power and phase locking in a label of the source space
# ==========================================================================
#
# Compute time-frequency maps of power and phase locking in the source space for the somato data.
# The inverse method is linear, based on the dSPM inverse operator.
# Hint: learn from this example.
# # http://martinos.org/mne/stable/auto_examples/time_frequency/plot_source_label_time_frequency.html # # You'll need to compute forward and inverse solutions.
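# A note on the `n_cycles = freqs / 2.` choice used earlier: it makes every Morlet
# wavelet span the same amount of time (n_cycles / f = 0.5 s at every frequency),
# so the temporal smoothing is uniform while the frequency resolution varies. A
# quick check of that relation (plain NumPy, no MNE needed):

```python
import numpy as np

freqs = np.arange(6, 30, 3)       # frequencies of interest, as in the analysis above
n_cycles = freqs / 2.0

durations = n_cycles / freqs      # wavelet duration in seconds at each frequency
```

# Every entry of `durations` is 0.5 s; choosing a fixed `n_cycles` instead would
# make the wavelets longer at low frequencies and shorter at high ones.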
2016_05_Halifax/Day3/Sensors_Time_Frequency.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Land Usage for Utility Scale PV Power Plants
#
# I received a message from an old friend this afternoon who said, "Random question. How big of site would you guess is required for a 1 megawatt solar field?" To which I responded, in classic Ph.D. fashion, "Well, that's complicated."
#
# As you might guess, the answer depends heavily on the cell, module, and mounting/tracking technologies used at the power plant. Obviously, a plant built with 25% efficient modules will use less land than a plant built with 15% efficient modules for the same overall capacity. You also need to consider design decisions like [ground cover ratio](https://www.researchgate.net/figure/Ground-coverage-ratio-GCR-is-the-ratio-of-module-area-to-land-area-or-the-ratio-of_fig1_304106060) and many others to exactly estimate this quantity.
#
# The "Suncyclopedia" [states that](http://www.suncyclopedia.com/en/area-required-for-solar-pv-power-plants/) "A simple rule of thumb is to take 100 sqft for every 1kW of solar panels." But to be honest, I didn't trust that number! So, I did a little more digging. As it turns out, Wikipedia helpfully provides a [list of photovoltaic power stations that are larger than 200 megawatts in current net capacity](https://en.wikipedia.org/wiki/List_of_photovoltaic_power_stations), which includes nameplate capacity and total land usage for most of the listed power plants.
#
# Having never actually scraped data from a Wikipedia table before, I figured this was a great opportunity to try out a new Python skill, while doing a bit of light research and data analysis. I used `requests` and `Beautiful Soup` to extract the table from Wikipedia and `pandas` to turn the raw HTML data into a table for analysis.
# # We begin with the imports we'll need: import requests from bs4 import BeautifulSoup import numpy as np import pandas as pd import matplotlib.pyplot as plt # First things first, set up the HTML request, parse the HTML response, and extract the table. website_url = requests.get('https://en.wikipedia.org/wiki/List_of_photovoltaic_power_stations').text soup = BeautifulSoup(website_url, 'lxml') my_table = soup.find('table', {'class': 'wikitable sortable'}) # Tables in Wikipedia tend to have references in the cell text, which is annoying if the cell is supposed to hold a float value. Finding and removing the references later can be a hassle, because the references are numeric, as is the data we are looking for (and I'm not that proficient at regex). Luckily, `BeautifulSoup` makes searching and modifying HTML trees exceptionally easy. In the cell below, we visit every cell in the table and remove all instances of the `reference` class. for row in my_table.findAll('tr'): cells = row.findAll(['th', 'td']) for cell in cells: references = cell.findAll('sup', {'class': 'reference'}) if references: for ref in references: ref.extract() # `pandas` has all our data I/O needs covered, and comes with an HTML reader. We simply convert the HTML tree to a string and pass it to `pandas` to turn into a data frame. df = pd.read_html(str(my_table), header=0)[0] # Now we just need to clean up some column names and data types. Some of the entries in the `Capacity` column contain an asterisk character (`*`), as explained on the Wikipedia page. As with the references, we need to remove these characters to isolate the numeric data. The second-to-last line below strips all non-numeric characters from the `Capacity` column. cols = list(df.columns) cols[3] = 'Capacity' cols[4] = 'YearlyEnergy' cols[5] = 'LandSize' df.columns = cols df['Capacity'] = df['Capacity'].str.extract('(\d+)', expand=False) df = df.astype({'LandSize': float, 'Capacity': float}) # And now we have successfully converted the table on Wikipedia to a usable data frame! 
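As an aside, since the reference markers render as bracketed numbers like `[1]` in the extracted cell text, a plain-regex fallback is also possible. This is only a sketch of that alternative; the notebook's BeautifulSoup approach is more robust because it targets the `reference` class in the HTML rather than a text pattern:

```python
import re

def strip_references(text):
    """Remove bracketed citation markers like '[1]' or '[note 2]' from cell text."""
    return re.sub(r"\[(?:note\s*)?\d+\]", "", text).strip()

print(strip_references("550[1][2]"))                 # -> 550
print(strip_references("Topaz Solar Farm[note 3]"))  # -> Topaz Solar Farm
```

The `(?:note\s*)?` group also catches Wikipedia's "[note N]" style footnotes; plain decimal data like "9.5" passes through untouched.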
df.head() # And now, let's answer my friend's original question and check if the simple rule of thumb is correct. The data in the table is in terms of MW and square kilometers, so we'll need to change our units to kW and square feet to compare to the given rule of thumb. land_usage = (df['LandSize'] * 1.076e+7 / df['Capacity'] / 1000).dropna() plt.figure(figsize=(10,6)) plt.hist(land_usage, bins=20) plt.xlabel('Square feet per kW') plt.ylabel('System Count') title1 = 'Land usage for solar power plants, extracted from:\n' title2 = 'https://en.wikipedia.org/wiki/List_of_photovoltaic_power_stations' plt.title(title1 + title2) plt.axvline(100, ls='--', color='r', label='rule of thumb: 100 ft^2/kW') med = np.median(land_usage) plt.axvline(med, ls=':', color='orange', label='median: {:.0f} ft^2/kW'.format(med)) plt.legend(); # So, we see that the median value for this set of power plants is more than three times larger than the standard rule of thumb!
notebooks/UtilityPVPlantLandUsage.ipynb
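The unit conversion behind the notebook's `land_usage` line can be spelled out explicitly. This is a minimal sketch with hypothetical plant numbers (not taken from the Wikipedia table):

```python
# Convert a plant's land use to square feet per kW of nameplate capacity.
SQFT_PER_KM2 = 1.076e7  # 1 km^2 is about 1.076e7 ft^2
KW_PER_MW = 1000

def sqft_per_kw(land_km2, capacity_mw):
    """Land usage in ft^2 per kW: km^2 -> ft^2, MW -> kW."""
    return land_km2 * SQFT_PER_KM2 / (capacity_mw * KW_PER_MW)

# Hypothetical plant: 200 MW nameplate on 6 km^2 of land
print(round(sqft_per_kw(6, 200)))  # -> 323, over 3x the 100 ft^2/kW rule of thumb
```

A plant would have to fit roughly 107.6 MW on each square kilometer to hit the 100 ft^2/kW rule of thumb exactly.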
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + pycharm={"is_executing": false} import my_setting.utils as utils import cufflinks as cf import jieba import matplotlib.pyplot as plt import missingno as msno import pandas as pd import seaborn as sns from wordcloud import WordCloud # %matplotlib inline cf.go_offline() pd.set_option('display.max_columns', None) pd.set_option('max_colwidth',100) plt.style.use('seaborn') # - output_900k=pd.read_csv('/home/zhw/PycharmProjects/nCovSentimentAnalysis/Output/submit_file.csv',encoding='utf-8') output_900k.head(10) temp=output_900k['y'].value_counts() temp_df=pd.DataFrame({'labels':temp.index,'values':temp.values}) temp_df.iplot(kind='pie',labels='labels',values='values',title='Labels') simplify_weibo=pd.read_csv('/home/zhw/PycharmProjects/nCovSentimentAnalysis/Data/UTF8nCoV_900k_train.unlabled.csv',encoding='utf-8') simplify_weibo.head(10) simplify_weibo.isnull().any() msno.bar(simplify_weibo) temp=simplify_weibo['label'].value_counts() temp_df=pd.DataFrame({'labels':temp.index,'values':temp.values}) temp_df.iplot(kind='pie',labels='labels',values='values',title='Labels') simplify_weibo['length']=simplify_weibo['review'].astype(str).apply(len) # + fig,ax1=plt.subplots(1,figsize=(14,8)) sns.distplot(simplify_weibo['length'],ax=ax1,color='green') plt.rcParams['font.family']='STSong' plt.rcParams['axes.unicode_minus']=False ax1.set_title('长度分布') plt.show() # + pycharm={"name": "#%%\n"} nip_corpus=pd.read_csv('/home/zhw/PycharmProjects/nCovSentimentAnalysis/Data/SentimentRelevantCorpus/unzip/chineseNIP_weibo_senti_100k.csv',encoding='utf-8') nip_corpus.head(10) # + pycharm={"is_executing": false, "name": "#%%\n"} nip_corpus.isnull().any() # + pycharm={"name": "#%%\n"} msno.bar(nip_corpus) # + pycharm={"name": "#%%\n"} temp = 
nip_corpus['label'].value_counts() temp_df = pd.DataFrame({'labels':temp.index,'values':temp.values}) temp_df.iplot(kind='pie',labels='labels',values='values',title='Labels') # + pycharm={"name": "#%%\n"} nip_corpus['length']=nip_corpus['review'].astype(str).apply(len) # + pycharm={"name": "#%%\n"} fig, ax1 = plt.subplots(1,figsize=(14,8)) sns.distplot(nip_corpus['length'],ax=ax1, color='green') plt.rcParams['font.family']='SIMHEI' plt.rcParams['axes.unicode_minus'] = False ax1.set_title('训练集微博长度分布') plt.show() # - # ## Reading the Data # # Other encodings garble a small number of Chinese characters, such as "XX supertopic" tags and emoji, so I manually converted the files to UTF-8 with a text editor. # # Supertopic tags and other special characters appear as "", and most emoji are replaced by "??". # + pycharm={"is_executing": false} train_labeled = pd.read_csv('/home/zhw/PycharmProjects/nCovSentimentAnalysis/Data/UTF8nCoV_100k_train.labled.csv', encoding='utf-8') train_labeled.rename(columns = {"微博id": "Weibo_ID", "微博发布时间": "Publish_Time", "发布人账号": "Account_ID", "微博中文内容": "Chinese_Content", "微博图片": "Pictures", "微博视频": "Videos", "情感倾向": "Labels"}, inplace=True) train_labeled_copy = train_labeled.copy() # + test = pd.read_csv('/home/zhw/PycharmProjects/nCovSentimentAnalysis/Data/UTF8nCov_10k_test.csv', encoding='utf-8') test.rename(columns = {"微博id": "Weibo_ID", "微博发布时间": "Publish_Time", "发布人账号": "Account_ID", "微博中文内容": "Chinese_Content", "微博图片": "Pictures", "微博视频": "Videos"}, inplace=True) test_copy = test.copy() # - train_labeled.head(5) test.head(5) # ## Missing-Value Check # # Checking the missing values in the training and test sets shows that some data are missing. train_labeled.isnull().any() msno.bar(train_labeled) msno.bar(test) # A small amount of Weibo content is missing, and some labels are empty. # ## Label Check # # The label counts are as follows: # + train_labeled_copy.fillna({"Labels": "empty_label"}, inplace=True) temp = train_labeled_copy["Labels"].value_counts() temp_df = pd.DataFrame({'labels': temp.index, 'values': temp.values}) temp_df.iplot(kind='pie',labels='labels',values='values', title='Labels') # - # There are six kinds of noise labels, one of each, plus a handful of empty labels, counted below: # + noise = train_labeled_copy[(train_labeled_copy["Labels"] != '0') & (train_labeled_copy["Labels"] != '1') &
(train_labeled_copy["Labels"] != '-1')] noise = noise["Labels"].value_counts() noise_df = pd.DataFrame({'labels': noise.index, 'values': noise.values}) noise_df.iplot(kind='pie',labels='labels',values='values', title='奇怪标签统计') # - # Abnormal labels can simply be dropped, or relabeled by hand (if you don't mind the effort). # # Here we just drop them. train_labeled_copy['time'] = pd.to_datetime('2020年' + train_labeled['Publish_Time'], format='%Y年%m月%d日 %H:%M', errors='ignore') test_copy['time'] = pd.to_datetime('2020年' + test['Publish_Time'], format='%Y年%m月%d日 %H:%M', errors='ignore') # + train_labeled_copy['month'] = train_labeled_copy['time'].dt.month train_labeled_copy['day'] = train_labeled_copy['time'].dt.day train_labeled_copy['dayfromzero'] = (train_labeled_copy['month'] - 1) * 31 + train_labeled_copy['day'] test_copy['month'] = test_copy['time'].dt.month test_copy['day'] = test_copy['time'].dt.day test_copy['dayfromzero'] = (test_copy['month'] - 1) * 31 + test_copy['day'] # + fig, ax = plt.subplots(1, 2, figsize=(16, 8)) sns.kdeplot(train_labeled_copy.loc[train_labeled_copy['Labels'] == '0', 'dayfromzero'], ax=ax[0], label='sent(0)') sns.kdeplot(train_labeled_copy.loc[train_labeled_copy['Labels'] == '1', 'dayfromzero'], ax=ax[0], label='sent(1)') sns.kdeplot(train_labeled_copy.loc[train_labeled_copy['Labels'] == '-1', 'dayfromzero'], ax=ax[0], label='sent(-1)') train_labeled_copy.loc[train_labeled_copy['Labels'] == '0', 'dayfromzero'].hist(ax=ax[1]) train_labeled_copy.loc[train_labeled_copy['Labels'] == '1', 'dayfromzero'].hist(ax=ax[1]) train_labeled_copy.loc[train_labeled_copy['Labels'] == '-1', 'dayfromzero'].hist(ax=ax[1]) ax[1].legend(['sent(0)', 'sent(1)','sent(-1)']) plt.show() # - # The following stands out, and I also dug up the related news events for you: # # - After **January 18**, topic volume grows noticeably. # - January 19: Li Gang of the Wuhan CDC says the risk of human-to-human transmission of the novel coronavirus is low and its infectivity weak. # - January 20: Zhong Nanshan states that the novel pneumonia is transmitted between humans. # - January 20: panic-buying of face masks begins. # - January 23: Wuhan is locked down. # - January 25: the Huoshenshan Hospital design is completed; the decision to build Leishenshan Hospital is made. # # - Around **February 9**, topic volume peaks. # - February 7: "whistleblower" Dr. Li Wenliang passes away. # - February 10: changes in the Hubei provincial leadership. # #
Before the 18th, the official line (and most netizens) was still fairly optimistic, with most believing the virus was "preventable and controllable" and its "infectivity limited"; Academician Zhong Nanshan's statement that human-to-human transmission "definitely exists" is likely an important reason for the surge in topic volume. # # On the evening of the 7th, Dr. Li Wenliang passed away; Weibo and WeChat Moments were flooded with posts, and on the curve the related topic volume nearly reaches its peak. # # (**In memory of Dr. Li Wenliang**) # ## Content Length Statistics # # Now compute the length of the Weibo posts, for both the training and test sets. train_labeled_copy['Chinese_Content_Length'] = train_labeled['Chinese_Content'].astype(str).apply(len) test_copy['Chinese_Content_Length'] = test['Chinese_Content'].astype(str).apply(len) # + fig, (ax1, ax2) = plt.subplots(1,2,figsize=(14,8)) sns.distplot(train_labeled_copy['Chinese_Content_Length'], ax=ax1, color='blue') sns.distplot(test_copy['Chinese_Content_Length'], ax=ax2, color='green') plt.rcParams['font.sans-serif']=['SimHei'] plt.rcParams['axes.unicode_minus'] = False ax1.set_title('训练集微博长度分布') ax2.set_title('测试集微博长度分布') plt.show() # - # ### Content Word Cloud # # Use `jieba` and `wordcloud` to build a word cloud of the content. stop = open('/home/zhw/PycharmProjects/nCovSentimentAnalysis/Docs/cn_stopwords.txt', 'r+', encoding='utf-8') stopword = stop.read().split("\n") stopword = set(stopword) stop.close() # + def stripword(seg): """Remove stopwords.""" wordlist = [] for key in seg.split(' '): # drop stopwords and single characters if not (key.strip() in stopword) and (len(key.strip()) > 1): wordlist.append(key) return ' '.join(wordlist) def cutword(content): """Tokenize and remove stopwords (admittedly crude).""" seg_list = jieba.cut(content) line = " ".join(seg_list) word = stripword(line) return word # - train_labeled_copy['Chinese_Content_cut'] = train_labeled['Chinese_Content'].astype(str).apply(cutword) train_labeled_copy.head(3) font = r'/home/zhw/PycharmProjects/nCovSentimentAnalysis/Docs/MYSH.TTC' wc = WordCloud(font_path=font, max_words=2000, width=1800, height=1600, mode='RGBA', background_color=None).generate(str(train_labeled_copy['Chinese_Content_cut'].values)) plt.figure(figsize=(14, 12)) plt.imshow(wc, interpolation='bilinear') plt.axis('off') plt.show() # ### Picture Statistics # In the training data (Weibo posts), some have pictures and some do not. # # A simple count: train_labeled_copy['Pic_Length'] = train_labeled_copy['Pictures'].apply(lambda x: len(eval(x))) test_copy['Pic_Length'] = test_copy['Pictures'].apply(lambda
x: len(eval(x))) # + fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 8)) ax1.set_xlim(0, 9) ax2.set_xlim(0, 9) sns.distplot(train_labeled_copy['Pic_Length'], bins=25, ax=ax1, color='blue', kde=False) sns.distplot(test_copy['Pic_Length'], bins=25, ax=ax2, color='green', kde=False) plt.rcParams['font.sans-serif']=['SimHei'] plt.rcParams['axes.unicode_minus'] = False ax1.set_title('训练集图片数量分布') ax2.set_title('测试集图片数量分布') plt.show() # - # The two distributions are very similar, so nothing looks problematic here. # # Most people post either no picture or a single picture. # # As for nine-picture posts outnumbering seven- and eight-picture ones, and six outnumbering five, that is probably just a taste for tidy grids... # ### Video Statistics # # The distribution of video counts in the training and test sets: train_labeled_copy['With_Video'] = train_labeled_copy['Videos'].apply(lambda x: len(eval(x))) test_copy['With_Video'] = test_copy['Videos'].apply(lambda x: len(eval(x))) # + fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 8)) sns.countplot(train_labeled_copy['With_Video'], ax=ax1, color='grey') sns.countplot(test_copy['With_Video'], ax=ax2, color='orange') plt.rcParams['font.sans-serif']=['SimHei'] plt.rcParams['axes.unicode_minus'] = False ax1.set_title('训练集视频分布') ax2.set_title('测试集视频分布') plt.show() # - # The distributions look consistent. # # Next, look at the sentiment label distribution for posts with and without video. # + train_labeled_copy_2 = train_labeled_copy[(train_labeled_copy["Labels"] == '0') | (train_labeled_copy["Labels"] == '1') | (train_labeled_copy["Labels"] == '-1')] sns.countplot(x='With_Video', hue='Labels', data= train_labeled_copy_2, order = train_labeled_copy['With_Video'].dropna().value_counts().index) plt.show() # + [markdown] pycharm={"name": "#%% md\n"} # Most Weibo posts with video are neutral in sentiment. # # Clearly, though, compared with posts without video, posts with video have a higher ratio of `positive` sentiment to `negative` sentiment. #
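The `pd.to_datetime` call above parses timestamps with the format string `'%Y年%m月%d日 %H:%M'`. The same format string works with the standard library, since literal characters (including Chinese ones) are matched verbatim; a minimal sketch with a made-up timestamp:

```python
from datetime import datetime

# Weibo timestamps look like '01月23日 14:05'; the year is prepended before parsing,
# exactly as the notebook does with '2020年' + Publish_Time.
ts = datetime.strptime('2020年' + '01月23日 14:05', '%Y年%m月%d日 %H:%M')
print(ts)  # -> 2020-01-23 14:05:00
```

Note that `errors='ignore'` in the notebook silently returns the input unchanged on parse failure, so malformed rows stay as strings rather than raising.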
Code/EDA.ipynb
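The notebook above counts pictures and videos with `len(eval(x))` on stringified lists. `ast.literal_eval` is a safer drop-in for this, since it parses only Python literals and cannot execute arbitrary code; a sketch assuming the cells hold list syntax like `"['a.jpg', 'b.jpg']"`:

```python
import ast

def count_items(cell):
    """Safely count entries in a stringified list such as "['a.jpg', 'b.jpg']"."""
    return len(ast.literal_eval(cell))

print(count_items("['a.jpg', 'b.jpg']"))  # -> 2
print(count_items("[]"))                  # -> 0
```

With pandas this would be applied the same way, e.g. `df['Pictures'].apply(count_items)`.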
# --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # # "Airports in the United States" # - This was originally an r-markdown file, now converted to notebook format, from <NAME>'s [repo here](https://github.com/jflam/VSDemos). I've also adapted some of the plotting because javascript+html is quite a bit <i>different</i> in a jupyter notebook, especially with the R kernel. # # # For packages that will be used here you may have to go to the terminal or R GUI you use and set these appropriately: # 1. setRepositories() # * chooseCRANmirror() # + # add a local lib to the R library path (for permission purposes) local_lib = '/Users/micheleenharris/lib/R' # this must exist! .libPaths(c(.libPaths(), local_lib)) # package to grab url data install.packages('RCurl', repos = "http://cloud.r-project.org", lib = local_lib) # some interactive plotting packages install.packages(c("DT", "htmlwidgets", "leaflet"), repos = "http://cloud.r-project.org", lib = local_lib) # - # Let's load a list of airport data from https://github.com/jpatokal/openflights # + # TIP: often you'll want some data externally, but can not upload a file to jupyter system # In this case I pull directly from a raw file online (like a repo file here) library(RCurl) airports <- getURL("https://raw.githubusercontent.com/jflam/VSDemos/master/RTVSDemos/airports.dat") airports <- read.csv(text = airports, header = FALSE) colnames(airports) <- c("ID", "name", "city", "country", "IATA_FAA", "ICAO", "lat", "lon", "altitude", "timezone", "DST", "Region") # - # This data set contains airports from all over the world. Let's get only the # airports from the United States. 
# + usa_airports <- subset(airports, country == "United States") # DT is a really nifty interactive table library with pagination library(DT) table <- datatable(usa_airports[,c("name", "city", "country", "IATA_FAA", "lat", "lon", "altitude")]) # Here is the trick to get an interactive graphic to work in notebooks # 1. load htmlwidgets # 2. save interactive object to an html file with saveWidget # 3. load the html back into the notebook and into an iframe element for display with IRdisplay library(htmlwidgets) # lots of cool widgets associated w/ this (http://www.htmlwidgets.org/) fname <- "something.html" # selfcontained = FALSE means: save a file with external resources placed in an adjacent directory # do this just in case there are external resources saveWidget(table, file = fname, selfcontained = F) IRdisplay::display_html(paste("<iframe src=' ", fname, " ' width = 100% height = 400>")) # - # Let's generate a static map containing the location of all of the airports in the USA # + attributes={"classes": [], "id": "", "message": "FALSE,", "warning": "FALSE"} library(ggmap) map <- get_map(location = "Seattle, WA", zoom = 3) ggmap(map) + geom_point(aes(x = lon, y = lat), data = usa_airports, alpha = 0.25) # - # Let's generate a zoomable, pannable map using the leaflet package. Here you can # see airports that belong to the USA that aren't geographically within the borders # of the United States. 
# + library(leaflet) m <- leaflet(data = usa_airports) %>% addTiles() %>% # Add default OpenStreetMap map tiles addCircles(~lon, ~lat, popup = ~name) # Trick again - probably should make into a fxn library(htmlwidgets) fname = 'tmp.html' saveWidget(m, file = fname, selfcontained = F) IRdisplay::display_html(paste("<iframe src=' ", fname, " ' width = 100% height = 400>")) # + library(leaflet) pal <- colorQuantile("YlOrRd", NULL, n = 8) m <- leaflet(usa_airports) %>% addTiles() %>% addCircleMarkers(color = ~pal(lat)) # Trick again just for good measure library(htmlwidgets) fname = 'tmp.html' saveWidget(m, file = fname, selfcontained = F) IRdisplay::display_html(paste("<iframe src=' ", fname, " ' width = 100% height = 400>")) # - # Created by a Microsoft Employee based on work by another Microsoft Employee. # # The MIT License (MIT)<br> # Copyright (c) 2016 <NAME>
notebooks/04.Interactive plotting.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### String Formatter # Note: avoid naming a variable `str`, which shadows the built-in type. text = 'Hello World' print('Welcome {}'.format(text)) # Using f-strings print(f'Welcome {text}') # Width and alignments marvel = [('Film', 'Character', 'Actor'), ('Iron Man', '<NAME>', '<NAME>.'), ('The Incredible Hulk', '<NAME>', '<NAME>'), ('<NAME>', 'Thor', '<NAME>'), ('Captain America: The First Avenger', '<NAME>', '<NAME>'),] for film in marvel: print(film) # Alignment with variable space for film in marvel: print(f'{film[0]:{40}}, {film[1]:{20}}, {film[2]:{10}}') for film in marvel: print(f'{film[0]:^{40}}, {film[1]:{20}}, {film[2]:.>{20}}')
NLP/Working with Text.ipynb
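The format specifiers used above generalize to the full fill-and-align mini-language: `<` left-aligns (the default for strings), `^` centers, `>` right-aligns, and an optional fill character (such as `.`) can precede the alignment flag. A short self-contained sketch:

```python
word = 'Thor'
print(f'|{word:<10}|')   # left-aligned in 10 columns  -> |Thor      |
print(f'|{word:^10}|')   # centered                    -> |   Thor   |
print(f'|{word:.>10}|')  # right-aligned, dot-padded   -> |......Thor|
```

The widths can themselves be variables, as in the `{film[0]:{40}}` form above, because the nested `{}` is evaluated first.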
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- from matplotlib import pyplot as plt from skimage import io, color from skimage.transform import rescale, resize, downscale_local_mean img=io.imread("images/Osteosarcoma_01_25Sigma_noise.tif") img img.shape img_rescaled=rescale(img, 1.0/4.0, anti_aliasing=False) img_rescaled img_rescaled.shape img_resized=resize(img,(200,200), anti_aliasing=True) img_resized img_resized.shape plt.imshow(img, cmap='gray') plt.imshow(img_rescaled, cmap='gray') from skimage.filters import gaussian, sobel plt.imshow(img) gaussian_using_skimage=gaussian(img, sigma=1, mode='constant',cval=0.0) plt.imshow(gaussian_using_skimage) gaussian_using_skimage=gaussian(img, sigma=5, mode='constant',cval=0.0) plt.imshow(gaussian_using_skimage) img_gray=io.imread("images/Osteosarcoma_01.tif", as_gray=True) sobel_img=sobel(img_gray) plt.imshow(sobel_img, cmap='gray')
research_env/Basic Image Processing with Scikit Image.ipynb
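The notebook above imports `downscale_local_mean`, which reduces an image by averaging non-overlapping blocks. A pure-Python sketch of that idea (no scikit-image needed), assuming the image dimensions divide evenly by the factor:

```python
def downscale_mean(img, factor):
    """Downscale a 2-D list by averaging non-overlapping factor x factor blocks."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(0, h, factor):
        row = []
        for c in range(0, w, factor):
            block = [img[r + i][c + j] for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

img = [[0, 0, 4, 4],
       [0, 0, 4, 4],
       [8, 8, 2, 2],
       [8, 8, 2, 2]]
print(downscale_mean(img, 2))  # -> [[0.0, 4.0], [8.0, 2.0]]
```

This block averaging is what makes mean-based downscaling resistant to aliasing, whereas `rescale(..., anti_aliasing=False)` simply samples pixels.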
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Customer Car Purchase Prediction by watching Advertisement on Social Media # Here we use the Social_Network_Ads dataset, which provides each person's age and estimated salary, along with whether he/she was interested in buying a car after watching an advertisement on a social network (Yes = 1, No = 0). # We will predict the chances that a new person of a given age is interested in buying a car after watching the same social media advertisement. Here we make predictions using machine learning algorithms. # + # Importing the libraries import numpy as np import matplotlib.pyplot as plt import pandas as pd # - # Importing the dataset dataset = pd.read_csv('C:\\Users\\chira\\Project\\Machine Learning Social Network Ads\\Social_Network_Ads.csv') X = dataset.iloc[:, [2, 3]].values y = dataset.iloc[:, 4].values dataset.describe() dataset["Age"].value_counts() dataset["EstimatedSalary"].value_counts() dataset["Purchased"].value_counts() q = list(dataset.Age) plt.boxplot(q) plt.show() dataset.groupby(['Age', 'Purchased']).mean() dataset.groupby(['Age', 'Purchased']).std() dataset.info() dataset.head(25) dataset.hist(figsize=(16, 20), bins=50, xlabelsize=8, ylabelsize=8); # the trailing ; suppresses matplotlib's verbose output # ## Splitting the Dataset and Feature Scaling # Splitting the dataset into the Training set and Test set from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0) # Feature Scaling from sklearn.preprocessing import StandardScaler sc_X = StandardScaler() X_train = sc_X.fit_transform(X_train) X_test = sc_X.transform(X_test) # ## 1) Logistic Regression # fitting logistic regression to the training set from sklearn.linear_model import LogisticRegression
classi = LogisticRegression(random_state = 0) classi.fit(X_train, y_train) # + #predict the test set results y_pred = classi.predict(X_test) # - #making the confusion matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) print(cm) #confusion Matrix #visualising the training set results from matplotlib.colors import ListedColormap X_set, y_set = X_train, y_train X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min()-1, stop=X_set[:, 0].max()+1, step=0.01), np.arange(start=X_set[:, 1].min()-1, stop=X_set[:, 1].max()+1, step=0.01)) X1_ravel = X1.ravel() X2_ravel = X2.ravel() X1X2_array = np.array([X1_ravel, X2_ravel]) X1X2_array_t = X1X2_array.T X1X2_pred = classi.predict(X1X2_array_t) X1X2_pred_reshape = X1X2_pred.reshape(X1.shape) result_plt = plt.contourf(X1, X2, X1X2_pred_reshape, alpha= 0.20, cmap = ListedColormap(('red','green'))) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Logistic Regression (Training set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() #visualising the test set results from matplotlib.colors import ListedColormap X_set, y_set = X_test, y_test X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min()-1, stop=X_set[:, 0].max()+1, step=0.01), np.arange(start=X_set[:, 1].min()-1, stop=X_set[:, 1].max()+1, step=0.01)) X1_ravel = X1.ravel() X2_ravel = X2.ravel() X1X2_array = np.array([X1_ravel, X2_ravel]) X1X2_array_t = X1X2_array.T X1X2_pred = classi.predict(X1X2_array_t) X1X2_pred_reshape = X1X2_pred.reshape(X1.shape) result_plt = plt.contourf(X1, X2, X1X2_pred_reshape, alpha= 0.50, cmap = ListedColormap(('red','green'))) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Logistic Regression (Test set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') 
plt.legend() plt.show() print(y_pred) #Test Set result Prediction using Logistic Regression # Use score method to get accuracy of model score = classi.score(X_test, y_test) print("Accuracy using Logistic Regression: ",score) import seaborn as sns plt.figure(figsize=(9,9)) sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r'); plt.ylabel('Estimated_Salary'); plt.xlabel('Age'); all_sample_title = 'Accuracy Score: {0}'.format(score) plt.title(all_sample_title, size = 15); # ## 2) K_nearest neighbors[KNN] # # fitting classifier to the training set from sklearn.neighbors import KNeighborsClassifier classi1 = KNeighborsClassifier(n_neighbors = 5, p=2, metric = 'minkowski') classi1.fit(X_train, y_train) #predict the test set results y_pred1 = classi1.predict(X_test) print(y_pred1) # Use score method to get accuracy of model score = classi1.score(X_test, y_test) print("Accuracy using KNN: ",score) #making the confusion matrix from sklearn.metrics import confusion_matrix cm1 = confusion_matrix(y_test, y_pred1) print(cm1) #visualising the training set results from matplotlib.colors import ListedColormap X_set, y_set = X_train, y_train X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min()-1, stop=X_set[:, 0].max()+1, step=0.01), np.arange(start=X_set[:, 1].min()-1, stop=X_set[:, 1].max()+1, step=0.01)) X1_ravel = X1.ravel() X2_ravel = X2.ravel() X1X2_array = np.array([X1_ravel, X2_ravel]) X1X2_array_t = X1X2_array.T X1X2_pred = classi1.predict(X1X2_array_t) X1X2_pred_reshape = X1X2_pred.reshape(X1.shape) result_plt = plt.contourf(X1, X2, X1X2_pred_reshape, alpha= 0.10, cmap = ListedColormap(('red','green'))) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('K-NN (Training set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() #visualising the test set results from matplotlib.colors import ListedColormap X_set, 
y_set = X_test, y_test X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min()-1, stop=X_set[:, 0].max()+1, step=0.01), np.arange(start=X_set[:, 1].min()-1, stop=X_set[:, 1].max()+1, step=0.01)) X1_ravel = X1.ravel() X2_ravel = X2.ravel() X1X2_array = np.array([X1_ravel, X2_ravel]) X1X2_array_t = X1X2_array.T X1X2_pred = classi1.predict(X1X2_array_t) X1X2_pred_reshape = X1X2_pred.reshape(X1.shape) result_plt = plt.contourf(X1, X2, X1X2_pred_reshape, alpha= 0.10, cmap = ListedColormap(('red','green'))) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('K-NN (Test set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() #heatmap plt.figure(figsize=(9,9)) sns.heatmap(cm1, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r'); plt.ylabel('Estimated_Salary'); plt.xlabel('Age'); all_sample_title = 'Accuracy Score: {0}'.format(score) plt.title(all_sample_title, size = 15); # ## 3) Decision tree classification # fitting classifier to the training set from sklearn.tree import DecisionTreeClassifier classi2 = DecisionTreeClassifier(criterion = 'entropy', random_state = 0) classi2.fit(X_train, y_train) #predict the test set results y_pred2 = classi2.predict(X_test) print(y_pred2) # Use score method to get accuracy of model score = classi2.score(X_test, y_test) print("Accuracy using Decision tree: ",score) #visualising the training set results from matplotlib.colors import ListedColormap X_set, y_set = X_train, y_train X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min()-1, stop=X_set[:, 0].max()+1, step=0.01), np.arange(start=X_set[:, 1].min()-1, stop=X_set[:, 1].max()+1, step=0.01)) X1_ravel = X1.ravel() X2_ravel = X2.ravel() X1X2_array = np.array([X1_ravel, X2_ravel]) X1X2_array_t = X1X2_array.T X1X2_pred1 = classi2.predict(X1X2_array_t) X1X2_pred_reshape1 = X1X2_pred1.reshape(X1.shape) result_plt = plt.contourf(X1, X2, 
X1X2_pred_reshape1, alpha= 0.20, cmap = ListedColormap(('red','green'))) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Decision Tree Classifier (Training set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() #visualising the test set results from matplotlib.colors import ListedColormap X_set, y_set = X_test, y_test X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min()-1, stop=X_set[:, 0].max()+1, step=0.01), np.arange(start=X_set[:, 1].min()-1, stop=X_set[:, 1].max()+1, step=0.01)) X1_ravel = X1.ravel() X2_ravel = X2.ravel() X1X2_array = np.array([X1_ravel, X2_ravel]) X1X2_array_t = X1X2_array.T X1X2_pred1 = classi2.predict(X1X2_array_t) X1X2_pred_reshape1 = X1X2_pred1.reshape(X1.shape) result_plt = plt.contourf(X1, X2, X1X2_pred_reshape1, alpha= 0.50, cmap = ListedColormap(('red','green'))) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Decision Tree Classifier (Test set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() cm2 = confusion_matrix(y_test, y_pred2) #heatmap plt.figure(figsize=(10,10)) sns.heatmap(cm2, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r'); plt.ylabel('Estimated_Salary'); plt.xlabel('Age'); all_sample_title = 'Accuracy Score: {0}'.format(score) plt.title(all_sample_title, size = 15); # ## 4) random forest classification # fitting random forest classifier to the training set from sklearn.ensemble import RandomForestClassifier classi3 = RandomForestClassifier(criterion = 'entropy', n_estimators = 10, random_state = 0) classi3.fit(X_train, y_train) #predict the test set results y_pred3 = classi3.predict(X_test) print(y_pred3) # Use score method to get accuracy of model score = classi3.score(X_test, y_test) print("Accuracy using Random Forest: 
",score) cm3 = confusion_matrix(y_test, y_pred3) #heatmap plt.figure(figsize=(10,10)) sns.heatmap(cm3, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r'); plt.ylabel('Estimated_Salary'); plt.xlabel('Age'); all_sample_title = 'Accuracy Score: {0}'.format(score) plt.title(all_sample_title, size = 15); #visualising the training set results from matplotlib.colors import ListedColormap X_set, y_set = X_train, y_train X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min()-1, stop=X_set[:, 0].max()+1, step=0.01), np.arange(start=X_set[:, 1].min()-1, stop=X_set[:, 1].max()+1, step=0.01)) X1_ravel = X1.ravel() X2_ravel = X2.ravel() X1X2_array = np.array([X1_ravel, X2_ravel]) X1X2_array_t = X1X2_array.T X1X2_pred2 = classi3.predict(X1X2_array_t) X1X2_pred_reshape2 = X1X2_pred2.reshape(X1.shape) result_plt = plt.contourf(X1, X2, X1X2_pred_reshape2, alpha= 0.20, cmap = ListedColormap(('red','green'))) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Random Forest (Training set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() #visualising the test set results from matplotlib.colors import ListedColormap X_set, y_set = X_test, y_test X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min()-1, stop=X_set[:, 0].max()+1, step=0.01), np.arange(start=X_set[:, 1].min()-1, stop=X_set[:, 1].max()+1, step=0.01)) X1_ravel = X1.ravel() X2_ravel = X2.ravel() X1X2_array = np.array([X1_ravel, X2_ravel]) X1X2_array_t = X1X2_array.T X1X2_pred2 = classi3.predict(X1X2_array_t) X1X2_pred_reshape2 = X1X2_pred2.reshape(X1.shape) result_plt = plt.contourf(X1, X2, X1X2_pred_reshape2, alpha= 0.50, cmap = ListedColormap(('red','green'))) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Random Forest (Test set)') plt.xlabel('Age') 
plt.ylabel('Estimated Salary') plt.legend() plt.show() # ## 5) Support Vector Machine (SVM) classification # fitting SVM to the training set from sklearn.svm import SVC classi4 = SVC(kernel='rbf', random_state = 0) classi4.fit(X_train, y_train) #predict the test set results y_pred4 = classi4.predict(X_test) print(y_pred4) #making the confusion matrix from sklearn.metrics import confusion_matrix cm4 = confusion_matrix(y_test, y_pred4) print(cm4) # Use score method to get accuracy of model scoreee = classi4.score(X_test, y_test) print("Accuracy using SVM : ",scoreee) #visualising the training set results from matplotlib.colors import ListedColormap X_set, y_set = X_train, y_train X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min()-1, stop=X_set[:, 0].max()+1, step=0.01), np.arange(start=X_set[:, 1].min()-1, stop=X_set[:, 1].max()+1, step=0.01)) X1_ravel = X1.ravel() X2_ravel = X2.ravel() X1X2_array = np.array([X1_ravel, X2_ravel]) X1X2_array_t = X1X2_array.T X1X2_pred3 = classi4.predict(X1X2_array_t) X1X2_pred_reshape3 = X1X2_pred3.reshape(X1.shape) result_plt = plt.contourf(X1, X2, X1X2_pred_reshape3, alpha= 0.20, cmap = ListedColormap(('red','green'))) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('SVM (Training set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() #visualising the test set results from matplotlib.colors import ListedColormap X_set, y_set = X_test, y_test X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min()-1, stop=X_set[:, 0].max()+1, step=0.01), np.arange(start=X_set[:, 1].min()-1, stop=X_set[:, 1].max()+1, step=0.01)) X1_ravel = X1.ravel() X2_ravel = X2.ravel() X1X2_array = np.array([X1_ravel, X2_ravel]) X1X2_array_t = X1X2_array.T X1X2_pred3 = classi4.predict(X1X2_array_t) X1X2_pred_reshape3 = X1X2_pred3.reshape(X1.shape) result_plt = plt.contourf(X1, X2, X1X2_pred_reshape3, alpha= 0.50, cmap = 
ListedColormap(('red','green'))) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('SVM (Test set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() # ## 6) Naive bayes # fitting Naive Bayes to the training set from sklearn.naive_bayes import GaussianNB classi6 = GaussianNB() classi6.fit(X_train, y_train) #predict the test set results y_pred6 = classi6.predict(X_test) cm6 = confusion_matrix(y_test, y_pred6) #heatmap plt.figure(figsize=(10,10)) sns.heatmap(cm6, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r'); plt.ylabel('Estimated_Salary'); plt.xlabel('Age'); all_sample_title = 'Accuracy Score: {0}'.format(score) plt.title(all_sample_title, size = 15); #visualising the training set results from matplotlib.colors import ListedColormap X_set, y_set = X_train, y_train X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min()-1, stop=X_set[:, 0].max()+1, step=0.01), np.arange(start=X_set[:, 1].min()-1, stop=X_set[:, 1].max()+1, step=0.01)) X1_ravel = X1.ravel() X2_ravel = X2.ravel() X1X2_array = np.array([X1_ravel, X2_ravel]) X1X2_array_t = X1X2_array.T X1X2_pred6 = classi6.predict(X1X2_array_t) X1X2_pred_reshape6 = X1X2_pred6.reshape(X1.shape) result_plt = plt.contourf(X1, X2, X1X2_pred_reshape6, alpha= 0.20, cmap = ListedColormap(('red','green'))) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Naive Bayes (Training set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() #visualising the test set results from matplotlib.colors import ListedColormap X_set, y_set = X_test, y_test X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min()-1, stop=X_set[:, 0].max()+1, step=0.01), np.arange(start=X_set[:, 1].min()-1, stop=X_set[:, 1].max()+1, step=0.01)) X1_ravel = X1.ravel() X2_ravel = 
X2.ravel()
X1X2_array = np.array([X1_ravel, X2_ravel])
X1X2_array_t = X1X2_array.T
X1X2_pred6 = classi6.predict(X1X2_array_t)
X1X2_pred_reshape6 = X1X2_pred6.reshape(X1.shape)
result_plt = plt.contourf(X1, X2, X1X2_pred_reshape6, alpha=0.50,
                          cmap=ListedColormap(('red', 'green')))
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('Naive Bayes (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

# Use the score method to get the accuracy of the model
scoree = classi6.score(X_test, y_test)
print("Accuracy using Naive Bayes: ", scoree)

# ## Inference
# Among all the implemented algorithms, Support Vector Machine (SVC) and KNN perform best on the Social Media dataset.
# Accuracy of KNN and SVM: 93%
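The notebook above repeats the same fit / predict / score pattern once per classifier. That comparison can be collapsed into a single loop; the sketch below does so on a synthetic dataset generated with `make_classification` (not the notebook's own data, which is not reproduced here), so the printed accuracies are illustrative only.

```python
# Hedged sketch: the per-classifier fit/score pattern from the notebook,
# written as one loop. The dataset here is synthetic, not the notebook's.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Same scaling step the notebook applies before fitting
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

models = {
    'KNN': KNeighborsClassifier(n_neighbors=5),
    'SVM': SVC(kernel='rbf', random_state=0),
    'Naive Bayes': GaussianNB(),
    'Random Forest': RandomForestClassifier(n_estimators=10, random_state=0),
}
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f'{name}: {s:.3f}')
```

Keeping the estimators in a dict like this also makes it easy to add a new classifier without duplicating the plotting and scoring boilerplate.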
Customer Car Purchase Prediction.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import cv2
import numpy as np
import tensorflow as tf
import os
import glob
import h5py
import time

print(os.getcwd())

# Get the image
def imread(path):
    img = cv2.imread(path)
    return img

def imsave(image, path, config):
    #checkimage(image)
    # Check the result dir; if it doesn't exist, create it
    if not os.path.isdir(os.path.join(os.getcwd(), config.result_dir)):
        os.makedirs(os.path.join(os.getcwd(), config.result_dir))
    # NOTE: because the image was normalized, we need to multiply by 255 again
    cv2.imwrite(os.path.join(os.getcwd(), path), image * 255.)

def checkimage(image):
    cv2.imshow("test", image)
    cv2.waitKey(0)

def modcrop(img, scale=3):
    """
    To scale the original image down and up, the image dimensions must first
    be cropped so that they leave no remainder when divided by the scale.
    """
    # Check whether the image is grayscale
    if len(img.shape) == 3:
        h, w, _ = img.shape
        h = (h // scale) * scale
        w = (w // scale) * scale
        img = img[0:h, 0:w, :]
    else:
        h, w = img.shape
        h = (h // scale) * scale
        w = (w // scale) * scale
        img = img[0:h, 0:w]
    return img

def checkpoint_dir(config):
    if config.is_train:
        return os.path.join('./{}'.format(config.checkpoint_dir), "train.h5")
    else:
        return os.path.join('./{}'.format(config.checkpoint_dir), "test.h5")

def preprocess(path, scale=3):
    img = imread(path)
    #img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    label_ = modcrop(img, scale)
    bicubic_img = cv2.resize(label_, None, fx=1.0/scale, fy=1.0/scale,
                             interpolation=cv2.INTER_CUBIC)  # Downscale by the scaling factor
    input_ = cv2.resize(bicubic_img, None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_CUBIC)  # Upscale back by the scaling factor
    return input_, label_

def prepare_data(dataset="Train", Input_img=""):
    """
    Args:
        dataset: choose train dataset or test dataset
        For train dataset, output data would be ['.../t1.bmp', '.../t2.bmp', ...,
't99.bmp'] """ if dataset == "Train": data_dir = os.path.join(os.getcwd(), dataset) # Join the Train dir to current directory data = glob.glob(os.path.join(data_dir, "*.*")) # make set of all dataset file path else: if Input_img !="": data = [os.path.join(os.getcwd(),Input_img)] else: data_dir = os.path.join(os.path.join(os.getcwd(), dataset), "Set5") data = glob.glob(os.path.join(data_dir, "*.*")) # make set of all dataset file path print(data) return data def load_data(is_train, test_img): if is_train: data = prepare_data(dataset="Train") else: if test_img != "": return prepare_data(dataset="Test",Input_img=test_img) data = prepare_data(dataset="Test") return data def make_sub_data(data, padding, config): """ Make the sub_data set Args: data : the set of all file path padding : the image padding of input to label config : the all flags """ sub_input_sequence = [] sub_label_sequence = [] for i in range(len(data)): if config.is_train: input_, label_, = preprocess(data[i], config.scale) # do bicubic else: # Test just one picture input_, label_, = preprocess(data[i], config.scale) # do bicubic if len(input_.shape) == 3: # is color h, w, c = input_.shape else: h, w = input_.shape # is grayscale #checkimage(input_) nx, ny = 0, 0 for x in range(0, h - config.image_size + 1, config.stride): nx += 1; ny = 0 for y in range(0, w - config.image_size + 1, config.stride): ny += 1 sub_input = input_[x: x + config.image_size, y: y + config.image_size] # 33 * 33 sub_label = label_[x + padding: x + padding + config.label_size, y + padding: y + padding + config.label_size] # 21 * 21 # Reshape the subinput and sublabel sub_input = sub_input.reshape([config.image_size, config.image_size, config.c_dim]) sub_label = sub_label.reshape([config.label_size, config.label_size, config.c_dim]) # Normialize sub_input = sub_input / 255.0 sub_label = sub_label / 255.0 #cv2.imshow("im1",sub_input) #cv2.imshow("im2",sub_label) #cv2.waitKey(0) # Add to sequence sub_input_sequence.append(sub_input) 
sub_label_sequence.append(sub_label) # NOTE: The nx, ny can be ignore in train return sub_input_sequence, sub_label_sequence, nx, ny def read_data(path): """ Read h5 format data file Args: path: file path of desired file data: '.h5' file format that contains input values label: '.h5' file format that contains label values """ with h5py.File(path, 'r') as hf: input_ = np.array(hf.get('input')) label_ = np.array(hf.get('label')) return input_, label_ def make_data_hf(input_, label_, config): """ Make input data as h5 file format Depending on "is_train" (flag value), savepath would be change. """ # Check the check dir, if not, create one if not os.path.isdir(os.path.join(os.getcwd(),config.checkpoint_dir)): os.makedirs(os.path.join(os.getcwd(),config.checkpoint_dir)) if config.is_train: savepath = os.path.join(os.getcwd(), config.checkpoint_dir + '/train.h5') else: savepath = os.path.join(os.getcwd(), config.checkpoint_dir + '/test.h5') with h5py.File(savepath, 'w') as hf: hf.create_dataset('input', data=input_) hf.create_dataset('label', data=label_) def merge(images, size, c_dim): """ images is the sub image set, merge it """ h, w = images.shape[1], images.shape[2] img = np.zeros((h*size[0], w*size[1], c_dim)) for idx, image in enumerate(images): i = idx % size[1] j = idx // size[1] img[j * h : j * h + h,i * w : i * w + w, :] = image #cv2.imshow("srimg",img) #cv2.waitKey(0) return img def input_setup(config): """ Read image files and make their sub-images and saved them as a h5 file format """ # Load data path, if is_train False, get test data data = load_data(config.is_train, config.test_img) padding = abs(config.image_size - config.label_size) // 2 # Make sub_input and sub_label, if is_train false more return nx, ny sub_input_sequence, sub_label_sequence, nx, ny = make_sub_data(data, padding, config) # Make list to numpy array. 
With this transform arrinput = np.asarray(sub_input_sequence) # [?, 33, 33, 3] arrlabel = np.asarray(sub_label_sequence) # [?, 21, 21, 3] make_data_hf(arrinput, arrlabel, config) return nx, ny # - class SRCNN(object): def __init__(self, sess, image_size, label_size, c_dim): self.sess = sess self.image_size = image_size self.label_size = label_size self.c_dim = c_dim self.build_model() def build_model(self): self.images = tf.placeholder(tf.float32, [None, self.image_size, self.image_size, self.c_dim], name='images') self.labels = tf.placeholder(tf.float32, [None, self.label_size, self.label_size, self.c_dim], name='labels') self.weights = { 'w1': tf.Variable(tf.random_normal([9, 9, self.c_dim, 64], stddev=1e-3), name='w1'), 'w2': tf.Variable(tf.random_normal([1, 1, 64, 32], stddev=1e-3), name='w2'), 'w3': tf.Variable(tf.random_normal([5, 5, 32, self.c_dim], stddev=1e-3), name='w3') } self.biases = { 'b1': tf.Variable(tf.zeros([64], name='b1')), 'b2': tf.Variable(tf.zeros([32], name='b2')), 'b3': tf.Variable(tf.zeros([self.c_dim], name='b3')) } self.pred = self.model() self.loss = tf.reduce_mean(tf.square(self.labels - self.pred)) self.saver = tf.train.Saver() # To save checkpoint def model(self): conv1 = tf.nn.relu(tf.nn.conv2d(self.images, self.weights['w1'], strides=[1,1,1,1], padding='VALID') + self.biases['b1']) conv2 = tf.nn.relu(tf.nn.conv2d(conv1, self.weights['w2'], strides=[1,1,1,1], padding='VALID') + self.biases['b2']) conv3 = tf.nn.conv2d(conv2, self.weights['w3'], strides=[1,1,1,1], padding='VALID') + self.biases['b3'] # This layer don't need ReLU return conv3 def train(self, config): # NOTE : if train, the nx, ny are ingnored nx, ny = input_setup(config) data_dir = checkpoint_dir(config) input_, label_ = read_data(data_dir) # Stochastic gradient descent with the standard backpropagation #self.train_op = tf.train.GradientDescentOptimizer(config.learning_rate).minimize(self.loss) self.train_op = 
tf.train.AdamOptimizer(learning_rate=config.learning_rate).minimize(self.loss) tf.global_variables_initializer().run() counter = 0 time_ = time.time() self.load(config.checkpoint_dir) # Train if config.is_train: print("Now Start Training...") for ep in range(config.epoch): # Run by batch images batch_idxs = len(input_) // config.batch_size for idx in range(0, batch_idxs): batch_images = input_[idx * config.batch_size : (idx + 1) * config.batch_size] batch_labels = label_[idx * config.batch_size : (idx + 1) * config.batch_size] counter += 1 _, err = self.sess.run([self.train_op, self.loss], feed_dict={self.images: batch_images, self.labels: batch_labels}) if counter % 10 == 0: print("Epoch: [%2d], step: [%2d], time: [%4.4f], loss: [%.8f]" % ((ep+1), counter, time.time()-time_, err)) #print(label_[1] - self.pred.eval({self.images: input_})[1],'loss:]',err) if counter % 500 == 0: self.save(config.checkpoint_dir, counter) # Test else: print("Now Start Testing...") # print("nx","ny",nx,ny) result = self.pred.eval({self.images: input_}) # print(label_[1] - result[1]) image = merge(result, [nx, ny], self.c_dim) #image_LR = merge(input_, [nx, ny], self.c_dim) #checkimage(image_LR) print('Now saving image...') imsave(image, config.result_dir+'/result.png', config) def load(self, checkpoint_dir): """ To load the checkpoint use to test or pretrain """ print("\nReading Checkpoints.....\n\n") model_dir = "%s_%s" % ("srcnn", self.label_size)# give the model name by label_size checkpoint_dir = os.path.join(checkpoint_dir, model_dir) ckpt = tf.train.get_checkpoint_state(checkpoint_dir) # Check the checkpoint is exist if ckpt and ckpt.model_checkpoint_path: ckpt_path = str(ckpt.model_checkpoint_path) # convert the unicode to string self.saver.restore(self.sess, os.path.join(os.getcwd(), ckpt_path)) print("\n Checkpoint Loading Success! %s\n\n"% ckpt_path) else: print("\n! 
Checkpoint Loading Failed \n\n")

    def save(self, checkpoint_dir, step):
        """
        Save a checkpoint that can later be used for testing or pretraining
        """
        model_name = "SRCNN.model"
        model_dir = "%s_%s" % ("srcnn", self.label_size)
        checkpoint_dir = os.path.join(checkpoint_dir, model_dir)
        if not os.path.exists(checkpoint_dir):
            os.makedirs(checkpoint_dir)
        self.saver.save(self.sess,
                        os.path.join(checkpoint_dir, model_name),
                        global_step=step)

# +
import tensorflow as tf
import numpy as np
import pprint
import os

class this_config():
    def __init__(self, is_train=True):
        self.epoch = 1
        self.image_size = 33
        self.label_size = 21
        self.c_dim = 3
        self.is_train = is_train
        self.scale = 3
        self.stride = 21
        self.checkpoint_dir = "checkpoint"
        self.learning_rate = 1e-4
        self.batch_size = 128
        self.result_dir = "sample"
        # self.test_img = "/Users/yaoqi/Desktop/crash.jpg"
        self.test_img = ''

arg = this_config()
print("***")
with tf.Session() as sess:
    FLAGS = arg
    srcnn = SRCNN(sess,
                  image_size=FLAGS.image_size,
                  label_size=FLAGS.label_size,
                  c_dim=FLAGS.c_dim)
    srcnn.train(FLAGS)
    FLAGS.is_train = False
    srcnn.train(FLAGS)
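The `make_sub_data` and `merge` routines in this notebook tile an image into sub-images and then stitch predictions back together in row-major order. The self-contained NumPy sketch below demonstrates that tiling/merge round trip in the simplest case, where the stride equals the patch size so the reconstruction is exact; the function names `split_patches`/`merge_patches` are hypothetical, not from the notebook (training there uses overlapping strides and smaller label crops).

```python
import numpy as np

def split_patches(img, size):
    """Tile an HxWxC image into non-overlapping size x size patches
    (stride == size, unlike the overlapping stride used for training)."""
    h, w, c = img.shape
    nx, ny = h // size, w // size
    patches = [img[x*size:(x+1)*size, y*size:(y+1)*size]
               for x in range(nx) for y in range(ny)]
    return np.array(patches), nx, ny

def merge_patches(patches, nx, ny):
    """Inverse of split_patches: stitch patches back in row-major order,
    mirroring the index arithmetic of the notebook's merge()."""
    _, h, w, c = patches.shape
    out = np.zeros((nx * h, ny * w, c))
    for idx, p in enumerate(patches):
        i = idx % ny    # column index
        j = idx // ny   # row index
        out[j*h:(j+1)*h, i*w:(i+1)*w, :] = p
    return out

rng = np.random.default_rng(0)
img = rng.random((66, 99, 3))        # dimensions divisible by the 33-px patch
patches, nx, ny = split_patches(img, 33)
rebuilt = merge_patches(patches, nx, ny)
print(np.allclose(rebuilt, img))     # prints True
```

With overlapping strides, as in SRCNN training, the merge is no longer a perfect inverse; this round trip only checks that the row-major indexing is consistent.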
old files/main_v0.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # OSeMOSYS-PLEXOS global model: Powerplant data # ### Import modules # + import pandas as pd pd.options.mode.chained_assignment = None # default='warn' import numpy as np import itertools import urllib # %reload_ext blackcellmagic # - # ### Import data files and user input # + #Checks whether PLEXOS-World 2015 data needs to be retrieved from the PLEXOS-World Harvard Dataverse. try: Open = open(r"data/PLEXOS_World_2015_Gold_V1.1.xlsx") except IOError: urllib.request.urlretrieve("https://dataverse.harvard.edu/api/access/datafile/4008393?format=original&gbrecs=true" , r"data/PLEXOS_World_2015_Gold_V1.1.xlsx") Open = open(r"data/PLEXOS_World_2015_Gold_V1.1.xlsx") finally: Open.close() df = pd.read_excel(r"data/PLEXOS_World_2015_Gold_V1.1.xlsx" , sheet_name = "Properties") df_dict = pd.read_excel(r"data/PLEXOS_World_2015_Gold_V1.1.xlsx" , sheet_name = "Memberships") df_dict = df_dict[df_dict["parent_class"] == "Generator"].rename( {"parent_object": "powerplant"}, axis=1 ) df_weo_data = pd.read_csv(r"data/weo_2018_powerplant_costs.csv") df_op_life = pd.read_csv(r"data/operational_life.csv") df_tech_code = pd.read_csv(r"data/naming_convention_tech.csv") df_trn_efficiencies = pd.read_excel(r"data/Costs Line expansion.xlsx") df_weo_regions = pd.read_csv(r"data/weo_region_mapping.csv") model_start_year = 2015 model_end_year = 2050 years = list(range(model_start_year, model_end_year + 1)) region_name = 'GLOBAL' emissions = [] # - # ### Create 'output' directory if it doesn't exist import os if not os.path.exists('osemosys_global_model/data'): os.makedirs('osemosys_global_model/data') # ### Create main generator table gen_cols_1 = ["child_class", "child_object", "property", "value"] df_gen = df[gen_cols_1] df_gen = df_gen[df_gen["child_class"] == 
"Generator"]
df_gen.rename(columns={"child_object": "powerplant"}, inplace=True)
df_gen.drop("child_class", axis=1, inplace=True)
df_gen = pd.pivot_table(df_gen,
                        index="powerplant",
                        columns="property",
                        values="value",
                        aggfunc=np.sum,
                        fill_value=0,
                        )
df_gen["total_capacity"] = (df_gen["Max Capacity"].astype(float)) * (
    df_gen["Units"].astype(int)
)

# +
gen_cols_2 = ["Commission Date", "Heat Rate", "Max Capacity", "total_capacity"]
df_gen_2 = df_gen[gen_cols_2]

## Compile dataframe with powerplants, nodes, and fuels
df_dict_fuel = df_dict[df_dict["collection"] == "Fuels"]
df_dict_fuel = df_dict_fuel[["powerplant", "child_object"]]
df_dict_nodes = df_dict[df_dict["collection"] == "Nodes"]
df_dict_nodes = df_dict_nodes[["powerplant", "child_object"]]
df_dict_2 = pd.merge(df_dict_fuel, df_dict_nodes, how="outer", on="powerplant")

## Merge original generator dataframe with nodes and fuels
df_gen_2 = pd.merge(df_gen_2, df_dict_2, how="outer", on="powerplant")
df_gen_2.rename(
    {"child_object_x": "fuel", "child_object_y": "node"}, axis=1, inplace=True
)

## Extract start year from Commission Date
df_gen_2["Commission Date"] = pd.to_datetime(df_gen_2["Commission Date"])
df_gen_2["start_year"] = df_gen_2["Commission Date"].dt.year
df_gen_2.drop("Commission Date", axis=1, inplace=True)

## Calculate efficiency from heat rate. Units of heat rate are MJ/kWh
df_gen_2["efficiency"] = 3.6 / df_gen_2["Heat Rate"].astype(float)
df_gen_2.drop("Heat Rate", axis=1, inplace=True)

## Calculate years of operation from start year until 2015
df_gen_2["years_of_operation"] = model_start_year - df_gen_2["start_year"]

## Fix blank spaces in 'fuels' columns.
Appearing for 'Oil' powerplants in certain countries df_gen_2.loc[df_gen_2["fuel"].isna(), "fuel"] = ( df_gen_2["node"].str.split("-").str[:2].str.join("-") + " " + df_gen_2["powerplant"].str.split("_", expand=True)[1] ) # + ## Create column for technology df_gen_2["technology"] = df_gen_2["powerplant"].str.split("_").str[1] df_gen_2["technology"] = df_gen_2["technology"].str.title() ## Divide Gas into CCGT and OCGT based on max capacity df_gen_2.loc[ (df_gen_2["technology"] == "Gas") & (df_gen_2["Max Capacity"].astype(float) > 130), "technology", ] = "Gas-CCGT" df_gen_2.loc[ (df_gen_2["technology"] == "Gas") & (df_gen_2["Max Capacity"].astype(float) <= 130), "technology", ] = "Gas-OCGT" # - # ### Create table with aggregated capacity # + df_gen_agg_node = df_gen_2[df_gen_2['start_year']<=model_start_year] df_gen_agg_node = df_gen_agg_node.groupby(['node', 'technology'], as_index=False)['total_capacity'].sum() df_gen_agg_node = df_gen_agg_node.pivot(index='node', columns='technology', values='total_capacity').fillna(0).reset_index() df_gen_agg_node.drop('Sto', axis=1, inplace=True) # Drop 'Sto' technology. Only for USA. 
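The Gas split above classifies a plant as CCGT or OCGT purely by a 130 MW `Max Capacity` threshold. A minimal, self-contained sketch of that boolean-mask assignment is shown below; the three rows are made-up stand-ins for `df_gen_2`, not real PLEXOS-World data.

```python
import pandas as pd

# Toy rows standing in for df_gen_2; the capacities are illustrative only.
df = pd.DataFrame({
    'powerplant': ['AAA_Gas_1', 'BBB_Gas_2', 'CCC_Coal_1'],
    'technology': ['Gas', 'Gas', 'Coal'],
    'Max Capacity': ['300', '45', '600'],   # stored as strings, as in PLEXOS
})

# Same rule as in the notebook: Gas plants above 130 MW become CCGT,
# the rest become OCGT. Compute the masks before the first assignment
# so the second .loc still sees the original 'Gas' rows.
gas = df['technology'] == 'Gas'
cap = df['Max Capacity'].astype(float)
df.loc[gas & (cap > 130), 'technology'] = 'Gas-CCGT'
df.loc[gas & (cap <= 130), 'technology'] = 'Gas-OCGT'
print(df['technology'].tolist())  # ['Gas-CCGT', 'Gas-OCGT', 'Coal']
```

Precomputing the `gas` mask matters: if the condition were re-evaluated after the first assignment, the already-renamed 'Gas-CCGT' rows would simply no longer match, but caching the mask makes the intent explicit and avoids the chained-assignment warnings the notebook suppresses at import time.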
# Add extra nodes which exist in 2050 but are not in the 2015 data node_list = list(df_gen_agg_node['node'].unique()) nodes_extra_df = pd.DataFrame(columns=['node']) nodes_extra_list = ['AF-SOM', 'AF-TCD', 'AS-TLS', 'EU-MLT', 'NA-BLZ', 'NA-HTI', 'SA-BRA-J1', 'SA-BRA-J2', 'SA-BRA-J3', 'SA-SUR',] nodes_extra_df['node'] = nodes_extra_list df_gen_agg_node = df_gen_agg_node.append(nodes_extra_df, ignore_index=True, sort='False').fillna(0).sort_values(by='node').set_index('node').round(2) #df_gen_agg_node.to_csv(r'output/test_output_2.csv') # - # ### Add region and country code columns df_gen_2['region_code'] = df_gen_2['node'].str[:2] df_gen_2['country_code'] = df_gen_2['node'].str[3:] # ### Add operational life column # + op_life_dict = dict(zip(list(df_op_life['tech']), list(df_op_life['years']))) df_gen_2['operational_life'] = df_gen_2['technology'].map(op_life_dict) df_gen_2['retirement_year_data'] = (df_gen_2['operational_life'] + df_gen_2['start_year']) df_gen_2['retirement_diff'] = ((df_gen_2['years_of_operation'] - df_gen_2['operational_life'])/ df_gen_2['operational_life']) ''' Set retirement year based on years of operation. If (years of operation - operational life) is more than 50% of operational life, set retirement year ''' df_gen_2.loc[df_gen_2['retirement_diff'] >= 0.5, 'retirement_year_model'] = 2025 df_gen_2.loc[(df_gen_2['retirement_diff'] < 0.5) & (df_gen_2['retirement_diff'] > 0), 'retirement_year_model'] = 2030 df_gen_2.loc[df_gen_2['retirement_diff'] <= 0, 'retirement_year_model'] = df_gen_2['retirement_year_data'] #df_gen_2.to_csv(r'output/test_output_3.csv') # - # ### Add naming convention # + tech_code_dict = dict(zip(list(df_tech_code['tech']), list(df_tech_code['code']))) df_gen_2['tech_code'] = df_gen_2['technology'].map(tech_code_dict) df_gen_2.loc[df_gen_2['node'].str.len() <= 6, 'node_code'] = (df_gen_2['node']. str.split('-'). str[1:]. str.join("") + 'XX') df_gen_2.loc[df_gen_2['node'].str.len() > 6, 'node_code'] = (df_gen_2['node']. 
str.split('-'). str[1:]. str.join("") ) df_gen_2 = df_gen_2.loc[~df_gen_2['tech_code'].isna()] # - # ### Calculate average InputActivityRatio by node+technology and only by technology # + df_eff = df_gen_2[['node_code', 'efficiency', 'tech_code']] # Average efficiency by node and technology df_eff_node = df_eff.groupby(['tech_code', 'node_code'], as_index = False).agg('mean') df_eff_node['node_average_iar'] = ((1 / df_eff_node['efficiency']). round(2)) df_eff_node.drop('efficiency', axis = 1, inplace = True) # Average efficiency by technology df_eff_tech = df_eff.groupby('tech_code', as_index = False).agg('mean') df_eff_tech['tech_average_iar'] = ((1 / df_eff_tech['efficiency']). round(2)) df_eff_tech.drop('efficiency', axis = 1, inplace = True) # - # ### Calculate residual capacity # + res_cap_cols = [ "node_code", "tech_code", "total_capacity", "start_year", "retirement_year_model", ] df_res_cap = df_gen_2[res_cap_cols] for each_year in range(model_start_year, model_end_year+1): df_res_cap[str(each_year)] = 0 df_res_cap = pd.melt( df_res_cap, id_vars=res_cap_cols, value_vars=[x for x in df_res_cap.columns if x not in res_cap_cols], var_name="model_year", value_name="value", ) df_res_cap["model_year"] = df_res_cap["model_year"].astype(int) df_res_cap.loc[ (df_res_cap["model_year"] >= df_res_cap["start_year"]) & (df_res_cap["model_year"] <= df_res_cap["retirement_year_model"]), "value", ] = df_res_cap["total_capacity"] df_res_cap = df_res_cap.groupby( ["node_code", "tech_code", "model_year"], as_index=False )["value"].sum() # Add column with naming convention df_res_cap['node_code'] = df_res_cap['node_code'] df_res_cap['tech'] = ('PWR' + df_res_cap['tech_code'] + df_res_cap['node_code'] + '01' ) # Convert total capacity from MW to GW df_res_cap['value'] = df_res_cap['value'].div(1000) df_res_cap_plot = df_res_cap[['node_code', 'tech_code', 'model_year', 'value']] # Rename 'model_year' to 'year' and 'total_capacity' to 'value' df_res_cap.rename({'tech':'TECHNOLOGY', 
'model_year':'YEAR', 'value':'VALUE'}, inplace = True, axis=1) # Drop 'tech_code' and 'node_code' df_res_cap.drop(['tech_code', 'node_code'], inplace = True, axis=1) # Add 'REGION' column and fill 'GLOBAL' throughout df_res_cap['REGION'] = region_name #Reorder columns df_res_cap = df_res_cap[['REGION', 'TECHNOLOGY', 'YEAR', 'VALUE']] df_res_cap.to_csv(r"osemosys_global_model/data/ResidualCapacity.csv", index = None) # - # ### Interactive visualisation of residual capacity by node # + import matplotlib.pyplot as plt import seaborn as sns; sns.set(color_codes = True) from ipywidgets import interact, interactive, fixed, interact_manual, Layout import ipywidgets as widgets #importing plotly and cufflinks in offline mode import plotly as py #import plotly.graph_objs as go import cufflinks import plotly.offline as pyo from plotly.offline import plot, iplot, init_notebook_mode pyo.init_notebook_mode() cufflinks.go_offline() cufflinks.set_config_file(world_readable=True, theme='white') color_codes = pd.read_csv(r'data\color_codes.csv', encoding='latin-1') color_dict = dict([(n,c) for n,c in zip(color_codes.tech, color_codes.colour)]) def f(node): df_plot = df_res_cap_plot.loc[df_res_cap_plot['node_code']==node] df_plot.drop('node_code', axis = 1, inplace = True) df_plot = df_plot.pivot_table(index='model_year', columns='tech_code', values='value', aggfunc='sum').reset_index() #plt.figure(figsize=(10, 10), dpi= 80, facecolor='w', edgecolor='k') #ax = sns.barplot(df_plot) return df_plot.iplot(x = 'model_year', kind = 'bar', barmode = 'stack', xTitle = 'Year', yTitle = 'Gigawatts', color=[color_dict[x] for x in df_plot.columns if x != 'model_year'], title = 'Residual Capacity', showlegend = True) interact(f, node=widgets.Dropdown(options = (df_res_cap_plot['node_code'] .unique() ) ) ) # - # ### Add input and output activity ratios # #### Create master table for activity ratios # + node_list = list(df_gen_2['node_code'].unique()) # Add extra nodes which are not present in 2015 
but will be by 2050 for each_node in nodes_extra_list: if len(each_node) <= 6: node_list.append("".join(each_node.split('-')[1:]) + 'XX') else: node_list.append("".join(each_node.split('-')[1:])) master_fuel_list = list(df_gen_2['tech_code'].unique()) mode_list = [1,2] df_ratios = pd.DataFrame(list(itertools.product(node_list, master_fuel_list, mode_list, years) ), columns = ['node_code', 'tech_code', 'MODE_OF_OPERATION', 'YEAR'] ) df_ratios['TECHNOLOGY'] = ('PWR' + df_ratios['tech_code'] + df_ratios['node_code'] + '01' ) thermal_fuel_list = ['COA', 'COG', 'OCG', 'CCG', 'PET', 'URN', 'OIL', 'OTH' ] thermal_fuel_list_iar = ['COA', 'COG', 'PET', 'URN', 'OIL', 'OTH' ] renewables_list = ['BIO', 'GEO', 'HYD', 'SPV', 'CSP', 'WAS', 'WAV', 'WON', 'WOF'] # - # #### OutputActivityRatio - Power Generation Technologies # + df_oar = df_ratios.copy() mask = df_oar['TECHNOLOGY'].apply(lambda x: x[3:6] in thermal_fuel_list) df_oar['FUEL'] = 0 df_oar['FUEL'][mask] = 1 df_oar = df_oar.loc[~((df_oar['MODE_OF_OPERATION'] > 1) & (df_oar['FUEL'] == 0))] df_oar['FUEL'] = ('ELC' + df_oar['TECHNOLOGY'].str[6:11] + '01' ) df_oar['VALUE'] = 1 # Add 'REGION' column and fill 'GLOBAL' throughout df_oar['REGION'] = region_name # Select columns for final output table df_oar_final = df_oar[['REGION', 'TECHNOLOGY', 'FUEL', 'MODE_OF_OPERATION', 'YEAR', 'VALUE',]] # Don't write yet - we'll write the IAR and OAR at the end... # df_oar_final.to_csv(r"output/OutputActivityRatio.csv", index = None) # - # #### InputActivityRatio - Power Generation Technologies # + # Copy OAR table with all columns to IAR df_iar = df_oar.copy() df_iar['FUEL'] = 0 # Deal with GAS techs first... 
OCG and CCG # OCG Mode 1: Domestic GAS df_iar.loc[(df_iar['MODE_OF_OPERATION'] == 1) & (df_iar['TECHNOLOGY'].str[3:6].isin(['OCG'])), 'FUEL'] = 'GAS'+df_iar['TECHNOLOGY'].str[6:9] # OCG Mode 2: International GAS df_iar.loc[(df_iar['MODE_OF_OPERATION'] == 2) & (df_iar['TECHNOLOGY'].str[3:6].isin(['OCG'])), 'FUEL'] = 'GASINT' # CCG Mode 1: Domestic GAS df_iar.loc[(df_iar['MODE_OF_OPERATION'] == 1) & (df_iar['TECHNOLOGY'].str[3:6].isin(['CCG'])), 'FUEL'] = 'GAS'+df_iar['TECHNOLOGY'].str[6:9] # CCG Mode 2: International GAS df_iar.loc[(df_iar['MODE_OF_OPERATION'] == 2) & (df_iar['TECHNOLOGY'].str[3:6].isin(['CCG'])), 'FUEL'] = 'GASINT' # For non-GAS thermal fuels, domestic fuel input by country in mode 1 and # 'international' fuel input in mode 2 df_iar.loc[(df_iar['MODE_OF_OPERATION'] == 1) & (df_iar['TECHNOLOGY'].str[3:6].isin(thermal_fuel_list_iar)), 'FUEL'] = df_iar['TECHNOLOGY'].str[3:9] df_iar.loc[(df_iar['MODE_OF_OPERATION'] == 2) & (df_iar['TECHNOLOGY'].str[3:6].isin(thermal_fuel_list_iar)), 'FUEL'] = df_iar['TECHNOLOGY'].str[3:6] + 'INT' # For renewable fuels, input by node in mode 1 df_iar.loc[(df_iar['MODE_OF_OPERATION'] == 1) & (df_iar['TECHNOLOGY'].str[3:6].isin(renewables_list)), 'FUEL'] = df_iar['TECHNOLOGY'].str[3:11] # Remove mode 2 when not used df_iar = df_iar.loc[df_iar['FUEL'] != 0] # Join efficiency columns: one with node and technology average, and the # other with technology average df_iar = df_iar.join(df_eff_node.set_index(['tech_code', 'node_code']), on=['tech_code', 'node_code']) df_iar = df_iar.join(df_eff_tech.set_index('tech_code'), on='tech_code') # When available, choose node and technology average. 
Else, # choose technology average df_iar['VALUE'] = df_iar['node_average_iar'] df_iar.loc[df_iar['VALUE'].isna(), 'VALUE'] = df_iar['tech_average_iar'] # Add 'REGION' column and fill 'GLOBAL' throughout df_iar['REGION'] = region_name # Select columns for final output table df_iar_final = df_iar[['REGION', 'TECHNOLOGY', 'FUEL', 'MODE_OF_OPERATION', 'YEAR', 'VALUE',]] # Don't write this yet - we'll write both IAR and OAR at the end... # df_iar_final.to_csv(r"output/InputActivityRatio.csv", index = None) # - # #### OutputActivityRatios - Upstream # + thermal_fuels = ['COA', 'COG', 'GAS', 'PET', 'URN', 'OIL', 'OTH' ] # We have to create a technology to produce every fuel that is input into any of the power technologies: df_oar_upstream = df_iar_final.copy() # All mining and resource technologies have an OAR of 1... df_oar_upstream['VALUE'] = 1 # Renewables - set the technology as RNW + FUEL df_oar_upstream.loc[df_oar_upstream['FUEL'].str[0:3].isin(renewables_list), 'TECHNOLOGY'] = 'RNW'+df_oar_upstream['FUEL'] # If the fuel is a thermal fuel, we need to create the OAR for the mining technology... BUT NOT FOR THE INT FUELS... df_oar_upstream.loc[df_oar_upstream['FUEL'].str[0:3].isin(thermal_fuels) & ~(df_oar_upstream['FUEL'].str[3:6] == "INT"), 'TECHNOLOGY'] = 'MIN'+df_oar_upstream['FUEL'] # Above should get all the outputs for the MIN technologies, but we need to adjust the mode 2 ones to just the fuel code (rather than MINCOAINT) df_oar_upstream.loc[df_oar_upstream['MODE_OF_OPERATION']==2, 'TECHNOLOGY'] = 'MIN'+df_oar_upstream['FUEL'].str[0:3]+df_oar_upstream['TECHNOLOGY'].str[6:9] df_oar_upstream.loc[df_oar_upstream['MODE_OF_OPERATION']==2, 'FUEL'] = df_oar_upstream['FUEL'].str[0:3] # Now remove the duplicate fuels that the above created (because there's now a COA for each country, not each region, and GAS is repeated twice for each region as well): df_oar_upstream.drop_duplicates(keep='first',inplace=True) # Now we have to create the MINXXXINT technologies. 
They are all based on the MODE_OF_OPERATION == 2: df_oar_int = pd.DataFrame(df_oar_upstream.loc[df_oar_upstream['MODE_OF_OPERATION'] == 2, :]) # At this point we should have only the internationally traded fuels since they're all mode 2. So we can make the tech MINXXXINT and that's that. df_oar_int['TECHNOLOGY'] = 'MIN'+df_oar_int['FUEL']+'INT' # And rename the fuel to XXXINT df_oar_int['FUEL'] = df_oar_int['FUEL']+'INT' df_oar_int['MODE_OF_OPERATION'] = 1 # This is probably not strictly necessary as long as they're always the same in and out... # and de-duplicate this list: df_oar_int.drop_duplicates(keep='first',inplace=True) # - # #### Input Activity Ratios - Upstream # All we need to do is take in the thermal fuels for the MINXXXINT technologies. This already exists as df_oar_int with the XXINT fuel so we can simply copy that: df_iar_int = df_oar_int.copy() df_iar_int['FUEL'] = df_iar_int['FUEL'].str[0:3] # #### Downstream Activity Ratios # + # Build transmission system outputs df_iar_trn = df_oar_final.copy() # Change the technology name to PWRTRNXXXXX df_iar_trn["TECHNOLOGY"] = "PWRTRN" + df_iar_trn["FUEL"].str[3:8] # Make all modes of operation 1 df_iar_trn["MODE_OF_OPERATION"] = 1 # And remove all the duplicate entries df_iar_trn.drop_duplicates(keep="first", inplace=True) # OAR for transmission technologies is IAR, but the fuel is 02 instead of 01: df_oar_trn = df_iar_trn.copy() df_oar_trn["FUEL"] = df_oar_trn["FUEL"].str[0:8] + "02" # + # Build international transmission system from original input data, but for Line rather than Generator: int_trn_cols = ["child_class", "child_object", "property", "value"] df_int_trn = df[int_trn_cols] df_int_trn = df_int_trn[df_int_trn["child_class"] == "Line"] # For IAR and OAR we can drop the value: df_int_trn = df_int_trn.drop(["child_class", "value"], axis=1) # Create MofO column based on property: df_int_trn["MODE_OF_OPERATION"] = 1 df_int_trn.loc[df_int_trn["property"] == "Min Flow", "MODE_OF_OPERATION"] = 2 # Use 
the child_object column to build the technology names: df_int_trn["codes"] = df_int_trn["child_object"].str.split(pat="-") # If there are only two locations, then the node is XX df_int_trn.loc[df_int_trn["codes"].str.len() == 2, "TECHNOLOGY"] = ( "TRN" + df_int_trn["codes"].str[0] + "XX" + df_int_trn["codes"].str[1] + "XX" ) # If there are four locations, the node is already included df_int_trn.loc[df_int_trn["codes"].str.len() == 4, "TECHNOLOGY"] = ( "TRN" + df_int_trn["codes"].str[0] + df_int_trn["codes"].str[1] + df_int_trn["codes"].str[2] + df_int_trn["codes"].str[3] ) # If there are three items, and the last item is two characters, then the second item is an XX: df_int_trn.loc[ (df_int_trn["codes"].str.len() == 3) & (df_int_trn["codes"].str[2].str.len() == 2), "TECHNOLOGY", ] = ( "TRN" + df_int_trn["codes"].str[0] + "XX" + df_int_trn["codes"].str[1] + df_int_trn["codes"].str[2] ) # If there are three items, and the last item is three characters, then the last item is an XX: df_int_trn.loc[ (df_int_trn["codes"].str.len() == 3) & (df_int_trn["codes"].str[2].str.len() == 3), "TECHNOLOGY", ] = ( "TRN" + df_int_trn["codes"].str[0] + df_int_trn["codes"].str[1] + df_int_trn["codes"].str[2] + "XX" ) # Set the value (of either IAR or OAR) to 1 df_int_trn["VALUE"] = 1 df_int_trn["REGION"] = region_name df_int_trn = df_int_trn.drop(["property", "child_object", "codes"], axis=1) df_int_trn["YEAR"] = model_start_year # Add in the years: df_temp = df_int_trn.copy() for year in range(model_start_year + 1, model_end_year + 1): df_temp["YEAR"] = year df_int_trn = df_int_trn.append(df_temp) df_int_trn = df_int_trn.reset_index(drop=True) # + # Now create the input and output activity ratios df_int_trn_oar = df_int_trn.copy() df_int_trn_iar = df_int_trn.copy() # IAR Mode 1 is input from first country: df_int_trn_iar.loc[df_int_trn_iar["MODE_OF_OPERATION"] == 1, "FUEL"] = ( "ELC" + df_int_trn_iar["TECHNOLOGY"].str[3:8] + "02" ) # IAR Mode 2 is input from second country: 
df_int_trn_iar.loc[df_int_trn_iar["MODE_OF_OPERATION"] == 2, "FUEL"] = ( "ELC" + df_int_trn_iar["TECHNOLOGY"].str[8:13] + "02" ) # OAR Mode 2 is output to first country: df_int_trn_oar.loc[df_int_trn_oar["MODE_OF_OPERATION"] == 2, "FUEL"] = ( "ELC" + df_int_trn_oar["TECHNOLOGY"].str[3:8] + "01" ) # OAR Mode 1 is out to the second country: df_int_trn_oar.loc[df_int_trn_oar["MODE_OF_OPERATION"] == 1, "FUEL"] = ( "ELC" + df_int_trn_oar["TECHNOLOGY"].str[8:13] + "01" ) # Drop unneeded columns df_trn_efficiencies = df_trn_efficiencies.drop( [ "Line", "KM distance", "HVAC/HVDC/Subsea", "Build Cost ($2010 in $000)", "Annual FO&M (3.5% of CAPEX) ($2010 in $000)", "Unnamed: 8", "Line Max Size (MW)", "Unnamed: 10", "Unnamed: 11", "Unnamed: 12", "Subsea lines", ], axis=1, ) # Drop NaN values df_trn_efficiencies = df_trn_efficiencies.dropna(subset=["From"]) # Create To and From Codes: # If from column has length 6 then it's the last three chars plus XX df_trn_efficiencies.loc[df_trn_efficiencies["From"].str.len() == 6, "From"] = ( df_trn_efficiencies["From"].str[3:6] + "XX" ) # If from column has length 9 then it's the 3:6 and 7:9 three chars plus XX df_trn_efficiencies.loc[df_trn_efficiencies["From"].str.len() == 9, "From"] = ( df_trn_efficiencies["From"].str[3:6] + df_trn_efficiencies["From"].str[7:9] ) # If from column has length 6 then it's the last three chars plus XX df_trn_efficiencies.loc[df_trn_efficiencies["To"].str.len() == 6, "To"] = ( df_trn_efficiencies["To"].str[3:6] + "XX" ) # If from column has length 9 then it's the 3:6 and 7:9 three chars plus XX df_trn_efficiencies.loc[df_trn_efficiencies["To"].str.len() == 9, "To"] = ( df_trn_efficiencies["To"].str[3:6] + df_trn_efficiencies["To"].str[7:9] ) # Combine From and To columns. # If the From is earlier in the alphabet the technology is in order, add tech with mode 1. 
df_trn_efficiencies["TECHNOLOGY"] = ("TRN" + df_trn_efficiencies["From"] + df_trn_efficiencies["To"]) # Drop to and from columns df_trn_efficiencies = df_trn_efficiencies.drop(["From", "To"], axis=1) # Rename column 'VALUES' df_trn_efficiencies = df_trn_efficiencies.rename(columns={"Losses": "VALUE"}) # And adjust OAR values to be output amounts vs. losses: df_trn_efficiencies['VALUE'] = 1.0 - df_trn_efficiencies['VALUE'] # and add values into OAR matrix df_int_trn_oar = df_int_trn_oar.drop(["VALUE"], axis=1) df_int_trn_oar = pd.merge( df_int_trn_oar, df_trn_efficiencies, how="outer", on="TECHNOLOGY" ) # - # #### Output IAR and OAR # + # Combine the pieces from above and output to csv: df_oar_final = df_oar_final.append(df_oar_upstream) # add upstream production technologies df_oar_final = df_oar_final.append(df_oar_int) # Add in path through international markets df_oar_final = df_oar_final.append(df_oar_trn) # Add in domestic transmission df_oar_final = df_oar_final.append(df_int_trn_oar) # Add in international transmission # Select columns for final output table df_oar_final = df_oar_final.dropna() df_oar_final = df_oar_final[['REGION', 'TECHNOLOGY', 'FUEL', 'MODE_OF_OPERATION', 'YEAR', 'VALUE',]] df_iar_final = df_iar_final.append(df_iar_int) # Add in path through international markets df_iar_final = df_iar_final.append(df_iar_trn) # Add in domestic transmission df_iar_final = df_iar_final.append(df_int_trn_iar) # Add in international transmission # Select columns for final output table df_iar_final = df_iar_final.dropna() df_iar_final = df_iar_final[['REGION', 'TECHNOLOGY', 'FUEL', 'MODE_OF_OPERATION', 'YEAR', 'VALUE',]] df_oar_final.to_csv(r"osemosys_global_model/data/OutputActivityRatio.csv", index = None) df_iar_final.to_csv(r"osemosys_global_model/data/InputActivityRatio.csv", index = None) # - # ### Costs: Capital, fixed, and variable # + df_costs = pd.melt(df_weo_data, id_vars = ['technology', 'weo_region', 'parameter'], value_vars = ['2017', '2030', 
'2040'], var_name = ['YEAR']) df_costs['parameter'] = df_costs['parameter'].str.split('\r\n').str[0] df_costs['value'] = df_costs['value'].replace({'n.a.':0}) df_costs['value'] = df_costs['value'].astype(float) df_costs = df_costs.pivot_table(index = ['technology', 'parameter', 'YEAR'], columns = 'weo_region', values = 'value').reset_index() df_costs['AS_average'] = (df_costs['China'] + df_costs['India'] + df_costs['Japan'] + df_costs['Middle East']).div(4) df_costs['NA_average'] = (df_costs['United States']) df_costs['SA_average'] = (df_costs['Brazil']) df_costs['Global_average'] = (df_costs['Africa'] + df_costs['Brazil'] + df_costs['Europe'] + df_costs['China'] + df_costs['India'] + df_costs['Japan'] + df_costs['Middle East'] + df_costs['Russia'] + df_costs['United States']).div(9) df_costs = pd.melt(df_costs, id_vars = ['technology', 'parameter', 'YEAR'], value_vars = [x for x in df_costs.columns if x not in ['technology', 'parameter', 'YEAR'] ] ) df_costs['YEAR'] = df_costs['YEAR'].astype(int) costs_dict = {'Biomass - waste incineration - CHP':'WAS', 'Biomass Power plant':'BIO', 'CCGT':'CCG', 'CCGT - CHP':'COG', 'Concentrating solar power':'CSP', 'Gas turbine':'OCG', 'Geothermal':'GEO', 'Hydropower - large-scale':'HYD', 'Marine':'WAV', 'Nuclear':'URN', 'Solar photovoltaics - Large scale':'SPV', 'Steam Coal - SUBCRITICAL':'COA', 'Steam Coal - SUPERCRITICAL':'COA', 'Steam Coal - ULTRASUPERCRITICAL':'COA', 'Wind onshore':'WON'} # Missing OIL, OTH, PET, WOF df_costs = df_costs.loc[df_costs['technology'].isin(costs_dict.keys())] df_costs['technology_code'] = df_costs['technology'].replace(costs_dict) weo_regions_dict = dict([(k, v) for k, v in zip(df_weo_regions['technology_code'], df_weo_regions['weo_region'] ) ] ) for each_cost in ['Capital', 'O&M']: df_costs_temp = df_costs.loc[df_costs['parameter'].str.contains(each_cost)] df_costs_temp.drop(['technology', 'parameter'], axis = 1, inplace = True) df_costs_final = df_oar_final[['REGION', 'TECHNOLOGY', 'YEAR' ]] 
df_costs_final['YEAR'] = df_costs_final['YEAR'].astype(int) df_costs_final = df_costs_final.drop_duplicates() df_costs_final = (df_costs_final .loc[(df_costs_final['TECHNOLOGY'] .str.startswith('PWR') ) & (~df_costs_final['TECHNOLOGY'] .str.contains('TRN') ) ] ) df_costs_final['technology_code'] = df_costs_final['TECHNOLOGY'].str[3:6] df_costs_final['weo_region'] = df_costs_final['TECHNOLOGY'].str[6:9] df_costs_final['weo_region'] = (df_costs_final['weo_region'] .replace(weo_regions_dict)) df_costs_final = pd.merge(df_costs_final, df_costs_temp, on = ['technology_code', 'weo_region', 'YEAR'], how = 'left' ) df_costs_final.drop(['technology_code', 'weo_region'], axis = 1, inplace = True) df_costs_final = df_costs_final.fillna(-9) df_costs_final = pd.pivot_table(df_costs_final, index = ['REGION', 'YEAR'], columns = 'TECHNOLOGY', values = 'value').reset_index() df_costs_final = df_costs_final.replace([-9],[np.nan]) #df_costs_final.set_index(['REGION', 'YEAR'], # inplace = True) df_costs_final = df_costs_final.interpolate(method = 'linear', limit_direction='forward').round(2) df_costs_final = df_costs_final.interpolate(method = 'linear', limit_direction='backward').round(2) df_costs_final = pd.melt(df_costs_final, id_vars = ['REGION', 'YEAR'], value_vars = [x for x in df_costs_final.columns if x not in ['REGION', 'YEAR'] ], var_name = 'TECHNOLOGY', value_name = 'VALUE' ) df_costs_final = df_costs_final[['REGION', 'TECHNOLOGY', 'YEAR', 'VALUE']] df_costs_final = df_costs_final[~df_costs_final['VALUE'].isnull()] if each_cost in ['Capital']: df_costs_final.to_csv(r'osemosys_global_model/data/CapitalCost.csv', index = None) if each_cost in ['O&M']: df_costs_final.to_csv(r'osemosys_global_model/data/FixedCost.csv', index = None) # - # ## Create sets for TECHNOLOGIES, FUELS # + def create_sets(x): set_elements = list(df_iar_final[x].unique()) + list(df_oar_final[x].unique()) set_elements = list(set(set_elements)) set_elements.sort() set_elements_df = 
pd.DataFrame(set_elements, columns = ['VALUE']) return set_elements_df.to_csv(os.path.join(r'osemosys_global_model/data/', str(x) + '.csv' ), index = None ) create_sets('TECHNOLOGY') create_sets('FUEL') # - # ## Create set for YEAR, REGION, MODE_OF_OPERATION # + years_df = pd.DataFrame(years, columns = ['VALUE']) years_df.to_csv(r'osemosys_global_model/data/YEAR.csv', index = None) mode_list_df = pd.DataFrame(mode_list, columns = ['VALUE']) mode_list_df.to_csv(r'osemosys_global_model/data/MODE_OF_OPERATION.csv', index = None) regions_df = pd.DataFrame(columns = ['VALUE']) regions_df.loc[0] = region_name regions_df.to_csv(r'osemosys_global_model/data/REGION.csv', index = None) # - # ## Create set for EMISSION emissions_df = pd.DataFrame(emissions, columns = ['VALUE']) emissions_df.to_csv(r'osemosys_global_model/data/EMISSION.csv', index = None)
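# The cells above extend tables year-by-year with `DataFrame.append`, which was deprecated in pandas 1.4 and removed in 2.0. A minimal sketch of the same year-expansion pattern using `pd.concat` (with toy data, not the actual model tables — the technology code below is made up):

```python
import pandas as pd

# Toy stand-in for df_int_trn: one row at the model start year
df_base = pd.DataFrame({"TECHNOLOGY": ["TRNAAAXXBBBXX"], "VALUE": [1], "YEAR": [2015]})

# Replicate the row for every model year, then stack the copies with
# pd.concat -- the modern replacement for the DataFrame.append idiom
frames = []
for year in range(2015, 2018):
    df_year = df_base.copy()
    df_year["YEAR"] = year
    frames.append(df_year)

df_expanded = pd.concat(frames, ignore_index=True)
```

# The same swap applies to each `df_x = df_x.append(df_y)` call in this notebook.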
notebooks/OPG_powerplant_data.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Sukrut11/Advanced-House-Price-Regression/blob/main/Advance_House_Price_Prediction_Feature_Selection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + [markdown] id="oQDXXqqd6FLx"
# ## Feature Selection for Advanced House Price Prediction
# The main aim of this project is to predict the house price based on various features, which we will discuss as we go ahead.

# + id="PUck5iWOvMFr"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline

# to visualise all the columns in the dataframe
pd.set_option('display.max_columns', None)

from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel

# + id="BrzlOSnDxNn4"
dataset = pd.read_csv("/content/x_train.csv")

# + id="OyYIXflE2kkI" colab={"base_uri": "https://localhost:8080/", "height": 270} outputId="209c7855-fedb-429f-fcba-d72895436d9b"
dataset.head()

# + id="i9s1K6EM6Itg"
## Capture the dependent feature
y_train = dataset[['SalePrice']]  # Store the dependent feature in the y_train variable

# + id="BcJfFi7G6NxH"
## Drop the dependent feature from the dataset
X_train = dataset.drop(['Id', 'SalePrice'], axis=1)  # Remove the Id column, which is not needed, and SalePrice, which is our output column

# + colab={"base_uri": "https://localhost:8080/"} id="phRqw9JU6w6_" outputId="a196212c-dac8-430a-fa44-9bb02264b4c3"
### Apply Feature Selection
# First, specify the Lasso regression model and select a suitable
# alpha (the regularisation penalty); the bigger the alpha, the
# fewer features will be selected.
# Then use the SelectFromModel object from sklearn, which
# will select the features whose coefficients are non-zero.
feature_sel_model = SelectFromModel(Lasso(alpha=0.005, random_state=0))  # remember to set the seed (the random_state in this function)
feature_sel_model.fit(X_train, y_train)

# + colab={"base_uri": "https://localhost:8080/"} id="6UYEdUMX6y-P" outputId="14101734-d10c-4300-aad6-af6639a06f15"
feature_sel_model.get_support()  # True: important feature; False: not important

# + colab={"base_uri": "https://localhost:8080/"} id="iGkIJuoo668n" outputId="8b767962-2e4d-4c3d-bc58-3bcaf0f606d9"
# Let's print the number of total and selected features.
# This is how we can make a list of the selected features:
selected_feat = X_train.columns[feature_sel_model.get_support()]

# let's print some stats
print('total features: {}'.format(X_train.shape[1]))
print('selected features: {}'.format(len(selected_feat)))
print('features with coefficients shrunk to zero: {}'.format(
    np.sum(feature_sel_model.estimator_.coef_ == 0)))

# + colab={"base_uri": "https://localhost:8080/"} id="6Se4Taj768tP" outputId="128c1757-d886-41d0-cd87-d66900aa8ac5"
selected_feat

# + id="1LMMXghC7YhP"
X_train = X_train[selected_feat]

# + colab={"base_uri": "https://localhost:8080/", "height": 270} id="-tkkDPim7b3P" outputId="d63294e1-5e84-4088-984a-d9d83502e2c0"
X_train.head()
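# The mechanism above — fit a Lasso, keep the features whose coefficients survive the L1 penalty — can be demonstrated end-to-end on synthetic data. A self-contained sketch (the shapes and alpha are illustrative, not the house-price values):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel

# Synthetic data: only the first two of ten features drive the target
rng = np.random.RandomState(0)
X = rng.randn(2000, 10)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.randn(2000)

# A large enough alpha drives the coefficients of the eight
# irrelevant features exactly to zero, so SelectFromModel
# keeps only the informative ones
sel = SelectFromModel(Lasso(alpha=0.5, random_state=0))
sel.fit(X, y)

mask = sel.get_support()                         # boolean mask of kept features
n_zero = int(np.sum(sel.estimator_.coef_ == 0))  # features shrunk to zero
```

# With a smaller alpha (as in the notebook's 0.005) more features survive; the penalty is the knob that trades sparsity against fit.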
Advance_House_Price_Prediction_Feature_Selection.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

from em_examples import DCLayers
from IPython.display import display
# %matplotlib inline
from matplotlib import rcParams
rcParams['font.size'] = 14

# # Purpose
#
# ## Investigating DC Resistivity Data
#
# Using the widgets contained in this notebook we will explore the physical principles governing DC resistivity, including the behavior of currents, electric fields, and electric potentials in a two-layer earth.
#
# The measured data in a DC experiment are potential differences; we will demonstrate how these provide information about subsurface physical properties.

# ### Background: Computing Apparent Resistivity
#
# In practice we cannot measure the potentials everywhere; we are limited to those locations where we place electrodes. For each source (current electrode pair), many potential differences are measured between M and N electrode pairs to characterize the overall distribution of potentials. The widget below allows you to visualize the potentials, electric fields, and current densities from a dipole source in a simple model with 2 layers. For different electrode configurations you can measure the potential differences and see the calculated apparent resistivities.
#
# In a uniform halfspace the potential differences can be computed by summing up the potentials at each measurement point from the different current sources based on the following equations:
#
# \begin{align}
# V_M = \frac{\rho I}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} \right] \\
# V_N = \frac{\rho I}{2 \pi} \left[ \frac{1}{AN} - \frac{1}{NB} \right]
# \end{align}
# where $AM$, $MB$, $AN$, and $NB$ are the distances between the corresponding electrodes.
#
# The potential difference $\Delta V_{MN}$ in a dipole-dipole survey can therefore be expressed as follows,
#
# \begin{equation}
# \Delta V_{MN} = V_M - V_N = \rho I \underbrace{\frac{1}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} - \frac{1}{AN} + \frac{1}{NB} \right]}_{G}
# \end{equation}
#
# and the resistivity of the halfspace $\rho$ is equal to,
#
# $$
# \rho = \frac{\Delta V_{MN}}{IG}
# $$
#
# In this equation $G$ is often referred to as the geometric factor.
#
# In the case where we are not in a uniform halfspace, the above equation is used to compute the apparent resistivity ($\rho_a$), which is the resistivity of the uniform halfspace that best reproduces the measured potential difference.
#
# In the top plot the location of the A electrode is marked by the red +, the B electrode is marked by the blue -, and the M/N potential electrodes are marked by the black dots. The $V_M$ and $V_N$ potentials are printed just above and to the right of the black dots. The calculated apparent resistivity is shown in the grey box to the right. The bottom plot can show the resistivity model, the electric fields (e), potentials, or current densities (j) depending on which toggle button is selected. Some patience may be required for the plots to update after parameters have been changed.

# # LayeredEarth app
#
# ## Parameters:
#
# - **A**: (+) Current electrode location
# - **B**: (-) Current electrode location
# - **M**: (+) Potential electrode location
# - **N**: (-) Potential electrode location
# - **$\rho_1$**: Resistivity of the first layer
# - **$\rho_2$**: Resistivity of the second layer
# - **h**: Thickness of the first layer
# - **Plot**: Choice of 2D plot (Model, Potential, Electric field, Currents)

out = DCLayers.plot_layer_potentials_app()
display(out)
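# The apparent-resistivity formulas above translate directly into code. A minimal numerical check (the electrode positions below are arbitrary illustrations, not app defaults):

```python
import numpy as np

def geometric_factor(a, b, m, n):
    """G for a surface array with current electrodes at a, b and
    potential electrodes at m, n (positions in metres along a line)."""
    am, mb = abs(m - a), abs(b - m)
    an, nb = abs(n - a), abs(b - n)
    return (1.0 / (2.0 * np.pi)) * (1.0 / am - 1.0 / mb - 1.0 / an + 1.0 / nb)

# Forward problem: potential difference over a 100 ohm-m halfspace, I = 1 A
rho_true, current = 100.0, 1.0
G = geometric_factor(a=0.0, b=30.0, m=10.0, n=20.0)
dV = rho_true * current * G

# Inverse: recover the (apparent) resistivity from the measured voltage
rho_app = dV / (current * G)
```

# Over a true halfspace the recovered $\rho_a$ equals the true resistivity exactly; over the two-layer model in the app it becomes a weighted blend of $\rho_1$ and $\rho_2$ that depends on the electrode geometry.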
notebooks/eletrorresistividade/DC_LayeredEarth.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Errors in Naming

# **Question 1.** When you run the following cell, Python will produce a slightly cryptic error message. Explain in the text cell below, in your own words, what's wrong with the code. (Remember, double-click the cell to edit it, and then click the Run button when you're done.)

4 = 2 + 2

# *Write your answer here, replacing this text.*

# **Question 2.** When you run the following cell, Python will produce a slightly cryptic error message. **Fix the error,** and then **explain below** in your own words what was wrong with the code.

two = 2
four = two plus two

# *Write your answer here, replacing this text.*
hw02/03_naming_errors.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Object-Oriented Python
#
# During this session, we will be exploring the Object-Oriented paradigm in Python, using all that we did with Pandas in previous sessions. We will be working with the same data from aircraft supervising the latest Tour de France.

# +
import pandas as pd

df = pd.read_json("data/tour_de_france.json.gz")
# -

# There are three main principles around OOP:
# - **encapsulation**: objects embed properties (attributes, methods);
# - **interface**: objects expose and document services, and hide everything about their inner behaviour;
# - **factorisation**: objects/classes with similar behaviour are grouped together.
#
# A common way of working with Python is to implement **protocols**. Protocols are informal interfaces defined by a set of methods allowing an object to play a particular role in the system. For instance, for an object to behave as an iterable you don't need to subclass an abstract class Iterable or explicitly implement an Iterable interface: it is enough to implement the special `__iter__` method, or even just `__getitem__` (we will go through these concepts hereunder).
#
# Let's have a look at the built-in function `sorted`: it expects an **iterable** structure of **comparable** objects and returns a sorted list of these objects. Let's have a look:

sorted([-2, 4, 0])

# However it fails when objects are not comparable:

sorted([-1, 1+1j, 1-2j])

# Then we can write our own ComparableComplex class and implement a comparison based on their modulus. The **comparable** protocol expects the `<` operator to be defined (special method: `__lt__`).

# +
class ComparableComplex(complex):
    def __lt__(a, b):
        return abs(a) < abs(b)

# Now this works: note the input is not a list but a generator.
sorted(ComparableComplex(i) for i in [-1, 1 + 1j, 1 - 2j]) # - # We will be working with different views of pandas DataFrame for trajectories and collection of trajectories. Before we start any further, let's remember two ways to factorise behaviours in Object-Oriented Programming: **inheritance** and **composition**. # # The best way to do is not always obvious and it often takes experience to find the good and bad sides of both paradigms. # # In our previous examples, our ComparableComplex *offered not much more* than complex numbers. As long as we don't need to compare them, we could have *put them in a list together* with regular complex numbers *without loss of generality*: after all a ComparableComplex **is** a complex. That's a good smell for **inheritance**. # # If we think about our trajectories, we will build them around pandas DataFrames. Trajectories will probably have a single attribute: the dataframe. It could be tempting to inherit from `pd.DataFrame`; it will probably work fine in the beginning but problems will occur sooner than expected (most likely with inconsistent interfaces). We **model** trajectories and collections of trajectories with dataframes, but a trajectory **is not** a dataframe. Be reasonable and go for **composition**. # # So now we can start. # # - The `__init__` special method defines a constructor. `self` is necessary: it represents the current object. # Note that **the constructor does not return anything**. # + class FlightCollection: def __init__(self, data): self.data = data class Flight: def __init__(self, data): self.data = data # - FlightCollection(df) # ## Special methods # # There is nothing much we did at this point: just two classes holding a dataframe as an attribute. Even the output representation is the default one based on the class name and the object's address in memory. # # - we can **override** the special `__repr__` method (which **returns** a string—**do NOT** `print`!) 
in order to display a more relevant output. You may use the number of lines in the underlying dataframe for instance. # # <div class='alert alert-warning'> # <b>Exercice:</b> Write a relevant <code>__repr__</code> method. # </div> # # %load solutions/flight_repr.py FlightCollection(df) # Note that we passed the dataframe in the constructor. We want to keep it that way (we will see later why). However we may want to create a different type of constructor to read directly from the JSON file. There is a special kind of keyword for that. # # - `@classmethod` is a decorator to put before a method. It makes it an **class method**, i.e. you call it on the class and not on the object. The first parameter is no longer `self` (the instance) but by convention `cls` (the class). # # <div class='alert alert-warning'> # <b>Exercice:</b> Write a relevant <code>read_json</code> class method. # </div> # # %load solutions/flight_json.py collection = FlightCollection.read_json("data/tour_de_france.json.gz") # Now we want to make this `FlightCollection` iterable. # # - The special method to implement is `__iter__`. This method takes no argument and **yields** elements one after the other. # # <div class='alert alert-warning'> # <b>Exercice:</b> Write a relevant <code>__iter__</code> method which yields Flight instances. # </div> # # Of course, you should reuse the code of last session about iteration. # # %load solutions/flight_iter.py # + collection = FlightCollection.read_json("data/tour_de_france.json.gz") for flight in collection: print(flight) # - # <div class='alert alert-warning'> # <b>Exercice:</b> Write a relevant <code>__repr__</code> method for Flight including callsign, aircraft icao24 code and day of the flight. # </div> # # %load solutions/flight_nice_repr.py for flight in collection: print(flight) # <div class='alert alert-success'> # <b>Note:</b> Since our FlightCollection is iterable, we can pass it to any method accepting iterable structures. 
# </div> list(collection) # <div class='alert alert-warning'> # <b>Warning:</b> However, it won't work here, because Flight instances cannot be compared, unless we specify on which criterion we want to compare. # </div> sorted(collection) sorted(collection, key=lambda x: x.min("timestamp")) # <div class='alert alert-warning'> # <b>Exercice:</b> Implement the proper missing method so that a FlightCollection can be sorted. # </div> # # %load solutions/flight_sort.py sorted(collection) # ## Data visualisation # # See the following snippet of code for plotting trajectories on a map. # + # %matplotlib inline import matplotlib.pyplot as plt from cartopy.crs import EuroPP, PlateCarree fig, ax = plt.subplots(figsize=(10, 10), subplot_kw=dict(projection=EuroPP())) ax.coastlines("50m") for flight in collection: flight.data.plot( ax=ax, x="longitude", y="latitude", legend=False, transform=PlateCarree(), color="steelblue", ) ax.set_extent((-5, 10, 42, 52)) ax.set_yticks([]) # - # <div class='alert alert-warning'> # <b>Exercice:</b> Implement a plot method to make the job even more simple. # </div> # # %load solutions/flight_plot.py # + fig, ax = plt.subplots(figsize=(10, 10), subplot_kw=dict(projection=EuroPP())) ax.coastlines("50m") for flight in collection: flight.plot(ax, color="steelblue") ax.set_extent((-5, 10, 42, 52)) ax.set_yticks([]) # - # ## Indexation # # Until now, we implemented all what is necessary to iterate on structures. # This means we have all we need to yield elements one after the other. # # Note that: # - Python does not assume your structure has a length. # (There are some infinite iterators, like the one yielding natural integers one after the other.) # - Python cannot guess for you how you want to index your flights. # len(collection) collection['ASR172B'] # There are many ways to proceed with indexing. We may want to select flights with a specific callsign, or a specific icao24 code. Also, if only one Flight is returned, we want a Flight object. 
If two or more segments are contained in the underlying dataframe, we want to stick to a FlightCollection. # # <div class="alert alert-warning"> # <b>Exercice:</b> Implement a <code>__len__</code> special method, then a <code>__getitem__</code> special method that will return a Flight or a FlightCollection (depending on the selection) wrapping data corresponding to the given callsign or icao24 code. # </div> # # %load solutions/flight_index.py collection = FlightCollection.read_json("data/tour_de_france.json.gz") collection collection["3924a0"] collection["ASR172B"] # + from collections import defaultdict count = defaultdict(int) for flight in collection["ASR172B"]: count[flight.icao24] += 1 count # - # As we can see here, this method for indexing is not convenient enough. We could select the only flight `collection["ASR172B"]["3924a0"]` but with current implementation, there is no way to separate the 18 other flights. # # <div class='alert alert-warning'> # <b>Exercice:</b> Implement a different <code>__getitem__</code> method that checks the type of the index: filter on callsign/icao24 if the key is a <code>str</code>, filter on the day of the flight if the key is a <code>pd.Timestamp</code>. # </div> # # %load solutions/flight_index_time.py collection = FlightCollection.read_json("data/tour_de_france.json.gz") collection["ASR172B"][pd.Timestamp("2019-07-18")] # <div class='alert alert-warning'> # <b>Exercice:</b> Plot all trajectories flying on July 18th. How can they be sure to not collide with each other? # </div> # # %load solutions/flight_plot_july18.py
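# The exercises above are solved in the `solutions/` files, which are not reproduced here. As a hedged illustration only, here is one possible shape for the protocol methods discussed in this session (deliberately simplified: it splits flights on callsign alone, uses toy data, and ignores the day-based segmentation and `__getitem__` dispatch of the real solutions):

```python
import pandas as pd

class Flight:
    """Composition: a Flight wraps a DataFrame, it is not a DataFrame."""
    def __init__(self, data):
        self.data = data
    def __repr__(self):
        # override the default <object at 0x...> representation
        return "Flight {} ({} rows)".format(self.data.callsign.iloc[0], len(self.data))
    def __lt__(self, other):
        # comparable protocol: sort Flights by their first timestamp
        return self.data.timestamp.min() < other.data.timestamp.min()

class FlightCollection:
    def __init__(self, data):
        self.data = data
    def __repr__(self):
        return "FlightCollection with {} rows".format(len(self.data))
    def __iter__(self):
        # iterable protocol: yield one Flight per callsign
        for _, chunk in self.data.groupby("callsign"):
            yield Flight(chunk)

# Toy data standing in for the Tour de France trajectories
df_toy = pd.DataFrame({
    "callsign": ["B", "B", "A"],
    "timestamp": pd.to_datetime(["2019-07-18 10:00", "2019-07-18 10:01", "2019-07-18 09:00"]),
})
collection = FlightCollection(df_toy)
flights = sorted(collection)  # works because Flight implements __lt__
```

# Because `FlightCollection` implements `__iter__` and `Flight` implements `__lt__`, generic functions like `list` and `sorted` accept them without any subclassing — that is the protocol idea in action.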
04-object_oriented_pandas.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Post Processing for Experiment: Instruction Latency with All Techniques # # Generate inputs to the plotting script "sec6-instruction-latency-plot". # + from __future__ import print_function import glob import matplotlib import cStringIO rc_fonts = { "font.weight": 800, "font.family": "serif", "font.serif": ["Times"], # use latex's default "font.sans-serif": ["DejaVu Sans"], "text.usetex": True, } matplotlib.rcParams.update(rc_fonts) import matplotlib.pyplot as plt import numpy as np import pandas as pd import scipy.optimize from rmexp import dbutils, schema from rmexp.schema import models from rmexp import dataset_analysis as analysis import cPickle as pickle # + def get_delays_per_app(exp, app): print("--------------{} average delay----------".format(exp)) delay_info_map = analysis.get_exp_app_inst_delay(exp, app) # flatten the info map all_delays = [] for client_id, client_delay_info in delay_info_map.items(): all_delays.extend(client_delay_info) print(all_delays) return all_delays # import pdb; pdb.set_trace() # avg_delay_per_client = {k: np.nan_to_num(np.mean(v[1])) for k, v in delay_infos.items()} # print(avg_delay_per_client) # avg_delay = np.mean([v for k, v in avg_delay_per_client.items()]) # print(avg_delay) # flatten_delays = [] # map(lambda x: flatten_delays.extend(x[1][1]), delay_infos.items()) # return flatten_delays apps = ['lego', 'pingpong', 'pool', 'face', 'ikea'] exps = [4, 6, 8] baseline_exp_format = 'sec6-fppli{}-baseline' ours_exp_format = 'sec6-fppli{}-cpushares' # bn = 'sec6-baseline-{}'.format(exp) # on = 'sec6-ours-{}'.format(exp) data = {} for app in apps: print("==========={}===========".format(app)) data[app] = {} for exp_idx, exp in enumerate(exps): bn = baseline_exp_format.format(exp) on = 
ours_exp_format.format(exp) delay_baseline = get_delays_per_app(bn, app) delay_ours = get_delays_per_app(on, app) data[app][bn] = delay_baseline data[app][on] = delay_ours with open('sec6-inst-latency.pkl', 'wb') as f: pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)
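# `get_delays_per_app` pools every client's delay list into one flat list before any statistics are taken. The aggregation can be illustrated with toy numbers (the real values come from the experiment database via `analysis.get_exp_app_inst_delay`; these are made up):

```python
import numpy as np

# Toy stand-in for the per-client delay map returned by the analysis module
delay_info_map = {
    "client-0": [10.0, 12.0, 11.0],
    "client-1": [20.0, 18.0],
}

# Flatten all clients' delays into one list, as get_delays_per_app does
all_delays = []
for client_id, client_delays in delay_info_map.items():
    all_delays.extend(client_delays)

mean_delay = float(np.mean(all_delays))
```

# Note this pooled mean weights each *instruction* equally; the commented-out code in the cell above instead averages per client first, which weights each *client* equally — the two differ when clients issue different numbers of instructions.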
visualization/sec6-instruction-latency-exp.ipynb