# HU Extension --- Final Project --- S89A DL for NLP
# Michael Lee & Micah Nickerson
# PART 2B - ADVERSARIAL ATTACK GENERATOR

This is a notebook used to create the different adversarial attack **word perturbations**.

```
import random

import pandas as pd

adversarial_dir = "Data Sets/adversarial_asap"
test_set_file = adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-ML.xls"

# verify data paths
print(test_set_file)

# Attack 1: Shuffling Words

# load excel into dataframe
test_set_shuffle = pd.read_excel(test_set_file, sheet_name='valid_set')
# drop the empty n/a column
test_set_shuffle = test_set_shuffle.drop(['domain2_predictionid'], axis=1)

for i in test_set_shuffle.index:
    words = test_set_shuffle.at[i, 'essay'].split()
    random.shuffle(words)
    test_set_shuffle.at[i, 'essay'] = ' '.join(words)

test_set_shuffle.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-SHUFFLE.xls")
```

### Anchor: Library

```
# Attack 2a: Appending - "Library"

# load excel into dataframe
test_set_append = pd.read_excel(test_set_file, sheet_name='valid_set')
# drop the empty n/a column
test_set_append = test_set_append.drop(['domain2_predictionid'], axis=1)

for i in test_set_append.index:
    words = test_set_append.at[i, 'essay'].split()
    words.append("library")
    test_set_append.at[i, 'essay'] = ' '.join(words)

test_set_append.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-APPEND_LIBRARY.xls")


# Attack 3a: Progressive Overload - "Library"

test_set_progressive = pd.read_excel(test_set_file, sheet_name='valid_set')
test_set_progressive = test_set_progressive.drop(['domain2_predictionid'], axis=1)

for i in test_set_progressive.index:
    words = test_set_progressive.at[i, 'essay'].split()
    if i < 591:
        continue
    if i < 641:
        # replace the first (i - 590) words with the anchor
        for x in range(0, i - 590):
            words[x] = "library"
    test_set_progressive.at[i, 'essay'] = ' '.join(words)

test_set_progressive.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-PROGRESSIVE_LIBRARY.xls")


# Attack 4a: Single Substitution - "Library"

test_set_single = pd.read_excel(test_set_file, sheet_name='valid_set')
test_set_single = test_set_single.drop(['domain2_predictionid'], axis=1)

for i in test_set_single.index:
    words = test_set_single.at[i, 'essay'].split()
    if i < 591:
        continue
    if i < 641:
        words[i - 591] = "library"
    test_set_single.at[i, 'essay'] = ' '.join(words)

test_set_single.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-SINGLE_LIBRARY.xls")


# Attack 5a: Insertion of anchor in random locations

test_set_insertion = pd.read_excel(test_set_file, sheet_name='valid_set')
test_set_insertion = test_set_insertion.drop(['domain2_predictionid'], axis=1)

for i in test_set_insertion.index:
    words = test_set_insertion.at[i, 'essay'].split()
    x = random.randint(0, len(words))
    words.insert(x, 'library')
    test_set_insertion.at[i, 'essay'] = ' '.join(words)

test_set_insertion.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-INSERTION_LIBRARY.xls")
```

### Anchor: Censorship

```
# Attack 2b: Appending - "Censorship"

test_set_append = pd.read_excel(test_set_file, sheet_name='valid_set')
test_set_append = test_set_append.drop(['domain2_predictionid'], axis=1)

for i in test_set_append.index:
    words = test_set_append.at[i, 'essay'].split()
    words.append("censorship")
    test_set_append.at[i, 'essay'] = ' '.join(words)

test_set_append.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-APPEND_CENSORSHIP.xls")


# Attack 3b: Progressive Overload - "Censorship"

test_set_progressive = pd.read_excel(test_set_file, sheet_name='valid_set')
test_set_progressive = test_set_progressive.drop(['domain2_predictionid'], axis=1)

for i in test_set_progressive.index:
    words = test_set_progressive.at[i, 'essay'].split()
    if i < 591:
        continue
    if i < 641:
        for x in range(0, i - 590):
            words[x] = "censorship"
    test_set_progressive.at[i, 'essay'] = ' '.join(words)

test_set_progressive.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-PROGRESSIVE_CENSORSHIP.xls")


# Attack 4b: Single Substitution - "Censorship"

test_set_single = pd.read_excel(test_set_file, sheet_name='valid_set')
test_set_single = test_set_single.drop(['domain2_predictionid'], axis=1)

for i in test_set_single.index:
    words = test_set_single.at[i, 'essay'].split()
    if i < 591:
        continue
    if i < 641:
        words[i - 591] = "censorship"
    test_set_single.at[i, 'essay'] = ' '.join(words)

test_set_single.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-SINGLE_CENSORSHIP.xls")


# Attack 5b: Insertion of "censorship" in random locations

test_set_insertion = pd.read_excel(test_set_file, sheet_name='valid_set')
test_set_insertion = test_set_insertion.drop(['domain2_predictionid'], axis=1)

for i in test_set_insertion.index:
    words = test_set_insertion.at[i, 'essay'].split()
    x = random.randint(0, len(words))
    words.insert(x, 'censorship')
    test_set_insertion.at[i, 'essay'] = ' '.join(words)

test_set_insertion.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-INSERTION_CENSORSHIP.xls")
```

### Anchor: The

```
# Attack 2c: Appending - "The"

test_set_append = pd.read_excel(test_set_file, sheet_name='valid_set')
test_set_append = test_set_append.drop(['domain2_predictionid'], axis=1)

for i in test_set_append.index:
    words = test_set_append.at[i, 'essay'].split()
    words.append("the")
    test_set_append.at[i, 'essay'] = ' '.join(words)

test_set_append.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-APPEND_THE.xls")


# Attack 3c: Progressive Overload - "The"

test_set_progressive = pd.read_excel(test_set_file, sheet_name='valid_set')
test_set_progressive = test_set_progressive.drop(['domain2_predictionid'], axis=1)

for i in test_set_progressive.index:
    words = test_set_progressive.at[i, 'essay'].split()
    if i < 591:
        continue
    if i < 641:
        for x in range(0, i - 590):
            words[x] = "the"
    test_set_progressive.at[i, 'essay'] = ' '.join(words)

test_set_progressive.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-PROGRESSIVE_THE.xls")


# Attack 4c: Single Substitution - "The"

test_set_single = pd.read_excel(test_set_file, sheet_name='valid_set')
test_set_single = test_set_single.drop(['domain2_predictionid'], axis=1)

for i in test_set_single.index:
    words = test_set_single.at[i, 'essay'].split()
    if i < 591:
        continue
    if i < 641:
        words[i - 591] = "the"  # anchor for this attack is "the" (not "censorship")
    test_set_single.at[i, 'essay'] = ' '.join(words)

test_set_single.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-SINGLE_THE.xls")


# Attack 5c: Insertion of "the" in random locations

test_set_insertion = pd.read_excel(test_set_file, sheet_name='valid_set')
test_set_insertion = test_set_insertion.drop(['domain2_predictionid'], axis=1)

for i in test_set_insertion.index:
    words = test_set_insertion.at[i, 'essay'].split()
    x = random.randint(0, len(words))
    words.insert(x, 'the')
    test_set_insertion.at[i, 'essay'] = ' '.join(words)

test_set_insertion.to_excel(adversarial_dir + "/valid_set_plus_ADVERSARIAL_ESSAYS-INSERTION_THE.xls")
```
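The append/progressive/substitution/insertion cells above differ only in the anchor word, so the per-essay perturbations could be collapsed into small reusable functions. A minimal sketch (the function names are my own, not from the notebook; the DataFrame loop and Excel I/O would wrap these exactly as above):

```python
import random


def append_anchor(essay: str, anchor: str) -> str:
    """Attack 2: append the anchor word to the end of the essay."""
    return essay + ' ' + anchor


def substitute_anchor(essay: str, anchor: str, position: int) -> str:
    """Attack 4: overwrite the word at `position` with the anchor."""
    words = essay.split()
    if 0 <= position < len(words):  # guard against essays shorter than `position`
        words[position] = anchor
    return ' '.join(words)


def insert_anchor(essay: str, anchor: str, rng=random) -> str:
    """Attack 5: insert the anchor at a random location."""
    words = essay.split()
    words.insert(rng.randint(0, len(words)), anchor)
    return ' '.join(words)


essay = "books should stay on the shelves"
print(append_anchor(essay, "library"))         # books should stay on the shelves library
print(substitute_anchor(essay, "library", 0))  # library should stay on the shelves
```

The bounds guard in `substitute_anchor` also avoids the IndexError the original loop would hit on an essay with fewer words than `i - 591`.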
# Heat Maps & Market Maps

We'll build off our use of `bqplot` from last lecture, where we made a dashboard with randomly sampled data, to make interactive dashboards out of "real" data using the `UFO Dataset`. We'll also look at the `Market Map` marks in `bqplot` for another representation of mappable data in a dashboard.

Let's import our usual stuff:

```
import pandas as pd
import bqplot
import numpy as np
import traitlets
import ipywidgets
import matplotlib.pyplot as plt
#%matplotlib inline
```

## Review from last time

Last time we generated random data in 3D. We started by making a 3D dataset:

```
data3d = np.random.random((10, 10, 20))
data3d.shape  # x, y, z
```

Recall we selected specific x/y indices to plot. For example, if x = y = 0:

```
data3d[0,0,:]
data3d[0,0,:].mean()
```

Now that we've decided how our label will look, what about our heat map? We know it expects 2D data as input. So we'll do the same thing -- take a mean -- but over the whole array:

```
data3d.mean(axis=2)
data3d.mean(axis=2).shape
```

Let's build up our dashboard again; I'll copy-paste into Slack/Zoom:

```
mySelectedLabel = ipywidgets.Label()  # start with our label
```

So, all we need to do to begin with is add this sort of mean into our observation function:

```
# first, just print out what is changing, what is selected
# only support 1 selected grid
def on_selected(change):
    if len(change['owner'].selected) == 1:
        #print(change['owner'].selected[0])
        i, j = change['owner'].selected[0]
        v = data3d[i,j,:].mean()  # CHANGE HERE
        mySelectedLabel.value = 'Mean Data Value = ' + str(v)

# 1. Data -- now 3d

# 2. Scale - color scale
col_sc = bqplot.ColorScale(scheme="Reds")
# this is because the "bins" are just bins -- their order is NOT numerically important
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()

# 3. Axis -- for colors, the axis is a colorbar!
ax_col = bqplot.ColorAxis(scale=col_sc, orientation='vertical', side='right')
ax_x = bqplot.Axis(scale=x_sc)  # same x/y axes we had before
ax_y = bqplot.Axis(scale=y_sc, orientation='vertical')

# 4. Mark -- heatmap -- CHANGE HERE
heat_map = bqplot.GridHeatMap(color=data3d.mean(axis=2),
                              scales={'color': col_sc, 'row': y_sc, 'column': x_sc},
                              interactions={'click': 'select'},
                              anchor_style={'fill': 'blue'},
                              selected_style={'opacity': 1.0},
                              unselected_style={'opacity': 0.8})

# 5. Interactions -- going to be built into the GridHeatMap mark (how things *look* when selection happens)
# BUT I'm going to define what happens when the interaction takes place (something is selected)
heat_map.observe(on_selected, 'selected')

# Finally, a figure!
fig = bqplot.Figure(marks=[heat_map], axes=[ax_col, ax_x, ax_y])  # have to add this axis to my figure object!
#fig

# combine the widget & figure and display both at the same time!
myDashboard = ipywidgets.VBox([mySelectedLabel, fig])
myDashboard  # show the dashboard
```

Ok, so we have the first few components of our dashboard with our new 3D dataset. Let's work on the histogram. We'll put this to the right of our label + heatmap, but first things first, the histogram:

```
# 1. Data -- what is the data for the histogram
i, j = 0, 0  # just as an example
data3d[i,j]  # 20 elements
```

This is the data that we want to feed into our histogram so that if we select on i,j with our heatmap it will show us the distribution of values along the 3rd dimension.

```
# 2. Scales -- linear for a histogram of numerical data
x_sch = bqplot.LinearScale()
y_sch = bqplot.LinearScale()

# 3. Axis
x_axh = bqplot.Axis(scale=x_sch, label='Value of 3rd axis')
y_axh = bqplot.Axis(scale=y_sch, orientation='vertical', label='Frequency')
```

Marks will be the `bqplot.Hist` mark for histograms:

```
bqplot.Hist?
hist = bqplot.Hist(sample=data3d[i,j,:],  # note: we are "hard coding" the x/y indices as i,j
                   normalized=False,      # normalized=False means we get counts in each bin
                   scales={'sample': x_sch, 'count': y_sch},  # sample is data values, count is frequency
                   bins=5)                # number of bins
```

Note here that we specified this plot in a different way than the `GridHeatMap` and `Scatter` -- each type of `bqplot` plot has different parameters associated with the type of plot we are using. Let's combine this as a figure and take a look!

```
figh = bqplot.Figure(marks=[hist], axes=[x_axh, y_axh])
figh  # yours might look different because you have different random numbers!
```

Let's pause here and think about how to link our histogram's i,j with our selections on the heatmap. First, what values of the histogram can we update? Let's check:

```
hist.keys
hist.sample
```

Hey! Here is where our data values are stored! Just as we observe changes in our heat map and update our ipywidget label's value, we want to also update this sample data.
Let's update our `on_selected` function to reflect this:

```
def on_selected(change):
    if len(change['owner'].selected) == 1:  # only 1 selected
        i, j = change['owner'].selected[0]  # grab the x/y coordinates
        v = data3d[i,j].mean()              # grab data value at x/y index and mean along z
        mySelectedLabel.value = 'Mean Data Value = ' + str(v)  # set our label
        # NOW ALSO: update our histogram
        hist.sample = data3d[i,j,:]
```

We don't have to go through the exercise of rebuilding our heatmap and histogram in general, but let's do it for the sake of completeness and to avoid accidentally re-linking things we shouldn't:

#1 heatmap:

```
# (1) Scales: x/y, colors
col_sc = bqplot.ColorScale(scheme="Reds")
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()

# (2) Axis: x/y, colors
c_ax = bqplot.ColorAxis(scale=col_sc, orientation='vertical', side='right')
x_ax = bqplot.Axis(scale=x_sc)
y_ax = bqplot.Axis(scale=y_sc, orientation='vertical')

# (3) Marks: heatmap
heat_map = bqplot.GridHeatMap(color=data3d.mean(axis=2),
                              scales={'color': col_sc, 'row': y_sc, 'column': x_sc},
                              interactions={'click': 'select'},   # make interactive on click of each box
                              anchor_style={'fill': 'blue'},      # to make our selection blue
                              selected_style={'opacity': 1.0},    # make 100% opaque if box is selected
                              unselected_style={'opacity': 0.8})  # make a little see-through if not

# (4) Link selection on heatmap to other things
heat_map.observe(on_selected, 'selected')

# (5) Paint heatmap canvas, don't display yet:
fig_heatmap = bqplot.Figure(marks=[heat_map], axes=[c_ax, y_ax, x_ax])
```

#2 histogram:

```
# (1) scales: x/y, linear
x_sch = bqplot.LinearScale()  # range of z-axis data
y_sch = bqplot.LinearScale()  # frequency of z-axis data in bins

# (2) axis: x/y
x_axh = bqplot.Axis(scale=x_sch, label='Value of 3rd axis')
y_axh = bqplot.Axis(scale=y_sch, orientation='vertical', label='Frequency')

# (3) Marks: histogram - start with just 0,0 in i/j -- can use other placeholders
hist = bqplot.Hist(sample=data3d[0,0,:],
                   normalized=False,  # normalized=False means we get counts in each bin
                   scales={'sample': x_sch, 'count': y_sch},  # sample is data values, count is frequency
                   bins=5)            # number of bins

# (4) NO LINKING ON HISTOGRAM SIDE

# (5) Paint histogram canvas, don't display yet
fig_hist = bqplot.Figure(marks=[hist], axes=[x_axh, y_axh])
```

Create dashboard layout and display:

```
# side by side figures
figures = ipywidgets.HBox([fig_heatmap, fig_hist])
# label on top
myDashboard = ipywidgets.VBox([mySelectedLabel, figures])
myDashboard
```

Ok, close, but it's all smooshed! We can play with the layout of our plots before we display. To do this we use some more CSS-like styling options, in particular `layout`:

```
# mess with figure layout:
fig_heatmap.layout.min_width = '500px'  # feel free to change for your screen
fig_hist.layout.min_width = '500px'

# side by side figures
figures = ipywidgets.HBox([fig_heatmap, fig_hist])
# label on top
myDashboard = ipywidgets.VBox([mySelectedLabel, figures])
myDashboard
```

Note that update was "back-reactive" in that it changed the figure layout above as well! Super sweet!

#### Further complications: linking in different directions

We can also apply some other links to further enhance our dashboard. One that we've played with before is allowing the user to select the number of bins of a histogram. There are a few ways to do this, but one "easier" way is to just link the histogram's `bins` with the value of a bins slider. Recall that `bins` was another key listed in `hist.keys`:

```
hist.keys
hist.bins = 5  # this changes the bins of our histogram above in a back-reactive way -- traitlets magic!
```

Let's add a little integer slider to allow our user to select the number of bins for the histogram:

```
bins_slider = ipywidgets.IntSlider(value=5, min=1, max=data3d.shape[2])  # don't make more bins than data points!
```

A reminder of what this looks like:

```
bins_slider
```

We can use `link` or `jslink` to link the value of this slider to our histogram's number of bins:

```
ipywidgets.jslink((bins_slider, 'value'), (hist, 'bins'))
```

While this change is "back-reactive", let's redo our figure layout so we can see everything a bit better:

```
# mess with figure layout:
fig_heatmap.layout.min_width = '500px'  # feel free to change for your screen
fig_hist.layout.min_width = '500px'

# side by side figures
figures = ipywidgets.HBox([fig_heatmap, fig_hist])
# label to the left, bins slider to the right
controls = ipywidgets.HBox([mySelectedLabel, bins_slider])
# combined
myDashboard = ipywidgets.VBox([controls, figures])
myDashboard
```

## Dashboarding with "real" data using the UFO dataset

Let's read in the UFO dataset:

```
ufos = pd.read_csv("/Users/jillnaiman/Downloads/ufo-scrubbed-geocoded-time-standardized-00.csv",
                   names=["date", "city", "state", "country",
                          "shape", "duration_seconds", "duration",
                          "comment", "report_date",
                          "latitude", "longitude"],
                   parse_dates=["date", "report_date"])

# or from the web (but takes longer):
# ufos = pd.read_csv("https://uiuc-ischool-dataviz.github.io/spring2019online/week04/data/ufo-scrubbed-geocoded-time-standardized-00.csv",
#                    names=["date", "city", "state", "country",
#                           "shape", "duration_seconds", "duration",
#                           "comment", "report_date",
#                           "latitude", "longitude"],
#                    parse_dates=["date", "report_date"])
```

### Aside: downsampling

We have covered downsampling before, but we will repeat it here in case folks have slower computers and don't want to use the full dataset while in class. We can remind ourselves of how many entries are in this dataset:

```
len(ufos)
```

80,000 entries is a lot! So, to speed up our interactivity, we can randomly sample this dataset for plotting purposes.
Let's downsample to 1000 samples:

```
nsamples = 1000
#nsamples = 5000  # if you want a larger sample
downSampleMask = np.random.choice(range(len(ufos)-1), nsamples, replace=False)
downSampleMask
# so, the downsample mask is now a list of random indices into
# the UFO dataset. Yours will not be the same because we have not set a seed.
```

Let's create a subset of our data with the `.loc` function:

```
ufosDS = ufos.loc[downSampleMask]
len(ufosDS)  # so much shorter
```

We can also see that this is saved as a dataframe:

```
ufosDS
```

Let's make a super quick scatter plot to remind ourselves what this looks like:

```
# Set up x/y scales
x_sc = bqplot.LinearScale()
y_sc = bqplot.LinearScale()

# Set up axes
x_ax = bqplot.Axis(scale=x_sc, label='Longitude')
y_ax = bqplot.Axis(scale=y_sc, orientation='vertical', label='Latitude')

# (1) set up marks
scatters = bqplot.Scatter(x=ufosDS['longitude'],
                          y=ufosDS['latitude'],
                          scales={'x': x_sc, 'y': y_sc})

fig = bqplot.Figure(marks=[scatters], axes=[x_ax, y_ax])
fig
```

Note I haven't added in colors or interactions. Let's at least add some colors in:

```
# let's make a super quick scatter plot to remind ourselves what this looks like:
x_sc = bqplot.LinearScale()
y_sc = bqplot.LinearScale()
x_ax = bqplot.Axis(scale=x_sc, label='Longitude')
y_ax = bqplot.Axis(scale=y_sc, orientation='vertical', label='Latitude')

# Let's add in a color scale
c_sc = bqplot.ColorScale()  # color scale
# color axes:
c_ax = bqplot.ColorAxis(scale=c_sc, label='Duration in sec', orientation='vertical', side='right')

# now replot:
scatters = bqplot.Scatter(x=ufosDS['longitude'],
                          y=ufosDS['latitude'],
                          color=ufosDS['duration_seconds'],
                          scales={'x': x_sc, 'y': y_sc, 'color': c_sc})

fig = bqplot.Figure(marks=[scatters], axes=[x_ax, y_ax, c_ax])
fig
```

You'll note that this is a pretty muted color map.
This is because we are coloring by duration, and if you recall there is a *huge* range in durations:

```
ufosDS['duration_seconds'].min(), ufosDS['duration_seconds'].max()
```

To account for this, let's take the log-base-10 of the duration when we plot. We should make sure we note this on our color axis (color bar) label:

```
# quick scatter plot, now with log-scaled colors:
x_sc = bqplot.LinearScale()
y_sc = bqplot.LinearScale()
x_ax = bqplot.Axis(scale=x_sc, label='Longitude')
y_ax = bqplot.Axis(scale=y_sc, orientation='vertical', label='Latitude')

# (2) recall we can also color by things like duration
c_sc = bqplot.ColorScale()  # color scale
# updated color axis with log scaling
c_ax = bqplot.ColorAxis(scale=c_sc, label='log(sec)', orientation='vertical', side='right')

scatters = bqplot.Scatter(x=ufosDS['longitude'],
                          y=ufosDS['latitude'],
                          color=np.log10(ufosDS['duration_seconds']),  # here we take log, base 10
                          scales={'x': x_sc, 'y': y_sc, 'color': c_sc})

fig = bqplot.Figure(marks=[scatters], axes=[x_ax, y_ax, c_ax])
fig
```

## Heatmap Dashboard with our UFO dataset

Now we are going to use our heatmap idea to plot this data again. Note this will smear out a lot of the nice map structure we see above, since we will be binning in lat/long. Don't worry! We'll talk about making maps in the 2nd part of class.

What should we color by? Let's use duration again. To get this to work with our heatmap, we're going to have to do some rebinning. Right now our data is all in one long list; we need to rebin it into a 2D histogram where the x axis is longitude and y is latitude. There are a few ways to do this. We'll use numpy to do our binning and use the result as input into `bqplot.GridHeatMap`. Before that, we can get a sense of what we think things will look like using matplotlib's `hist2d`.
We'll use the sighting duration as *weights* in our histogram -- so bins that have several long sightings will be counted as significant, as well as bins that have multiple short sightings. We'll skip down to the code to do the histogramming, but here is an aside with more details if you'd like to look at it:

### ASIDE

```
plt.hist2d(ufos['longitude'], ufos['latitude'],
           weights=ufos['duration_seconds'],
           bins=20, cmap='RdPu')
cb = plt.colorbar()
cb.set_label('counts in bin')
```

Note that here I am using the whole UFO dataset again, since we are rebinning anyway. Feel free to use `ufosDS` if it works better on your computer.

Again, we know that the duration should be log scaled, and we can do that with the `SymLogNorm` color scale in matplotlib if we want:

```
import matplotlib.colors as mpl_colors

plt.hist2d(ufos['longitude'], ufos['latitude'],
           weights=ufos['duration_seconds'],
           bins=20, cmap='RdPu',
           norm=mpl_colors.SymLogNorm(10))  # ignoring the warning (10 -> e)
cb = plt.colorbar()
cb.set_label('counts in bin')
```

Now this is starting to look a bit more like our scatter plot, but we can more easily make out areas of long duration (like in the US).

Ok, we want to incorporate interactivity, so let's use the `bqplot` engine plus our ideas of the heatmap marks from last time to create our own clickable map. Let's use `numpy`'s 2d histogram function to do the binning for us:

```
# ***START WITH 10 EACH***
nlong = 20
nlat = 20

# (1)
hist2d, long_edges, lat_edges = np.histogram2d(ufos['longitude'], ufos['latitude'],
                                               weights=ufos['duration_seconds'],
                                               bins=[nlong, nlat])
# this returns the TOTAL duration of ufo events in each bin

# Let's take a quick look at this data
hist2d
hist2d.max(), hist2d.min()  # a pretty big range!
```

Let's take a quick look at this with `imshow` in `matplotlib`:

```
plt.imshow(hist2d, cmap='RdPu', norm=mpl_colors.SymLogNorm(10))
```

Note that the x/y labels are just the bin indices.
But even so, we can see that this is rotated relative to what we actually want to plot! Different methods of histogramming will give you different shaped outputs. Worse still, depending on what viz engine you're using, it expects different orientations of the data going in! My suggestion is to experiment and make sure your data is in the correct orientation by plotting it a few times. For `bqplot`, we actually want our orientation to be *upside down*, which we can get by taking the transpose of `hist2d`:

```
plt.imshow(hist2d.T, cmap='RdPu', norm=mpl_colors.SymLogNorm(10))
```

Ok, let's make our histogramming more complex. As an aside: we want to treat the histogram as a probability instead of a total weighted count:

```
hist2d, long_edges, lat_edges = np.histogram2d(ufos['longitude'], ufos['latitude'],
                                               weights=ufos['duration_seconds'],
                                               density=True,
                                               bins=[nlong, nlat])
hist2d.max(), hist2d.min()
```

What are the shapes of the different outputs here?

```
hist2d.shape, long_edges.shape, lat_edges.shape
```

Note that the long/lat edge arrays each have one more element than the corresponding histogram dimension. This is because they are indeed edges.
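The orientation issue above is easy to check on a tiny synthetic dataset. A minimal sketch (the point coordinates here are made up for illustration): `np.histogram2d` returns an array whose *first* axis follows the first input (our x/longitude), while `imshow` and `GridHeatMap` treat the first axis as rows (y), hence the transpose.

```python
import numpy as np

# three points: two share the same (x, y) bin, so that bin's count is 2
lons = np.array([0.5, 0.5, 2.5])  # x values
lats = np.array([0.5, 0.5, 1.5])  # y values

h, xe, ye = np.histogram2d(lons, lats, bins=[3, 2], range=[[0, 3], [0, 2]])

# shape is (nx, ny): first axis follows x (longitude)
print(h.shape)     # (3, 2)
print(h[0, 0])     # 2.0 -- x-bin 0, y-bin 0 holds the two co-located points

# plotting engines index [row, column] = [y, x], so transpose first
h_plot = h.T
print(h_plot.shape)  # (2, 3)
```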
To get bin centers, which is what we want for plotting, we can do:

```
long_centers = (long_edges[:-1] + long_edges[1:]) / 2
long_centers
lat_centers = (lat_edges[:-1] + lat_edges[1:]) / 2
lat_centers
```

We might want to control where our bins are; we can do this by specifying the bin edges ourselves:

```
long_bins = np.linspace(-150, 150, nlong+1)
lat_bins = np.linspace(-40, 70, nlat+1)
print(long_bins, long_bins.shape)
print(lat_bins, lat_bins.shape)
```

Let's take these bins as our inputs and regenerate our histogram:

```
hist2d, long_edges, lat_edges = np.histogram2d(ufos['longitude'], ufos['latitude'],
                                               weights=ufos['duration_seconds'],
                                               bins=[long_bins, lat_bins])
```

And grab our centers of lat and long for plotting as well (note: if you're only following part of the inline programming, that is totally fine!):

```
long_centers = (long_edges[:-1] + long_edges[1:]) / 2
lat_centers = (lat_edges[:-1] + lat_edges[1:]) / 2
```

We know that we want to input this into `bqplot`'s grid heatmap, so we need to take the transpose:

```
hist2d = hist2d.T
```

What is the range of values in our plot?

```
hist2d.min(), hist2d.max(), hist2d[hist2d>0].min()  # this is the *total duration* of sightings in a bin
```

We still have a big range in count values because we weighted by the non-log duration above. So we'll instead take the log of our output histogram. For aesthetic value, we want areas with no counts (like the ocean) to show up as blank. We can do that with a little trick -- setting the 0 values to `NaN`. We will then take the log for color scaling:

```
np.log10(hist2d).min()
```

The above gives us a warning, and the resulting infinities can mess up our color maps.
So we'll be tricky:

```
hist2d[hist2d <= 0] = np.nan  # set zeros to NaNs
# then take log
hist2d = np.log10(hist2d)
#hist2d[0:10]
```

### END ASIDE

In the interest of time, we won't be doing the binning in class, but here is a function that will do this for us:

```
def generate_histogram_from_lat_long(ufos, nlong=20, nlat=20,
                                     longmin=-150, longmax=150,
                                     latmin=-40, latmax=70,
                                     takeLog=True):
    long_bins = np.linspace(longmin, longmax, nlong+1)
    lat_bins = np.linspace(latmin, latmax, nlat+1)
    hist2d, long_edges, lat_edges = np.histogram2d(ufos['longitude'], ufos['latitude'],
                                                   weights=ufos['duration_seconds'],
                                                   bins=[long_bins, lat_bins])
    hist2d = hist2d.T
    if takeLog:
        hist2d[hist2d <= 0] = np.nan  # set zeros to NaNs
        # then take log
        hist2d = np.log10(hist2d)
    long_centers = (long_edges[:-1] + long_edges[1:]) / 2
    lat_centers = (lat_edges[:-1] + lat_edges[1:]) / 2
    return hist2d, long_centers, lat_centers, long_edges, lat_edges
```

Now we'll just use this!

```
hist2d, long_centers, lat_centers, long_edges, lat_edges = generate_histogram_from_lat_long(ufos)
```

Now that we have all that fancy binning out of the way, let's proceed as normal:

```
# (1) add scales - colors, x & y
col_sc = bqplot.ColorScale(scheme="RdPu", min=np.nanmin(hist2d), max=np.nanmax(hist2d))
x_sc = bqplot.LinearScale()
y_sc = bqplot.LinearScale()

# (2) create axes - for colors, x & y
c_ax = bqplot.ColorAxis(scale=col_sc, orientation='vertical', side='right')
x_ax = bqplot.Axis(scale=x_sc, label='Longitude')
y_ax = bqplot.Axis(scale=y_sc, orientation='vertical', label='Latitude')

# (3) Marks
heat_map = bqplot.GridHeatMap(color=hist2d,
                              row=lat_centers, column=long_centers,
                              scales={'color': col_sc, 'row': y_sc, 'column': x_sc},
                              interactions={'click': 'select'},
                              anchor_style={'fill': 'blue'},
                              selected_style={'opacity': 1.0},
                              unselected_style={'opacity': 1.0})

# (4) interactivity - none yet

# (5) put it all together in a figure
fig = bqplot.Figure(marks=[heat_map], axes=[c_ax, y_ax, x_ax])
fig
```

Let's start building up our dashboard like before. One easy thing we can do is add a label:

```
# (1) add scales - colors, x & y
col_sc = bqplot.ColorScale(scheme="RdPu", min=np.nanmin(hist2d), max=np.nanmax(hist2d))
x_sc = bqplot.LinearScale()
y_sc = bqplot.LinearScale()

# (2) create axes - for colors, x & y
c_ax = bqplot.ColorAxis(scale=col_sc, orientation='vertical', side='right')
x_ax = bqplot.Axis(scale=x_sc, label='Longitude')
y_ax = bqplot.Axis(scale=y_sc, orientation='vertical', label='Latitude')

# (3) Marks
heat_map = bqplot.GridHeatMap(color=hist2d,
                              row=lat_centers, column=long_centers,
                              scales={'color': col_sc, 'row': y_sc, 'column': x_sc},
                              interactions={'click': 'select'},
                              anchor_style={'fill': 'blue'},
                              selected_style={'opacity': 1.0},
                              unselected_style={'opacity': 1.0})

# (4) interactivity - label
mySelectedLabel = ipywidgets.Label()

def get_data_value(change):
    if len(change['owner'].selected) == 1:  # only 1 selected
        i, j = change['owner'].selected[0]
        v = hist2d[i,j]  # grab data value
        mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v)  # set our label

# make sure we check out
heat_map.observe(get_data_value, 'selected')

# (5) put it all together in a figure
fig = bqplot.Figure(marks=[heat_map], axes=[c_ax, y_ax, x_ax])
myDashboard = ipywidgets.VBox([mySelectedLabel, fig])
myDashboard
```

Let's also include information about the duration as a function of date in a particular bin on another plot -- a scatter plot this time. Let's first start by making this plot alone before putting it into our dashboard.

```
import datetime as dt  # we'll use this to format our dates all fancy like
```

#1: Now let's make our scales.
We'll start with a new `bqplot` scale called `DateScale`:

```
x_scl = bqplot.DateScale(min=dt.datetime(1950,1,1), max=dt.datetime(2020,1,1))  # note: for dates on x-axis
```

Let's plot the duration on a log scale, since we know that's probably what will look best given the range of durations:

```
y_scl = bqplot.LogScale()
```

#2: Our axes:

```
ax_xcl = bqplot.Axis(label='Date', scale=x_scl)
ax_ycl = bqplot.Axis(label='Duration in Sec', scale=y_scl, orientation='vertical', side='left')
```

#3: our marks, in this case a scatter plot

Thinking ahead, we know that we want to select a 2d bin from our heatmap and then draw scatters for our scatter plot. Let's write things in this way:

```
i, j = 19, 0  # picking an x/y bin -- this is one I know has a lot of data!
```

Let's specify the range of longs & lats for this selection of x/y bin:

```
longs = [long_edges[j], long_edges[j+1]]  # min/max longitude
lats = [lat_edges[i], lat_edges[i+1]]     # min/max latitude
```

Let's *mask* out a subset of the UFO dataset with *only* these ranges of longitude and latitude:

```
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude'] <= lats[1]) &
                (ufos['longitude'] >= longs[0]) & (ufos['longitude'] <= longs[1]) )

# we can see this selects for the upper right point of our heatmap
lats, longs, ufos['latitude'][region_mask]
```

We won't add any interactivity to this plot -- the interactivity will be driven by our heatmap, so all that is left to do is add in marks:

#4: Marks

```
# let's plot the durations as a function of year there
duration_scatt = bqplot.Scatter(x=ufos['date'][region_mask],
                                y=ufos['duration_seconds'][region_mask],
                                scales={'x': x_scl, 'y': y_scl})
```

#5: Put it all together and take a look!

```
fig_dur = bqplot.Figure(marks=[duration_scatt], axes=[ax_xcl, ax_ycl])
fig_dur
```

### Scatter plot + label driven by heatmap dashboard

Let's put together our heatmap + label + scatter plot as a dashboard.
I'll recopy what we had before into some cells we can put together: ``` # (I) CREATE LABEL mySelectedLabel = ipywidgets.Label() # (II) HEAT MAP # (1) add scales - colors, x & y col_sc = bqplot.ColorScale(scheme="RdPu", min=np.nanmin(hist2d), max=np.nanmax(hist2d)) x_sc = bqplot.LinearScale() y_sc = bqplot.LinearScale() # (2) create axis - for colors, x & y c_ax = bqplot.ColorAxis(scale = col_sc, orientation = 'vertical', side = 'right') x_ax = bqplot.Axis(scale = x_sc, label='Longitude') y_ax = bqplot.Axis(scale = y_sc, orientation = 'vertical', label = 'Latitude') # (3) Marks heat_map = bqplot.GridHeatMap(color = hist2d, row = lat_centers, column = long_centers, scales = {'color': col_sc, 'row': y_sc, 'column': x_sc}, interactions = {'click': 'select'}, anchor_style = {'fill':'blue'}, selected_style = {'opacity': 1.0}, unselected_style = {'opacity': 1.0}) # skipping 4 & 5 for now # (III) SCATTER PLOT # (1) scales x_scl = bqplot.DateScale(min=dt.datetime(1950,1,1),max=dt.datetime(2020,1,1)) # note: for dates on x-axis y_scl = bqplot.LogScale() # (2) Axis ax_xcl = bqplot.Axis(label='Date', scale=x_scl) ax_ycl = bqplot.Axis(label='Duration in Sec', scale=y_scl, orientation='vertical', side='left') # (3) Marks # NOTE: we'll start with some default value selected i,j = 19,0 # picking an x/y bin -- this is one I know has a lot of data! 
longs = [long_edges[j], long_edges[j+1]] # min/max longitude lats = [lat_edges[i],lat_edges[i+1]] # min/max latitude # recompute the mask here so this cell doesn't rely on an earlier definition region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\ (ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) ) # let's plot the durations as a function of year there duration_scatt = bqplot.Scatter(x = ufos['date'][region_mask], y = ufos['duration_seconds'][region_mask], scales={'x':x_scl, 'y':y_scl}) # skipping 4 & 5 for now # (IV) LINKING TOGETHER DASHBOARD WITH INTERACTIVITY def get_data_value(change): if len(change['owner'].selected) == 1: #only 1 selected i,j = change['owner'].selected[0] v = hist2d[i,j] # grab data value mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v) # set our label # now: for the scatter plot -- THIS PART IS NEW longs = [long_edges[j], long_edges[j+1]] lats = [lat_edges[i],lat_edges[i+1]] region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\ (ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) ) duration_scatt.x = ufos['date'][region_mask] duration_scatt.y = ufos['duration_seconds'][region_mask] heat_map.observe(get_data_value, 'selected') # (5) create figures fig_heatmap = bqplot.Figure(marks = [heat_map], axes = [c_ax, y_ax, x_ax]) fig_dur = bqplot.Figure(marks = [duration_scatt], axes = [ax_xcl, ax_ycl]) # since we know from last time we wanna make our figs a bit bigger: fig_heatmap.layout.min_width='500px' fig_dur.layout.min_width='500px' myDashboard = ipywidgets.VBox([mySelectedLabel, ipywidgets.HBox([fig_heatmap,fig_dur])]) myDashboard ``` Note that when I select a deep purple place, my scatter plot is very laggy; this makes me think we should do this with a histogram/bar-type plot.
So let's try that below, by augmenting our dashboard: ``` # Below hasn't changed: # (I) CREATE LABEL mySelectedLabel = ipywidgets.Label() # (II) HEAT MAP # (1) add scales - colors, x & y col_sc = bqplot.ColorScale(scheme="RdPu", min=np.nanmin(hist2d), max=np.nanmax(hist2d)) x_sc = bqplot.LinearScale() y_sc = bqplot.LinearScale() # (2) create axis - for colors, x & y c_ax = bqplot.ColorAxis(scale = col_sc, orientation = 'vertical', side = 'right') x_ax = bqplot.Axis(scale = x_sc, label='Longitude') y_ax = bqplot.Axis(scale = y_sc, orientation = 'vertical', label = 'Latitude') # (3) Marks heat_map = bqplot.GridHeatMap(color = hist2d, row = lat_centers, column = long_centers, scales = {'color': col_sc, 'row': y_sc, 'column': x_sc}, interactions = {'click': 'select'}, anchor_style = {'fill':'blue'}, selected_style = {'opacity': 1.0}, unselected_style = {'opacity': 1.0}) # skipping 4 & 5 for now ``` Let's use a `Bars` mark from `bqplot` to plot duration as a function of time: ``` # (III) BAR PLOT # (1-2) scales & axes in the usual way x_scl = bqplot.LinearScale() # note we are back to linear scales y_scl = bqplot.LinearScale() ax_xcl = bqplot.Axis(label='Date', scale=x_scl) ax_ycl = bqplot.Axis(label='Total duration in Sec', scale=y_scl, orientation='vertical', side='left') # create the data mask for each binned region like we did before: i,j = 19,0 longs = [long_edges[j], long_edges[j+1]] lats = [lat_edges[i],lat_edges[i+1]] region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\ (ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) ) ``` Here, we'll use `numpy`'s histogram function (this time in 1D) to grab all of the *years* over which the durations occur.
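Since each sighting will be weighted by its duration, each bar ends up summing seconds rather than counting rows. A tiny self-contained check of `np.histogram`'s `weights` argument (toy numbers, not the UFO data):

```python
import numpy as np

# toy stand-ins for sighting years and durations
years = np.array([2000, 2000, 2001, 2003])
durations = np.array([10.0, 20.0, 5.0, 1.0])

# with weights=, each bin's value is the sum of the weights that fall in it
totals, edges = np.histogram(years, weights=durations, bins=2)
# edges: [2000., 2001.5, 2003.]; totals: [35., 1.]
```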
We'll do a count of UFO sightings per binned year, weighted by the duration of the sightings: ``` ufos['date'] ``` For nice formatting purposes, let's create a column that just has years: ``` ufos['year'] = ufos['date'].dt.year ufos['year'] # Histogram, weight by duration, 10 bins in years: dur, dur_edges = np.histogram(ufos['year'][region_mask], weights=ufos['duration_seconds'][region_mask], bins=10) ``` Get bin centers: ``` dur_centers = (dur_edges[:-1] + dur_edges[1:]) / 2 ``` Finally, create the marks for the bar plot: ``` duration_hist = bqplot.Bars(x=dur_centers, y=dur, scales={'x':x_scl, 'y':y_scl}) ``` #4: Now we can finally add some interactivity. We have to be careful not to try to plot bars when there is no data in our selection. ``` def get_data_value(change): if len(change['owner'].selected) == 1: #only 1 selected i,j = change['owner'].selected[0] v = hist2d[i,j] # grab data value mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v) # set our label # Histogram: longs = [long_edges[j], long_edges[j+1]] lats = [lat_edges[i],lat_edges[i+1]] region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\ (ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) ) if len(ufos['year'][region_mask]) > 0: # make sure points exist so there are no histogram errors!
dur, dur_edges = np.histogram(ufos['year'][region_mask], weights=ufos['duration_seconds'][region_mask], bins=10) dur_centers = (dur_edges[:-1] + dur_edges[1:]) / 2 duration_hist.x = dur_centers duration_hist.y = dur # make sure we connect to heatmap heat_map.observe(get_data_value, 'selected') ``` #5: Put the figures together: ``` fig_heatmap = bqplot.Figure(marks = [heat_map], axes = [c_ax, y_ax, x_ax]) fig_dur = bqplot.Figure(marks = [duration_hist], axes = [ax_xcl, ax_ycl]) fig_heatmap.layout.min_width = '500px' fig_dur.layout.min_width = '500px' plots = ipywidgets.HBox([fig_heatmap,fig_dur]) myDashboard = ipywidgets.VBox([mySelectedLabel, plots]) myDashboard ``` So, this is much more responsive than what we had before, while still conveying much of the same information. Arguably, this is an even *clearer* representation of what we are interested in. Bonus things to think about: * how would you keep the same time range across all plots? * how would you plot multiple bar selections on the same set of axes? How would you highlight that in the heatmap plot? ### maybe hint for HW goes here... # This is probably as far as we will get today ## Market Maps with bqplot As we will discuss shortly, maps and their projections can be misleading. One way around this is to plot data in a "MarketMap" format. `bqplot` has such a mark we can make use of! In theory, we can read this data in with the `pandas.read_excel` function. In practice, it can be very slow, so we'll use the saved CSV linked on today's page. We will look at a dataset about surgeries performed in the United States over one year: ``` # IN THEORY: #!pip install xlrd # JPN, might have to run this # note: this is querying from the web! How neat is that??
#df = pd.read_excel('https://query.data.world/s/ivl45pdpubos6jpsii3djsjwm2pcjv', skiprows=5) # the above might take a while to load all the data df = pd.read_csv('/Users/jillnaiman/Downloads/market_map_data.csv') ``` Let's take a look at the top of this dataset: ``` df.head() ``` Let's also use some useful pandas functions; for example, we can check what types of data we are dealing with: ``` df.dtypes ``` Let's also look at some summary data; recall that while this will calculate the summary stats for all numerical columns, it won't always make sense: ``` df.describe() ``` For example, things like the "mean zipcode" are meaningless numbers. Let's explore our data further: for example, let's look at how many separate types of surgery are represented in this dataset: ``` df["DRG Definition"].unique().size ``` What about unique hospital (provider) names? ``` df["Provider Name"].unique().size ``` How many states are represented? ``` df["Provider State"].unique().size ``` How are these states coded? ``` df["Provider State"].unique() ``` Let's figure out what the most common surgeries are via how many folks are discharged after each type of surgery: ``` most_common = df.groupby("DRG Definition")["Total Discharges"].sum() most_common ``` ... but let's sort with the largest on top: ``` most_common = df.groupby("DRG Definition")["Total Discharges"].sum().sort_values(ascending=False) most_common ``` Let's look at only the top 5, for fun: ``` most_common[:5] ``` ... or we can look at just the names of the top 5: ``` most_common[:5].index.values ``` ### Cleaning the dataset for the MarketMap plot Here we are going to practice doing some fancy things to clean this data.
This will be good practice for when you run into other datasets "in the wild": Let's first create a little table of total discharges for each type of surgery & state: ``` total_discharges = df.groupby(["DRG Definition", "Provider State"])["Total Discharges"].sum() total_discharges ``` The above is not intuitive; let's prettify it: ``` total_discharges = df.groupby(["DRG Definition", "Provider State"])["Total Discharges"].sum().unstack() total_discharges # note we went from a list back into rows/columns ``` #### Aside: let's quickly check out the most frequent surgeries ``` # for our map, we are going to want to # normalize the discharges for each surgery # for each state by the total discharges across all # states for that particular type of surgery # let's add this to our total_discharges DF total_discharges["Total"] = total_discharges.sum(axis = 1) total_discharges["Total"].head() # just look at the first few # finally, let's check out the most often # performed surgery across all states # we can do this by sorting our DF by the total we just # calculated: total_discharges.sort_values(by = "Total", ascending=False, inplace = True) # now let's just look at the first few rows of our # sorted DF total_discharges.head() # so, from this we see that joint replacement # or reattachment of a lower extremity is # the most common surgery (in number of discharges) # followed by surgeries for sepsis and then heart failure # neat. We won't need these for plotting, so we can remove the # total column we just calculated del total_discharges["Total"] total_discharges.head() # now we see that we are back to just states & surgeries # *but* our sorting is still by the total that we # previously calculated. # spiffy! ``` #### End aside! Now, we have to explicitly import market map; for example, if we try: ``` bqplot.market_map ``` We get an error.
By default bqplot does not import all of its subpackages; we have to explicitly import market_map: ``` import bqplot.market_map # for access to market_map ``` Now we'll do our usual `bqplot` thing with scales, axes and marks, but with our new `market_map` mark: ``` # (1) scales x_sc, y_sc = bqplot.OrdinalScale(), bqplot.OrdinalScale() # note, just a different way to call things in Python c_sc = bqplot.ColorScale(scheme="Blues") ``` We only need a color axis: ``` c_ax = bqplot.ColorAxis(scale = c_sc, orientation = 'vertical') ``` What should we plot in color? Let's take a look at the total discharges for the most popular surgical procedure (the 0th index): ``` total_discharges.iloc[0].values, total_discharges.columns.values total_discharges.iloc[0].name # most popular surgery total_discharges.iloc[1].name, total_discharges.iloc[1].values # the 2nd most popular ``` Let's use MarketMap to plot the most popular surgery's numbers by state: ``` # (3) Marks... & fig? mmap = bqplot.market_map.MarketMap(color = total_discharges.iloc[0].values, names = total_discharges.columns.values, scales={'color':c_sc}, axes=[c_ax]) ``` Let's show our map: ``` mmap ``` A few things to note: you'll see that I didn't do the last step of making a figure! This is because MarketMap is a special kind of `bqplot` object that makes its own self-contained figure. Ok, so far so good, but again, we don't have any interactivity!
Let's add some: ``` # (1) scales x_sc, y_sc = bqplot.OrdinalScale(), bqplot.OrdinalScale() # note, just a different way to call things in Python c_sc = bqplot.ColorScale(scheme="Blues") # (2) axes c_ax = bqplot.ColorAxis(scale = c_sc, orientation = 'vertical') # (3/5) Marks/fig mmap = bqplot.market_map.MarketMap(color = total_discharges.iloc[0].values, names = total_discharges.columns.values, scales={'color':c_sc}, axes=[c_ax]) # (4) interactivity def show_data(change): print(change) mmap.observe(show_data, 'selected') mmap ``` So we see while this is a new kind of `bqplot` object, it has all the "changes" we've become accustomed to. Let's build up our interactivity: ``` # (1) scales x_sc, y_sc = bqplot.OrdinalScale(), bqplot.OrdinalScale() # note, just a different way to call things in Python c_sc = bqplot.ColorScale(scheme="Blues") # (2) axes c_ax = bqplot.ColorAxis(scale = c_sc, orientation = 'vertical') # (3/5) Marks/fig mmap = bqplot.market_map.MarketMap(color = total_discharges.iloc[0].values, names = total_discharges.columns.values, scales={'color':c_sc}, axes=[c_ax]) # (4) interactivity def show_data(change): print(change['owner'].selected) mmap.observe(show_data, 'selected') mmap ``` One thing we might want to do is sum up all of the discharges for all of the states selected. How do we map states and names? 
Let's print out a few more things: ``` # (1) scales x_sc, y_sc = bqplot.OrdinalScale(), bqplot.OrdinalScale() # note, just a different way to call things in Python c_sc = bqplot.ColorScale(scheme="Blues") # (2) axes c_ax = bqplot.ColorAxis(scale = c_sc, orientation = 'vertical') # (3/5) Marks/fig mmap = bqplot.market_map.MarketMap(color = total_discharges.iloc[0].values, names = total_discharges.columns.values, scales={'color':c_sc}, axes=[c_ax]) # (4) interactivity def show_data(change): print(change['owner'].selected) print(change['owner'].color) print(change['owner'].names) mmap.observe(show_data, 'selected') mmap ``` From the above we see we get the entire list of names *and* total discharges each time we click! So we can use these: ``` # (1) scales x_sc, y_sc = bqplot.OrdinalScale(), bqplot.OrdinalScale() # note, just a different way to call things in Python c_sc = bqplot.ColorScale(scheme="Blues") # (2) axes c_ax = bqplot.ColorAxis(scale = c_sc, orientation = 'vertical') # (3/5) Marks/fig mmap = bqplot.market_map.MarketMap(color = total_discharges.iloc[0].values, names = total_discharges.columns.values, scales={'color':c_sc}, axes=[c_ax]) # (4) interactivity myLabel = ipywidgets.Label() # show total number of discharges def show_data(change): v = 0 # sum up discharges for s in change['owner'].selected: # for all selected states #print(s) discharges_in_state = change['owner'].color[change['owner'].names == s] # color map value for this state v += discharges_in_state # add it up myLabel.value = 'Summed discharges for ' + total_discharges.iloc[0].name + ' = ' + str(int(v)) # the middle part just reminds us what name we are using mmap.observe(show_data, 'selected') myDashboard = ipywidgets.VBox([myLabel, mmap]) myDashboard ``` So, MarketMap is a nice way to display map values without having to deal with any projection stuff.
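One nice property of the summing callback above is that its core logic is plain `numpy` -- summing over selected states is just a masked sum over the `color` array, so we can sanity-check it without any widgets (toy numbers below, not the real discharge totals):

```python
import numpy as np

# toy stand-ins for the MarketMap's names and color arrays
names = np.array(['AK', 'AL', 'AR', 'AZ'])
color = np.array([100.0, 250.0, 80.0, 120.0])

selected = ['AL', 'AZ']  # what change['owner'].selected would hold
total = sum(float(color[names == s][0]) for s in selected)
# total is 250.0 + 120.0 = 370.0
```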
# Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. ``` %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt ``` ## Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! ``` data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() ``` ## Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model. ``` rides[:24*10].plot(x='dteday', y='cnt') ``` ### Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables.
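The idea behind dummy variables is one 0/1 column per category value, with exactly one 1 per row. A hand-rolled `numpy` sketch of the encoding (toy season values, not the real columns):

```python
import numpy as np

seasons = np.array([1, 3, 2, 1])      # a toy categorical column
categories = np.array([1, 2, 3, 4])   # its possible values

# compare every row against every category: 1 where they match, 0 elsewhere
one_hot = (seasons[:, None] == categories[None, :]).astype(int)
# e.g. the row for season 3 is [0, 0, 1, 0]
```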
This is simple to do with Pandas thanks to `get_dummies()`. ``` dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() ``` ### Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. ``` quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std ``` ### Splitting the data into training, testing, and validation sets We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. ``` # Save data for approximately the last 21 days test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] ``` We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). 
``` # Hold out the last 60 days or so of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] ``` ## Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.png" width=300px> The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*. > **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function. 2. Implement the forward pass in the `train` method. 3. 
Implement the backpropagation algorithm in the `train` method, including calculating the output error. 4. Implement the forward pass in the `run` method. ``` class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, (self.input_nodes, self.hidden_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) self.lr = learning_rate #### TODO: Set self.activation_function to your implemented sigmoid function #### # # Note: in Python, you can define a function with a lambda expression, # as shown below. self.activation_function = lambda x : 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation. ### If the lambda code above is not something you're familiar with, # You can uncomment out the following three lines and put your # implementation there instead. # #def sigmoid(x): # return 0 # Replace 0 with your sigmoid calculation here #self.activation_function = sigmoid def train(self, features, targets): ''' Train the network on batch of features and targets. Arguments --------- features: 2D array, each row is one data record, each column is a feature targets: 1D array of target values ''' n_records = features.shape[0] delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape) delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape) for X, y in zip(features, targets): #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer - Replace these values with your calculations. 
hidden_inputs = np.dot(X[None,:], self.weights_input_to_hidden) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with your calculations. final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer final_outputs = final_inputs # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error - Replace this value with your calculations. error = y - final_outputs # Output layer error is the difference between desired target and actual output. # TODO: Calculate the hidden layer's contribution to the error hidden_error = np.dot(error, self.weights_hidden_to_output.T) # TODO: Backpropagated error terms - Replace these values with your calculations. output_error_term = error aux = hidden_outputs * (1 - hidden_outputs) hidden_error_term = hidden_error * aux # Weight step (input to hidden) delta_weights_i_h += np.dot(X[:,None], hidden_error_term) # Weight step (hidden to output) delta_weights_h_o += np.dot(hidden_outputs.T, output_error_term) # TODO: Update the weights - Replace these values with your calculations. self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step def run(self, features): ''' Run a forward pass through the network with input features Arguments --------- features: 1D array of feature values ''' #### Implement the forward pass here #### # TODO: Hidden layer - replace these values with the appropriate calculations. 
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with the appropriate calculations. final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer final_outputs = final_inputs # signals from final output layer return final_outputs def MSE(y, Y): return np.mean((y-Y)**2) ``` ## Unit tests Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project. ``` import unittest inputs = np.array([[0.5, -0.2, 0.1]]) targets = np.array([[0.4]]) test_w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]]) test_w_h_o = np.array([[0.3], [-0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328], [-0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, -0.20185996], [0.39775194,
0.50074398], [-0.29887597, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) ``` ## Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. ### Choose the number of iterations This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase. ### Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data.
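That explosion is easy to see in one dimension: gradient descent on f(w) = w**2 multiplies w by (1 - 2*lr) each step, so the iterates only shrink when lr < 1. A toy sketch of the effect (not the project's network):

```python
def descend(lr, steps=25):
    """Gradient descent on f(w) = w**2, starting from w = 1."""
    w = 1.0
    for _ in range(steps):
        w -= lr * 2 * w  # the gradient of w**2 is 2*w
    return w

small, big = descend(0.1), descend(1.1)
# small step: |w| shrinks toward the minimum at 0
# too-large step: |w| blows up -- the 1-D version of exploding weights
```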
A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. ### Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. ``` import sys ### Set the hyperparameters here ### iterations = 5000 learning_rate = 0.3 hidden_nodes = 8 output_nodes = 1 # a single node for the regression output N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for ii in range(iterations): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt'] network.train(X, y) # Printing out the training progress train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values) val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values) sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) sys.stdout.flush() losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() _ = plt.ylim() ``` ## Check out your predictions Here, use the test data to view how well your network is modeling the data.
If something is completely wrong here, make sure each step in your network is implemented correctly. ``` fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features).T*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.loc[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) ``` ## OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric). Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? > **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter #### Your answer below The model does quite a good job of predicting the data in general. The point where it starts failing is from December 22nd onward; we could reasonably assume the data was collected in a place where Christmas is celebrated. I would expect that during Christmas vacations, bike rentals would go down significantly compared with any regular day of the year. The problem comes from the fact that holidays (we assume that this trend will repeat itself on other big holidays) are just a small fraction of the data from the whole year. Meaning that with only two years' worth of data for training, the neural network won't have enough examples to model that trend, while it can accurately predict the rest of the days (normal days).
# Facial Keypoint Detection This project will be all about defining and training a convolutional neural network to perform facial keypoint detection, and using computer vision techniques to transform images of faces. The first step in any challenge like this will be to load and visualize the data you'll be working with. Let's take a look at some examples of images and corresponding facial keypoints. <img src='images/key_pts_example.png' width="500" height="500"/> Facial keypoints (also called facial landmarks) are the small magenta dots shown on each of the faces in the image above. In each training and test image, there is a single face and **68 keypoints, with coordinates (x, y), for that face**. These keypoints mark important areas of the face: the eyes, corners of the mouth, the nose, etc. These keypoints are relevant for a variety of tasks, such as face filters, emotion recognition, pose recognition, and so on. Here they are, numbered, and you can see that specific ranges of points match different portions of the face. <img src='images/landmarks_numbered.jpg' width="300" height="300"/> --- ## Load and Visualize Data The first step in working with any dataset is to become familiar with your data; you'll need to load in the images of faces and their keypoints and visualize them! This set of image data has been extracted from the [YouTube Faces Dataset](https://www.cs.tau.ac.il/~wolf/ytfaces/), which includes videos of people in YouTube videos. These videos have been fed through some processing steps and turned into sets of image frames containing one face and the associated keypoints. #### Training and Testing Data This facial keypoints dataset consists of 5770 color images. All of these images are separated into either a training or a test set of data. * 3462 of these images are training images, for you to use as you create a model to predict keypoints. * 2308 are test images, which will be used to test the accuracy of your model. 
The information about the images and keypoints in this dataset is summarized in CSV files, which we can read in using `pandas`. Let's read the training CSV and get the annotations in an (N, 2) array where N is the number of keypoints and 2 is the dimension of the keypoint coordinates (x, y). --- First, before we do anything, we have to load in our image data. This data is stored in a zip file and in the below cell, we access it by its URL and unzip the data in a `/data/` directory that is separate from the workspace home directory. ``` # -- DO NOT CHANGE THIS CELL -- # !mkdir /data !wget -P /data/ https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5aea1b91_train-test-data/train-test-data.zip !unzip -n /data/train-test-data.zip -d /data # import the required libraries import glob import os import numpy as np import pandas as pd import copy import random import matplotlib.pyplot as plt import matplotlib.image as mpimg import torch import cv2 ``` Then, let's load in our training data and display some stats about that data to make sure it's been loaded in correctly! ``` key_pts_frame = pd.read_csv('/data/training_frames_keypoints.csv') n = 0 image_name = key_pts_frame.iloc[n, 0] key_pts = key_pts_frame.iloc[n, 1:].to_numpy() key_pts = key_pts.astype('float').reshape(-1, 2) print('Image name: ', image_name) print('Landmarks shape: ', key_pts.shape) print('First 4 key pts: {}'.format(key_pts[:4])) ``` ## Look at some images Below is a function `show_keypoints` that takes in an image and keypoints and displays them. As you look at this data, **note that these images are not all of the same size**, and neither are the faces! To eventually train a neural network on these images, we'll need to standardize their shape. 
``` def show_keypoints(image, key_pts, gt_pts=None): """Show image with keypoints""" plt.imshow(image) plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='.', c='m') # Display a few different types of images by changing the index n # select an image by index in our data frame plt.figure() show_keypoints(mpimg.imread(os.path.join('/data/training/', image_name)), key_pts) plt.show() ``` ## Dataset class and Transformations To prepare our data for training, we'll be using PyTorch's Dataset class. Much of this code is a modified version of what can be found in the [PyTorch data loading tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html). #### Dataset class ``torch.utils.data.Dataset`` is an abstract class representing a dataset. This class will allow us to load batches of image/keypoint data, and uniformly apply transformations to our data, such as rescaling and normalizing images for training a neural network. Your custom dataset should inherit ``Dataset`` and override the following methods: - ``__len__`` so that ``len(dataset)`` returns the size of the dataset. - ``__getitem__`` to support indexing so that ``dataset[i]`` can be used to get the i-th sample of image/keypoint data. Let's create a dataset class for our face keypoints dataset. We will read the CSV file in ``__init__`` but leave the reading of images to ``__getitem__``. This is memory efficient because all the images are not stored in the memory at once but read as required. A sample of our dataset will be a dictionary ``{'image': image, 'keypoints': key_pts}``. Our dataset will take an optional argument ``transform`` so that any required processing can be applied on the sample. We will see the usefulness of ``transform`` in the next section. 
``` from torch.utils.data import Dataset, DataLoader, TensorDataset class FacialKeypointsDataset(Dataset): """Face Landmarks dataset.""" def __init__(self, csv_file, root_dir, transform=None): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.key_pts_frame = pd.read_csv(csv_file) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.key_pts_frame) def __getitem__(self, idx): image_name = os.path.join(self.root_dir, self.key_pts_frame.iloc[idx, 0]) image = mpimg.imread(image_name) # if image has an alpha color channel, get rid of it if(image.shape[2] == 4): image = image[:,:,0:3] key_pts = self.key_pts_frame.iloc[idx, 1:].to_numpy() key_pts = key_pts.astype('float').reshape(-1, 2) sample = {'image': image, 'keypoints': key_pts} if self.transform: sample = self.transform(sample) return sample ``` Now that we've defined this class, let's instantiate the dataset and display some images. 
``` # Construct the dataset face_dataset = FacialKeypointsDataset(csv_file='/data/training_frames_keypoints.csv', root_dir='/data/training/') # print some stats about the dataset print('Length of dataset: ', len(face_dataset)) # Display a few of the images from the dataset num_to_display = 3 for i in range(num_to_display): # define the size of images fig = plt.figure(figsize=(20,10)) # randomly select a sample rand_i = np.random.randint(0, len(face_dataset)) sample = face_dataset[rand_i] # print the shape of the image and keypoints print(i, sample['image'].shape, sample['keypoints'].shape) ax = plt.subplot(1, num_to_display, i + 1) ax.set_title('Sample #{}'.format(i)) # Using the same display function, defined earlier show_keypoints(sample['image'], sample['keypoints']) ``` ## Transforms Now, the images above are not of the same size, and neural networks often expect images that are standardized; a fixed size, with a normalized range for color ranges and coordinates, and (for PyTorch) converted from numpy lists and arrays to Tensors. Therefore, we will need to write some pre-processing code. Let's create four transforms: - ``Normalize``: to convert a color image to grayscale values with a range of [0,1] and normalize the keypoints to be in a range of about [-1, 1] - ``Rescale``: to rescale an image to a desired size. - ``RandomCrop``: to crop an image randomly. - ``ToTensor``: to convert numpy images to torch images. We will write them as callable classes instead of simple functions so that parameters of the transform need not be passed every time it's called. For this, we just need to implement the ``__call__`` method and (if we require parameters to be passed in), the ``__init__`` method. We can then use a transform like this: tx = Transform(params) transformed_sample = tx(sample) Observe below how these transforms are generally applied to both the image and its keypoints. 
``` from torchvision import datasets, transforms, models, utils # tranforms #Normalize class Normalize(object): """Convert a color image to grayscale and normalize the color range to [0,1].""" def __call__(self, sample): image, key_pts = sample['image'], sample['keypoints'] image_copy = np.copy(image) key_pts_copy = np.copy(key_pts) # convert image to grayscale image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) # scale color range from [0, 255] to [0, 1] image_copy= image_copy/255.0 # scale keypoints to be centered around 0 with a range of [-1, 1] # mean = 100, sqrt = 50, so, pts should be (pts - 100)/50 key_pts_copy = (key_pts_copy - 100)/50.0 return {'image': image_copy, 'keypoints': key_pts_copy} #Rescale class Rescale(object): """Rescale the image in a sample to a given size. Args: output_size (tuple or int): Desired output size. If tuple, output is matched to output_size. If int, smaller of image edges is matched to output_size keeping aspect ratio the same. """ def __init__(self, output_size): assert isinstance(output_size, (int, tuple)) self.output_size = output_size def __call__(self, sample): image, key_pts = sample['image'], sample['keypoints'] h, w = image.shape[:2] if isinstance(self.output_size, int): if h > w: new_h, new_w = self.output_size * h / w, self.output_size else: new_h, new_w = self.output_size, self.output_size * w / h else: new_h, new_w = self.output_size new_h, new_w = int(new_h), int(new_w) img = cv2.resize(image, (new_w, new_h)) # scale the pts, too key_pts = key_pts * [new_w / w, new_h / h] return {'image': img, 'keypoints': key_pts} #Random Crop class RandomCrop(object): """Crop randomly the image in a sample. Args: output_size (tuple or int): Desired output size. If int, square crop is made. 
""" def __init__(self, output_size): assert isinstance(output_size, (int, tuple)) if isinstance(output_size, int): self.output_size = (output_size, output_size) else: assert len(output_size) == 2 self.output_size = output_size def __call__(self, sample): image, key_pts = sample['image'], sample['keypoints'] h, w = image.shape[:2] new_h, new_w = self.output_size top = np.random.randint(0, h - new_h) left = np.random.randint(0, w - new_w) image = image[top: top + new_h, left: left + new_w] key_pts = key_pts - [left, top] return {'image': image, 'keypoints': key_pts} #Convert to tensor class ToTensor(object): """Convert ndarrays in sample to Tensors.""" def __call__(self, sample): image, key_pts = sample['image'], sample['keypoints'] # if image has no grayscale color channel, add one if(len(image.shape) == 2): # add that third color dim image = image.reshape(image.shape[0], image.shape[1], 1) # swap color axis because # numpy image: H x W x C # torch image: C X H X W image = image.transpose((2, 0, 1)) return {'image': torch.from_numpy(image), 'keypoints': torch.from_numpy(key_pts)} ``` ## Test out the transforms Let's test these transforms out to make sure they behave as expected. As you look at each transform, note that, in this case, **order does matter**. For example, you cannot crop a image using a value smaller than the original image (and the orginal images vary in size!), but, if you first rescale the original image, you can then crop it to any size smaller than the rescaled size. 
``` # test out some of these transforms rescale = Rescale(100) crop = RandomCrop(50) composed = transforms.Compose([Rescale(250), RandomCrop(224)]) # apply the transforms to a sample image test_num = 100 sample = face_dataset[test_num] fig = plt.figure() for i, tx in enumerate([rescale, crop, composed]): transformed_sample = tx(sample) ax = plt.subplot(1, 3, i + 1) plt.tight_layout() ax.set_title(type(tx).__name__) show_keypoints(transformed_sample['image'], transformed_sample['keypoints']) plt.show() ``` ## Create the transformed dataset Apply the transforms in order to get grayscale images of the same shape. Verify that your transform works by printing out the shape of the resulting data (printing out a few examples should show you a consistent tensor size). ``` # define the data transform # order matters! i.e. rescaling should come before a smaller crop data_transform = transforms.Compose([Rescale(250), RandomCrop(224), Normalize(), ToTensor()]) # create the transformed dataset transformed_dataset = FacialKeypointsDataset(csv_file='/data/training_frames_keypoints.csv', root_dir='/data/training/', transform=data_transform) # print some stats about the transformed data print('Number of images: ', len(transformed_dataset)) # make sure the sample tensors are the expected size for i in range(5): sample = transformed_dataset[i] print(i, sample['image'].size(), sample['keypoints'].size()) ``` ## Data Iteration and Batching Right now, we are iterating over this data using a ``for`` loop, but we are missing out on a lot of PyTorch's dataset capabilities, specifically the abilities to: - Batch the data - Shuffle the data - Load the data in parallel using ``multiprocessing`` workers. ``torch.utils.data.DataLoader`` is an iterator which provides all these features, and we'll see this in use in the *next* notebook, Notebook 2, when we load data in batches to train a neural network! --- ## Ready to Train! 
Now that you've seen how to load and transform our data, you're ready to build a neural network to train on this data. In the next notebook, you'll be tasked with creating a CNN for facial keypoint detection.
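The batching, shuffling, and parallel-loading features described in the previous section come together in `torch.utils.data.DataLoader`; a minimal self-contained sketch (the toy dataset here is illustrative, standing in for the face-keypoints dataset above):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Stand-in for FacialKeypointsDataset: 20 fake 1x8x8 'images' with 68 (x, y) keypoints."""
    def __len__(self):
        return 20

    def __getitem__(self, idx):
        return {'image': torch.zeros(1, 8, 8), 'keypoints': torch.zeros(68, 2)}

loader = DataLoader(ToyDataset(), batch_size=4, shuffle=True, num_workers=0)

for batch in loader:
    # The default collate function stacks dict fields along a new batch dimension.
    print(batch['image'].shape, batch['keypoints'].shape)
    # torch.Size([4, 1, 8, 8]) torch.Size([4, 68, 2])
    break
```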
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns data=pd.read_csv("india-districts-census-2011.csv",header=0) data.shape data.ndim data.dtypes data.head() data.tail() data.info() data['State name'] df=sorted(data['State name']) df data.describe() data.duplicated(subset=None, keep='first') duplicatedval=data.duplicated(subset=None, keep='first') print(data.groupby('State name')) data.groupby('State name').groups grouped = data.groupby('State name') for name,group in grouped: print(name) print(group) data.groupby(['State name','District name', 'Population']).groups data.count(axis=1) data.isnull().sum() data['Population'].max() data['Population'].min() data5=data[(data['State name']=='GUJARAT')] data5 data['Population'].sum() data.describe(include=object) data.groupby(['State name','District name', 'Population']).groups data['State name'].value_counts() df=list(data) df #grouped = data.groupby('State name') #for name,group in grouped: # print(name) # print(group) grouped.get_group('GUJARAT') grouped['Population'].agg(np.mean) grouped['Population'].agg([np.sum,np.mean,np.std]) data.isna().sum() data['Age_Group_0_29'].unique() print(data['District name'].value_counts()) state=data['State name'] district=data['District name'] def country_population(state,district): pop_df_tmp = data[(data['State name']==state) & (data['District name']==district)] pop_df_tmp = pop_df_tmp.sort_values('Agricultural_Workers',ascending=True) y = range(0, len(pop_df_tmp)) x_male = pop_df_tmp['Male'] x_female = pop_df_tmp['Female'] print(x_male) print(x_female) # max xlim max_x_scale = max(max(x_female), max(x_male)) print(max_x_scale) print() Male_Workers=pop_df_tmp['Male_Workers'] Female_Workers=pop_df_tmp['Female_Workers'] Main_Workers=pop_df_tmp['Main_Workers'] Marginal_Workers=pop_df_tmp['Marginal_Workers'] Non_Workers=pop_df_tmp['Non_Workers'] Cultivator_Workers=pop_df_tmp['Cultivator_Workers'] 
Agricultural_Workers=pop_df_tmp['Agricultural_Workers'] Household_Workers=pop_df_tmp['Household_Workers'] print(Male_Workers) print(Female_Workers) max_gen_workers=max(max(Male_Workers),max(Female_Workers)) print(max_gen_workers) print() print(Cultivator_Workers) print(Agricultural_Workers) max_workers=max(max(Agricultural_Workers),max(Cultivator_Workers)) print(max_workers) print() print(pop_df_tmp['Total_Education']) print() print(pop_df_tmp['Age_Group_0_29']) print(pop_df_tmp['Age_Group_30_49']) print(pop_df_tmp['Age_Group_50']) print() print(pop_df_tmp['Literate']) print() country_population('BIHAR','Saran') import matplotlib.pyplot as plt data.plot.scatter(x='Female',y='Literate', c='DarkBlue') plt.hist(data['Workers'],color='green',edgecolor='white',bins=5) plt.title("histogram for population workers") plt.xlabel("Population") plt.ylabel("frequency") plt.show() plt.hist(data['Female'],color='green',edgecolor='white',bins=5) plt.title("histogram for population workers") plt.xlabel("Population") plt.ylabel("frequency") plt.show() plt.hist(data['Male'],color='green',edgecolor='white',bins=5) plt.title("histogram for population workers") plt.xlabel("Population") plt.ylabel("frequency") plt.show() data.plot.scatter(x='Male',y='Literate', c='DarkBlue') import matplotlib.pyplot as plt data.plot.scatter(x='Population',y='Rural_Households', c='DarkBlue') import matplotlib.pyplot as plt data.plot.scatter(x='Population',y='Urban_Households', c='DarkBlue') ```
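The exploratory calls above lean heavily on the pandas `groupby`/`agg` pattern; a tiny self-contained illustration on synthetic data (the values here are made up for the example):

```python
import pandas as pd

df = pd.DataFrame({
    'State name': ['GUJARAT', 'GUJARAT', 'BIHAR'],
    'Population': [100, 300, 200],
})

grouped = df.groupby('State name')['Population']

# Single aggregate per group.
print(grouped.agg('mean'))  # GUJARAT -> 200.0, BIHAR -> 200.0

# Several aggregates at once, as done with [np.sum, np.mean, np.std] above.
print(grouped.agg(['sum', 'mean']))
```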
# About Notebook - [**Kaggle Housing Dataset**](https://www.kaggle.com/ananthreddy/housing) - Implement linear regression using: 1. **Batch** Gradient Descent 2. **Stochastic** Gradient Descent 3. **Mini-batch** Gradient Descent **Note**: _Trying to implement using **PyTorch** instead of numpy_ ``` import pandas as pd from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler import torch def banner(msg, _verbose=1): if not _verbose: return print("-"*80) print(msg.upper()) print("-"*80) ``` # Data import and preprocessing ``` df = pd.read_csv('Housing.csv', index_col=0) def convert_to_binary(string): return int('yes' in string) for col in df.columns: if df[col].dtype == 'object': df[col] = df[col].apply(convert_to_binary) data = df.values scaler = StandardScaler() data = scaler.fit_transform(data) X = data[:, 1:] y = data[:, 0] print("X: ", X.shape) print("y: ", y.shape) X_train, X_valid, y_train, y_valid = map(torch.from_numpy, train_test_split(X, y, test_size=0.2)) print("X_train: ", X_train.shape) print("y_train: ", y_train.shape) print("X_valid: ", X_valid.shape) print("y_valid: ", y_valid.shape) class LinearRegression: def __init__(self, X_train, y_train, X_valid, y_valid): self.X_train = X_train self.y_train = y_train self.X_valid = X_valid self.y_valid = y_valid self.Theta = torch.randn((X_train.shape[1]+1)).type(type(X_train)) def _add_bias(self, tensor): bias = torch.ones((tensor.shape[0], 1)).type(type(tensor)) return torch.cat((bias, tensor), 1) def _forward(self, tensor): return torch.matmul( self._add_bias(tensor), self.Theta ).view(-1) def forward(self, train=True): if train: return self._forward(self.X_train) else: return self._forward(self.X_valid) def _cost(self, X, y): y_hat = self._forward(X) mse = torch.sum(torch.pow(y_hat - y, 2))/2/X.shape[0] return mse def cost(self, train=True): if train: return self._cost(self.X_train, self.y_train) else: return self._cost(self.X_valid, self.y_valid) def 
batch_update_vectorized(self): m, _ = self.X_train.size() return torch.matmul( self._add_bias(self.X_train).transpose(0, 1), (self.forward() - self.y_train) ) / m def batch_update_iterative(self): m, _ = self.X_train.size() update_theta = None X = self._add_bias(self.X_train) for i in range(m): if update_theta is not None: update_theta += (self._forward(self.X_train[i].view(1, -1)) - self.y_train[i]) * X[i] else: update_theta = (self._forward(self.X_train[i].view(1, -1)) - self.y_train[i]) * X[i] return update_theta/m def batch_train(self, tolerance=0.01, alpha=0.01): converged = False prev_cost = self.cost() init_cost = prev_cost num_epochs = 0 while not converged: self.Theta = self.Theta - alpha * self.batch_update_vectorized() cost = self.cost() if (prev_cost - cost) < tolerance: converged = True prev_cost = cost num_epochs += 1 banner("Batch") print("\tepochs: ", num_epochs) print("\tcost before optim: ", init_cost) print("\tcost after optim: ", cost) print("\ttolerance: ", tolerance) print("\talpha: ", alpha) def stochastic_train(self, tolerance=0.01, alpha=0.01): converged = False m, _ = self.X_train.size() X = self._add_bias(self.X_train) init_cost = self.cost() num_epochs=0 while not converged: prev_cost = self.cost() for i in range(m): self.Theta = self.Theta - alpha * (self._forward(self.X_train[i].view(1, -1)) - self.y_train[i]) * X[i] cost = self.cost() if prev_cost-cost < tolerance: converged=True num_epochs += 1 banner("Stochastic") print("\tepochs: ", num_epochs) print("\tcost before optim: ", init_cost) print("\tcost after optim: ", cost) print("\ttolerance: ", tolerance) print("\talpha: ", alpha) def mini_batch_train(self, tolerance=0.01, alpha=0.01, batch_size=8): converged = False m, _ = self.X_train.size() X = self._add_bias(self.X_train) init_cost = self.cost() num_epochs=0 while not converged: prev_cost = self.cost() for i in range(0, m, batch_size): self.Theta = self.Theta - alpha / batch_size * torch.matmul( 
X[i:i+batch_size].transpose(0, 1), self._forward(self.X_train[i: i+batch_size]) - self.y_train[i: i+batch_size] ) cost = self.cost() if prev_cost-cost < tolerance: converged=True num_epochs += 1 banner("Mini-batch") print("\tepochs: ", num_epochs) print("\tcost before optim: ", init_cost) print("\tcost after optim: ", cost) print("\ttolerance: ", tolerance) print("\talpha: ", alpha) %%time l = LinearRegression(X_train, y_train, X_valid, y_valid) l.mini_batch_train() %%time l = LinearRegression(X_train, y_train, X_valid, y_valid) l.stochastic_train() %%time l = LinearRegression(X_train, y_train, X_valid, y_valid) l.batch_train() ```
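As a sanity check on the three gradient-descent variants above, the same least-squares fit can be computed in closed form via the normal equation; a NumPy sketch (not part of the original notebook, with illustrative synthetic data):

```python
import numpy as np

# Synthetic data with a known linear relation: y = 0.5 + 2*x0 - 1*x1 (no noise).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 0.5 + 2.0 * X[:, 0] - 1.0 * X[:, 1]

# Prepend a bias column, mirroring _add_bias in the class above.
Xb = np.hstack([np.ones((X.shape[0], 1)), X])

# Normal equation theta = (X^T X)^{-1} X^T y; lstsq is the numerically safer route.
theta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(theta)  # ≈ [0.5, 2.0, -1.0]
```

Gradient descent should converge toward this same parameter vector, which makes the closed form a useful reference when tuning `alpha` and `tolerance`.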
# How to handle WelDX files In this notebook we will demonstrate how to create, read, and update ASDF files created by WelDX. All the needed functionality is contained in a single class named `WeldxFile`. We are going to show different modes of operation, like working with physical files on your hard drive, and in-memory files, both in read-only and read-write mode. ## Imports The `WeldxFile` class is imported from the top level of the weldx package. ``` from datetime import datetime import numpy as np from weldx import WeldxFile ``` ## Basic operations Now we create our first file, by invoking the `WeldxFile` constructor without any additional arguments. By doing so, we create an in-memory file. This means that your changes are temporary until you write them to an actual file on your hard drive. The `file_handle` attribute will point to the actual underlying file. In this case it is the in-memory file or buffer as shown below. ``` file = WeldxFile() file.file_handle ``` Next we assign some dictionary-like data to the file, by storing it under a key using square brackets. Then we look at the representation of the file header or contents. This will depend on the execution environment. In JupyterLab you will see an interactive tree-like structure, which can be expanded and searched. The root of the tree is denoted as "root" followed by children created by the ASDF library "asdf_library" and "history". We attached the additional child "some_data" with our assignment. ``` data = {"data_sets": {"first": np.random.random(100), "time": datetime.now()}} file["some_data"] = data file ``` Note that here we are using some very common types, namely a NumPy array and a timestamp. For weldx specialized types like the coordinate system manager, (welding) measurements etc., the weldx package provides ASDF extensions to handle those types automatically during loading and saving ASDF data. You do not need to worry about them. 
If you try to save types which cannot be handled by ASDF, you will trigger an error. We could also have created the same structure in one step: ``` file = WeldxFile(tree=data, mode="rw") file ``` You might have noticed that we got a warning about the in-memory operation when showing the file in Jupyter. Now we have passed the additional argument mode="rw", which indicates that we want to perform write operations, either just in memory or to the passed physical file, so this warning went away. We can use all dictionary operations on the data we like, e.g. update, assign, and delete items. ``` file["data_sets"]["second"] = {"data": np.random.random(100), "time": datetime.now()} # delete the first data set again: del file["data_sets"]["first"] file ``` We can also iterate over all keys as usual. You can also have a look at the documentation of the builtin type `dict` for a complete overview of its features. ``` for key, value in file.items(): print(key, value) ``` ### Access to data by attributes Access by key names can be tedious when deeply nested dictionaries are involved. We provide access via attributes like this: ``` accessible_by_attribute = file.as_attr() accessible_by_attribute.data_sets.second ``` ## Writing files to disk In order to make your changes persistent, we are going to save the memory-backed file to disk by invoking `WeldxFile.write_to`. ``` file.write_to("example.asdf") ``` This newly created file can be opened again in read-write mode by passing the appropriate arguments. ``` example = WeldxFile("example.asdf", mode="rw") example["updated"] = True example.close() ``` Note that we closed the file here explicitly. Before closing, we wrote a simple item to the tree. Let's see what happens if we open the file once again. ``` example = WeldxFile("example.asdf", mode="rw") display(example) example.close() ``` As you see, the `updated` state has been written, because we closed the file properly. 
If we omit closing the file, our changes would be lost when the object goes out of scope or Python terminates. ## Handling updates within a context manager To ensure you will not forget to update your file after making changes, you can enclose your file-changing operations within a context manager. This ensures that all operations done in this context (the `with` block) are written to the file once the context is left. Note that the underlying file is also closed after the context ends. This is useful when you have to update lots of files, as there is a limited number of file handles an operating system can deal with. ``` with WeldxFile("example.asdf", mode="rw") as example: example["updated"] = True fh = example.file_handle # now the context ends, and the file is being saved to disk again. # let's check that the file handle has been closed after the context ended. assert fh.closed ``` Let us inspect the file once again, to see whether our `updated` item has been correctly written. ``` WeldxFile("example.asdf") ``` In case an error is triggered (e.g. an exception is raised) inside the context, the underlying file is still updated. You can prevent this behavior by passing `sync=False` during file construction. ``` try: with WeldxFile("example.asdf", mode="rw") as file: file["updated"] = False raise Exception("oh no") except Exception as e: print("expected error:", e) WeldxFile("example.asdf") ``` ## Keeping a log of changes when manipulating a file It can be quite handy to know what has been done to a file in the past. Weldx files provide a history log, in which arbitrary strings can be stored together with time stamps and the software used. We will quickly run you through the process of adding history entries to your file. 
``` filename_hist = "example_history.asdf" with WeldxFile(filename_hist, mode="rw") as file: file["some"] = "changes" file.add_history_entry("added some changes") WeldxFile(filename_hist)["history"] ``` You can also describe custom software, let's say a library or tool used to generate or modify the data in the file, by passing it into the creation of our `WeldxFile`. ``` software = dict( name="my_tool", version="1.0", homepage="https://my_tool.org", author="the crowd" ) with WeldxFile(filename_hist, mode="rw", software_history_entry=software) as file: file["some"] = "changes" file.add_history_entry("added more changes") ``` Let's now inspect how we wrote history. ``` WeldxFile(filename_hist)["history"]["entries"][-1] ``` The `entries` key is a list of all log entries, to which new entries are appended. We have proper time stamps indicating when the change happened, the actual log entry, and optionally the custom software used to make the change. ## Handling of custom schemas An important aspect of WelDX or ASDF files is that you can validate them against a defined schema. A schema defines required and optional attributes a tree structure has to provide to pass the schema validation. Further, the types of these attributes can be defined, e.g. the data attribute should be a NumPy array, or a timestamp should be of type `pandas.Timestamp`. There are several schemas provided by WelDX, which can be used by passing them to the `custom_schema` argument. It is expected to be a path-like type, so a string (`str`) or `pathlib.Path` is accepted. The provided utility function `get_schema_path` returns the path to a named schema, so its output can directly be used in WeldxFile(custom_schema=...) 
``` from weldx.asdf.util import get_schema_path schema = get_schema_path("single_pass_weld-0.1.0") schema ``` This schema defines a complete experimental setup with measurement data, e.g. it requires the following attributes to be defined in our tree: - workpiece - TCP - welding_current - welding_voltage - measurements - equipment We use a testing function to provide this data now, and validate it against the schema by passing the `custom_schema` during `WeldxFile` creation. Here we just have a look at the process parameters sub-dictionary. ``` from weldx.asdf.cli.welding_schema import single_pass_weld_example _, single_pass_weld_data = single_pass_weld_example(out_file=None) display(single_pass_weld_data["process"]) ``` That is a lot of data, containing complex data structures and objects describing the whole experiment including measurement data. We can now create a new `WeldxFile` and validate the data against the schema. ``` WeldxFile(tree=single_pass_weld_data, custom_schema=schema, mode="rw") ``` But what would happen if we forget an important attribute? Let's have a closer look... ``` # simulate we forgot something important, so we delete the workpiece: del single_pass_weld_data["workpiece"] # now create the file again, and see what happens: try: WeldxFile(tree=single_pass_weld_data, custom_schema=schema, mode="rw") except Exception as e: display(e) ``` We receive a ValidationError from the ASDF library, which tells us exactly what the missing information is. The same will happen if we accidentally pass the wrong type. ``` # simulate a wrong type by changing it to a NumPy array. single_pass_weld_data["welding_current"] = np.zeros(10) # now create the file again, and see what happens: try: WeldxFile(tree=single_pass_weld_data, custom_schema=schema, mode="rw") except Exception as e: display(e) ``` Here we see that a `signal` tag is expected, but an `asdf/core/ndarray-1.0.0` was received. 
The ASDF library assigns tags to certain types to handle their storage in the file format. As shown, the `signal` tag is contained in the `weldx/measurement` container, provided by `weldx.bam.de`. The tags and schemas also provide a version number, so future updates in the software become manageable. Custom schemas can be used to define your own protocols or standards describing your data. ## Summary In this tutorial we have seen how to easily open, inspect, manipulate, and update ASDF files created by WelDX. We've learned that these files can store a variety of different data types and structures. Discussed features: * Opening in read/write mode `WeldxFile(mode="rw")`. * Creating files in memory (passing no file name to the `WeldxFile()` constructor). * Writing to disk (`WeldxFile.write_to`). * Keeping a log of changes (`WeldxFile.history`, `WeldxFile.add_history_entry`). * Validation against a schema `WeldxFile(custom_schema="/path/my_schema.yaml")`
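The sync-on-close behavior of `WeldxFile` demonstrated above follows Python's standard context-manager protocol; a minimal pure-Python analogue (the class name and the JSON backing are hypothetical, the real class writes ASDF):

```python
import json
import os
import tempfile

class SyncedFile:
    """Dict-like wrapper that writes itself back to disk when the context ends."""

    def __init__(self, path, sync=True):
        self.path = path
        self.sync = sync
        if os.path.exists(path):
            with open(path) as fh:
                self.tree = json.load(fh)
        else:
            self.tree = {}

    def __setitem__(self, key, value):
        self.tree[key] = value

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Sync even on error, mirroring the behavior described in the tutorial;
        # pass sync=False to suppress this.
        if self.sync:
            with open(self.path, "w") as fh:
                json.dump(self.tree, fh)

path = os.path.join(tempfile.mkdtemp(), "example.json")
with SyncedFile(path) as f:
    f["updated"] = True

with open(path) as fh:
    print(json.load(fh))  # {'updated': True}
```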
``` import numpy as np import tensorflow as tf import matplotlib.pyplot as plt import os import os.path as path import itertools from sklearn.model_selection import train_test_split import tensorflow as tf import tensorflow.keras as keras from tensorflow.keras.layers import Input,InputLayer, Dense, Activation, BatchNormalization, Flatten, Conv2D from tensorflow.keras.layers import MaxPooling2D, Dropout from tensorflow.keras.models import Sequential, Model, load_model from tensorflow.keras.optimizers import SGD, Adam from tensorflow.keras.callbacks import ModelCheckpoint,LearningRateScheduler, \ EarlyStopping from tensorflow.keras import backend as K from tensorflow.keras.utils import to_categorical, multi_gpu_model, Sequence from tensorflow.keras.preprocessing.image import ImageDataGenerator os.environ['CUDA_VISIBLE_DEVICES'] = '2' data_dir = 'data/' # train_data = np.load(path.join(data_dir, 'imagenet_6_class_172_train_data.npz')) # val_data = np.load(path.join(data_dir, 'imagenet_6_class_172_val_data.npz')) x_train = np.load(path.join(data_dir, 'imagenet_6_class_172_x_train.npy')) y_train = np.load(path.join(data_dir, 'imagenet_6_class_172_y_train.npy')) x_val = np.load(path.join(data_dir, 'imagenet_6_class_172_x_val.npy')) y_val = np.load(path.join(data_dir, 'imagenet_6_class_172_y_val.npy')) y_list = np.load(path.join(data_dir, 'imagenet_6_class_172_y_list.npy')) # x_train = train_data['x_data'] # y_train = train_data['y_data'] # x_val = val_data['x_data'] # y_val = val_data['y_data'] x_test = x_val y_test = y_val # y_list = val_data['y_list'] x_train.shape, y_train.shape, x_val.shape, y_val.shape, x_test.shape, y_test.shape, y_list.shape y_train = to_categorical(y_train) y_val = to_categorical(y_val) y_test = y_val x_train.shape, y_train.shape, x_val.shape, y_val.shape, x_test.shape, y_test.shape input_shape = x_train[0].shape output_size = len(y_list) def build_2d_cnn_custom_ch_32_DO(conv_num=1): input_layer = Input(shape=input_shape) x = input_layer for i in 
range(conv_num): x = Conv2D(kernel_size=5, filters=32*(2**(i//2)), strides=(1,1), padding='same')(x) # x = BatchNormalization()(x) x = Activation('relu')(x) x = MaxPooling2D(pool_size=2, strides=(2,2), padding='same')(x) x = Flatten()(x) x = Dropout(0.75)(x) output_layer = Dense(output_size, activation='softmax')(x) model = Model(inputs=input_layer, outputs=output_layer) return model for i in range(1, 8): model = build_2d_cnn_custom_ch_32_DO(conv_num=i) model.summary() del model class BalanceDataGenerator(Sequence): def __init__(self, x_data, y_data, batch_size, shuffle=True): self.x_data = x_data self.y_data = y_data self.batch_size = batch_size self.shuffle = shuffle self.sample_size = int(np.sum(y_data, axis=0).min()) self.data_shape = x_data.shape[1:] self.y_label = self.y_data.argmax(axis=1) self.labels = np.unique(self.y_label) self.on_epoch_end() def __len__(self): return int(np.ceil(len(self.labels) * self.sample_size / self.batch_size)) def on_epoch_end(self): self.indexes = np.zeros((len(self.labels), self.sample_size)) for i, label in enumerate(self.labels): y_index = np.argwhere(self.y_label==label).squeeze() if self.shuffle == True: self.indexes[i] = np.random.choice(y_index, self.sample_size, replace=False) else: self.indexes[i] = y_index[:self.sample_size] self.indexes = self.indexes.flatten().astype(np.int32) if self.shuffle == True: np.random.shuffle(self.indexes) def __getitem__(self, batch_idx): indices = self.indexes[batch_idx*self.batch_size: (batch_idx+1)*self.batch_size] return self.x_data[indices], self.y_data[indices] batch_size = 40 data_generator = BalanceDataGenerator(x_train, y_train, batch_size=batch_size) for i in range(6, 8): base = 'vis_imagenet_6_class_2D_CNN_custom_ch_32_DO_075_DO' model_name = base+'_{}_conv'.format(i) # with tf.device('/cpu:1'): model = build_2d_cnn_custom_ch_32_DO(conv_num=i) # model = multi_gpu_model(model, gpus=2) model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-4), metrics=['accuracy']) 
model_path = 'model/checkpoint/'+model_name+'_checkpoint/' os.makedirs(model_path, exist_ok=True) model_filename = model_path+'{epoch:03d}-{val_loss:.4f}.hdf5' checkpointer = ModelCheckpoint(filepath = model_filename, monitor = "val_loss", verbose=1, save_best_only=True) early_stopping = EarlyStopping(monitor='val_loss', patience=50) hist = model.fit_generator(data_generator, steps_per_epoch=len(x_train)//batch_size, epochs=10000, validation_data=(x_val, y_val), callbacks = [checkpointer, early_stopping], workers=8, use_multiprocessing=True ) print() print(model_name, 'Model') fig, ax = plt.subplots() ax.plot(hist.history['loss'], 'y', label='train loss') ax.plot(hist.history['val_loss'], 'r', label='val loss') ax.plot(hist.history['acc'], 'b', label='train acc') ax.plot(hist.history['val_acc'], 'g', label='val acc') ax.set_xlabel('epoch') ax.set_ylabel('loss') ax.legend(loc='upper left') plt.show() png_path = 'visualization/learning_curve/' filename = model_name+'.png' os.makedirs(png_path, exist_ok=True) fig.savefig(png_path+filename, transparent=True) model.save(model_path+'000_last.hdf5') del(model) model_path = 'model/checkpoint/'+model_name+'_checkpoint/' model_filename = model_path + sorted(os.listdir(model_path))[-1] model = load_model(model_filename) [loss, accuracy] = model.evaluate(x_test, y_test) print('Loss:', loss, 'Accuracy:', accuracy) print() del(model) log_dir = 'log' os.makedirs(log_dir, exist_ok=True) base = 'vis_imagenet_6_class_2D_CNN_custom_ch_32_DO_075_DO' with open(path.join(log_dir, base), 'w') as log_file: for i in range(6, 8): model_name = base+'_{}_conv'.format(i) print() print(model_name, 'Model') model_path = 'model/checkpoint/'+model_name+'_checkpoint/' model_filename = model_path + sorted(os.listdir(model_path))[-1] model = load_model(model_filename) model.summary() [loss, accuracy] = model.evaluate(x_test, y_test) print('Loss:', loss, 'Accuracy:', accuracy) del(model) log_file.write('\t'.join([model_name, str(accuracy), 
str(loss)])+'\n') for i in range(6, 8): model_name = base+'_{}_conv'.format(i) print() print(model_name, 'Model') model_path = 'model/checkpoint/'+model_name+'_checkpoint/' model_filename = model_path + '000_last.hdf5' model = load_model(model_filename) model.summary() [loss, accuracy] = model.evaluate(x_test, y_test) print('Loss:', loss, 'Accuracy:', accuracy) del(model) import matplotlib.pyplot as plt from sklearn.metrics import classification_report, confusion_matrix i = 6 model_name = base+'_{}_conv'.format(i) print() print(model_name, 'Model') model_path = 'model/checkpoint/'+model_name+'_checkpoint/' model_filename = model_path + sorted(os.listdir(model_path))[-1] model = load_model(model_filename) model.summary() [loss, accuracy] = model.evaluate(x_test, y_test) print('Loss:', loss, 'Accuracy:', accuracy) Y_pred = model.predict(x_test) y_pred = np.argmax(Y_pred, axis=1) y_real = np.argmax(y_test, axis=1) confusion_mat = confusion_matrix(y_real, y_pred) print('Confusion Matrix') print(confusion_mat) print() print('Classification Report') print(classification_report(y_real, y_pred)) print() # labels = y_table.T[0] plt.figure(figsize=(4,4), dpi=100) plt.xticks(np.arange(len(y_list)), y_list) plt.yticks(np.arange(len(y_list)), y_list) plt.imshow(confusion_mat, interpolation='nearest', cmap=plt.cm.bone_r) del(model) ```
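For reference, the confusion matrix printed above can be reproduced without scikit-learn. This is a minimal NumPy-only sketch using made-up labels (not the ImageNet predictions from this notebook):

```python
import numpy as np

# Hypothetical true and predicted labels for a 3-class problem
y_real = np.array([0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2])

n_classes = 3
confusion = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(y_real, y_pred):
    confusion[t, p] += 1  # row = true class, column = predicted class

print(confusion)
# Correct predictions lie on the diagonal, so accuracy is trace / total
accuracy = np.trace(confusion) / len(y_real)
```

Reading a row shows how the samples of one true class were distributed over the predicted classes, which is how the matrix plotted with `plt.imshow` above should be interpreted.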
``` ## author- KUMAR ABHINAV ## DATE- 20/3/19 import pandas as pd import numpy as np import os file_path=os.getcwd() print('file_path:'+file_path) from matplotlib.pylab import rcParams rcParams['figure.figsize'] = 25, 10 import matplotlib.pyplot as plt from matplotlib import pyplot df=pd.read_csv(r"C:\Users\Lenovo\AnacondaProjects\hungryaap\worldcities.csv") df.head() print("df :%s"%(df.shape,)) from sklearn.cluster import KMeans import pandas as pd from sklearn.preprocessing import MinMaxScaler from matplotlib import pyplot as plt %matplotlib inline plt.scatter(df.lat,df.lng) plt.xlabel('latitude') plt.ylabel('longitude') km = KMeans(n_clusters=10) y_predicted = km.fit_predict(df[['lat','lng']]) y_predicted df['cluster']=y_predicted df.head() km.cluster_centers_ df1 = df[df.cluster==0] df2 = df[df.cluster==1] df3 = df[df.cluster==2] df4 = df[df.cluster==3] df5 = df[df.cluster==4] df6 = df[df.cluster==5] df7 = df[df.cluster==6] df8 = df[df.cluster==7] df9 = df[df.cluster==8] df10 = df[df.cluster==9] plt.scatter(df1.lat,df1.lng,color='green') plt.scatter(df2.lat,df2.lng,color='red') plt.scatter(df3.lat,df3.lng,color='black') plt.scatter(df4.lat,df4.lng,color='blue') plt.scatter(df5.lat,df5.lng,color='violet') plt.scatter(df6.lat,df6.lng,color='cyan') plt.scatter(df7.lat,df7.lng,color='indigo') plt.scatter(df8.lat,df8.lng,color='orange') plt.scatter(df9.lat,df9.lng,color='yellow') plt.scatter(df10.lat,df10.lng,color='pink') plt.scatter(km.cluster_centers_[:,0],km.cluster_centers_[:,1],color='purple',marker='*',label='centroid') plt.xlabel('latitude') plt.ylabel('longitude') plt.legend() scaler = MinMaxScaler() scaler.fit(df[['lng']]) df['lng'] = scaler.transform(df[['lng']]) scaler.fit(df[['lat']]) df['lat'] = scaler.transform(df[['lat']]) df.head() plt.scatter(df.lat,df.lng) km = KMeans(n_clusters=10) y_predicted = km.fit_predict(df[['lat','lng']]) y_predicted df['cluster']=y_predicted df.head() km.cluster_centers_ df1 = df[df.cluster==0] df2 = df[df.cluster==1] df3 
= df[df.cluster==2] df4 = df[df.cluster==3] df5 = df[df.cluster==4] df6 = df[df.cluster==5] df7 = df[df.cluster==6] df8 = df[df.cluster==7] df9 = df[df.cluster==8] df10 = df[df.cluster==9] plt.scatter(df1.lat,df1.lng,color='green') plt.scatter(df2.lat,df2.lng,color='red') plt.scatter(df3.lat,df3.lng,color='black') plt.scatter(df4.lat,df4.lng,color='blue') plt.scatter(df5.lat,df5.lng,color='violet') plt.scatter(df6.lat,df6.lng,color='cyan') plt.scatter(df7.lat,df7.lng,color='indigo') plt.scatter(df8.lat,df8.lng,color='orange') plt.scatter(df9.lat,df9.lng,color='yellow') plt.scatter(df10.lat,df10.lng,color='pink') plt.scatter(km.cluster_centers_[:,0],km.cluster_centers_[:,1],color='purple',marker='*',label='centroid') plt.xlabel('latitude') plt.ylabel('longitude') plt.legend() ``` # Elbow curve to find the optimum number of clusters ``` sse = [] k_rng = range(1,10) for k in k_rng: km = KMeans(n_clusters=k) km.fit(df[['lat','lng']]) sse.append(km.inertia_) plt.xlabel('K') plt.ylabel('Sum of squared error') plt.plot(k_rng,sse) ``` # So the optimum value of k is 3 from the above plot of SSE vs. k ## So we will try k-means clustering with the number of clusters = 3 ``` km = KMeans(n_clusters=3) y_predicted = km.fit_predict(df[['lat','lng']]) y_predicted df['cluster']=y_predicted df.head() km.cluster_centers_ df1 = df[df.cluster==0] df2 = df[df.cluster==1] df3 = df[df.cluster==2] plt.scatter(df1.lat,df1.lng,color='green') plt.scatter(df2.lat,df2.lng,color='red') plt.scatter(df3.lat,df3.lng,color='black') plt.scatter(km.cluster_centers_[:,0],km.cluster_centers_[:,1],color='purple',marker='*',label='centroid') plt.xlabel('latitude') plt.ylabel('longitude') plt.legend() ``` # Hierarchical Clustering Algorithm ``` data = df.iloc[:, 2:4].values data import scipy.cluster.hierarchy as shc plt.figure(figsize=(10, 7)) plt.title("Location Dendrograms") dend = shc.dendrogram(shc.linkage(data, method='ward')) from sklearn.cluster import AgglomerativeClustering cluster1 =
AgglomerativeClustering(n_clusters=5, affinity='euclidean', linkage='ward') cluster1.fit_predict(data) plt.figure(figsize=(10, 7)) plt.scatter(data[:,0], data[:,1], c=cluster1.labels_, cmap='rainbow') cluster2 = AgglomerativeClustering(n_clusters=3, affinity='euclidean', linkage='ward') cluster2.fit_predict(data) plt.figure(figsize=(10, 7)) plt.scatter(data[:,0], data[:,1], c=cluster2.labels_, cmap='rainbow') cluster3 = AgglomerativeClustering(n_clusters=10, affinity='euclidean', linkage='ward') cluster3.fit_predict(data) plt.figure(figsize=(10, 7)) plt.scatter(data[:,0], data[:,1], c=cluster3.labels_, cmap='rainbow') ``` # K Nearest Neighbour ``` from matplotlib.colors import ListedColormap from sklearn import neighbors, datasets n_neighbors = 15 # we only take the first two features. We could avoid this ugly # slicing by using a two-dim dataset X = data y = df.cluster h = .02 # step size in the mesh # Create color maps cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF']) cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF']) ```
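The `inertia_` value that drives the elbow curve in this notebook is simply the sum of squared distances from each point to its nearest centroid. A minimal NumPy-only sketch with made-up points and fixed centroids (not the world-cities data):

```python
import numpy as np

points = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
centroids = np.array([[0.0, 0.0], [10.0, 0.0]])

# Squared distance from every point to every centroid, shape (n_points, n_centroids)
sq_dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)

# Each point contributes the squared distance to its *nearest* centroid
sse = sq_dists.min(axis=1).sum()
print(sse)  # 0.0 + 1.0 + 0.0 = 1.0
```

As k grows, more centroids mean smaller nearest-centroid distances, so the SSE always decreases; the elbow marks where the decrease stops being worthwhile.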
# Standard Syntax to Name Variables

## Main Objective:

Write a function that can suggest variable-name changes in your code, similar to flake8. The variable names and the rules for using this syntax are given in the 'VariableNameGlossary.txt' file.

## Create dictionaries for variable names, abbreviations, and types:

* Import text from 'VariableNameGlossary.txt'
* Take in lines starting with '* '
* Each column is separated with a ', '
* Each line is formatted as follows:
    * var_abv, var_name/alt_var_name, type, cont_type
    * var_abv: all lower case, no special characters ('/' or '_')
    * abv_type: all lower case, type of the abbreviation
    * var_name: all lower case, '_' binds words, '/' is a separator to separate multiple keys
    * cont_type: optional; if it exists, any variable using the name should be of the indicated type, e.g. boolean
* Create 3 dictionaries:
    * abv = {var_abv: var_name/alt_var_name}
    * desc = {var_name: var_abv, alt_var_name: var_abv}
    * cont_type = {var_abv: cont_type}

```
def variablenameglossary_mc(path, f_name='VariableNameGlossary_MOz.txt'):
    f = open(f_name, 'r')
    l = f.readlines()
    abv = dict()
    desc = dict()
    tip = dict()
    cont_type = dict()
    for whL in l:
        if '*' in whL:
            lst = whL.split(', ')
            for idx in range(len(lst)):
                # Remove special characters
                spcl_char = ['* ', '\n']
                for whSpclChar in spcl_char:
                    if whSpclChar in lst[idx]:
                        lst[idx] = lst[idx].replace(whSpclChar, '')
                # Check if there are any upper case characters in the first three variables
                if idx <= 2:
                    for whLett in list(lst[idx]):
                        assert whLett.isupper() == False, '''
                        Upper case letters in the MOzStandardSyntax Glossary are not permitted
                        There is an upper case letter in line: {}
                        Make necessary corrections in '{}'
                        '''.format(whL, f_name)
            # Check if the abbreviation is registered more than once
            assert lst[0] not in abv.keys(), '''
            There are multiple entries for the same abbreviation.
            '{}: {}'
            '{}: {}'
            Make necessary corrections in '{}'
            '''.format(lst[0], lst[1], lst[0], abv[lst[0]], f_name)
            abv[lst[0]] = lst[1]
            tip[lst[0]] = lst[2]
            # idx is left over from the loop above: 3 means a fourth column exists
            if idx == 3:
                cont_type[lst[0]] = lst[3]
            # Check if the description is registered more than once
            for whDesc in lst[1].split('/'):
                assert whDesc not in desc.keys(), '''
                There are multiple entries for the same description.
                '{}: {}'
                '{}: {}'
                Make necessary corrections in '{}'
                '''.format(whDesc, lst[0], whDesc, desc[whDesc], f_name)
                desc[whDesc] = lst[0]
    return abv, desc, tip, cont_type
```
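A quick way to see what the three dictionaries look like is to parse a tiny glossary by hand. The two glossary lines below are invented examples of the format described above (lines starting with '* ', columns separated by ', '), and the simplified parser mirrors the logic of `variablenameglossary_mc` without the assertion checks:

```python
# A tiny, hypothetical glossary in the assumed format: '* abv, name/alt_name, type[, cont_type]'
glossary_text = """Header line, ignored because it lacks the '* ' prefix
* idx, index/position, int
* cnt, count, int, int
"""

abv, desc, tip, cont_type = {}, {}, {}, {}
for line in glossary_text.splitlines():
    if not line.startswith('* '):
        continue
    cols = line[2:].split(', ')
    abv[cols[0]] = cols[1]            # abbreviation -> full name(s)
    tip[cols[0]] = cols[2]            # abbreviation -> type
    if len(cols) == 4:
        cont_type[cols[0]] = cols[3]  # optional contained type
    for name in cols[1].split('/'):   # '/' separates alternative names
        desc[name] = cols[0]          # each full name -> abbreviation

print(abv)   # {'idx': 'index/position', 'cnt': 'count'}
print(desc)  # {'index': 'idx', 'position': 'idx', 'count': 'cnt'}
```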
# This is the EDA file

## The Main Findings I found were as follows

1. The main table in the database is the "Results" table.
2. There are 28 columns in total in the "Results" table.
3. There are 17 ID columns in total, where each ID column refers to some other table in the database.
4. Row number 46360 contains all null values, so we can drop it.
5. There are 7 rows with Activity_ID = ######### where every feature is blank, and we can drop those.
6. Lab_ID contains ["None", "Unknown", "LabID", "none", "NONE", "None "] style null values, and we can map them to NaNs.
7. Date_Collected has no null values.
8. The dates in the Time_Collected column look wrong, as they are from the years 1899-1902.
9. Also, all times in the Date_Collected column are 00:00:00, so maybe we can take the time from Time_Collected and the date from Date_Collected and join them to get an accurate date-time.
10. Actual_Result has >, <, *, and '.' as special characters.
11. More findings coming in the future.

```
import pandas as pd
import os
import datetime

pd.set_option("display.max_rows", 999)

os.getcwd()
data_folder = "../data/charles_river_samples_csv"
os.chdir(data_folder)
os.getcwd()
os.listdir()

# Results is the central table where all the measurements are taken; it's encoded in latin-1
results = pd.read_csv("Results.csv", encoding="latin-1")
results.columns.value_counts().sum()  # There are 28 columns in total
results.columns

# We can treat ID columns separately, as I think most of them are categorical or level types
# There are 17 ID columns in total
id_columns = [col for col in results.columns if "ID" in col]
sum([1 for i in id_columns])

results.shape  # There are 46,458 rows and 28 columns, of which 17 are ID columns and 11 are non-ID columns
results.head()

results.isnull().sum()  # Every row contains a null value
results.index[results.isnull().all(1)]  # Row number 46360 contains all null values, so we can drop it
results = results.drop(46360)
results.isnull().sum()
results.shape

results["Activity_ID"].value_counts()  # There is a row with Activity_ID = ######### which is wrong
# Let's find what rows those are
results[results["Activity_ID"] == "################"]
# These are the rows with incorrect values in them. We can drop them
results.drop(results.loc[results['Activity_ID']=="################"].index, inplace=True)
(results["Activity_ID"].value_counts()>1).sum()
# Now I think we can safely assume that Activity_ID uniquely describes each row in the dataset

# Checking out percentages of null values in the dataset
results.isnull().sum()*100/results.shape[0]
# Most of the highly null columns (>70%) are comments, and we can drop those columns
drop_colmns = ["Associated_ID", "Result_Comment", "Field_Comment", "Event_Comment", "QAQC_Comment", "Percent_RPD"]
results.drop(drop_colmns, axis=1, inplace=True)
results.isnull().sum()*100/results.shape[0]
results.isna().sum()*100/results.shape[0]

# Now let's concentrate on Lab_ID
for k, v in results["Lab_ID"].value_counts(sort=True).to_dict().items():
    print(k, v)

# Map the null-like values to NaNs
nans = ["None", "Unknown", "LabID", "none", "NONE", "None "]
results["Lab_ID"] = results["Lab_ID"].map(lambda x: "nan" if x in nans else x)

# Date_Collected has no null values.
# However, the dates in the Time_Collected column look wrong, as they are from 1899-1902 and
# don't match the dates in the Date_Collected column.
# Also, all times in the Date_Collected column are 00:00:00, so
# maybe we can take the time from Time_Collected and the date from Date_Collected and join them
# to get an accurate date-time.
results["Time_Collected"].value_counts(sort=True)

# Extract date and time from Date_Collected and Time_Collected; we can merge them or use them as features separately
results["Date_Collected"] = pd.to_datetime(results["Date_Collected"])
results["Year"] = pd.DatetimeIndex(results["Date_Collected"]).year
results["Month"] = pd.DatetimeIndex(results["Date_Collected"]).month
results["Day"] = pd.DatetimeIndex(results["Date_Collected"]).day
results["Hour"] = pd.DatetimeIndex(results["Time_Collected"]).hour
results["Minute"] = pd.DatetimeIndex(results["Time_Collected"]).minute
results["Second"] = pd.DatetimeIndex(results["Time_Collected"]).second

# Inspect Actual_Result values
for k, v in results["Actual_Result"].value_counts(sort=True).to_dict().items():
    print(k, v)
```
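The date/time merge proposed above — keep the date from `Date_Collected` and the time-of-day from `Time_Collected` — can be sketched on a small hypothetical frame (the column values below are invented, not rows from Results.csv):

```python
import pandas as pd

# Hypothetical rows mimicking the Date_Collected / Time_Collected split
df = pd.DataFrame({
    "Date_Collected": ["2019-06-01 00:00:00", "2019-06-02 00:00:00"],
    "Time_Collected": ["1899-12-30 08:15:00", "1899-12-30 14:40:00"],
})
dates = pd.to_datetime(df["Date_Collected"])
times = pd.to_datetime(df["Time_Collected"])

# Keep the date part, and replace the bogus 1899 date with the time-of-day offset
df["DateTime"] = dates.dt.normalize() + pd.to_timedelta(times.dt.strftime("%H:%M:%S"))
print(df["DateTime"])
```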
# Gaussian Mixture Model with ADVI

Here, we describe how to use ADVI for inference of a Gaussian mixture model. First, we will show that inference with ADVI does not require modifying the stochastic model — we just call a function. Then, we will show how to use mini-batches, which are useful for large datasets. In that case, the model must be slightly changed.

First, create artificial data from a mixture of two Gaussian components.

```
%matplotlib inline
%env THEANO_FLAGS=device=cpu,floatX=float32
import theano

import pymc3 as pm
from pymc3 import Normal, Metropolis, sample, MvNormal, Dirichlet, \
    DensityDist, find_MAP, NUTS, Slice
import theano.tensor as tt
from theano.tensor.nlinalg import det
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

n_samples = 100
rng = np.random.RandomState(123)
ms = np.array([[-1, -1.5], [1, 1]])
ps = np.array([0.2, 0.8])

zs = np.array([rng.multinomial(1, ps) for _ in range(n_samples)]).T
xs = [z[:, np.newaxis] * rng.multivariate_normal(m, np.eye(2), size=n_samples)
      for z, m in zip(zs, ms)]
data = np.sum(np.dstack(xs), axis=2)

plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], c='g', alpha=0.5)
plt.scatter(ms[0, 0], ms[0, 1], c='r', s=100)
plt.scatter(ms[1, 0], ms[1, 1], c='b', s=100)
```

Gaussian mixture models are usually constructed with categorical random variables. However, discrete random variables do not work with ADVI. Here, the class assignment variables are marginalized out, giving a weighted sum of the probabilities of the Gaussian components. The log-likelihood of the total probability is calculated using logsumexp, a standard technique for keeping this kind of calculation numerically stable.

In the code below, the DensityDist class is used as the likelihood term. The second argument, logp_gmix(mus, pi, np.eye(2)), is a Python function which receives observations (denoted by 'value') and returns the tensor representation of the log-likelihood.
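The numerical point made above — using logsumexp to keep the marginalized mixture likelihood stable — can be seen with a NumPy-only sketch (independent of PyMC3; the log-probability values are invented):

```python
import numpy as np

# Per-component log-probabilities small enough that exp() underflows
logps = np.array([-1000.0, -1001.0, -1002.0])

# Naive evaluation: exp() underflows to 0, so the log becomes -inf
with np.errstate(divide='ignore'):
    naive = np.log(np.sum(np.exp(logps)))

# logsumexp trick: factor out the maximum before exponentiating
m = logps.max()
stable = m + np.log(np.sum(np.exp(logps - m)))
print(naive, stable)  # -inf vs. roughly -999.59
```

The `logsumexp` imported from `pymc3.math` in the next cell applies the same max-shift idea to theano tensors.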
```
from pymc3.math import logsumexp

# Log likelihood of normal distribution
def logp_normal(mu, tau, value):
    # log probability of individual samples
    k = tau.shape[0]
    delta = lambda mu: value - mu
    return (-1 / 2.) * (k * tt.log(2 * np.pi) + tt.log(1./det(tau)) +
                        (delta(mu).dot(tau) * delta(mu)).sum(axis=1))

# Log likelihood of Gaussian mixture distribution
def logp_gmix(mus, pi, tau):
    def logp_(value):
        logps = [tt.log(pi[i]) + logp_normal(mu, tau, value)
                 for i, mu in enumerate(mus)]
        return tt.sum(logsumexp(tt.stacklists(logps)[:, :n_samples], axis=0))
    return logp_

with pm.Model() as model:
    mus = [MvNormal('mu_%d' % i,
                    mu=pm.floatX(np.zeros(2)),
                    tau=pm.floatX(0.1 * np.eye(2)),
                    shape=(2,))
           for i in range(2)]
    pi = Dirichlet('pi', a=pm.floatX(0.1 * np.ones(2)), shape=(2,))
    xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)
```

For comparison with ADVI, run MCMC.

```
with model:
    start = find_MAP()
    step = Metropolis()
    trace = sample(1000, step, start=start)
```

Check the posterior of the component means and weights. We can see that the MCMC samples of the component mean varied more for the lower-left component than for the upper-right one, because these clusters have different sample sizes.

```
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
mu_0, mu_1 = trace['mu_0'], trace['mu_1']
plt.scatter(mu_0[:, 0], mu_0[:, 1], c="r", s=10)
plt.scatter(mu_1[:, 0], mu_1[:, 1], c="b", s=10)
plt.xlim(-6, 6)
plt.ylim(-6, 6)

sns.barplot([1, 2], np.mean(trace['pi'][:], axis=0), palette=['red', 'blue'])
```

We can use the same model with ADVI as follows.
```
with pm.Model() as model:
    mus = [MvNormal('mu_%d' % i,
                    mu=pm.floatX(np.zeros(2)),
                    tau=pm.floatX(0.1 * np.eye(2)),
                    shape=(2,))
           for i in range(2)]
    pi = Dirichlet('pi', a=pm.floatX(0.1 * np.ones(2)), shape=(2,))
    xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)

with model:
    %time approx = pm.fit(n=4500, obj_optimizer=pm.adagrad(learning_rate=1e-1))

means = approx.bij.rmap(approx.mean.eval())
cov = approx.cov.eval()
sds = approx.bij.rmap(np.diag(cov)**.5)
```

Here, 'means' and 'sds' are the means and standard deviations of the variational posterior. Note that these values are in the transformed space, not in the original space. For random variables on the real line, e.g., the means of the Gaussian components, no transformation is applied. We can then inspect the variational posterior in the original space.

```
from copy import deepcopy

mu_0, sd_0 = means['mu_0'], sds['mu_0']
mu_1, sd_1 = means['mu_1'], sds['mu_1']

def logp_normal_np(mu, tau, value):
    # log probability of individual samples
    k = tau.shape[0]
    delta = lambda mu: value - mu
    return (-1 / 2.) * (k * np.log(2 * np.pi) + np.log(1./np.linalg.det(tau)) +
                        (delta(mu).dot(tau) * delta(mu)).sum(axis=1))

def threshold(zz):
    zz_ = deepcopy(zz)
    zz_[zz < np.max(zz) * 1e-2] = None
    return zz_

def plot_logp_normal(ax, mu, sd, cmap):
    f = lambda value: np.exp(logp_normal_np(mu, np.diag(1 / sd**2), value))
    g = lambda mu, sd: np.arange(mu - 3, mu + 3, .1)
    xx, yy = np.meshgrid(g(mu[0], sd[0]), g(mu[1], sd[1]))
    zz = f(np.vstack((xx.reshape(-1), yy.reshape(-1))).T).reshape(xx.shape)
    ax.contourf(xx, yy, threshold(zz), cmap=cmap, alpha=0.9)

fig, ax = plt.subplots(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
plot_logp_normal(ax, mu_0, sd_0, cmap='Reds')
plot_logp_normal(ax, mu_1, sd_1, cmap='Blues')
plt.xlim(-6, 6)
plt.ylim(-6, 6)
```

TODO: We need to backward-transform 'pi', which is transformed by 'stick_breaking'.
The trace of the ELBO, stored in `approx.hist`, shows the stochastic convergence of the algorithm.

```
plt.plot(approx.hist)
```

To demonstrate that ADVI works for a large dataset with mini-batches, let's create 100,000 samples from the same mixture distribution.

```
n_samples = 100000

zs = np.array([rng.multinomial(1, ps) for _ in range(n_samples)]).T
xs = [z[:, np.newaxis] * rng.multivariate_normal(m, np.eye(2), size=n_samples)
      for z, m in zip(zs, ms)]
data = np.sum(np.dstack(xs), axis=2)

plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], c='g', alpha=0.5)
plt.scatter(ms[0, 0], ms[0, 1], c='r', s=100)
plt.scatter(ms[1, 0], ms[1, 1], c='b', s=100)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
```

MCMC took 55 seconds, 20 times longer than for the small dataset.

```
with pm.Model() as model:
    mus = [MvNormal('mu_%d' % i,
                    mu=pm.floatX(np.zeros(2)),
                    tau=pm.floatX(0.1 * np.eye(2)),
                    shape=(2,))
           for i in range(2)]
    pi = Dirichlet('pi', a=pm.floatX(0.1 * np.ones(2)), shape=(2,))
    xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)

    start = find_MAP()
    step = Metropolis()
    trace = sample(1000, step, start=start)
```

Posterior samples are concentrated on the true means, so they look like a single point for each component.

```
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
mu_0, mu_1 = trace['mu_0'], trace['mu_1']
plt.scatter(mu_0[-500:, 0], mu_0[-500:, 1], c="r", s=50)
plt.scatter(mu_1[-500:, 0], mu_1[-500:, 1], c="b", s=50)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
```

For ADVI with mini-batches, put a theano tensor on the observed variable of the ObservedRV. The tensor will be replaced with mini-batches during optimization. Because a mini-batch is smaller than the whole dataset, its log-likelihood term must be appropriately scaled up. To mark which log-likelihood terms to scale, we give the ObservedRV objects ('minibatch_RVs' below) where the mini-batch is substituted, and we keep the corresponding tensors ('minibatch_tensors').
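The scaling requirement described above can be checked numerically: multiplying a minibatch's log-likelihood sum by N/m gives an unbiased estimate of the full-data sum. A NumPy-only sketch with invented per-sample log-likelihood values (independent of the model above):

```python
import numpy as np

rng = np.random.RandomState(0)
N, m = 100000, 200

# Invented per-sample log-likelihood values standing in for logp(x_i)
loglikes = rng.normal(-2.0, 0.5, size=N)
full_sum = loglikes.sum()

# Scale each minibatch sum by N/m and average many such estimates
estimates = [(N / m) * rng.choice(loglikes, size=m, replace=False).sum()
             for _ in range(500)]
print(full_sum, np.mean(estimates))  # the two totals agree closely
```

This N/m factor is exactly what passing `total_size=len(data)` to the observed variable accomplishes in the cell below.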
```
minibatch_size = 200

# In-memory minibatches for better speed
data_t = pm.Minibatch(data, minibatch_size)

with pm.Model() as model:
    mus = [MvNormal('mu_%d' % i,
                    mu=pm.floatX(np.zeros(2)),
                    tau=pm.floatX(0.1 * np.eye(2)),
                    shape=(2,))
           for i in range(2)]
    pi = Dirichlet('pi', a=pm.floatX(0.1 * np.ones(2)), shape=(2,))
    xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)),
                     observed=data_t, total_size=len(data))
```

Run ADVI. It's much faster than MCMC, though the problem here is simple, so it's not a fair comparison.

```
# Wrap the calls in a function so they fit on a single line for %time
# (is there a smarter way?)
def f():
    approx = pm.fit(n=1500, obj_optimizer=pm.adagrad(learning_rate=1e-1), model=model)
    means = approx.bij.rmap(approx.mean.eval())
    sds = approx.bij.rmap(approx.std.eval())
    return means, sds, approx.hist

%time means, sds, elbos = f()
```

The result is almost the same.

```
from copy import deepcopy

mu_0, sd_0 = means['mu_0'], sds['mu_0']
mu_1, sd_1 = means['mu_1'], sds['mu_1']

fig, ax = plt.subplots(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
plt.scatter(mu_0[0], mu_0[1], c="r", s=50)
plt.scatter(mu_1[0], mu_1[1], c="b", s=50)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
```

The variance of the ELBO trace is larger than without mini-batches because of the subsampling from the whole dataset.

```
plt.plot(elbos);
```
## EE 502 P: Analytical Methods for Electrical Engineering

# Homework 2: Sets, functions, and relations

## Due October 17, 2021 by 11:59 PM

### <span style="color: red">Mayank Kumar</span>

Copyright &copy; 2021, University of Washington

<hr>

**Instructions**: Please use this notebook as a template. Answer all questions using well-formatted Markdown with embedded LaTeX equations, executable Jupyter cells, or both. Submit your homework solutions as an `.ipynb` file via Canvas. <span style="color: red"> Although you may discuss the homework with others, you must turn in your own, original work. </span>

**Things to remember:**

- Use complete sentences. Equations should appear in text as grammatical elements.
- Comment your code.
- Label your axes. Title your plots. Use legends where appropriate.
- Before submitting a notebook, choose Kernel -> Restart and Run All to make sure your notebook runs when the cells are evaluated in order.

Note: Late homework will be accepted up to one week after the due date and will be worth 50% of its full credit score.

### 0. Warmup (Do not turn in)

- Make sure you download and run the notebook for lecture 2. Work through the notebook, see what happens when you change the expressions, and make up some of your own.
- Read chapter one of [An Introduction to Real Analysis](https://www.math.ucdavis.edu/~hunter/intro_analysis_pdf/intro_analysis.html) by John K. Hunter. You can skim the sections on index sets and infinite unions and intersections.
- Read up on [sets](https://www.w3schools.com/python/python_sets.asp), [tuples](https://www.w3schools.com/python/python_tuples.asp), and [lambdas](https://www.w3schools.com/python/python_lambda.asp) at python.org.

### 1. Set properties

Given the following definitions of sets

$$A = \{1,2,3\}$$
$$B = \{3,4,5\}$$
$$C = \{3,4,5,6,7\}$$

determine which of the following properties hold:

$$0 \in A$$
$$4 \in B \cap C$$
$$5 \in C-B$$
$$A \subseteq B$$
$$A \subseteq C$$
$$A \cap B \subseteq A$$
$$B \subseteq C$$

If the property holds, say why. If it does not hold, give a counterexample showing the definition is not satisfied.

___

### Answer starts from here

### Question 1

We consider the following sets for the questions below:
$$A = \{1,2,3\}$$
$$B = \{3,4,5\}$$
$$C = \{3,4,5,6,7\}$$

**Property 1:** $0 \in A$

**Answer:** No\
**Explanation:** By the definition of $A$, its only elements are 1, 2, and 3, so 0 is not an element of $A$.\
**Counterexample:** If $A$ were $\{0,1,2,3\}$, the statement would hold.

___

**Property 2:** $4 \in B \cap C$

**Answer:** Yes\
**Explanation:** $B \cap C = \{3,4,5\}$, and 4 is an element of this set.

___

**Property 3:** $5 \in C-B$

**Answer:** No\
**Explanation:** $C-B = \{6,7\}$: the difference keeps the elements that are in $C$ but not in $B$, and 5 belongs to $B$, so it is removed.\
**Counterexample:** If 5 were removed from $B$, then $C-B$ would be $\{5,6,7\}$ and the statement would hold.

___

**Property 4:** $A \subseteq B$

**Answer:** No\
**Explanation:** For the subset relation to hold, every element of $A$ must be in $B$, but 3 is the only element of $A$ that is also in $B$ (e.g., $1 \in A$ while $1 \notin B$).\
**Counterexample:** If $A$ were $\{3,4\}$, the statement would hold.

___

**Property 5:** $A \subseteq C$

**Answer:** No\
**Explanation:** 3 is the only element of $A$ that is also in $C$; for the statement to be true, all elements of $A$ would have to be in $C$.\
**Counterexample:** If $A$ were $\{3,4,5\}$, the statement would hold.
___

**Property 6:** $A \cap B \subseteq A$

**Answer:** Yes\
**Explanation:** $A \cap B = \{3\}$, and $3 \in A$, so every element of $A \cap B$ is in $A$. (In fact, $A \cap B \subseteq A$ holds for any sets $A$ and $B$, since every element of the intersection belongs to $A$ by definition.)

___

**Property 7:** $B \subseteq C$

**Answer:** Yes\
**Explanation:** For $B \subseteq C$ to be true, every element of $B$ must be in $C$. The elements 3, 4, and 5 are all in $C$, so the statement holds.

### 2. Set operations

Let $A = \{ 1,2,3,4,5 \}$ and $B = \{ 0,3,6 \}$. Find

a) $A \cup B$

b) $A \cap B$

c) $A - B$

d) $B - A$

Verify your results using Python sets.

```
A = {1,2,3,4,5}
B = {0,3,6}

# a)
print("A union B (manual):", {0,1,2,3,4,5,6})
print("A union B (Python):", A.union(B))

# b)
print("A intersection B (manual):", {3})
print("A intersection B (Python):", A.intersection(B))

# c)
print("A minus B (manual):", {1,2,4,5})
print("A minus B (Python):", A.difference(B))

# d)
print("B minus A (manual):", {0,6})
print("B minus A (Python):", B.difference(A))
```

### 3. Set Proofs

Using the definitions of the set operations, the definition of the subset relation, and basic logic, show the following are true.

a) $A - B \subseteq A$

b) $A \cap (B-A) = \varnothing$

c) $A \cup (B-A) = A \cup B$

Show examples using small sets in Python illustrating each property.

___

### Solution to Question 3.
___
**a) $A - B \subseteq A$**

Say &emsp; $x \in A-B$ &emsp;&emsp;&emsp; (take any element of $A-B$)\
&emsp;$\Rightarrow$ &emsp; $x \in A$ and $x \notin B$ &emsp; (by the definition of set difference)\
&emsp;$\Rightarrow$ &emsp; $x \in A$ &emsp;&emsp;&emsp; (in particular, $x$ is an element of $A$)\
&emsp;$\therefore$ &emsp; $A-B \subseteq A$, since every element of $A-B$ is an element of $A$.

---

**b) $A \cap (B-A) = \varnothing$**

Say &emsp; $x \in B-A$ \
$\Rightarrow$ &emsp; $x \in B$ and $x \notin A$ &emsp; (by the definition of set difference)\
So no element of $B-A$ lies in $A$, i.e. $A$ and $B-A$ have no element in common.\
$\therefore$ &emsp; $A \cap (B-A) = \varnothing$

---

**c) $A \cup (B-A) = A \cup B$**

Say &emsp; $x \in A \cup (B-A)$. Then $x \in A$, or ($x \in B$ and $x \notin A$); in either case $x \in A \cup B$, so $A \cup (B-A) \subseteq A \cup B$.\
Conversely, say $x \in A \cup B$. If $x \in A$, then $x \in A \cup (B-A)$. Otherwise $x \in B$ and $x \notin A$, so $x \in B-A$, and again $x \in A \cup (B-A)$. Hence $A \cup B \subseteq A \cup (B-A)$.\
$\therefore$ &emsp; $A \cup (B-A) = A \cup B$
___
```
A = {1,2,3,4,5}
B = {0,3,6}

# Question 3 (a)
print("_______ Question 3(a) _______")
print("A minus B   = ", A.difference(B))
print("Set A       = ", A, "\n")

# Question 3 (b)
print("_______ Question 3(b) _______")
print("A n (B - A) = ", A.intersection(B.difference(A)))
print("Empty set   = ", set(), "\n")

# Question 3 (c)
print("_______ Question 3(c) _______")
print("A u (B - A) = ", A.union(B.difference(A)))
print("A u B       = ", A.union(B))
```

### 4. Cartesian Products

a) If $A_i$ has $n_i$ elements, how many elements does $A_1 \times A_2 \times \dots A_n$ have? Why?

b) Give an example where the number of elements in $A_1 \times A_2 \times \dots A_n$ is 1.

c) Give examples of nonempty sets $A$ and $B$ where $A \times B = B \times A$.
___
#### Answer to Question 4 goes from here

**a)** The cardinality of such a product is $n_1 \times n_2 \times n_3 \times \dots \times n_n$.

To start with, take the cartesian product of two sets $A_1$ and $A_2$ with $n_1$ and $n_2$ elements respectively. Each element of $A_1$ pairs with each element of $A_2$, so $$|A_1 \times A_2| = n_1 \times n_2$$ Similarly, for the cartesian product of 3 sets, $$|A_1 \times A_2 \times A_3| = n_1 \times n_2 \times n_3$$ Extending this to $n$ sets, $$|A_1 \times A_2 \times A_3 \times ... A_n| = n_1 \times n_2 \times n_3 \times ... n_n$$

**b)** The cardinality of the cartesian product of $n$ sets is 1 if and only if every set has exactly one element, i.e. $$|A_1 \times A_2 \times A_3 \times ... A_n| = 1 \iff |A_1| = |A_2| = |A_3| = ... = |A_n| = 1$$ Example: $$A_1 = \{1\},\; A_2 = \{2\},\; A_3 = \{3\},\; ....\; A_n = \{n\}$$

**c)** In general, $A \times B = B \times A$ holds exactly when $A = B$, or $A = \varnothing$, or $B = \varnothing$. For nonempty sets we therefore need $A = B$; for example: $$A = \{1,2,3\},\; B = \{1,2,3\}$$
___
### 5. Function Properties

Plot each of the following functions from $\mathbb{Z}$ to $\mathbb{Z}$ over a representative subdomain (use points and not a line since the domain is $\mathbb{Z}$). What is the range of each function? State which of them are injective, which are surjective, and which are bijective. For functions for which a given property holds, explain why. For functions for which the given property does not hold, give a counterexample.

a) $f(n) = n-1$

b) $f(n) = n^3$

c) $f(n) = n^2 + 1$
___
### Answer to Question 5

a) $f(n) = n-1$ \
Range: $\mathbb{Z}$ \
Injective: Yes — $f(m) = f(n)$ implies $m-1 = n-1$, so $m = n$. \
Surjective: Yes — every $m \in \mathbb{Z}$ is $f(m+1)$. \
Bijective: Yes

b) $f(n) = n^3$ \
Range: the set of perfect cubes $\{n^3 : n \in \mathbb{Z}\}$ \
Injective: Yes — cubing is strictly increasing on $\mathbb{Z}$. \
Surjective: No — counterexample: 2 is in the codomain $\mathbb{Z}$ but is not the cube of any integer. \
Bijective: No

c) $f(n) = n^2 + 1$ \
Range: $\{n^2 + 1 : n \in \mathbb{Z}\} = \{1, 2, 5, 10, 17, \dots\}$ \
Injective: No — counterexample: $f(1) = f(-1) = 2$. \
Surjective: No — counterexample: 3 has no preimage, since $n^2 = 2$ has no integer solution. \
Bijective: No

Restricting the codomain to the range would make the function surjective.
Restricting the domain to the nonnegative integers as well (with the codomain restricted to the range) would make the function bijective.
___
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# The domain is Z, so plot isolated points over a representative subdomain
n = np.arange(-10, 11)

fig, ax = plt.subplots(1, 3, figsize=(15, 5))

# Defining first function from the question
def F(x):
    return x - 1

ax[0].plot(n, F(n), 'o', color='red')     # plot n - 1 as points
ax[0].set_title('$f(n) = n-1$')

# Defining second function from the question
def F1(x):
    return x ** 3

ax[1].plot(n, F1(n), 'o', color='green')  # plot n^3 as points
ax[1].set_title('$f(n) = n^3$')

# Defining third function from the question
def F2(x):
    return x ** 2 + 1

ax[2].plot(n, F2(n), 'o', color='blue')   # plot n^2 + 1 as points
ax[2].set_title('$f(n) = n^2 + 1$')
```

### 6. Composition

a) Suppose that $f(x) = x^2 + 1$ and $g(x) = e^x$ are functions from $\mathbb{R}$ into $\mathbb{R}$. Find $f \circ g$ and $g \circ f$ and plot them in Python (on the same axis to compare).

b) Suppose $f(x) = a x + b$ and $g(x) = c x + d$ are functions from $\mathbb{R}$ into $\mathbb{R}$. Find constants $a$, $b$, $c$ and $d$ such that $f \circ g = g \circ f$.
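Before solving part (a) symbolically, it may help to evaluate the two compositions at a point or two to confirm that composition is not commutative here. This is a quick sanity check added for illustration, not part of the required solution:

```python
import math

def f(x):
    return x ** 2 + 1

def g(x):
    return math.exp(x)

# f(g(x)) = (e^x)^2 + 1, while g(f(x)) = e^(x^2 + 1): different functions.
for x in (0.0, 1.0):
    print(x, f(g(x)), g(f(x)))

assert f(g(0)) == 2.0            # (e^0)^2 + 1 = 2
assert g(f(0)) == math.exp(1.0)  # e^(0 + 1) = e
```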
```
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

################################## Question 6 (a) ####################################

def G(x):  # declaration for function G
    return math.exp(x)

def F(x):  # declaration for function F
    return x**2 + 1

def compose(F, G):  # composing the functions to generate a composite function
    return lambda x: F(G(x))

domain = np.linspace(-1.5, 1.5, 1000)  # defining a domain for plotting the functions

FOG = compose(F, G)
GOF = compose(G, F)

# plotting the functions FOG and GOF
fig, ax = plt.subplots(1, 1, figsize=(14, 7))
ax.set_title("FOG and GOF")
ax.plot(domain, [FOG(x) for x in domain], label="FOG", color="red")
ax.plot(domain, [GOF(x) for x in domain], label="GOF", color="blue")
fig.legend()
```

### Question 6 (b)

$F = ax + b$\
$G = cx + d$

$FOG = a(cx + d) + b = acx + ad + b$\
$GOF = c(ax + b) + d = acx + bc + d$

If $FOG = GOF$:\
$\Rightarrow acx + ad + b = acx + bc + d$\
$\Rightarrow ad + b = bc + d$\
$\Rightarrow b(1-c) = d(1-a)$

Any constants satisfying $b(1-c) = d(1-a)$ work. One simple choice is $a = c$ and $b = d$ (then both sides equal $b(1-a)$); for example $a = 2,\; b = 3,\; c = 2,\; d = 3$:

$F = 2x + 3$ \
$G = 2x + 3$

### 7. Relations

Define the relation $R$ on $\mathbb{N} \times \mathbb{N}$ saying that $x \; R\; y$ if and only if the binary representations of $x$ and $y$ have the same number of ones. For example, $15 \; R \; 23$ since $15$ in binary is $1111$ and $23$ in binary is $10111$, which both have four ones. Show that $R$ is an equivalence relation.

```
def Dec2Bin(x):
    """
    Convert a decimal integer to binary.
    The output is a list containing the binary representation;
    index 0 holds the most significant digit of the binary representation.
""" if x == 0: return [0] #if number is 0, return 0. y = [] #declare a empty list, this will be used to append the respective values. while x: y.append(x%2) x >>= 1 return y[::-1] #reversing the list to maintain the most significant bit at index "0" def countOne(L): """ counts number of 1 in a list. this function assumes that entry is list of 0 and 1. However in any condition if list element is 1, it will count and return. """ N = 0 for i in range(len(L)): if L[i] == 1: N += 1 return N n=100 #this will limit the range of x and y to 1 to 99. to check for relations #beyond that please update the value of n R = {(x,y) for x in range(1,n) for y in range (1,n) if countOne(Dec2Bin(x)) == countOne(Dec2Bin(y)) } #checking for the values of x and y, x =15, y = 23 print ("are (15,23) related? : ", (15,23) in R) ##################################################################################### # a relation is equivalence if it is symmetric, transitive and reflexive # defining a function to check a pair of numbers are transitive. def transitive(R): for a,b in R: for c,d in R: if b==c and (a,d) not in R: return False return True #defining a function to check symmetricity of a relation def symmetric(R): for a,b in R: for c,d in R: if b==c and a==d: return True return False def reflexive(R): for a,b in R: if a==b and (a,b) not in R: return False return True T = transitive(R) S = symmetric(R) Ref = reflexive(R) if (T ==True and S == True and Ref == True): print("\nReflexive: ", Ref) print("Transitive: ", T) print("Symmetric: ", S) print("\nGiven relation R is Equivalence") ``` ### 8. Sets, Functions, and Relations in Python Express each of the following objects in Python. a) The set of $P$ prime numbers less than 100. b) The function $f: \{1,2,3,4,5,6\} \rightarrow \{0,1\}$ in which $f(x) = 0$ if $x$ is even and $f(x)=1$ if $x$ is odd, expressed as a dictionary. 
```
# declare a function to check whether a given number is prime
def isPrime(n, i=2):  # n = number to test, i = current trial divisor (for the recursion)
    if n == 0 or n == 1:
        return False          # 0 and 1 are not prime
    if n == i:
        return True           # no divisor found below n, so n is prime
    if n % i == 0:
        return False          # i divides n, so n is composite
    return isPrime(n, i + 1)  # try the next divisor

n = 100  # range(2, n) excludes 100, so only numbers less than 100 are considered
P = {x for x in range(2, n) if isPrime(x)}
print("Prime numbers less than %d are: " % n, P)

############################# Part 2 #############################

# Declare a general function generating the dictionary for f
def funcMap(n):
    """
    Returns a dictionary with keys 1 to n.
    If the key is even, the value is 0; if the key is odd, the value is 1.
    """
    F = {}  # declare an empty dictionary
    for i in range(1, n + 1):
        F[i] = i % 2
    return F

# function call for keys 1 to 6
FofX = funcMap(6)
print("\nHere goes the function: ", FofX)
FofX[5]
```
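For comparison, both parts of Question 8 can also be written as compact comprehensions; this is an optional, dependency-free sketch, not part of the assignment's required form:

```python
# Part (a): primes below 100 via trial division in a set comprehension.
primes = {n for n in range(2, 100)
          if all(n % d for d in range(2, int(n ** 0.5) + 1))}

# Part (b): f: {1,...,6} -> {0,1}, value 0 for even inputs and 1 for odd ones.
f = {x: x % 2 for x in range(1, 7)}

print(sorted(primes))
print(f)
```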
```
# imports
import argparse
import os
import json
from collections import OrderedDict

import torch
import torch.nn.functional as F
from torchvision import datasets, transforms, models
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np

# define user options here
# initialize parser
parser = argparse.ArgumentParser()

option_dict = {
    'data_dir': 'Data directory (preordered as in Udacity\'s source)',
    '--save_dir': 'Location for model checkpoints',
    '--learning_rate': 'def. 0.003',
    '--hidden_units': 'def. 512',
    '--epochs': 'def. 20',
    '--gpu': 'enable GPU',
    '--mapping_file': '.json mapping ID:Name',  # for prediction
    '--arch': 'pretrained architecture, def. \'vgg13\''
}

for name, helptext in option_dict.items():
    if name == '--gpu':
        parser.add_argument(name, help=helptext, action='store_true')
    else:
        parser.add_argument(name, help=helptext)
args = parser.parse_args()

# configs
# pretrained models allowed
# TODO: set pretrained=True for actual training
arch_dict = {
    'resnet18': models.resnet18(pretrained=False),
    'alexnet': models.alexnet(pretrained=False),
    'vgg16': models.vgg16(pretrained=False),
    'squeezenet': models.squeezenet1_0(pretrained=False),
    'densenet': models.densenet161(pretrained=False),
    'inception': models.inception_v3(pretrained=False)
}

# device selection
device = 'cpu'
if args.gpu:
    if torch.cuda.is_available():
        device = 'cuda:0'
        print('Cuda found and enabled.')
    else:
        print('Cuda not found - defaulting to CPU.')
print('Device: ' + device)

# TODO: derive these from args instead of hard-coding
data_dir = 'flowers'               # --data_dir
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'
test_dir = data_dir + '/test'

hidden_units = 512                 # --hidden_units
n_epochs = 20                      # --epochs
learning_rate = 0.003              # --learning_rate
arch = 'vgg13'                     # --arch
mapping_file = 'cat_to_name.json'  # --mapping_file, for prediction

# define other preprocessing methods
data_transforms = {
    'train': transforms.Compose([transforms.RandomRotation(30),
                                 transforms.RandomResizedCrop(224),
                                 transforms.RandomHorizontalFlip(),
                                 transforms.ToTensor(),
                                 transforms.Normalize([0.485, 0.456, 0.406],
                                                      [0.229, 0.224, 0.225])]),
    'test': transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor(),
                                transforms.Normalize([0.485, 0.456, 0.406],
                                                     [0.229, 0.224, 0.225])]),
    'valid': transforms.Compose([transforms.Resize(255),
                                 transforms.CenterCrop(224),
                                 transforms.ToTensor(),
                                 transforms.Normalize([0.485, 0.456, 0.406],
                                                      [0.229, 0.224, 0.225])])
}

# model
# define default initialized model.
def construct_model(arch='vgg13'):  # TODO: add other hyperparams if needed
    architecture = arch_dict[arch]
    print('Loading ' + arch + '...')
    return architecture

def train_model(model, datadir, learning_rate, batch_size=32):
    # Load the datasets with ImageFolder
    image_datasets = {'train': datasets.ImageFolder(train_dir, data_transforms['train']),
                      'test': datasets.ImageFolder(test_dir, data_transforms['test']),
                      'valid': datasets.ImageFolder(valid_dir, data_transforms['valid'])}

    dataloaders = {'train': torch.utils.data.DataLoader(image_datasets['train'],
                                                        batch_size=batch_size, shuffle=True),
                   'test': torch.utils.data.DataLoader(image_datasets['test'],
                                                       batch_size=batch_size, shuffle=True),
                   'valid': torch.utils.data.DataLoader(image_datasets['valid'],
                                                        batch_size=batch_size, shuffle=True)}
    return dataloaders
```
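Since the cells above rebuild the parser several times and mix positional with optional arguments, a consolidated sketch of the same CLI may be clearer. The option names and defaults follow the notebook's comments; the explicit argv list is only there to make the cell runnable outside a real terminal, and is an assumption, not part of the original:

```python
import argparse

def build_parser():
    # One parser, built once, mirroring the options scattered through the cells above.
    p = argparse.ArgumentParser(description="Train an image classifier")
    p.add_argument("data_dir", help="Data directory (train/valid/test subfolders)")
    p.add_argument("--save_dir", default=".", help="Location for model checkpoints")
    p.add_argument("--learning_rate", type=float, default=0.003)
    p.add_argument("--hidden_units", type=int, default=512)
    p.add_argument("--epochs", type=int, default=20)
    p.add_argument("--arch", default="vgg13", help="pretrained architecture")
    p.add_argument("--gpu", action="store_true", help="enable GPU if available")
    return p

# Parsing an explicit argv keeps the sketch runnable in a notebook.
args = build_parser().parse_args(["flowers", "--epochs", "5", "--gpu"])
print(args.data_dir, args.epochs, args.gpu)
```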
```
import os
import sys
import random
import math
import numpy as np
import cv2
import matplotlib.pyplot as plt
import json
import pydicom
from imgaug import augmenters as iaa
from tqdm import tqdm
import pandas as pd
import glob
```

### First: Install the Kaggle API to download the competition data.

```
DATA_DIR = '/kaggle/input'

# Directory to save logs and trained model
ROOT_DIR = '/kaggle/working'
```

### Install Matterport's Mask-RCNN model from GitHub.

See [Matterport's implementation of Mask-RCNN](https://github.com/matterport/Mask_RCNN).

```
!git clone https://www.github.com/matterport/Mask_RCNN.git
os.chdir('Mask_RCNN')
#!python setup.py -q install

# Import Mask RCNN
sys.path.append(os.path.join(ROOT_DIR, 'Mask_RCNN'))  # To find local version of the library
from mrcnn.config import Config
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
from mrcnn.model import log

train_dicom_dir = os.path.join(DATA_DIR, 'stage_2_train_images')
test_dicom_dir = os.path.join(DATA_DIR, 'stage_2_test_images')
```

### Some setup functions and classes for Mask-RCNN

- dicom_fps is a list of the dicom image paths and filenames
- image_annotations is a dictionary of the annotations keyed by the filenames
- parsing the dataset returns a list of the image filenames and the annotations dictionary

```
def get_dicom_fps(dicom_dir):
    dicom_fps = glob.glob(dicom_dir + '/' + '*.dcm')
    return list(set(dicom_fps))

def parse_dataset(dicom_dir, anns):
    image_fps = get_dicom_fps(dicom_dir)
    image_annotations = {fp: [] for fp in image_fps}
    for index, row in anns.iterrows():
        fp = os.path.join(dicom_dir, row['patientId'] + '.dcm')
        image_annotations[fp].append(row)
    return image_fps, image_annotations

# The following parameters have been selected to reduce running time for demonstration purposes
# These are not optimal
class DetectorConfig(Config):
    """Configuration for training pneumonia detection on the RSNA pneumonia dataset.
    Overrides values in the base Config class.
""" # Give the configuration a recognizable name NAME = 'pneumonia' # Train on 1 GPU and 8 images per GPU. We can put multiple images on each # GPU because the images are small. Batch size is 8 (GPUs * images/GPU). GPU_COUNT = 1 IMAGES_PER_GPU = 8 BACKBONE = 'resnet50' NUM_CLASSES = 2 # background + 1 pneumonia classes IMAGE_MIN_DIM = 256 IMAGE_MAX_DIM = 256 RPN_ANCHOR_SCALES = (32, 64, 128, 256) TRAIN_ROIS_PER_IMAGE = 32 MAX_GT_INSTANCES = 3 DETECTION_MAX_INSTANCES = 3 DETECTION_MIN_CONFIDENCE = 0.9 DETECTION_NMS_THRESHOLD = 0.1 STEPS_PER_EPOCH = 100 config = DetectorConfig() config.display() class DetectorDataset(utils.Dataset): """Dataset class for training pneumonia detection on the RSNA pneumonia dataset. """ def __init__(self, image_fps, image_annotations, orig_height, orig_width): super().__init__(self) # Add classes self.add_class('pneumonia', 1, 'Lung Opacity') # add images for i, fp in enumerate(image_fps): annotations = image_annotations[fp] self.add_image('pneumonia', image_id=i, path=fp, annotations=annotations, orig_height=orig_height, orig_width=orig_width) def image_reference(self, image_id): info = self.image_info[image_id] return info['path'] def load_image(self, image_id): info = self.image_info[image_id] fp = info['path'] ds = pydicom.read_file(fp) image = ds.pixel_array # If grayscale. Convert to RGB for consistency. 
        if len(image.shape) != 3 or image.shape[2] != 3:
            image = np.stack((image,) * 3, -1)
        return image

    def load_mask(self, image_id):
        info = self.image_info[image_id]
        annotations = info['annotations']
        count = len(annotations)
        if count == 0:
            mask = np.zeros((info['orig_height'], info['orig_width'], 1), dtype=np.uint8)
            class_ids = np.zeros((1,), dtype=np.int32)
        else:
            mask = np.zeros((info['orig_height'], info['orig_width'], count), dtype=np.uint8)
            class_ids = np.zeros((count,), dtype=np.int32)
            for i, a in enumerate(annotations):
                if a['Target'] == 1:
                    x = int(a['x'])
                    y = int(a['y'])
                    w = int(a['width'])
                    h = int(a['height'])
                    mask_instance = mask[:, :, i].copy()
                    cv2.rectangle(mask_instance, (x, y), (x+w, y+h), 255, -1)
                    mask[:, :, i] = mask_instance
                    class_ids[i] = 1
        return mask.astype(bool), class_ids.astype(np.int32)
```

### Examine the annotation data, parse the dataset, and view dicom fields

```
# training dataset
anns = pd.read_csv(os.path.join(DATA_DIR, 'stage_2_train_labels.csv'))
anns.head()

image_fps, image_annotations = parse_dataset(train_dicom_dir, anns=anns)

ds = pydicom.read_file(image_fps[0])  # read dicom image from filepath
image = ds.pixel_array  # get image array

# show dicom fields
ds

# Original DICOM image size: 1024 x 1024
ORIG_SIZE = 1024

######################################################################
# Modify this line to use more or fewer images for training/validation.
# To use all images, do: image_fps_list = list(image_fps)
image_fps_list = list(image_fps[:1000])
#####################################################################

# split dataset into training vs. validation dataset
# split ratio is set to 0.9 vs. 0.1 (train vs. validation, respectively)
image_fps_list = sorted(image_fps_list)  # sort first so the seeded shuffle is deterministic
random.seed(42)
random.shuffle(image_fps_list)

validation_split = 0.1
split_index = int((1 - validation_split) * len(image_fps_list))

image_fps_train = image_fps_list[:split_index]
image_fps_val = image_fps_list[split_index:]

print(len(image_fps_train), len(image_fps_val))
```

### Create and prepare the training dataset using the DetectorDataset class.

```
# prepare the training dataset
dataset_train = DetectorDataset(image_fps_train, image_annotations, ORIG_SIZE, ORIG_SIZE)
dataset_train.prepare()
```

### Let's look at a sample annotation. We see a bounding box with (x, y) of the top left corner as well as the width and height.

```
# Show annotation(s) for a DICOM image
test_fp = random.choice(image_fps_train)
image_annotations[test_fp]

# prepare the validation dataset
dataset_val = DetectorDataset(image_fps_val, image_annotations, ORIG_SIZE, ORIG_SIZE)
dataset_val.prepare()
```

### Display a random image with bounding boxes

```
# Load and display random samples and their bounding boxes
# Suggestion: Run this a few times to see different examples.
image_id = random.choice(dataset_train.image_ids)
image_fp = dataset_train.image_reference(image_id)
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
print(image.shape)

plt.figure(figsize=(10, 10))
plt.subplot(1, 2, 1)
plt.imshow(image[:, :, 0], cmap='gray')
plt.axis('off')

plt.subplot(1, 2, 2)
masked = np.zeros(image.shape[:2])
for i in range(mask.shape[2]):
    masked += image[:, :, 0] * mask[:, :, i]
plt.imshow(masked, cmap='gray')
plt.axis('off')

print(image_fp)
print(class_ids)

model = modellib.MaskRCNN(mode='training', config=config, model_dir=ROOT_DIR)
```

### Image Augmentation

```
# Image augmentation
augmentation = iaa.SomeOf((0, 1), [
    iaa.Fliplr(0.5),
    iaa.Affine(
        scale={"x": (0.8, 1.2), "y": (0.8, 1.2)},
        translate_percent={"x": (-0.2, 0.2), "y": (-0.2, 0.2)},
        rotate=(-25, 25),
        shear=(-8, 8)
    ),
    iaa.Multiply((0.9, 1.1))
])

NUM_EPOCHS = 3

# Train Mask-RCNN Model
import warnings
warnings.filterwarnings("ignore")
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=NUM_EPOCHS,
            layers='all',
            augmentation=augmentation)

# select trained model
dir_names = next(os.walk(model.model_dir))[1]
key = config.NAME.lower()
dir_names = filter(lambda f: f.startswith(key), dir_names)
dir_names = sorted(dir_names)

if not dir_names:
    import errno
    raise FileNotFoundError(
        errno.ENOENT,
        "Could not find model directory under {}".format(model.model_dir))

fps = []
# Pick last directory
for d in dir_names:
    dir_name = os.path.join(model.model_dir, d)
    # Find the last checkpoint
    checkpoints = next(os.walk(dir_name))[2]
    checkpoints = filter(lambda f: f.startswith("mask_rcnn"), checkpoints)
    checkpoints = sorted(checkpoints)
    if not checkpoints:
        print('No weight files in {}'.format(dir_name))
    else:
        checkpoint = os.path.join(dir_name, checkpoints[-1])
        fps.append(checkpoint)

model_path = sorted(fps)[-1]
print('Found model {}'.format(model_path))

class InferenceConfig(DetectorConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
inference_config = InferenceConfig()

# Recreate the model in inference mode
model = modellib.MaskRCNN(mode='inference',
                          config=inference_config,
                          model_dir=ROOT_DIR)

# Load trained weights (fill in path to trained weights here)
assert model_path != "", "Provide path to trained weights"
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)

# set color for class
def get_colors_for_class_ids(class_ids):
    colors = []
    for class_id in class_ids:
        if class_id == 1:
            colors.append((.941, .204, .204))
    return colors
```

### How does the predicted box compare to the expected value? Let's use the validation dataset to check.

```
# Show a few examples of ground truth vs. predictions on the validation dataset
dataset = dataset_val
fig = plt.figure(figsize=(10, 30))

for i in range(4):
    image_id = random.choice(dataset.image_ids)

    original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
        modellib.load_image_gt(dataset_val, inference_config,
                               image_id, use_mini_mask=False)

    print(original_image.shape)
    plt.subplot(6, 2, 2*i + 1)
    visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id,
                                dataset.class_names,
                                colors=get_colors_for_class_ids(gt_class_id), ax=fig.axes[-1])

    plt.subplot(6, 2, 2*i + 2)
    results = model.detect([original_image])  #, verbose=1)
    r = results[0]
    visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'],
                                dataset.class_names, r['scores'],
                                colors=get_colors_for_class_ids(r['class_ids']), ax=fig.axes[-1])

# Get filenames of test dataset DICOM images
test_image_fps = get_dicom_fps(test_dicom_dir)

# Make predictions on test images, write out sample submission
def predict(image_fps, filepath='submission.csv', min_conf=0.95):
    # assume square image
    resize_factor = ORIG_SIZE / config.IMAGE_SHAPE[0]
    #resize_factor = ORIG_SIZE
    with open(filepath, 'w') as file:
        for image_id in tqdm(image_fps):
            ds = pydicom.read_file(image_id)
            image = ds.pixel_array
            # If grayscale. Convert to RGB for consistency.
            if len(image.shape) != 3 or image.shape[2] != 3:
                image = np.stack((image,) * 3, -1)
            image, window, scale, padding, crop = utils.resize_image(
                image,
                min_dim=config.IMAGE_MIN_DIM,
                min_scale=config.IMAGE_MIN_SCALE,
                max_dim=config.IMAGE_MAX_DIM,
                mode=config.IMAGE_RESIZE_MODE)

            patient_id = os.path.splitext(os.path.basename(image_id))[0]

            results = model.detect([image])
            r = results[0]

            out_str = ""
            out_str += patient_id
            out_str += ","
            assert(len(r['rois']) == len(r['class_ids']) == len(r['scores']))
            if len(r['rois']) == 0:
                pass
            else:
                num_instances = len(r['rois'])
                for i in range(num_instances):
                    if r['scores'][i] > min_conf:
                        out_str += ' '
                        out_str += str(round(r['scores'][i], 2))
                        out_str += ' '

                        # x1, y1, width, height
                        x1 = r['rois'][i][1]
                        y1 = r['rois'][i][0]
                        width = r['rois'][i][3] - x1
                        height = r['rois'][i][2] - y1
                        bboxes_str = "{} {} {} {}".format(x1*resize_factor, y1*resize_factor,
                                                          width*resize_factor, height*resize_factor)
                        # bboxes_str = "{} {} {} {}".format(x1, y1, width, height)
                        out_str += bboxes_str

            file.write(out_str + "\n")

# show a few test image detection examples
# (renamed from `visualize` so it does not shadow the imported mrcnn visualize module)
def show_test_prediction():
    image_id = random.choice(test_image_fps)
    ds = pydicom.read_file(image_id)

    # original image
    image = ds.pixel_array

    # assume square image
    resize_factor = ORIG_SIZE / config.IMAGE_SHAPE[0]

    # If grayscale. Convert to RGB for consistency.
    if len(image.shape) != 3 or image.shape[2] != 3:
        image = np.stack((image,) * 3, -1)

    resized_image, window, scale, padding, crop = utils.resize_image(
        image,
        min_dim=config.IMAGE_MIN_DIM,
        min_scale=config.IMAGE_MIN_SCALE,
        max_dim=config.IMAGE_MAX_DIM,
        mode=config.IMAGE_RESIZE_MODE)

    patient_id = os.path.splitext(os.path.basename(image_id))[0]
    print(patient_id)

    results = model.detect([resized_image])
    r = results[0]
    for bbox in r['rois']:
        print(bbox)
        x1 = int(bbox[1] * resize_factor)
        y1 = int(bbox[0] * resize_factor)
        x2 = int(bbox[3] * resize_factor)
        y2 = int(bbox[2] * resize_factor)
        cv2.rectangle(image, (x1, y1), (x2, y2), (77, 255, 9), 3, 1)
        width = x2 - x1
        height = y2 - y1
        print("x {} y {} w {} h {}".format(x1, y1, width, height))
    plt.figure()
    plt.imshow(image, cmap=plt.cm.gist_gray)

show_test_prediction()

# remove files to allow committing (hit the file count limit otherwise)
!rm -rf /kaggle/working/Mask_RCNN
```
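The ground-truth vs. prediction comparison above is visual only; a small intersection-over-union helper quantifies the overlap. The box order follows Mask-RCNN's `rois` convention (y1, x1, y2, x2), and the example boxes below are made up for illustration:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (y1, x1, y2, x2) boxes."""
    y1 = max(box_a[0], box_b[0])
    x1 = max(box_a[1], box_b[1])
    y2 = min(box_a[2], box_b[2])
    x2 = min(box_a[3], box_b[3])
    inter = max(0, y2 - y1) * max(0, x2 - x1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partial overlap
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 1.0
```

A per-image score could then be the IoU between each predicted `r['rois'][i]` and its best-matching ground-truth box.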
# EXAMPLE: Personal Workout Tracking Data

This Notebook provides an example of how to import data downloaded from a specific service, Apple Health.

NOTE: This is still a work-in-progress.

# Dependencies and Libraries

```
from datetime import date, datetime as dt, timedelta as td
import pytz
import numpy as np
import pandas as pd

# functions to convert UTC to Eastern time zone and extract date/time elements
convert_tz = lambda x: x.to_pydatetime().replace(tzinfo=pytz.utc).astimezone(pytz.timezone('US/Eastern'))
get_year = lambda x: convert_tz(x).year
get_month = lambda x: '{}-{:02}'.format(convert_tz(x).year, convert_tz(x).month) #inefficient
get_date = lambda x: '{}-{:02}-{:02}'.format(convert_tz(x).year, convert_tz(x).month, convert_tz(x).day) #inefficient
get_day = lambda x: convert_tz(x).day
get_hour = lambda x: convert_tz(x).hour
get_day_of_week = lambda x: convert_tz(x).weekday()
```

# Import Data

# Workouts

```
# apple health
workouts = pd.read_csv("C:/Users/brand/Desktop/Healthcare Info Systems/90day_workouts.csv")
workouts.head()
```

# Drop unwanted metrics

```
new_workouts = workouts.drop(['Average Pace','Average Speed','Average Cadence','Elevation Ascended','Elevation Descended','Weather Temperature','Weather Humidity'], axis=1)
new_workouts.head()

age = input("Enter your age: ")

print(new_workouts.dtypes)
#new_workouts["Duration"] = pd.to_numeric(new_workouts.Duration, errors='coerce')
display(new_workouts.describe())
```

# Create Avg HR Intensity

```
# input() returns a string, so convert age to int before dividing
new_workouts['Avg Heart Rate Intensity'] = new_workouts['Average Heart Rate'] / int(age)
new_workouts.tail()
```

# Exercise Guidelines

```
# Minutes of Weekly Exercise
def getExer():
    global ex_time
    ex_time = input("Enter weekly exercise time in minutes: ")
    print("For more educational information on recommended weekly exercise for adults, visit",
          "\nhttps://health.gov/paguidelines/second-edition/pdf/Physical_Activity_Guidelines_2nd_edition.pdf#page=55")
    print()
    if int(ex_time) <= 149:
        print("Your weekly exercise time of", ex_time, "minutes is less than recommended. Consider increasing it to at least 150 minutes per week to improve your health.")
    elif int(ex_time) >= 150 and int(ex_time) <= 300:
        print("Your weekly exercise time of", ex_time, "minutes is within the recommended amount. Achieving 150-300 minutes per week will continue to improve your health.")
    elif int(ex_time) >= 301:
        print("Your weekly exercise time of", ex_time, "minutes exceeds the recommended amount. Your weekly total should benefit your health.")
    else:
        print("Invalid entry for minutes of weekly exercise")

getExer()
```
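Dividing average heart rate by age, as in the intensity cell above, is an unusual metric; a common rule of thumb instead estimates maximum heart rate as 220 minus age and expresses intensity as a fraction of that. The column names follow the notebook, but the heuristic itself is an assumption for illustration, not medical guidance:

```python
import pandas as pd

def hr_intensity(df, age):
    # Estimated max HR by the common 220 - age rule of thumb (an approximation).
    max_hr = 220 - int(age)
    out = df.copy()
    out["Avg Heart Rate Intensity"] = out["Average Heart Rate"] / max_hr
    return out

# Tiny made-up frame standing in for the Apple Health export.
demo = pd.DataFrame({"Average Heart Rate": [120, 150, 95]})
print(hr_intensity(demo, 30)["Avg Heart Rate Intensity"].round(2).tolist())
```

On the real data this would be `new_workouts = hr_intensity(new_workouts, age)`.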
<a href="https://colab.research.google.com/github/wisrovi/pyimagesearch-buy/blob/main/skin_detection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ![logo_jupyter.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAZAAAABcCAYAAABA4uO3AAAAAXNSR0IArs4c6QAAAIRlWElmTU0AKgAAAAgABQESAAMAAAABAAEAAAEaAAUAAAABAAAASgEbAAUAAAABAAAAUgEoAAMAAAABAAIAAIdpAAQAAAABAAAAWgAAAAAAAABIAAAAAQAAAEgAAAABAAOgAQADAAAAAQABAACgAgAEAAAAAQAAAZCgAwAEAAAAAQAAAFwAAAAAD7LUsAAAAAlwSFlzAAALEwAACxMBAJqcGAAAAVlpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IlhNUCBDb3JlIDUuNC4wIj4KICAgPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4KICAgICAgPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIKICAgICAgICAgICAgeG1sbnM6dGlmZj0iaHR0cDovL25zLmFkb2JlLmNvbS90aWZmLzEuMC8iPgogICAgICAgICA8dGlmZjpPcmllbnRhdGlvbj4xPC90aWZmOk9yaWVudGF0aW9uPgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KTMInWQAAQABJREFUeAHsnQeAXVWZ+M+5r01LmSRTEyCRgJgoFjqiSRRUsOMSXCkpgLGgruta/uiaia6srq676qLSUlBxJSKKHRRCs0EAS6JCFhIymZoymf7evHvv//d99943b2beTGYmIQR4J5lX7jv1O9/52vnOd4wppiIEihAoQqAIgSIEihAoQqAIgSIEihAoQuBwQcAeroYOuh3ft2aNsWYBf1s30e/Fxiyi1ns2GbNgsW+2Gt+s5s9a/6DbKlZQhEARAkUIFCFwQAgcmQxEmMVG45h9/EmqNJ5Zat0DjkYyNPiOqdscM+YkY5o3wVQWu0WmMi7IFTMVIVCEQBECE4LAkcFAGhpgFIscs2gxGgXMosF6hUZRf21TmZew5U7cL816sQR8xokns5mB7kzGxMu7d6+c1V2QWQhD2mRgKqQx6i/UZvFZEQJFCBQhUIRAYQg8swzklltiC83C2JalCzP53au6pa0i1uO+yLfeibCTE411jjW+NwfmMMMYf4oxNsU7DMFaDFpZDFdZynfzt5/nTcY4243v/8U4zp/8pP1L27tqWvPrN9c+lDDHn+SbJVbKFVMRAkUIFCFQhMAkIPDMMBAYh9n3AsesOnkg6nPt2uYFvvHPNpY/Y082sUSdU1KuPMJ3sV550HoXXuGjnPhsc8ifJB0BfMSBnzgoMrE4n+NBuYG08dN9nWT+M/nudnxzR/Pcut/mGIf0o+oCa5YYGijunQQALb4WIVCEQBEC44PA4WUgd/tx046+EO5nVF/fWmPj7jsh8Esh4Kc5FZUxNA3jZ/qNP5CGQ4iGIG/8Y3ucIUV/hUYH/+Gxg/lLP8kX65hYLGaTpcbGEsbvR0nJZh6DV9zu2fi32pbV/ClXkWgl7z4pW9AElstU/FCEQBECRQgUIRBB4PAwkGhjO9Q4atbtfAmqwgfoxDud8ulTfNEs0r2iVaCRqGoh+xXBnoXyDGEGE00yNC0n+ymyAS8
test_set_progressive.drop(['domain2_predictionid'], axis=1)

for i in test_set_progressive.index:
    words= test_set_progressive.at[i, 'essay'].split()
    if i < 591:
        continue
    if i < 641:
        for x in range(0,i-590):
            words[x] = "censorship"
    new_sentence = ' '.join(words)
    test_set_progressive.at[i,'essay'] = new_sentence

test_set_progressive.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-PROGRESSIVE_CENSORSHIP.xls")

# Attack 4b: Single Substitution - "Censorship"

#load excel into dataframe
test_set_single = pd.read_excel(test_set_file, sheet_name='valid_set')

#remove empty n/a cells
test_set_single = test_set_single.drop(['domain2_predictionid'], axis=1)

for i in test_set_single.index:
    words= test_set_single.at[i, 'essay'].split()
    if i < 591:
        continue
    if i < 641:
        words[i-591] = "censorship"
    new_sentence = ' '.join(words)
    test_set_single.at[i,'essay'] = new_sentence

test_set_single.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-SINGLE_CENSORSHIP.xls")

# Attack 5b: Insertion of anchor in random locations

#load excel into dataframe
test_set_insertion = pd.read_excel(test_set_file, sheet_name='valid_set')

#remove empty n/a cells
test_set_insertion = test_set_insertion.drop(['domain2_predictionid'], axis=1)

for i in test_set_insertion.index:
    words= test_set_insertion.at[i, 'essay'].split()
    x = random.randint(0,len(words))
    words.insert(x, 'censorship')
    new_sentence = ' '.join(words)
    test_set_insertion.at[i,'essay'] = new_sentence

test_set_insertion.to_excel(adversarial_dir+"/valid_set_plus_ADVERSARIAL_ESSAYS-INSERTION_CENSORSHIP.xls")
```
7LPWM5X2H1M+Cg1NUoC8pdt3LYPTQtndBoK1d2UvTp/1gBt6n87iKWEatWFYdfkdbZ135hgQa6dWj7zcbNkXoRDkp18UUNO4DOdhy3cS6temJCbGyvmEVl8IerT0zUxpOdNvS84B+GndqN/nt7Shk6ixgPtTv2gA/UfdlpjR+A60fv3OwK5tUg6Cbjagq+E56SsNEv3PjdOUUHeJZwU4MdqVsbEJCgSl3AtNc8LSegbyQPstw+GKU39NzxT3nsx8KWNQFDOgyv1uUY+tAfM+TjEWXYp5JarFUM8w3hiYcBT+H+j3tS+QYFiwb3lsp33YTdm2ilN9pAF/optaegTXzIG2LAP86riI/FndTE9r2gfTbKaq38zu6su4g2+rfM3Bd8K1/mZmUmAID/hECpT62wydrpN6QKZrf8gFSgB9wCVgETVVr0HrVWhr6JRjh0dR/ZWmQuBAauVxX1OxP/XdyQGXat7yZtlbn0aeMZ+t8isi6Lck1mUmNR6QnNeyLVcOGW32yKXdmON7fEVwNf0uJ9SK26nGcqVfJrx/3q/7Zfm/HE7i+L0ZpaPa7Oo/QfvZOkr1MCSNsuUKC++kp5QrahiKll/m93RwwZB+eFlrMJJO83AULKBt09qpsDoUvUFkO1nxZaBvaPC7o71ZekdqX8o/j+P/riD8R+pgo6UkrBKBG96/Ij6v9mc+QEFjXBxvbjvUs/xbahIJiRXXgD6MfMMCt6XgjJgc97XeAt1jEipxk7ksK7C8x9o5HMbyVEqrXfLqJe4RMaSK06ymjZWV2HUo1lIALcFh3ez+wZSmPfkKtyiQ3QIeHYKEw7tNvQEMXku9bbJa+zfeLL6LXLnDiDU0QzNekLEqeI/3N9w95jkMmCDAhJS28QW1fXMsqv0CnJjU80XLKDpkS5T7CqdWB51jj493pfaERhQV1h6TdNHdmHj78Rx5pHz79+6Y07izxB6JlyGD2uttPgvF0QSwQjvGpv9t/+L4l/rNeml7GilQ5q6TMgSFpBsmv6JxdU6c0vCrzPGYiO/Tr/qNgFRiKbznX+B3roGFrUeqEhlXCbIAkBn29+E4VwctoJw1gD7cEFp1MOuYD0eC6EIocp9rcDW+p5RIFc30AIp9tF2mW6fkXKR1cx6AYjRkLMdtySEsZxJ2UtDD0CtGsKTA/eIjTlC0b751utzRauV1NkINn+BhM7wRX7oRnYJ9pue4twL2HId4ggPPqfaBjV0cqn6Sy61h/sFfWKa4WjRuXYJwa48S7nm39WitnHgX/kcm9P9Iyucx0uaEjCD/y12WmrQz8ddIvvgf7Isi5aGJFJW5M7gFz351LvjLpXFyOJyG7v4QDFUsQXglFm2xtFXBkucDaJWkoFI5t5ZiXkJN78QxyEakJwTKBkXfD5REGWqhf4U9mkjof8AKqMp53goHw059WlKt4CEHdwcNN4KgWxSPrlduvmuR5i9NY4U5WPQBdYZf7WA7qU7xfKGmohXYF1S0Tt3vJW5/5BkxjD8bM0wiWS4FlBzILHMiS4BPcy8M8fOGOd0stU2bNirU9SdAAlFiCFObbuOCUg7oqz8aqFKYVBnyRBNM2Gatd1W6E5zr+crRDlJL9elX6UXLsDbxiDQitYFLYV1Dsrfx+Alw+QBFxmN8y3ofhVfLkhdY7tcnAN3Uld+K8OAkc7u33Uo3W342oyEPQWg8KBcJZ7a20f01gFc2iDo4oCrK+cm05IDPQwXOSlfIrUCZUblhRgRYtOZMP4gVroqC4Ih0tOboCOiR9YFUGWfY6e6qcNBJv+sV3vLfEA+Bb2gi1FVwqJunRzA2NBdofDb6vtn39M/q6ipJwZ5tLZ2QcGAZNHDiHrrBC5SoAFkPk4y39F0pyzFiVluRDC79iTd0xrD+wh+KwUQ0vQfAUA3EGoEuIf6OQlqQBPMAfTVvow2CMrXMPEDcRxeY+cMDVEkE5xeTHL6NKMgSWt17ol7tshFeolatWmgUCWHz7Mb2wm4wborFEgiP
IPwX+vDw5tdHwCySIodGw/g/7vU0FiKlU/PwywYaJhOY3n7mcMQB6FwzABlkMws0m6T4snB9lOhCHv9QptnHTFCHtl6q+7gPQJM40HY21sMX7PD4IWlm5QSjt7e1n4MgA6DBZBHfKWg5hHPhOEdZ4+MtyIRo2L1bKJOuIhauqGSy4O3RPmM7zs+sM8Sj1GYnj9NnvYQLPC9zgephQrWiI2gruFG3dUV4fRCTakel/6sM6F8LLM0xTJj4gExcRKQXbIAnMsjAY5Mtqi3T1nY/7YS80pt2xIC43+SwLovenMfi+lT6+irGl/sKon2Mrb7hwSs5NhxmzngwtEU3wywIXTP/qoKhoMSygP89MKWmWCpYmxho8wcgX6ephyi8Kvmvq4MPzrEd8rRYHvvWCrhgeixVnjLuJCg8D7o3at1fRf4wT/pmA3i7QEPiQweL7ts7KpWGIpX1MEq0PIhakWG9jzdHawDAXCtkFiVPAtWVToIZfJUXrdLU/BVrZG2Z3AdNYC8kME2HyrzcrTDkfZsEwEIjJU829MvfAEH8EECDDvjufABbCeiR84V+iK1gV079nun05dZPP8n8FvxctWcB/0R4W46Yza0/JJ5YIZfCL04WRNnSmaZ9wW/oLD1yRI4I2OjcTM7Q89g7zHhyLObmJYZSoyPm4NsRtdjXpWG6m5kubbNs6GZqZQ9r/pbwIcQdihX2R0q9g/mIREHTSis/zPgz70McrzMOsAiwilAp9UN/aKgxcBHNKg1/HiUylvM/5zHXQtocRmrPB/yLiJoD8pDTJsXW/CEksaINPMIKF4avyjm5puATTl6T1GUtY0WUw0gG0HaAU5TGTAQ6klnWCZLI9fTzWoPR9r8yN1Dev3hHXW34BCN/UcwvJngWWI/ktQqcoUE4ftEX5rHEyAZmeV7rexGUPc8uNMdGBJ/Mm3WaV2UxqlYBbzq6PW7ka+2zzzEdx0DNdV3HUnrLvphwuMlOfDa1uMh1Ekj+BY1FYnhC8e37nUcTfQJP+yju6gV4g8IGQgYO5/qtYNK8A33flqgvjLmSO0m6KP2b53gEmrbaYI8EVZlknkk/aqIyCvJVueWjuIwjippEJGSbX26aOSIrbw+rrPlT1dz3PgC3i2G3pfFH1NiPkjwCS9yqSgWmYdQ5GrHVFbTHEt87rWn82g2SX9NSmR43LikFvLI/3KmVr4wtLYl27yMa3jY84MBqvFEennscpstX48VMIgTTMsQKV9QKsgcetihoRDOn+rIPPvZbEfrnkEZeO4BmXxnzuxfgeaZ63axJXQXxdNPR+XZ2o5aKll1GXLlUcwcLEYoz44Qw9U6+f84t0FWtR82d8SZGM46BUV8G0O3u152rNnAoo04YBUC4PKsrIfYH7N4AptQIKvpMj0XnFm0B9wa6oe5L4P+K22pvI39suGl8fx4npyL24RJ4XggfODSwpfV3XNN6r3dwnyVpC+0UTlnJEhxTty8LtdLefSd2iaxsuYLXQMv5ardKKTzB6p9RXrBf3EZO0/ouUt1TXJQ6h187D8MjYdSjD6K9SHMyelRBBhfnNJCzfdWhsNvN1vNQ/Ie8rzO0goIK5uGHvJ3691sW/J/6v0Mhe4KZM8gLW60x8ngUTG4ub9k9FunQNgvo5J9r4NPg4StiZrNjj8mNpAeFY+RDbKmR4dxRtF5WoF1omxd6QlzAoyrYaPOWu5X73r2kVWRsrbvhvXU4HqGA+4iWC773G9Z27vA2pp3RZ1V3A9TzuzxlYJHMz2dhz0EMVQiwv8NDSZYz5ttfo26WrlOMapjU6/ekC4wuKcZVtz7xDK/jvs4ZFv49r40bG6ZV0/A1WZfU1lP+s75Svou5fM9cxi84vJa6dtr5m1SYeoGM+gxDdQD9UEf8Sfx3S/9AIdEcQxilLTVltZFcMP5T3G3S3u5rFIIBpDaPPnqaM1YI7u6rhRfryUBbeQE+l04DrGZACAw0q3ZyqtaD1TbRpyRl0eRIx9bDYANxVynt
7WDndgk2k0MADBe4I6EV2bT2Gjd9O/GRg3BNYZAnt6WL9oxblGFv3Y109jAARYYTyYr7/TFquSU4+C0VPoGFlCI8o45W56wKeA78MWbaj67UvYA5zbRCJPJNPr79MKRHjshNrA2WaW0zv81pT19vViYtI8zfGxhtObcM5fmvqWo5CegF4K6jP6mxJP8v7ddKfKEg/oJyZXHJ2JnEv2bpqA+NjnmNzQGBxaTkDzdAk3yV2dUz1WSU5G9ecLPzhnqRMHNxaFdW3e2tTD+FN+YU0zSu1H+SrU3Doe97tErd0WUu+3fKwhaFAUFuY68Mml+V3sgySwEmpAviDssIIBnIuA5r7qxGuMBV6HdcWqIHY+JAf2ziYsikZocFyGgovogMcZLaCkNcFXevnRIrLrjITT1IzVsdHIjjCVpm5FJyk1YmOtq7UCRD26+YV/miY12Os6PgUytdk4gKvc/281qlNmLOyGiN5FGmPQIP9La4O1DO9UeKNqcp3etX1k2Lbn/Y47dzHXZe6omVK4wJpS2wD2jNMBeK6lnxfsOzs25YfwW+aD0Wues1dl57oB94zYZylcg/7LelJFcXBBmdjZ9CRcycGxcVP599rXJOqu2VK/BFW5wjXoC/dq7z21l3R3temR8XPj69Mn4UBsAuC+cxSlZ278uRRffHmt/diQO/Zn+tdaJfZd2gv8j2aWBG0po5nsu9ZJmaZF9F5mpSBx0v6iybTtkkNE9m0egcM6iswiw67t7MZH/hrLSTCRB/DYDsdzbbJa82cy9zDvTJRq1LpyZTwisAc9PUcbweWcVUFrj8L2JgRRwn0e1HGrDMRqnsxt3M9k9rXSHrivuTnst9GML3suO5pruWMkHjHVke5remv6EA/nnk7fmF8ZPJsDLmdmQv6KQz499Q71mtf/3bKa0IyEcK9QrJfg+CXO/e4b6cnYkItkWcJrKy5Aia2qGXyiDfq5rc2YOJ8G6Aava72E6UPY82rd/U62la2TW3oZLHKAW3dqTPAy27ehvSVYuXLGPM6274J4oybh765ze/vWdE2pWEJisOvEWmmnkg8a4QaTZ/htyVHic2CmHO8zvQrrFxD2+VulSmNp9CGh8DYYWiz8ys7u38j7h0mbsfgez+Nkmrc9cmpuEefQpBPVr1dj6EVYLl4R6i+nuug32fE2gKBiB20/5FqYtC9Xo7Ut4lh3YJX4dnFD7WdVHdptDl9Nt38Se5yuZgVgHdhCYzyc30/oG+eU24Oh27xF4uK1Nu0kzFi/1ngg4YXuZ1tq2uqOrrWSEQQnET70p7trtHJ5JSSdn9DtlqfzT4rg2+nxPub25I6ybGsFa6lR1h9nWfSXy/DBhyE7MOqh+UgPgtarOBiKS4MXjZ7DP0gY2QlnrHFeMb21dp9ye9qP97xlIEFSXiptvT8tqk7d9Y3t+yh3f5ziEsihKdi2oxOphLiMmINHVRIwOI+lUUmi1CUDoekffD4B/D4sLzDaC3hVI+/gqeLaNORXtfG5tZJjc/z5vnYzWuxgKwJlH2n29Vz9fozd+xg3J0Ikg0NIeAXM97X4q72kyc1LGlcsGY719WTGDnb44K8ivGwQAk3wTPUOiHaxTg6irE6vPXk7QrW4v6iyG9VoB3/jEBTruNu6QFrnaPNmSNZ0nIabw5ios4W8xNfNsBgfooWm3evGOZhnrcczDwDYrSQX37baMv4FsrkEERq6X8JLWdeX09uXvsZIzeY4kVjEq13FiT10QfBvSEsU5VhlKKtyPp6LLiB4e/Fhe/Dtf5h/jBevmUgf1Bb5NiFwiYnk1XKkZBn4ObnZh8fpszNMhQePiifKBrhMsLBbXkvmAQvZmnooP4a3Cbx/oieL2EAHLJizIolHnBb1+wu8w8FSM1Ae1cfyMvBcG3KUPgxoOzBr8zz4PzSdzNnQu0F2N4FN7nC/pUCBuc3hQ76GIhH4Ilvn7xM+8W/TU6pXzKw7YNy5cuezaj7e/TyHgqVaLggtRL3yk6byvs
gHGxKOOjHh8k3eBx8ED4GlYnAewTrfn8WmiyipYcx+vpKrf54j1UyAWWiAuE3xyi8y2bjcjRWyCAgBzwOhOXv9duApOanwCoh7GvzkHcfiYtcYAOeseAyXniVp1V5GNwnA9s9EA6T9lFHXFebygh/hLgYmHfguAjTbeF3XtvbwkxbnpwBYjbDgMSCIGmZHFtMOYtlBylS8msoSUfyzOTh8CIkLZ7TfnPqLdaJTFThjyduIPL/XodInChbqG982FgYliqKaFn9Y4RGtm+ln+2/T/veXRlxU4XBCA40xHdvWApTfBTfNIiAnmUuq5I2ShDhIUxAdhFIkF3OIWPFQhk3NqqXqrGe2VU9ttVMxpl0ghtp/2PKHvf4Mr1UdmDLnIrEye5eCSIcZRXX45Q5liEkQeo1aRDw32KXc4hjITg50TYU+sJEwl3Js4BZ5nGMm5J4Fl5tKlsmjY/ldNuwTlGK8+Vy+VOhXexMN8uRQ7dO2EZ5L+4941400OU/3oGJnc8FuGVnbihoDawFF1HqMWHIhXYPaFOI54JrVfraVd4bViY5B5eAseTM7uVj8dkLThiY49Q4e2l9YSc7swym3LAuwaW4aeQYeNnRHOJ7MN4GNIP8+T6SPPld1mjps/ICS3afC9xh+ZJPdjxLH4ZlWlj0YV9tqhM4RDiG/SPtywsRvzrxZqXllv46eXL9KpMvVBRMP7ATOwyyyzpPf0KTFjRpK2hIJXqDQt8U6aEPAAACoklEQVTlNoNL8oE//EA3YPWZuQmzYXL9GlG+Nmde0KzZxR/WFdLKe+Bu3NhlztJl4BwN3dDqeNpncMyudGlbiAuhS8GnocWQ5sCrxEv7JEh6mMG42xXzmFY2MzJxcPytJHtH9BdB2c9tW81dedKovsTc5DOyWCafh3op1NSjljmmX2W8hONGvgUmQyPUI7vpw34L08tCF4mXvgvDJhoWfIzPjz1odalamh/3QbCAyHpJbjZnCw5CXBochnlmS93siC+Mx8E4EeHxLhoC3lAICRyCn8HwhXBu4Xe+IVuY6R9PDlObJdfSFo6KKBRIR37SK7LG08T9oB8mg62RuJqKxdUE0kwqI1yMeznPf4XezX+zQELSMEZZfWNcY8LWWA4IR3lSWf4jVdGGZ2XJa6G6sKNA+CANNkww9P0fjAFoMLRK/oNbaZomDGVb07gIspApmhHIwPs4h83hfQfSzXDzL6KJ94LtHSg/tr/yXPlfBZ4gLjzHaKC0FniQktGelu1Zj7gjImEHDJCR0Gkc3zRL1LglkHkMtCpxv8iOUxzZsrInaGMt/CpbBewo9lcElcVviM9vs+aJNN9vPKvW0Cq29aDarKKhh489BoT+sNgKGuXHmwFuFTJp3+0cHxJaaltVxvtk2kzTf590H5dXofUSwmPOtxKLTdzG/+I7YQQGWXBhrKYQwKHvLcOACARxJ4m7YVsF6Rgp0xA7A2ooDGFgCANDGBjCwDbBwLZj1NsCnLyPNj+BbawT8Wsvk+tI1dIN+GNFYzATXEw8DnQ/SFo5qE38yH9T1rjqZZZJLz77vETP+xm3BYxDZQxhYAgDQxgYwsC/IwYQFCIswj9ZoRL+NisK/h3bNATzEAaGMDCEgSEMDGFgCANDGBjCwBAG/h9h4P8A1o75+m6cd4AAAAAASUVORK5CYII=) # Skin Detection: A Step-by-Step Example using Python and OpenCV ### by [PyImageSearch.com](http://www.pyimagesearch.com) ## Welcome to **[PyImageSearch Plus](http://pyimg.co/plus)** Jupyter Notebooks! 
This notebook is associated with the [Skin Detection: A Step-by-Step Example using Python and OpenCV](https://www.pyimagesearch.com/2014/08/18/skin-detection-step-step-example-using-python-opencv/) blog post published on 2014-08-18. Only the code for the blog post is here. Most codeblocks have a 1:1 relationship with what you find in the blog post with two exceptions: (1) Python classes are not separate files as they are typically organized with PyImageSearch projects, and (2) Command Line Argument parsing is replaced with an `args` dictionary that you can manipulate as needed. We recommend that you execute (press ▶️) the code block-by-block, as-is, before adjusting parameters and `args` inputs. Once you've verified that the code is working, you are welcome to hack with it and learn from manipulating inputs, settings, and parameters. For more information on using Jupyter and Colab, please refer to these resources: * [Jupyter Notebook User Interface](https://jupyter-notebook.readthedocs.io/en/stable/notebook.html#notebook-user-interface) * [Overview of Google Colaboratory Features](https://colab.research.google.com/notebooks/basic_features_overview.ipynb) As a reminder, these PyImageSearch Plus Jupyter Notebooks are not for sharing; please refer to the **Copyright** directly below and **Code License Agreement** in the last cell of this notebook. Happy hacking! *Adrian* <hr> ***Copyright:*** *The contents of this Jupyter Notebook, unless otherwise indicated, are Copyright 2020 Adrian Rosebrock, PyimageSearch.com. All rights reserved. Content like this is made possible by the time invested by the authors. 
If you received this Jupyter Notebook and did not purchase it, please consider making future content possible by joining PyImageSearch Plus at http://pyimg.co/plus/ today.* ### Download the code zip file ``` !wget https://www.pyimagesearch.com/wp-content/uploads/2014/08/skin-detection.zip !unzip -qq skin-detection.zip %cd skin-detection ``` ## Blog Post Code ### Import Packages ``` # import the necessary packages from pyimagesearch import imutils import numpy as np import argparse import cv2 ``` ### Detecting Skin in Images & Video Using Python and OpenCV ``` # construct the argument parser and parse the arguments #ap = argparse.ArgumentParser() #ap.add_argument("-v", "--video", # help = "path to the (optional) video file") #args = vars(ap.parse_args()) # since we are using Jupyter Notebooks we can replace our argument # parsing code with *hard coded* arguments and values args = { "video": "video/skin_example.mov", "output" : "output.avi" } # define the upper and lower boundaries of the HSV pixel # intensities to be considered 'skin' lower = np.array([0, 48, 80], dtype = "uint8") upper = np.array([20, 255, 255], dtype = "uint8") # if a video path was not supplied, grab the reference # to the webcam if not args.get("video", False): camera = cv2.VideoCapture(0) # otherwise, load the video else: camera = cv2.VideoCapture(args["video"]) # initialize pointer to output video file writer = None # keep looping over the frames in the video while True: # grab the next frame frame = camera.read()[1] # if we did not grab a frame then we have reached the end of the # video if frame is None: break # resize the frame, convert it to the HSV color space, # and determine the HSV pixel intensities that fall into # the specified upper and lower boundaries frame = imutils.resize(frame, width = 400) converted = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV) skinMask = cv2.inRange(converted, lower, upper) # apply a series of erosions and dilations to the mask # using an elliptical kernel kernel = 
cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11)) skinMask = cv2.erode(skinMask, kernel, iterations = 2) skinMask = cv2.dilate(skinMask, kernel, iterations = 2) # blur the mask to help remove noise, then apply the # mask to the frame skinMask = cv2.GaussianBlur(skinMask, (3, 3), 0) skin = cv2.bitwise_and(frame, frame, mask = skinMask) # if the video writer is None *AND* we are supposed to write # the output video to disk initialize the writer if writer is None and args["output"] is not None: fourcc = cv2.VideoWriter_fourcc(*"MJPG") writer = cv2.VideoWriter(args["output"], fourcc, 20, (skin.shape[1], skin.shape[0]), True) # if the writer is not None, write the frame with the detected # skin to disk if writer is not None: writer.write(skin) # do a bit of cleanup camera.release() # check to see if the video writer pointer needs to be released if writer is not None: writer.release() !ffmpeg -i output.avi output.mp4 #@title Display video inline from IPython.display import HTML from base64 import b64encode mp4 = open("output.mp4", "rb").read() dataURL = "data:video/mp4;base64," + b64encode(mp4).decode() HTML(""" <video width=400 controls> <source src="%s" type="video/mp4"> </video> """ % dataURL) ``` For a detailed walkthrough of the concepts and code, be sure to refer to the full tutorial, [*Skin Detection: A Step-by-Step Example using Python and OpenCV*](https://www.pyimagesearch.com/2014/08/18/skin-detection-step-step-example-using-python-opencv/) published on 2014-08-18. # Code License Agreement ``` Copyright (c) 2020 PyImageSearch.com SIMPLE VERSION Feel free to use this code for your own projects, whether they are purely educational, for fun, or for profit. THE EXCEPTION BEING if you are developing a course, book, or other educational product. Under *NO CIRCUMSTANCE* may you use this code for your own paid educational or self-promotional ventures without written consent from Adrian Rosebrock and PyImageSearch.com. 
LONGER, FORMAL VERSION Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. Notwithstanding the foregoing, you may not use, copy, modify, merge, publish, distribute, sublicense, create a derivative work, and/or sell copies of the Software in any work that is designed, intended, or marketed for pedagogical or instructional purposes related to programming, coding, application development, or information technology. Permission for such use, copying, modification, and merger, publication, distribution, sub-licensing, creation of derivative works, or sale is expressly withheld. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ```
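The heart of the detection loop above is `cv2.inRange`, which is a per-pixel range test in HSV space. As a dependency-free illustration of that step (NumPy only, using a hypothetical 2x2 "HSV" image, not a real frame):

```python
import numpy as np

# Hypothetical 2x2 "HSV" image: two skin-like pixels, two not.
hsv = np.array([[[10, 100, 150], [120, 200, 200]],
                [[ 5,  60, 100], [  0,  10,  90]]], dtype=np.uint8)

# Same skin boundaries as in the post.
lower = np.array([0, 48, 80], dtype=np.uint8)
upper = np.array([20, 255, 255], dtype=np.uint8)

# cv2.inRange(hsv, lower, upper) is equivalent to this per-channel test:
# a pixel is "skin" only if every channel lies inside [lower, upper].
mask = np.all((hsv >= lower) & (hsv <= upper), axis=-1).astype(np.uint8) * 255
print(mask)  # -> [[255   0] [255   0]]
```

Only the first column survives: the top-right pixel fails the hue bound and the bottom-right pixel fails the saturation bound, which is exactly how the video loop discards non-skin regions before the morphological clean-up.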
# Granger Causality with Google Trends - Did `itaewon class` cause `โคชูจัง`? We will give an example of a Granger causality test using the interest over time of `itaewon class` and `โคชูจัง` (gochujang) in Thailand from 2020-01 to 2020-04. During that period, gochujang went out of stock in many supermarkets, supposedly because people were mimicking the show. We are examining the hypothesis that the interest over time of `itaewon class` Granger causes that of `โคชูจัง`. Saying that $x_t$ Granger causes $y_t$ means that the past values of $x_t$ contain information that is useful in predicting $y_t$. ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import tqdm import warnings warnings.filterwarnings('ignore') import matplotlib matplotlib.rc('font', family='Ayuthaya') # MacOS ``` ## 1. Get Trend Objects with Thailand Offset We can get the interest over time of a keyword with the unofficial `pytrends` library. ``` from pytrends.request import TrendReq #get trend objects with thailand offset 7*60 = 420 minutes trend = TrendReq(hl='th-TH', tz=420) #compare 2 keywords kw_list = ['โคชูจัง','itaewon class'] trend.build_payload(kw_list, geo='TH',timeframe='2020-01-01 2020-04-30') df = trend.interest_over_time().iloc[:,:2] df.head() df.plot() ``` ## 2. Stationarity Check: Augmented Dickey-Fuller Test Stationarity is a prerequisite for the Granger causality test. We first use the [augmented Dickey-Fuller test](https://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test) to detect stationarity. Consider the following model: $$\Delta y_t = \alpha + \beta t + \gamma y_{t-1} + \delta_1 \Delta y_{t-1} + \cdots + \delta_{p-1} \Delta y_{t-p+1} + \varepsilon_t$$ where $\alpha$ is a constant, $\beta$ the coefficient on a time trend and $p$ the lag order of the autoregressive process. The null hypothesis is that $\gamma$ is 0; that is, the level $y_{t-1}$ makes no useful contribution to predicting $\Delta y_t$. If we reject the null hypothesis, that means $y_t$ does not have a unit root. 
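Before running the test on real data, the intuition behind the unit root can be checked on synthetic data: a random walk ($\gamma = 0$ in the regression above) wanders without bound, while its first difference is just the stationary innovation series. A minimal NumPy sketch (synthetic data, not the Google Trends series):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(2000)

# A random walk y_t = y_{t-1} + eps_t has a unit root (gamma = 0 in the ADF regression).
walk = np.cumsum(eps)

# Its first difference recovers the stationary innovations eps_t.
diff = np.diff(walk)

# The walk's sample variance is far larger than that of the stationary differences,
# whose variance stays close to 1 (the innovation variance).
print(walk.var(), diff.var())
```

An ADF test applied to `walk` should fail to reject the unit-root null, while the same test on `diff` should reject it decisively, which is exactly the pattern we look for in the trends data below.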
``` from statsmodels.tsa.stattools import adfuller #test for stationarity with augmented dickey fuller test def unit_root(name,series): signif=0.05 r = adfuller(series, autolag='AIC') output = {'test_statistic':round(r[0],4),'pvalue':round(r[1],4),'n_lags':round(r[2],4),'n_obs':r[3]} p_value = output['pvalue'] def adjust(val,length=6): return str(val).ljust(length) print(f'Augmented Dickey-Fuller Test on "{name}"') print('-'*47) print(f'Null Hypothesis: Data has unit root. Non-Stationary.') print(f'Observation = {output["n_obs"]}') print(f'Significance Level = {signif}') print(f'Test Statistic = {output["test_statistic"]}') print(f'No. Lags Chosen = {output["n_lags"]}') for key,val in r[4].items(): print(f'Critical value {adjust(key)} = {round(val,3)}') if p_value <= signif: print(f'=> P-Value = {p_value}. Rejecting null hypothesis.') print(f'=> Series is stationary.') else: print(f'=> P-Value = {p_value}. Cannot reject the null hypothesis.') print(f'=> "{name}" is non-stationary.') ``` 2.1. `โคชูจัง` unit root test ``` name = 'โคชูจัง' series = df.iloc[:,0] unit_root(name,series) ``` 2.2. `itaewon class` unit root test ``` name = 'itaewon class' series = df.iloc[:,1] unit_root(name,series) ``` We could not reject the null hypothesis of the augmented Dickey-Fuller test for either time series. This should also be evident from the plot: neither series is stationary (i.e., neither has a stable mean, variance, and autocorrelation over time). ## 3. Taking 1st Difference One of the most commonly used ways to "stationarize" a time series is to take its first difference, $y_t - y_{t-1}$. ``` diff_df = df.diff(1).dropna() ``` 3.1. 1st Difference of `โคชูจัง` unit root test ``` name = 'โคชูจัง' series = diff_df.iloc[:,0] unit_root(name,series) ``` 3.2. 
1st Difference of `itaewon class` unit root test ``` name = 'itaewon class' series = diff_df.iloc[:,1] unit_root(name,series) diff_df.plot() ``` - 1st Difference of `itaewon class` is stationary at 5% significance. - 1st Difference of `โคชูจัง` is not stationary at 5% significance, but its p-value of 0.0564 is close enough that we make an exception for this example. ## 4. Find Lag Length Note that `maxlag` is an important hyperparameter that determines whether your Granger test is significant or not. There are some criteria you can use to find that lag, but as with any frequentist statistical test, you need to understand what assumptions you are making. ``` import statsmodels.tsa.api as smt df_test = diff_df.copy() df_test.head() # make a VAR model model = smt.VAR(df_test) res = model.select_order(maxlags=None) print(res.summary()) ``` One thing to note is that this hyperparameter affects the conclusion of the test. The best solution is to have a strong theoretical assumption, but failing that, the empirical methods above could be the next best thing. ``` from statsmodels.tsa.stattools import grangercausalitytests #find the optimal lag lags = list(range(1,23)) res = grangercausalitytests(df_test, maxlag=lags, verbose=False) p_values = [] for i in lags: p_values.append({'maxlag':i, 'ftest':res[i][0]['ssr_ftest'][1], 'chi2':res[i][0]['ssr_chi2test'][1], 'lr':res[i][0]['lrtest'][1], 'params_ftest':res[i][0]['params_ftest'][1],}) p_df = pd.DataFrame(p_values) p_df.iloc[:,1:].plot() ``` ## 5. Granger Causality Test The Granger causality test has the null hypothesis that $x_t$ **DOES NOT** Granger cause $y_t$, comparing the following models: $$y_{t}=a_{0}+a_{1}y_{t-1}+a_{2}y_{t-2}+\cdots +a_{m}y_{t-m}+{\text{error}}_{t}$$ and $$y_{t}=a_{0}+a_{1}y_{t-1}+a_{2}y_{t-2}+\cdots +a_{m}y_{t-m}+b_{p}x_{t-p}+\cdots +b_{q}x_{t-q}+{\text{error}}_{t}$$ An F-statistic is then calculated from the ratio of the residual sums of squares of these two models. 
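The F-statistic behind this comparison can be reproduced by hand: fit the restricted model (lags of $y$ only) and the unrestricted model (lags of $y$ and $x$) by OLS, then compare residual sums of squares. This is not the exact implementation inside statsmodels (which handles multiple lags and several test variants), just the core idea, shown on synthetic data where `x` is constructed to Granger-cause `y`:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):  # y depends on its own past and on x's past
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def rss(X, z):
    """Residual sum of squares of the OLS fit z ~ X."""
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    r = z - X @ beta
    return r @ r

ones = np.ones(n - 1)
restricted   = np.column_stack([ones, y[:-1]])          # y_t ~ 1 + y_{t-1}
unrestricted = np.column_stack([ones, y[:-1], x[:-1]])  # y_t ~ 1 + y_{t-1} + x_{t-1}

rss_r = rss(restricted, y[1:])
rss_u = rss(unrestricted, y[1:])

q, k = 1, 3  # number of restrictions tested, parameters in the full model
F = ((rss_r - rss_u) / q) / (rss_u / (n - 1 - k))
print(F)  # a large F rejects "x does not Granger-cause y"
```

Because the restricted model is nested in the unrestricted one, `rss_u` can never exceed `rss_r`; the F-statistic measures whether the improvement from adding lagged `x` is larger than chance would allow.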
``` from statsmodels.tsa.stattools import grangercausalitytests def granger_causation_matrix(data, variables, test, verbose=False): x = pd.DataFrame(np.zeros((len(variables),len(variables))), columns=variables, index=variables) for c in x.columns: for r in x.index: # maxlag is read from the enclosing scope (set below) test_result = grangercausalitytests(data[[r,c]], maxlag=maxlag, verbose=False) p_values = [round(test_result[i+1][0][test][1],4) for i in range(maxlag)] if verbose: print(f'Y = {r}, X = {c}, P Values = {p_values}') min_p_value = np.min(p_values) x.loc[r,c] = min_p_value x.columns = [var + '_x' for var in variables] x.index = [var + '_y' for var in variables] return x # maxlag from the rule of thumb 12*(nobs/100)**(1/4) nobs = len(df_test.index) maxlag = round(12*(nobs/100.)**(1/4.)) maxlag data = df_test variables = df_test.columns ``` #### 5.1. SSR based F test ``` test = 'ssr_ftest' ssr_ftest = granger_causation_matrix(data, variables, test) ssr_ftest['test'] = 'ssr_ftest' ``` #### 5.2. SSR based chi2 test ``` test = 'ssr_chi2test' ssr_chi2test = granger_causation_matrix(data, variables, test) ssr_chi2test['test'] = 'ssr_chi2test' ``` #### 5.3. Likelihood ratio test ``` test = 'lrtest' lrtest = granger_causation_matrix(data, variables, test) lrtest['test'] = 'lrtest' ``` #### 5.4. Parameter F test ``` test = 'params_ftest' params_ftest = granger_causation_matrix(data, variables, test) params_ftest['test'] = 'params_ftest' frames = [ssr_ftest, ssr_chi2test, lrtest, params_ftest] all_test = pd.concat(frames) all_test ``` We may conclude that `itaewon class` Granger caused `โคชูจัง`, but `โคชูจัง` did not Granger cause `itaewon class`. # What About Chicken and Eggs? We use the annual chicken and egg data from [Thurman and Fisher (1988), hosted by UIUC](http://www.econ.uiuc.edu/~econ536/Data/). ## 1. 
Get data from CSV file ``` #chicken and eggs chickeggs = pd.read_csv('chickeggs.csv') chickeggs #normalize so that 1930 equals 1 df = chickeggs.iloc[:,1:] df['chic'] = df.chic / df.chic[0] df['egg'] = df.egg / df.egg[0] df = df[['chic','egg']] df df.plot() ``` ## 2. Stationarity check: Augmented Dickey-Fuller Test 2.1. `egg` unit root test ``` #select by label: column 0 of df is 'chic', not 'egg' name = 'egg' series = df['egg'] unit_root(name,series) ``` 2.2. `chic` unit root test ``` name = 'chic' series = df['chic'] unit_root(name,series) ``` ## 3. Taking 1st Difference ``` diff_df = df.diff(1).dropna() diff_df diff_df.plot() ``` 3.1. 1st Difference of `egg` unit root test ``` name = 'egg' series = diff_df['egg'] unit_root(name,series) ``` 3.2. 1st Difference of `chic` unit root test ``` name = 'chic' series = diff_df['chic'] unit_root(name,series) ``` ## 4. Find Lag Length ``` # make a VAR model model = smt.VAR(diff_df) res = model.select_order(maxlags=None) print(res.summary()) #find the optimal lag lags = list(range(1,23)) res = grangercausalitytests(diff_df, maxlag=lags, verbose=False) p_values = [] for i in lags: p_values.append({'maxlag':i, 'ftest':res[i][0]['ssr_ftest'][1], 'chi2':res[i][0]['ssr_chi2test'][1], 'lr':res[i][0]['lrtest'][1], 'params_ftest':res[i][0]['params_ftest'][1],}) p_df = pd.DataFrame(p_values) print('Eggs Granger cause Chickens') p_df.iloc[:,1:].plot() #find the optimal lag lags = list(range(1,23)) res = grangercausalitytests(diff_df[['egg','chic']], maxlag=lags, verbose=False) p_values = [] for i in lags: p_values.append({'maxlag':i, 'ftest':res[i][0]['ssr_ftest'][1], 'chi2':res[i][0]['ssr_chi2test'][1], 'lr':res[i][0]['lrtest'][1], 'params_ftest':res[i][0]['params_ftest'][1],}) p_df = pd.DataFrame(p_values) print('Chickens Granger cause Eggs') p_df.iloc[:,1:].plot() ``` ## 5. 
Granger Causality Test ``` # nobs is the number of observations nobs = len(diff_df.index) # maxlag from the rule of thumb 12*(nobs/100)**(1/4) maxlag = round(12*(nobs/100.)**(1/4.)) data = diff_df variables = diff_df.columns ``` #### 5.1. SSR based F test ``` test = 'ssr_ftest' ssr_ftest = granger_causation_matrix(data, variables, test) ssr_ftest['test'] = 'ssr_ftest' ``` #### 5.2. SSR based chi2 test ``` test = 'ssr_chi2test' ssr_chi2test = granger_causation_matrix(data, variables, test) ssr_chi2test['test'] = 'ssr_chi2test' ``` #### 5.3. Likelihood ratio test ``` test = 'lrtest' lrtest = granger_causation_matrix(data, variables, test) lrtest['test'] = 'lrtest' ``` #### 5.4. Parameter F test ``` test = 'params_ftest' params_ftest = granger_causation_matrix(data, variables, test) params_ftest['test'] = 'params_ftest' frames = [ssr_ftest, ssr_chi2test, lrtest, params_ftest] all_test = pd.concat(frames) all_test ``` With this we can conclude that eggs Granger cause chickens!
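Both notebooks pick `maxlag` from the same rule of thumb, `round(12 * (nobs/100)**(1/4))` (a bound often attributed to Schwert in the unit-root testing literature). A quick sanity check of the arithmetic with illustrative sample sizes:

```python
def rule_of_thumb_maxlag(nobs: int) -> int:
    """Rule-of-thumb upper bound on the lag order: 12 * (nobs/100)^(1/4)."""
    return round(12 * (nobs / 100.0) ** 0.25)

for nobs in (50, 100, 200):
    print(nobs, rule_of_thumb_maxlag(nobs))  # -> 50 10, 100 12, 200 14
```

The bound grows slowly (as the fourth root of the sample size), so even long series keep the VAR models small enough to estimate reliably.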
# CAM Methods Benchmark **Goal:** to compare CAM methods using regular explaining metrics. **Author:** lucas.david@ic.unicamp.br Use GPUs if you are running *Score-CAM* or *Quantitative Results* sections. ``` #@title from google.colab import drive drive.mount('/content/drive') import tensorflow as tf # base_dir = '/content/drive/MyDrive/' base_dir = '/home/ldavid/Workspace' # data_dir = '/root/tensorflow_datasets' data_dir = '/home/ldavid/Workspace/datasets/' class Config: seed = 218402 class data: path = '/root/tensorflow_datasets/amazon-from-space' size = (256, 256) shape = (*size, 3) batch_size = 32 shuffle_buffer_size = 8 * batch_size prefetch_buffer_size = tf.data.experimental.AUTOTUNE train_shuffle_seed = 120391 shuffle = False class model: backbone = tf.keras.applications.ResNet101V2 last_spatial_layer = 'post_relu' # backbone = tf.keras.applications.EfficientNetB6 # last_spatial_layer = 'eb6' # backbone = tf.keras.applications.VGG16 # last_spatial_layer = 'block5_pool' gap_layer_name = 'avg_pool' include_top = False classifier_activation = None custom = True fine_tune_layers = 0.6 freeze_batch_norm = False weights = f'{base_dir}/logs/amazon-from-space/resnet101-sw-ce-fine-tune/weights.h5' class training: valid_size = 0.3 class explaining: noise = tf.constant(.2) repetitions = tf.constant(8) score_cam_activations = 'all' λ_pos = tf.constant(1.) λ_neg = tf.constant(1.) λ_bg = tf.constant(1.) report = f'{base_dir}/logs/amazon-from-space/resnet101-sw-ce-fine-tune/cam-score.txt' preprocess = tf.keras.applications.resnet_v2.preprocess_input deprocess = lambda x: (x + 1) * 127.5 # preprocess = tf.keras.applications.res.preprocess_input # deprocess = lambda x: x # preprocess = tf.keras.applications.vgg16.preprocess_input # deprocess = lambda x: x[..., ::-1] + [103.939, 116.779, 123.68] to_image = lambda x: tf.cast(tf.clip_by_value(deprocess(x), 0, 255), tf.uint8) masked = lambda x, maps: x * tf.image.resize(maps, Config.data.size) ``` ## Setup ``` ! 
pip -qq install tensorflow_addons import os import shutil from time import time from math import ceil import numpy as np import pandas as pd import tensorflow_addons as tfa import tensorflow_datasets as tfds import matplotlib.pyplot as plt import seaborn as sns from tensorflow.keras import callbacks for d in tf.config.list_physical_devices('GPU'): print(d) print(f'Setting device {d} to memory-growth mode.') try: tf.config.experimental.set_memory_growth(d, True) except Exception as e: print(e) R = tf.random.Generator.from_seed(Config.seed, alg='philox') C = np.asarray(sns.color_palette("Set1", 21)) CMAP = sns.color_palette("Set1", 21, as_cmap=True) sns.set_style("whitegrid", {'axes.grid' : False}) np.set_printoptions(linewidth=120) def normalize(x, reduce_min=True, reduce_max=True): if reduce_min: x -= tf.reduce_min(x, axis=(-3, -2), keepdims=True) if reduce_max: x = tf.math.divide_no_nan(x, tf.reduce_max(x, axis=(-3, -2), keepdims=True)) return x def visualize( image, title=None, rows=2, cols=None, i0=0, figsize=(9, 4), cmap=None, full=True ): if image is not None: if isinstance(image, (list, tuple)) or len(image.shape) > 3: # many images if full: plt.figure(figsize=figsize) cols = cols or ceil(len(image) / rows) for ix in range(len(image)): plt.subplot(rows, cols, i0+ix+1) visualize(image[ix], cmap=cmap, title=title[ix] if title is not None and len(title) > ix else None) if full: plt.tight_layout() return if isinstance(image, tf.Tensor): image = image.numpy() if image.shape[-1] == 1: image = image[..., 0] plt.imshow(image, cmap=cmap) if title is not None: plt.title(title) plt.axis('off') def observe_labels(probs, labels, ix): p = probs[ix] l = labels[ix] s = tf.argsort(p, direction='DESCENDING') d = pd.DataFrame({ 'idx': s, 'label': tf.gather(CLASSES, s).numpy().astype(str), 'predicted': tf.gather(p, s).numpy().round(2), 'ground-truth': tf.gather(l, s).numpy() }) return d[(d['ground-truth']==1) | (d['predicted'] > 0.05)] def plot_heatmap(i, m): plt.imshow(i) 
plt.imshow(m, cmap='jet', alpha=0.5) plt.axis('off') ``` ## Related Work ### Summary ``` #@title # note: every row must have 11 entries to match the column list below d = [ ['1512.04150', 'CAM', 'GoogleNet', 'ILSVRC-15 val', 56.4, 43.00, None, None, None, None, None], ['1512.04150', 'CAM', 'VGG-16', 'ILSVRC-15 val', 57.2, 45.14, None, None, None, None, None], ['1512.04150', 'Backprop', 'GoogleNet', 'ILSVRC-15 val', 61.31, 50.55, None, None, None, None, None], ['1512.04150', 'Backprop', 'GoogleNet', 'ILSVRC-15 test', None, 37.10, None, None, None, None, None], ['1610.02391', 'CAM', 'VGG-16', 'ILSVRC-15 val', 57.2, 45.14, None, None, None, None, None], ['1610.02391', 'Grad-CAM', 'VGG-16', 'ILSVRC-15 val', 56.51, 46.41, None, None, None, None, None], ['1610.02391', 'CAM', 'GoogleNet', 'ILSVRC-15 val', 60.09, 49.34, None, None, None, None, None], ['1610.02391', 'Grad-CAM', 'GoogleNet', 'ILSVRC-15 val', 60.09, 49.34, None, None, None, None, None], ['1710.11063', 'Grad-CAM', 'VGG-16', 'ILSVRC-2012 val', None, None, 46.56, 13.42, 29.28, None, None], ['1710.11063', 'Grad-CAM++', 'VGG-16', 'ILSVRC-2012 val', None, None, 36.84, 17.05, 70.72, None, None], ['1710.11063', 'Grad-CAM', 'AlexNet', 'ILSVRC-2012 val', None, None, 82.86, 3.16, 13.44, None, None], ['1710.11063', 'Grad-CAM++', 'AlexNet', 'ILSVRC-2012 val', None, None, 62.75, 8.24, 86.56, None, None], ['1710.11063', 'Grad-CAM', 'VGG-16', 'Pascal 2007 val', None, None, 28.54, 21.43, 39.44, None, None], ['1710.11063', 'Grad-CAM++', 'VGG-16', 'Pascal 2007 val', None, None, 19.53, 18.96, 61.47, None, None], ['1710.11063', 'Grad-CAM', 'AlexNet', 'Pascal 2007 val', None, None, 45.82, 14.38, 27.21, None, None], ['1710.11063', 'Grad-CAM++', 'AlexNet', 'Pascal 2007 val', None, None, 29.16, 19.76, 72.79, None, None], ['1710.11063', 'Grad-CAM', 'ResNet-50', 'ILSVRC-2012 val', None, None, 30.36, 22.11, 39.49, None, None], ['1710.11063', 'Grad-CAM++', 'ResNet-50', 'ILSVRC-2012 val', None, None, 28.90, 22.16, 60.51, None, None], ['1710.11063', 'Grad-CAM', 'ResNet-50', 'Pascal 2007 val', None, None, 20.86, 21.99, 41.39, None, 
None], ['1710.11063', 'Grad-CAM++', 'ResNet-50', 'Pascal 2007 val', None, None, 16.19, 19.52, 58.61, None, None], ['1710.11063', 'Grad-CAM', 'VGG-16', 'Pascal 2012 val', None, None, None, None, None, 0.33, None], ['1710.11063', 'Grad-CAM++', 'VGG-16', 'Pascal 2012 val', None, None, None, None, None, 0.34, None], ['1910.01279', 'Backprop Vanilla', 'VGG-16', 'ILSVRC-2012 val', None, None, None, None, None, None, 41.3], ['1910.01279', 'Backprop Smooth', 'VGG-16', 'ILSVRC-2012 val', None, None, None, None, None, None, 42.4], ['1910.01279', 'Backprop Integrated', 'VGG-16', 'ILSVRC-2012 val', None, None, None, None, None, None, 44.7], ['1910.01279', 'Grad-CAM', 'VGG-16', 'ILSVRC-2012 val', None, None, 47.80, 19.60, None, None, 48.1], ['1910.01279', 'Grad-CAM++', 'VGG-16', 'ILSVRC-2012 val', None, None, 45.50, 18.90, None, None, 49.3], ['1910.01279', 'Score-CAM', 'VGG-16', 'ILSVRC-2012 val', None, None, 31.50, 30.60, None, None, 63.7], ] d = pd.DataFrame( d, columns=[ 'Source', 'Method', 'Arch', 'Dataset', 'Loc_Error_T-1', 'Loc_Error_T-5', 'Avg_Drop', 'Incr_in_confidence', 'Win_%', 'mLoc_I^c(s=0)', 'E-Pointing_Game' ] ) #@title (d.groupby(['Dataset', 'Method'], as_index=False) .mean() .replace(np.nan, '', regex=True)) #@title Full Report d.replace(np.nan, '', regex=True) ``` **Localization Error** As defined in http://image-net.org/challenges/LSVRC/2015/index#maincomp: Let $d(c_i,C_k)=0$ if $c_i=C_k$ and 1 otherwise. Let $f(b_i,B_k)=0$ if $b_i$ and $B_k$ have more than 50% overlap, and 1 otherwise. The error of the algorithm on an individual image will be computed using: $$e=\frac{1}{n} \cdot \sum_k \min_{i} \min_{m} \max \{d(c_i,C_k), f(b_i,B_{km}) \}$$ **Pixel Perturbation** (Full-Gradient) First form: remove $k$ most salient pixels from the image and measure impact on output confidence (high impact expected for good saliency methods, similar to Avg. Drop %). This might add artifacts (edges).
Second form: remove $k$ least salient pixels from the image and measure output confidence (low impact expected for good saliency methods). **Average Drop %** (Grad-CAM++, Score-CAM) The average percentage drop in the confidence of a model for a particular image $x_i$ and class $c$, when only the highlighted region is provided ($M_i\circ x_i$): $$\frac{1}{N}∑_i^N \frac{max(0, Y_i^c − O_i^c)}{Y_i^c} 100$$ * $Y_i^c = f(x_i)^c$ * $O_i^c = f(M_i\circ x_i)^c$ **Increase in confidence %** (Grad-CAM++, Score-CAM) Measures how often removing background noise improves classification confidence. $$\frac{1}{N}∑^N_i [Y^c_i < O^c_i] \cdot 100$$ ### CAM $M(f, x)^u_{ij} = \text{relu}(\sum_k w^u_k A_{ij}^k)$ ``` @tf.function def sigmoid_cam(x, y): print(f'CAM tracing x:{x.shape} y:{y.shape}') l, a = nn_s(x, training=False) maps = tf.einsum('bhwk,ku->buhw', a, sW) return l, maps[..., tf.newaxis] ``` ### Grad-CAM $M(f, x)^u_{ij} = \text{relu}(\sum_k \sum_{lm}\frac{\partial S_u}{\partial A_{lm}^k} A_{ij}^k)$ ``` @tf.function def sigmoid_gradcam(x, y): print(f'Grad-CAM tracing x:{x.shape} y:{y.shape}') with tf.GradientTape(watch_accessed_variables=False) as t: t.watch(x) l, a = nn_s(x, training=False) dlda = t.batch_jacobian(l, a) weights = tf.reduce_sum(dlda, axis=(-3, -2)) # bc(hw)k -> bck maps = tf.einsum('bhwc,buc->buhw', a, weights) return l, maps[..., tf.newaxis] ``` Note that for a fully-convolutional network, with a single densely-connected softmax classifier at its end: $S_u = \sum_k w^k_u [\frac{1}{hw}\sum_{lm}^{hw} A^k_{lm}]$ Then: \begin{align} \frac{\partial S_u}{\partial A_{lm}^k} &= \frac{\partial}{\partial A_{lm}^k} \sum_k w^k_u [\frac{1}{hw}\sum_{lm}^{hw} A^k_{lm}] \\ &= w^k_u [\frac{1}{hw}\frac{\partial A^k_{lm}}{\partial A^k_{lm}}] \\ &= \frac{w^k_u}{hw} \end{align} All constants (including $\frac{1}{hw}$) are erased when we apply normalization. Therefore, in these conditions, this method is equivalent to `CAM`.
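The equivalence above can also be checked numerically. The sketch below (pure NumPy, toy shapes and random values — no trained model involved) estimates the Grad-CAM weights $\sum_{lm} \partial S_u/\partial A_{lm}^k$ for a GAP + dense head with central finite differences and confirms they collapse to the dense weights $w^k_u$:

```python
import numpy as np

# Toy check: for a GAP + single dense-layer head, Grad-CAM weights
# (gradients summed over spatial positions) reduce to the dense weights.
rng = np.random.default_rng(0)
h, w, k, u = 4, 4, 3, 2
A = rng.random((h, w, k))          # activations A^k_lm
W = rng.random((k, u))             # dense weights w^k_u

def S(A):
    return A.mean(axis=(0, 1)) @ W  # S_u = sum_k w^k_u * GAP(A^k)

eps = 1e-6
grad_weights = np.zeros((u, k))     # will hold sum_lm dS_u/dA^k_lm
for i in range(h):
    for j in range(w):
        for c in range(k):
            Ap, Am = A.copy(), A.copy()
            Ap[i, j, c] += eps
            Am[i, j, c] -= eps
            grad_weights[:, c] += (S(Ap) - S(Am)) / (2 * eps)

# dS_u/dA^k_lm = w^k_u / (h*w); summed over the h*w positions it equals w^k_u.
print(np.allclose(grad_weights, W.T, atol=1e-4))  # True
```

Since $S_u$ is linear in the activations, the finite-difference estimate is exact up to floating-point error, and the spatial sum cancels the $\frac{1}{hw}$ factor exactly as in the derivation.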
### Grad-CAM++ $M(f, x)^u_{ij} = \sum_k \sum_{lm} \alpha_{lm}^{ku} \text{relu}(\frac{\partial S_u}{\partial A_{lm}^k}) A_{ij}^k$ Where $\alpha_{lm}^{ku} = \frac{(\frac{\partial S_u}{\partial A_{lm}^k})^2}{2(\frac{\partial S_u}{\partial A_{lm}^k})^2 + \sum_{ab} A_{ab}^k(\frac{\partial S_u}{\partial A_{lm}^k})^3}$ ``` @tf.function def sigmoid_gradcampp(x, y): print(f'Grad-CAM++ tracing x:{x.shape} y:{y.shape}') with tf.GradientTape(watch_accessed_variables=False) as tape: tape.watch(x) s, a = nn_s(x, training=False) dsda = tape.batch_jacobian(s, a) dyda = tf.einsum('bu,buhwk->buhwk', tf.exp(s), dsda) d2 = dsda**2 d3 = dsda**3 aab = tf.reduce_sum(a, axis=(1, 2)) # (BK) akc = tf.math.divide_no_nan( d2, 2.*d2 + tf.einsum('bk,buhwk->buhwk', aab, d3)) # (2*(BUHWK) + (BK)*BUHWK) weights = tf.einsum('buhwk,buhwk->buk', akc, tf.nn.relu(dyda)) # w: buk maps = tf.einsum('buk,bhwk->buhw', weights, a) # a:bhwk, m: buhw return s, maps[..., tf.newaxis] ``` ### Score-CAM $M(f, x)^u_{ij} = \text{relu}(∑_k \text{softmax}(C(A_l^k)_u) A_l^k)$ Where $ C(A_l^k)_u = f(X_b \circ \psi(\text{up}(A_l^k)))_u - f(X_b)_u $ This algorithm has undergone several updates; I followed the most recent implementation in [haofanwang/Score-CAM](https://github.com/haofanwang/Score-CAM/blob/master/cam/scorecam.py#L47). ``` def sigmoid_scorecam(x, y): acts_used = Config.explaining.score_cam_activations l, a = nn_s(x, training=False) if acts_used == 'all' or acts_used is None: acts_used = a.shape[-1] # Sorting kernels from highest to lowest variance.
std = tf.math.reduce_std(a, axis=(1, 2)) a_high_std = tf.argsort(std, axis=-1, direction='DESCENDING')[:, :acts_used] a = tf.gather(a, a_high_std, axis=3, batch_dims=1) s = tf.Variable(tf.zeros((x.shape[0], y.shape[1], *Config.data.size)), name='sc_maps') for ix in range(acts_used): a_ix = a[..., ix:ix+1] if tf.reduce_min(a_ix) == tf.reduce_max(a_ix): break s.assign_add(_scorecam_feed(x, a_ix)) return l, s[..., tf.newaxis] @tf.function def _scorecam_feed(x, a_ix): print('Score-CAM feed tracing') a_ix = tf.image.resize(a_ix, Config.data.size) b = normalize(a_ix) fm = nn(x * b, training=False) fm = tf.nn.sigmoid(fm) fm = tf.einsum('bc,bhw->bchw', fm, a_ix[..., 0]) return fm ``` Vectorized implementation: The number of activating kernels used is defined in `Config.explaining.score_cam_activations`. For each batch of 16 samples (`100 MB = 16 × 512×512×3 × 8÷1000÷1000`): 1. The 16 samples are fed forward, generating the logits `(16, 20)` and activations `(16, 16, 16, score_cam_activations)`. 2. The masked input `(16 × score_cam_activations, 512, 512, 3)` is created (`score_cam_activations × 100 MB`). 3. The masked input is fed forward (in batches, to avoid exhausting GPU memory).
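The 100 MB figure quoted above (16 images of 512×512×3 values, 8 bytes each, in decimal megabytes) works out as:

```python
# Sanity check of the per-batch memory estimate quoted above.
batch, height, width, channels, bytes_per_value = 16, 512, 512, 3, 8
mb = batch * height * width * channels * bytes_per_value / 1000 / 1000
print(round(mb, 2))  # 100.66
```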
```python def sigmoid_scorecam(x, y): acts_used = Config.explaining.score_cam_activations l, a = nn_s(x, training=False) std = tf.math.reduce_std(a, axis=(1, 2)) a_high_std = tf.argsort(std, axis=-1, direction='DESCENDING')[:, :acts_used] a = tf.gather(a, a_high_std, axis=-1, batch_dims=-1) a = tf.image.resize(a, Config.data.size) b = normalize(a) b = tf.einsum('bhwc,bhwk->bkhwc', x, b) # outer product over 2 ranks b = tf.reshape(b, (-1, *Config.data.shape)) # batchify (B*A, H, W, C) fm = nn.predict(b, batch_size=Config.data.batch_size) fm = tf.nn.sigmoid(fm) fm = tf.reshape(fm, (x.shape[0], acts_used, fm.shape[1])) # unbatchify s = tf.einsum('bhwk,bkc->bchw', a, fm) s = tf.nn.relu(s) s = s[..., tf.newaxis] return l, s ``` ## Dataset ### Augmentation Policy ``` def default_policy_fn(image): # image = tf.image.resize_with_crop_or_pad(image, *Config.data.size) # mask = tf.image.resize_with_crop_or_pad(mask, *Config.data.size) return image ``` ### Preparing and Performance Settings ``` data_path = Config.data.path %%bash if [ ! -d /root/tensorflow_datasets/amazon-from-space/ ]; then mkdir -p /root/tensorflow_datasets/amazon-from-space/ # gdown --id 12wCmah0FFPIjI78YJ2g_YWFy97gaA5S9 --output /root/tensorflow_datasets/amazon-from-space/train-jpg.tfrecords cp /content/drive/MyDrive/datasets/amazon-from-space/train-jpg.tfrecords \ /root/tensorflow_datasets/amazon-from-space/ else echo "Dir $data_path found. Skipping." 
fi class AmazonFromSpace: num_train_samples = 40479 num_test_samples = 61191 classes_ = np.asarray( ['agriculture', 'artisinal_mine', 'bare_ground', 'blooming', 'blow_down', 'clear', 'cloudy', 'conventional_mine', 'cultivation', 'habitation', 'haze', 'partly_cloudy', 'primary', 'road', 'selective_logging', 'slash_burn', 'water']) @classmethod def int2str(cls, indices): return cls.classes_[indices] @staticmethod def _bytes_feature(value): return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value.tobytes()])) @staticmethod def _int64_feature(value): return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) @staticmethod def decode_fn(record_bytes): r = tf.io.parse_single_example(record_bytes, { 'filename': tf.io.FixedLenFeature([], tf.string), 'image': tf.io.FixedLenFeature([], tf.string), 'height': tf.io.FixedLenFeature([], tf.int64, default_value=[256]), 'width': tf.io.FixedLenFeature([], tf.int64, default_value=[256]), 'channels': tf.io.FixedLenFeature([], tf.int64, default_value=[3]), 'label': tf.io.VarLenFeature(tf.int64), }) r['image'] = tf.reshape(tf.io.decode_raw(r['image'], tf.uint8), (r['height'], r['width'], r['channels'])) r['label'] = tf.sparse.to_dense(r['label']) return r @classmethod def load(cls, tfrecords_path): return tf.data.TFRecordDataset(tfrecords_path).map(cls.decode_fn, num_parallel_calls=tf.data.AUTOTUNE) CLASSES = AmazonFromSpace.classes_ int2str = AmazonFromSpace.int2str num_samples = AmazonFromSpace.num_train_samples num_train_samples = int((1-Config.training.valid_size)*num_samples) num_valid_samples = int(Config.training.valid_size*num_samples) from functools import partial @tf.function def load_fn(d, augment=False): image = d['image'] labels = d['label'] image = tf.cast(image, tf.float32) image = tf.ensure_shape(image, Config.data.shape) # image, _ = adjust_resolution(image) image = (augment_policy_fn(image) if augment else default_policy_fn(image)) image = preprocess(image) return image, labels_to_one_hot(labels) 
def adjust_resolution(image): es = tf.constant(Config.data.size, tf.float32) xs = tf.cast(tf.shape(image)[:2], tf.float32) ratio = tf.reduce_min(es / xs) xsn = tf.cast(tf.math.ceil(ratio * xs), tf.int32) image = tf.image.resize(image, xsn, preserve_aspect_ratio=True, method='nearest') return image, ratio def labels_to_one_hot(labels): return tf.reduce_max( tf.one_hot(labels, depth=CLASSES.shape[0]), axis=0) def prepare(ds, batch_size, cache=False, shuffle=False, augment=False): if cache: ds = ds.cache() if shuffle: ds = ds.shuffle(Config.data.shuffle_buffer_size, reshuffle_each_iteration=True, seed=Config.data.train_shuffle_seed) return (ds.map(partial(load_fn, augment=augment), num_parallel_calls=tf.data.AUTOTUNE) .batch(batch_size, drop_remainder=True) .prefetch(Config.data.prefetch_buffer_size)) train_dataset = AmazonFromSpace.load(f'{data_dir}/amazon-from-space/train-jpg.tfrecords') valid_dataset = train_dataset.take(num_valid_samples) train_dataset = train_dataset.skip(num_valid_samples) # train = prepare(train_dataset, Config.data.batch_size) valid = prepare(valid_dataset, Config.data.batch_size) ``` ### Examples in The Dataset ``` #@title for stage, batches, samples in zip(('validation',), (valid,), (num_valid_samples,)): print(stage) print(f' {batches}') print(f' samples: {samples}') print(f' steps : {samples // Config.data.batch_size}') print() #@title for images, labels in valid.take(1): gt = [' '.join((e[:3] for e in CLASSES[l].astype(str))) for l in labels.numpy().astype(bool)] visualize(to_image(images[:16]), gt, rows=4, figsize=(12, 10)); ``` ## Network ``` print(f'Loading {Config.model.backbone.__name__}') backbone = Config.model.backbone( input_shape=Config.data.shape, include_top=Config.model.include_top, classifier_activation=Config.model.classifier_activation, ) from tensorflow.keras.layers import Conv2D, Dense, Dropout, GlobalAveragePooling2D class DenseKur(Dense): """Dense with Softmax Weights. 
""" def call(self, inputs): kernel = self.kernel ag = kernel # tf.abs(kernel) ag = ag - tf.reduce_max(ag, axis=-1, keepdims=True) ag = tf.nn.softmax(ag) outputs = inputs @ (ag*kernel) if self.use_bias: outputs = tf.nn.bias_add(outputs, self.bias) if self.activation is not None: outputs = self.activation(outputs) return outputs def build_specific_classifier( backbone, classes, dropout_rate=0.5, name=None, gpl='avg_pool', ): x = tf.keras.Input(Config.data.shape, name='images') y = backbone(x) y = GlobalAveragePooling2D(name='avg_pool')(y) # y = Dense(classes, name='predictions')(y) y = DenseKur(classes, name='predictions')(y) return tf.keras.Model(x, y, name=name) backbone.trainable = False nn = build_specific_classifier(backbone, len(CLASSES), name='resnet101_afs') if Config.model.fine_tune_layers: print(f'Unfreezing {Config.model.fine_tune_layers:.0%} layers.') backbone.trainable = True frozen_layer_ix = int((1-Config.model.fine_tune_layers) * len(backbone.layers)) for ix, l in enumerate(backbone.layers): l.trainable = (ix > frozen_layer_ix and (not isinstance(l, tf.keras.layers.BatchNormalization) or not Config.model.freeze_batch_norm)) print(f'Loading weights from {Config.model.weights}') nn.load_weights(Config.model.weights) backbone.trainable = False nn_s = tf.keras.Model( inputs=nn.inputs, outputs=[nn.output, nn.get_layer(Config.model.gap_layer_name).input], name='nn_spatial') sW, sb = nn_s.get_layer('predictions').weights sW = sW * tf.nn.softmax(sW - tf.reduce_max(sW, axis=-1, keepdims=True)) ``` ## Saliency Methods ``` DT = 0.5 λ_pos = Config.explaining.λ_pos λ_neg = Config.explaining.λ_neg λ_bg = Config.explaining.λ_bg ``` ### Min-Max CAM (ours, test ongoing) $M(f, x)^u_{ij} = \sum_k [w^u_k - \frac{1}{|N_x|} \sum_{n\in N_x} w_k^n] A_{i,j}^k$ Where $N_x = C_x\setminus \{u\}$ ``` @tf.function def min_max_sigmoid_cam(x, y): print(f'Min-Max CAM (tracing x:{x.shape} p:{y.shape})') l, a = nn_s(x, training=False) c = len(CLASSES) s_n = tf.reduce_sum(sW, axis=-1, 
keepdims=True) s_n = s_n - sW w = λ_pos*sW - λ_neg*s_n/(c-1) maps = tf.einsum('bhwk,ku->buhw', a, w) return l, maps[..., tf.newaxis] @tf.function def contextual_min_max_sigmoid_cam(x, y): print(f'Contextual Min-Max CAM (tracing x:{x.shape} p:{y.shape})') l, a = nn_s(x, training=False) p = tf.nn.sigmoid(l) d = tf.cast(p > DT, tf.float32) c = tf.reduce_sum(d, axis=-1) c = tf.reshape(c, (-1, 1, 1)) # detections (b, 1) w = d[:, tf.newaxis, :] * sW[tf.newaxis, ...] # expand kernels in d and batches in sW w_n = tf.reduce_sum(w, axis=-1, keepdims=True) w_n = w_n - w w = λ_pos*sW - λ_neg*w_n / tf.maximum(c-1, 1) maps = tf.einsum('bhwk,bku->buhw', a, w) return l, maps[..., tf.newaxis] ``` #### Contextual ReLU MinMax $M(f, x)^u_{ij} = \sum_k [w^{u+}_k - \frac{1}{|N_x|} \sum_{n\in N_x} w^{n+}_k +\frac{1}{|C_i|}\sum_{n\in C_i} w^{n-}_k] A_{i,j}^k$ Where $N_x = C_x\setminus \{u\}$ ``` @tf.function def contextual_relu_min_max_sigmoid_cam(x, y): print(f'Contextual ReLU Min-Max CAM (tracing x:{x.shape} p:{y.shape})') l, a = nn_s(x, training=False) p = tf.nn.sigmoid(l) d = tf.cast(p > DT, tf.float32) c = tf.reshape(tf.reduce_sum(d, axis=-1), (-1, 1, 1)) # select only detected w = d[:, tf.newaxis, :] * sW[tf.newaxis, ...] 
# expand kernels in d and batches in sW wa = tf.reduce_sum(w, axis=-1, keepdims=True) wn = wa - w w = ( λ_pos * tf.nn.relu(sW) - λ_neg * tf.nn.relu(wn) / tf.maximum(c-1, 1) + λ_bg * tf.minimum(0., wa) / tf.maximum(c, 1)) maps = tf.einsum('bhwk,bku->buhw', a, w) return l, maps[..., tf.newaxis] @tf.function def contextual_relu_min_max_sigmoid_cam_2(x, y): l, a = nn_s(x, training=False) p = tf.nn.sigmoid(l) aw = tf.einsum('bhwk,ku->buhw', a, sW) d = p > DT c = tf.reduce_sum(tf.cast(d, tf.float32), axis=-1) c = tf.reshape(c, (-1, 1, 1, 1)) e = tf.repeat(d[:, tf.newaxis, ...], d.shape[1], axis=1) e = tf.linalg.set_diag(e, tf.fill(e.shape[:-1], False)) z = tf.fill(aw.shape, -np.inf) an = tf.where(e[..., tf.newaxis, tf.newaxis], aw[:, tf.newaxis, ...], z[:, tf.newaxis, ...]) ab = tf.where(d[..., tf.newaxis, tf.newaxis], aw, z) an = tf.reduce_max(an, axis=2) ab = tf.reduce_max(ab, axis=2, keepdims=True) maps = ( tf.maximum(0., aw) - tf.maximum(0., an) + tf.minimum(0., ab) ) return l, maps[..., tf.newaxis] ``` ### Min-Max Grad-CAM (ours, test ongoing) $M(f, x)^u_{ij} = \text{relu}(\sum_k \sum_{l,m}\frac{\partial J_u}{\partial A_{l,m}^k} A_{i,j}^k)$ Where $J_u = S_u - \frac{1}{|N|} \sum_{n\in N_x} S_n$ $N_x = C_x\setminus \{u\}$ ``` def min_max_activation_gain(y, s): c = len(CLASSES) # shape(s) == (b, c) s_n = tf.reduce_sum(s, axis=-1, keepdims=True) # shape(s_n) == (b) s_n = s_n-s # shape(s_n) == (b, c) return λ_pos*s - λ_neg*s_n / (c-1) @tf.function def min_max_sigmoid_gradcam(x, y): print(f'Min-Max Grad-CAM (tracing x:{x.shape} p:{y.shape})') with tf.GradientTape(watch_accessed_variables=False) as t: t.watch(x) l, a = nn_s(x, training=False) loss = min_max_activation_gain(y, l) dlda = t.batch_jacobian(loss, a) weights = tf.reduce_sum(dlda, axis=(-3, -2)) maps = tf.einsum('bhwc,buc->buhw', a, weights) return l, maps[..., tf.newaxis] def contextual_min_max_activation_gain(y, s, p): d = tf.cast(p > DT, tf.float32) c = tf.reduce_sum(d, axis=-1, keepdims=True) # only 
detections sd = s*d s_n = tf.reduce_sum(sd, axis=-1, keepdims=True) # sum logits detected (b, 1) return λ_pos*s - λ_neg*(s_n - sd)/tf.maximum(c-1, 1) @tf.function def contextual_min_max_sigmoid_gradcam(x, y): print(f'Contextual Min-Max Grad-CAM (tracing x:{x.shape} p:{y.shape})') with tf.GradientTape(watch_accessed_variables=False) as t: t.watch(x) l, a = nn_s(x, training=False) p = tf.nn.sigmoid(l) loss = contextual_min_max_activation_gain(y, l, p) dlda = t.batch_jacobian(loss, a) weights = tf.reduce_sum(dlda, axis=(-3, -2)) maps = tf.einsum('bhwc,buc->buhw', a, weights) # a*weights return l, maps[..., tf.newaxis] def contextual_relu_min_max_activation_gain(y, s): p = tf.nn.sigmoid(s) d = tf.cast(p > DT, tf.float32) c = tf.reduce_sum(d, axis=-1, keepdims=True) sd = s*d # only detections sa = tf.reduce_sum(sd, axis=-1, keepdims=True) # sum logits detected (b, 1) sn = sa - sd return tf.stack(( λ_pos * s, λ_neg * sn / tf.maximum(c-1, 1), λ_bg * (sn+sd) / tf.maximum(c, 1) ), axis=1) @tf.function def contextual_relu_min_max_sigmoid_gradcam(x, y): print(f'Contextual ReLU Min-Max Grad-CAM (tracing x:{x.shape} y:{y.shape})') with tf.GradientTape(watch_accessed_variables=False) as t: t.watch(x) l, a = nn_s(x, training=False) loss = contextual_relu_min_max_activation_gain(y, l) dlda = t.batch_jacobian(loss, a) w, wn, wa = dlda[:, 0], dlda[:, 1], dlda[:, 2] w = ( tf.nn.relu(w) - tf.nn.relu(wn) + tf.minimum(0., wa)) weights = tf.reduce_sum(w, axis=(-3, -2)) maps = tf.einsum('bhwc,buc->buhw', a, weights) return l, maps[..., tf.newaxis] ``` ## Qualitative Analysis ``` #@title def visualize_explaining_many( x, y, p, maps, N=None, max_detections=3 ): N = N or len(x) plt.figure(figsize=(16, 2*N)) rows = N cols = 1+2*max_detections actual = [','.join(CLASSES[_y]) for _y in y.numpy().astype(bool)] for ix in range(N): detections = p[ix] > DT visualize_explaining( x[ix], actual[ix], detections, p[ix], maps[ix], i0=cols*ix, rows=rows, cols=cols, full=False, 
max_detections=max_detections ) plt.tight_layout() def visualize_explaining(image, labels, detections, probs, cams, full=True, i0=0, rows=2, cols=None, max_detections=3): detections = detections.numpy() im = to_image(image) _maps = tf.boolean_mask(cams, detections) _maps = tf.image.resize(_maps, Config.data.size) _masked = to_image(masked(image, _maps)) plots = [im, *_maps[:max_detections]] title = [labels] + [f'{d} {p:.0%}' for d, p in zip(CLASSES[detections], probs.numpy()[detections])] visualize(plots, title, full=full, rows=rows, cols=cols, i0=i0) for ix, s in enumerate(_maps[:max_detections]): plt.subplot(rows, cols, i0+len(plots)+ix+1) plot_heatmap(im, s[..., 0]) cams = {} for x, y in valid.take(1): l = tf.convert_to_tensor(nn.predict(x)) p = tf.nn.sigmoid(l) # Only samples with two or more objects: s = tf.reduce_sum(tf.cast(p > DT, tf.int32), axis=1) > 1 x, y, l, p = x[s], y[s], l[s], p[s] ``` #### CAM ``` _, maps = sigmoid_cam(x, y) maps = tf.nn.relu(maps) maps = normalize(maps) cams['cam'] = maps visualize_explaining_many(x, y, p, maps) ``` #### Grad-CAM ``` _, maps = sigmoid_gradcam(x, y) maps = tf.nn.relu(maps) maps = normalize(maps) # visualize_explaining_many(x, y, p, maps) cams['gradcam'] = maps ``` #### Grad-CAM++ ``` _, maps = sigmoid_gradcampp(x, y) maps = tf.nn.relu(maps) maps = normalize(maps) # visualize_explaining_many(x, y, p, maps) cams['gradcampp'] = maps ``` ### Score-CAM ``` _, maps = sigmoid_scorecam(x, y) maps = tf.nn.relu(maps) maps = normalize(maps) visualize_explaining_many(x, y, p, maps) cams['scorecam'] = maps ``` ### Min-Max CAM #### Vanilla ``` %%time _, maps = min_max_sigmoid_cam(x, y) maps = tf.nn.relu(maps) maps = normalize(maps) # visualize_explaining_many(x, y, p, maps) cams['minmax_cam'] = maps ``` #### Contextual ``` %%time _, maps = contextual_min_max_sigmoid_cam(x, y) maps = tf.nn.relu(maps) maps = normalize(maps) # visualize_explaining_many(x, y, p, maps) cams['contextual_minmax_cam'] = maps ``` #### Contextual ReLU ``` 
%%time _, maps = contextual_relu_min_max_sigmoid_cam(x, y) maps = tf.nn.relu(maps) maps = normalize(maps) # visualize_explaining_many(x, y, p, maps) cams['contextual_relu_minmax_cam'] = maps %%time _, maps = contextual_relu_min_max_sigmoid_cam_2(x, y) maps = tf.nn.relu(maps) maps = normalize(maps) # visualize_explaining_many(x, y, p, maps) cams['contextual_relu_minmax_cam_2'] = maps ``` ### Min-Max Grad-CAM #### Vanilla ``` _, maps = min_max_sigmoid_gradcam(x, y) maps = tf.nn.relu(maps) maps = normalize(maps) visualize_explaining_many(x, y, p, maps) cams['minmax_gradcam'] = maps ``` #### Contextual ``` _, maps = contextual_min_max_sigmoid_gradcam(x, y) maps = tf.nn.relu(maps) maps = normalize(maps) visualize_explaining_many(x, y, p, maps) cams['contextual_minmax_gradcam'] = maps ``` #### Contextual ReLU ``` _, maps = contextual_relu_min_max_sigmoid_gradcam(x, y) maps = tf.nn.relu(maps) maps = normalize(maps) visualize_explaining_many(x, y, p, maps) cams['contextual_relu_minmax_gradcam'] = maps ``` ### Summary ``` observing = 'cam minmax_cam contextual_minmax_cam contextual_relu_minmax_cam'.split() titles = 'Input CAM MM C-MM CG-MM'.split() print('Results selected for vis:', *observing, sep='\n ') #@title detections = p > DT indices = tf.where(detections) sample_ix, label_ix = indices[:, 0], indices[:, 1] visualize( sum(zip(to_image(tf.gather(x, sample_ix)[:48]).numpy(), *(tf.image.resize(cams[c][detections][:48], Config.data.size).numpy() for c in observing)), ()), title=titles, rows=sample_ix[:48].shape[0], figsize=(12, 80) ); #@title plt.figure(figsize=(12, 80)) selected_images = to_image(tf.gather(x, sample_ix)[:48]).numpy() rows = len(selected_images) cols = len(observing) + 1 for ix, im in enumerate(selected_images): plt.subplot(rows, cols, ix*cols + 1) plt.imshow(im) plt.axis('off') for j, method in enumerate(observing): map = cams[method][detections][ix] map = tf.image.resize(map, Config.data.size) plt.subplot(rows, cols, ix*cols + j + 2) plot_heatmap(im, 
map[..., 0]) if ix == 0: plt.title(titles[j+1]) plt.tight_layout() ``` ## Quantitative Analysis #### Metrics ##### **Increase in Confidence %** > *Removing background noise should improve confidence (higher=better)* $\frac{1}{∑ |C_i|} ∑^N_i∑^{C_i}_c [Y^c_i < O_{ic}^c] 100$ Thought: this probably works better for *softmax* classifiers. ``` def increase_in_confidence( p, # f(x) (batch, classes) y, # f(x)[f(x) > 0.5] (detections, 1) o, # f(x*mask(x, m)) (detections, classes) samples_ix, units_ix ): oc = tf.gather(o, units_ix, axis=1, batch_dims=1) # (detections, 1) incr = np.zeros(p.shape, np.uint32) incr[samples_ix, units_ix] = tf.cast(y < oc, tf.uint32).numpy() return incr.sum(axis=0) ``` ##### **Average Drop %** > *Masking with an accurate mask should not decrease confidence (lower=better)* $\frac{1}{∑ |C_i|} ∑_i^N ∑_c^{C_i} \frac{max(0, Y_i^c − O_{ic}^c)}{Y_i^c} 100$ Measures if your mask is correctly positioned on top of the important regions that determine the class of interest. ``` def average_drop(p, y, o, samples_ix, units_ix): oc = tf.gather(o, units_ix, axis=1, batch_dims=1) drop = np.zeros(p.shape) drop[samples_ix, units_ix] = (tf.nn.relu(y - oc) / y).numpy() return drop.sum(axis=0) ``` ##### **Average Retention % (ours, testing ongoing)** > *Masking the input with an accurate complement mask for class $c$ should decrease confidence in class $c$ (higher=better)* $\frac{1}{∑ |C_i|} ∑_i^N ∑_c^{C_i} \frac{max(0, Y_i^c − \bar{O}_{ic}^c)}{Y_i^c} 100$ Where $\bar{O}_{ic}^c = f(x_i \circ (1-\psi(M(f, x_i)_{hw}^c))^c$ Masking the input $x_i$ for all classes except $c$ should cause the model's confidence in $c$ to drop. 
###### **Average Drop & Retention % (ours)** \begin{align} \frac{\text{drop} + (1-\text{retention})}{2} &= \frac{1}{2 ∑ |C_i|} ∑_i^N ∑_c^{C_i} [\frac{max(0, Y_i^c − O_{ic}^c)}{Y_i^c} + (1-\frac{max(0, Y_i^c − \bar{O}_{ic}^c)}{Y_i^c})] 100 \\ &= \frac{1}{2 ∑ |C_i|} ∑_i^N ∑_c^{C_i} [1 + \frac{Y_i^c − O_{ic}^c - (Y_i^c − \bar{O}_{ic}^c)}{Y_i^c}] 100 \\ &= \frac{1}{2 ∑ |C_i|} ∑_i^N ∑_c^{C_i} [1 + \frac{\bar{O}_{ic}^c − O_{ic}^c}{Y_i^c}] 100 \end{align} (The second step assumes the $max(0, \cdot)$ clamps are inactive, i.e. that masking never increases the model's confidence.) Where * $O_{ic}^c = f(x_i \circ \psi(M(f, x_i)_{hw}^c))^c$ * $\bar{O}_{ic}^c = f(x_i \circ (1-\psi(M(f, x_i)_{hw}^c)))^c$ ##### **Average Drop of Others % (ours, testing ongoing)** > *An ideal mask for class $c$ should not retain any objects of other classes (higher=better)* $\frac{1}{∑ |C_i|} ∑_i^N ∑_c^{C_i} \frac{1}{|D_i|} ∑_d^{D_i} \frac{max(0, Y_i^d − O_{ic}^d)}{Y_i^d} 100$ Masking the input $x_i$ for a given class $c$ should cause the confidence in other classes to drop. I.e., $f(x_i\circ \psi(M(f, x_i)^c_{hw}))^d \approx 0, \forall d\in D_i = C_i\setminus \{c\}$. For single-label problems, $D_i = \emptyset$ and Average Drop of Others is not defined. How to solve this?
``` def average_drop_of_others(p, s, y, o, samples_ix, units_ix): # Drop of all units, for all detections d = tf.gather(p, samples_ix) d = tf.nn.relu(d - o) / d # Remove drop of class `c` and non-detected classes detected = tf.cast(tf.gather(s, samples_ix), tf.float32) d = d*detected d = tf.reduce_sum(d, axis=-1) - tf.gather(d, units_ix, axis=1, batch_dims=1) c = tf.reduce_sum(detected, axis=-1) # Normalize by the number of peer labels for detection `c` d = d / tf.maximum(1., c - 1) drop = np.zeros(p.shape) drop[samples_ix, units_ix] = d.numpy() return drop.sum(axis=0) ``` ##### **Average Retention of Others % (ours, testing ongoing)** > *An ideal mask complement for class $c$ should cover all objects of the other classes (lower=better)* $\frac{1}{∑ |C_i|} ∑_i^N ∑_c^{C_i} \frac{1}{|D_i|} ∑_d^{D_i} \frac{max(0, Y_i^d − \bar{O}_{ic}^d)}{Y_i^d} 100$ Masking the input $x_i$ for all classes except $c$ should cause the confidence in other classes to stay the same or increase. I.e., $f(x_i\circ (1-\psi(M(f, x_i)^c_{ij})))^d \approx f(x_i)^d, \forall d\in D_i = C_i\setminus \{c\}$.
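To make the masking metrics concrete, here is a toy single-image walk-through with fabricated confidences (all numbers are invented for illustration): classes 0 and 1 are detected, class 2 is absent, and we score the mask produced for class 0.

```python
# Toy illustration of the masking metrics above (fabricated confidences).
# One image, detections C_i = {0, 1}; we evaluate the mask for class c = 0.
Y     = [0.9, 0.8, 0.1]    # f(x)
O     = [0.85, 0.2, 0.05]  # f(x ∘ ψ(M^0)): keep only class-0 regions
O_bar = [0.1, 0.75, 0.08]  # f(x ∘ (1 − ψ(M^0))): complement mask

drop = max(0.0, Y[0] - O[0]) / Y[0]            # low: mask kept class-0 evidence
retention = max(0.0, Y[0] - O_bar[0]) / Y[0]   # high: complement removed it
drop_of_others = max(0.0, Y[1] - O[1]) / Y[1]  # high: mask excluded class 1
retention_of_others = max(0.0, Y[1] - O_bar[1]) / Y[1]  # low: complement kept class 1

print(f'{drop:.3f} {retention:.3f} {drop_of_others:.3f} {retention_of_others:.3f}')
```

Here the hypothetical mask scores well on every metric: it barely dents class 0 when applied (small drop), removes class 0 when complemented (large retention), excludes class 1 (large drop of others), and its complement preserves class 1 (small retention of others).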
#### Experiments ``` #@title Testing Loop def experiment_with(dataset, cam_method, cam_modifier): print(f'Testing {cam_method.__name__}') t = time() r = cam_evaluation(nn, dataset, cam_method=cam_method, cam_modifier=cam_modifier) print(f'elapsed: {(time() - t)/60:.1f} minutes', end='\n\n') return r.assign(method=cam_method.__name__) def cam_evaluation(nn, dataset, cam_method, cam_modifier): metric_names = ('increase %', 'avg drop %', 'avg retention %', 'avg drop of others %', 'avg retention of others %', 'detections') metrics = (np.zeros(len(CLASSES), np.uint16), np.zeros(len(CLASSES)), np.zeros(len(CLASSES)), np.zeros(len(CLASSES)), np.zeros(len(CLASSES)), np.zeros(len(CLASSES), np.uint16)) try: for step, (x, y) in enumerate(dataset): p, maps = cam_method(x, y) p = tf.nn.sigmoid(p) maps = cam_modifier(maps) for e, f in zip(metrics, cam_evaluation_step(nn, x, p, maps)): e += f print('.', end='' if (step+1) % 80 else '\n') print() except KeyboardInterrupt: print('interrupted') metrics, detections = metrics[:-1], metrics[-1] results = {n: 100*m/detections for n, m in zip(metric_names, metrics)} results['label'] = CLASSES results['detections'] = detections results = pd.DataFrame(results) print(f'Average Drop %: {results["avg drop %"].mean():.4}%') print(f'Average Increase %: {results["increase %"].mean():.4}%') return results def cam_evaluation_step(nn, x, p, m): s = p > DT w = tf.where(s) samples_ix, units_ix = w[:, 0], w[:, 1] md = tf.image.resize(m[s], Config.data.size) detections = tf.reduce_sum(tf.cast(s, tf.uint32), axis=0) y = p[s] # (batch, c) --> (detections) xs = tf.gather(x, samples_ix) # (batch, 300, 300, 3) --> (detections, 300, 300, 3) o = nn.predict(masked(xs, md), batch_size=Config.data.batch_size) o = tf.nn.sigmoid(o) co = nn.predict(masked(xs, 1 -md), batch_size=Config.data.batch_size) co = tf.nn.sigmoid(co) samples_ix, units_ix = samples_ix.numpy(), units_ix.numpy() incr = increase_in_confidence(p, y, o, samples_ix, units_ix) drop = average_drop(p, 
y, o, samples_ix, units_ix) rete = average_drop(p, y, co, samples_ix, units_ix) drop_of_others = average_drop_of_others(p, s, y, o, samples_ix, units_ix) rete_of_others = average_drop_of_others(p, s, y, co, samples_ix, units_ix) return incr, drop, rete, drop_of_others, rete_of_others, detections.numpy() methods_being_tested = ( # Baseline # sigmoid_cam, # sigmoid_gradcam, # Best solutions # sigmoid_gradcampp, sigmoid_scorecam, # Ours # min_max_sigmoid_cam, # contextual_min_max_sigmoid_cam, # contextual_relu_min_max_sigmoid_cam, # contextual_relu_min_max_sigmoid_cam_2, ) relu_and_normalize = lambda c: normalize(tf.nn.relu(c)) results = pd.concat( [ experiment_with(valid, m, relu_and_normalize) for m in methods_being_tested ] ) # if os.path.exists(Config.report): # raise FileExistsError('You are asking me to override a report file.') results.to_csv(Config.report, index=False) ``` #### Report ``` results = pd.read_csv(Config.report) methods = ( 'sigmoid_cam', 'sigmoid_gradcampp', 'sigmoid_scorecam', 'min_max_sigmoid_cam', 'contextual_min_max_sigmoid_cam', 'contextual_relu_min_max_sigmoid_cam', ) methods_detailed = ( 'sigmoid_cam', 'sigmoid_scorecam', 'contextual_min_max_sigmoid_cam', 'contextual_relu_min_max_sigmoid_cam' ) metric_names = ( 'increase %', 'avg drop %', 'avg retention %', 'avg drop of others %', 'avg retention of others %', 'f1 score', 'f1 score negatives' ) minimizing_metrics = {'avg drop %', 'avg retention of others %', 'f1 score negatives'} def fb_score(a, b, beta=1): beta2 = beta**2 denom = (beta2 * b + a) denom[denom == 0.] 
= 1 return (1+beta2) * a * b / denom results['f1/2 score'] = fb_score(results['avg retention %'], results['avg drop of others %'], beta=1/2) results['f1 score'] = fb_score(results['avg retention %'], results['avg drop of others %'], beta=1) results['f2 score'] = fb_score(results['avg retention %'], results['avg drop of others %'], beta=2) results['f1/2 score negatives'] = fb_score(results['avg drop %'], results['avg retention of others %'], beta=1/2) results['f1 score negatives'] = fb_score(results['avg drop %'], results['avg retention of others %'], beta=1) results['f2 score negatives'] = fb_score(results['avg drop %'], results['avg retention of others %'], beta=2) (results .drop('detections', axis=1) .groupby('method') .mean() .round(4)/100) #@title Macro Average (Class-Balanced) macro_avg = ( results .groupby('method') .mean() .reindex(methods)[list(metric_names)] ) / 100 macro_avg_hm = macro_avg.copy() for m in minimizing_metrics: macro_avg_hm[m] = 1-macro_avg_hm[m] macro_avg_hm -= macro_avg_hm.min(axis=0) macro_avg_hm /= macro_avg_hm.max(axis=0) + 1e-07 plt.figure(figsize=(6, 6)) sns.heatmap( macro_avg_hm, fmt='.2%', annot=macro_avg, cmap='RdPu', cbar=False, xticklabels=[c.replace('_', ' ') for c in macro_avg.columns], yticklabels=[i.replace('_', ' ') for i in macro_avg.index], ); #@title Weighted Average (Class Frequency Weighted) total_detections = ( results .groupby('method') .agg({'detections': 'sum'}) .rename(columns={'detections': 'total_detections'}) ) w_avg = results.merge(total_detections, how='left', left_on='method', right_index=True) metric_results = { m: w_avg[m] * w_avg.detections / w_avg.total_detections for m in metric_names } metric_results['method'] = w_avg.method metric_results['label'] = w_avg.label w_avg = ( pd.DataFrame(metric_results) .groupby('method') .sum() .reindex(methods) / 100 ) hm = w_avg.copy() for m in minimizing_metrics: hm[m] = 1-hm[m] hm -= hm.min(axis=0) hm /= hm.max(axis=0) + 1e-07 plt.figure(figsize=(6, 6)) sns.heatmap( 
hm, fmt='.2%', annot=w_avg, cmap='RdPu', cbar=False, xticklabels=[c.replace('_', ' ') for c in w_avg.columns], yticklabels=[i.replace('_', ' ') for i in w_avg.index] ); #@title Detailed Results per Class plt.figure(figsize=(16, 6)) sns.boxplot( data=results[results.method.isin(methods_detailed)] .melt(('method', 'label'), metric_names, 'metric'), hue='method', x='metric', y='value' ); #@title F1 Score by Label and CAM Method plt.figure(figsize=(16, 6)) sns.barplot( data=results.sort_values('f1 score', ascending=False), hue='method', y='f1 score', x='label' ) plt.xticks(rotation=-45); ```
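In formula form, the `fb_score` helper above computes the generalized $F_\beta$ combination of two complementary percentages $a$ and $b$:

$$ F_\beta(a, b) = \frac{(1+\beta^2)\,a\,b}{\beta^2\,b + a} $$

With $\beta = 1$ this is the usual harmonic mean; $\beta = 1/2$ weights $a$ more heavily, and $\beta = 2$ weights $b$ more heavily, matching the `f1/2`, `f1` and `f2` columns computed above. For the plain score columns, $a$ is the average retention and $b$ the average drop of others; the "negatives" variants swap in the average drop and the average retention of others.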
# Transfer Learning Template

```
%load_ext autoreload
%autoreload 2
%matplotlib inline

import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt

from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
    get_loss_curve,
    get_results_table,
    get_parameters_table,
    get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
```

# Allowed Parameters

These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present).

Papermill uses the cell tag "parameters" to inject the real parameters below this cell.
Enable tags to see what I mean ``` required_parameters = { "experiment_name", "lr", "device", "seed", "dataset_seed", "n_shot", "n_query", "n_way", "train_k_factor", "val_k_factor", "test_k_factor", "n_epoch", "patience", "criteria_for_best", "x_net", "datasets", "torch_default_dtype", "NUM_LOGS_PER_EPOCH", "BEST_MODEL_PATH", "x_shape", } from steves_utils.CORES.utils import ( ALL_NODES, ALL_NODES_MINIMUM_1000_EXAMPLES, ALL_DAYS ) from steves_utils.ORACLE.utils_v2 import ( ALL_DISTANCES_FEET_NARROWED, ALL_RUNS, ALL_SERIAL_NUMBERS, ) standalone_parameters = {} standalone_parameters["experiment_name"] = "STANDALONE PTN" standalone_parameters["lr"] = 0.001 standalone_parameters["device"] = "cuda" standalone_parameters["seed"] = 1337 standalone_parameters["dataset_seed"] = 1337 standalone_parameters["n_way"] = 8 standalone_parameters["n_shot"] = 3 standalone_parameters["n_query"] = 2 standalone_parameters["train_k_factor"] = 1 standalone_parameters["val_k_factor"] = 2 standalone_parameters["test_k_factor"] = 2 standalone_parameters["n_epoch"] = 50 standalone_parameters["patience"] = 10 standalone_parameters["criteria_for_best"] = "source_loss" standalone_parameters["datasets"] = [ { "labels": ALL_SERIAL_NUMBERS, "domains": ALL_DISTANCES_FEET_NARROWED, "num_examples_per_domain_per_label": 100, "pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"), "source_or_target_dataset": "source", "x_transforms": ["unit_mag", "minus_two"], "episode_transforms": [], "domain_prefix": "ORACLE_" }, { "labels": ALL_NODES, "domains": ALL_DAYS, "num_examples_per_domain_per_label": 100, "pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"), "source_or_target_dataset": "target", "x_transforms": ["unit_power", "times_zero"], "episode_transforms": [], "domain_prefix": "CORES_" } ] standalone_parameters["torch_default_dtype"] = "torch.float32" standalone_parameters["x_net"] = [ {"class": "nnReshape", 
"kargs": {"shape":[-1, 1, 2, 256]}}, {"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":256}}, {"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features":256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ] # Parameters relevant to results # These parameters will basically never need to change standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10 standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth" # Parameters parameters = { "experiment_name": "tl_1v2:cores-oracle.run1.framed", "device": "cuda", "lr": 0.0001, "n_shot": 3, "n_query": 2, "train_k_factor": 3, "val_k_factor": 2, "test_k_factor": 2, "torch_default_dtype": "torch.float32", "n_epoch": 50, "patience": 3, "criteria_for_best": "target_accuracy", "x_net": [ {"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}}, { "class": "Conv2d", "kargs": { "in_channels": 1, "out_channels": 256, "kernel_size": [1, 7], "bias": False, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 256}}, { "class": "Conv2d", "kargs": { "in_channels": 256, "out_channels": 80, "kernel_size": [2, 7], "bias": True, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}}, {"class": "ReLU", 
"kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features": 256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ], "NUM_LOGS_PER_EPOCH": 10, "BEST_MODEL_PATH": "./best_model.pth", "n_way": 16, "datasets": [ { "labels": [ "1-10.", "1-11.", "1-15.", "1-16.", "1-17.", "1-18.", "1-19.", "10-4.", "10-7.", "11-1.", "11-14.", "11-17.", "11-20.", "11-7.", "13-20.", "13-8.", "14-10.", "14-11.", "14-14.", "14-7.", "15-1.", "15-20.", "16-1.", "16-16.", "17-10.", "17-11.", "17-2.", "19-1.", "19-16.", "19-19.", "19-20.", "19-3.", "2-10.", "2-11.", "2-17.", "2-18.", "2-20.", "2-3.", "2-4.", "2-5.", "2-6.", "2-7.", "2-8.", "3-13.", "3-18.", "3-3.", "4-1.", "4-10.", "4-11.", "4-19.", "5-5.", "6-15.", "7-10.", "7-14.", "8-18.", "8-20.", "8-3.", "8-8.", ], "domains": [1, 2, 3, 4, 5], "num_examples_per_domain_per_label": -1, "pickle_path": "/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl", "source_or_target_dataset": "source", "x_transforms": ["unit_mag"], "episode_transforms": [], "domain_prefix": "CORES_", }, { "labels": [ "3123D52", "3123D65", "3123D79", "3123D80", "3123D54", "3123D70", "3123D7B", "3123D89", "3123D58", "3123D76", "3123D7D", "3123EFE", "3123D64", "3123D78", "3123D7E", "3124E4A", ], "domains": [32, 38, 8, 44, 14, 50, 20, 26], "num_examples_per_domain_per_label": 2000, "pickle_path": "/root/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl", "source_or_target_dataset": "target", "x_transforms": ["unit_mag"], "episode_transforms": [], "domain_prefix": "ORACLE.run1_", }, ], "dataset_seed": 500, "seed": 500, } # Set this to True if you want to run this template directly STANDALONE = False if STANDALONE: print("parameters not injected, running with standalone_parameters") parameters = standalone_parameters if not 'parameters' in locals() and not 'parameters' in globals(): raise Exception("Parameter injection failed") #Use an easy dict for all the parameters p = EasyDict(parameters) if 
"x_shape" not in p: p.x_shape = [2,256] # Default to this if we dont supply x_shape supplied_keys = set(p.keys()) if supplied_keys != required_parameters: print("Parameters are incorrect") if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters)) if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys)) raise RuntimeError("Parameters are incorrect") ################################### # Set the RNGs and make it all deterministic ################################### np.random.seed(p.seed) random.seed(p.seed) torch.manual_seed(p.seed) torch.use_deterministic_algorithms(True) ########################################### # The stratified datasets honor this ########################################### torch.set_default_dtype(eval(p.torch_default_dtype)) ################################### # Build the network(s) # Note: It's critical to do this AFTER setting the RNG ################################### x_net = build_sequential(p.x_net) start_time_secs = time.time() p.domains_source = [] p.domains_target = [] train_original_source = [] val_original_source = [] test_original_source = [] train_original_target = [] val_original_target = [] test_original_target = [] # global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag # global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag def add_dataset( labels, domains, pickle_path, x_transforms, episode_transforms, domain_prefix, num_examples_per_domain_per_label, source_or_target_dataset:str, iterator_seed=p.seed, dataset_seed=p.dataset_seed, n_shot=p.n_shot, n_way=p.n_way, n_query=p.n_query, train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor), ): if x_transforms == []: x_transform = None else: x_transform = get_chained_transform(x_transforms) if episode_transforms == []: episode_transform = None else: raise 
Exception("episode_transforms not implemented") episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1]) eaf = Episodic_Accessor_Factory( labels=labels, domains=domains, num_examples_per_domain_per_label=num_examples_per_domain_per_label, iterator_seed=iterator_seed, dataset_seed=dataset_seed, n_shot=n_shot, n_way=n_way, n_query=n_query, train_val_test_k_factors=train_val_test_k_factors, pickle_path=pickle_path, x_transform_func=x_transform, ) train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test() train = Lazy_Iterable_Wrapper(train, episode_transform) val = Lazy_Iterable_Wrapper(val, episode_transform) test = Lazy_Iterable_Wrapper(test, episode_transform) if source_or_target_dataset=="source": train_original_source.append(train) val_original_source.append(val) test_original_source.append(test) p.domains_source.extend( [domain_prefix + str(u) for u in domains] ) elif source_or_target_dataset=="target": train_original_target.append(train) val_original_target.append(val) test_original_target.append(test) p.domains_target.extend( [domain_prefix + str(u) for u in domains] ) else: raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}") for ds in p.datasets: add_dataset(**ds) # from steves_utils.CORES.utils import ( # ALL_NODES, # ALL_NODES_MINIMUM_1000_EXAMPLES, # ALL_DAYS # ) # add_dataset( # labels=ALL_NODES, # domains = ALL_DAYS, # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"), # source_or_target_dataset="target", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"cores_{u}" # ) # from steves_utils.ORACLE.utils_v2 import ( # ALL_DISTANCES_FEET, # ALL_RUNS, # ALL_SERIAL_NUMBERS, # ) # add_dataset( # labels=ALL_SERIAL_NUMBERS, # domains = list(set(ALL_DISTANCES_FEET) - {2,62}), # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), 
"oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"), # source_or_target_dataset="source", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"oracle1_{u}" # ) # from steves_utils.ORACLE.utils_v2 import ( # ALL_DISTANCES_FEET, # ALL_RUNS, # ALL_SERIAL_NUMBERS, # ) # add_dataset( # labels=ALL_SERIAL_NUMBERS, # domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}), # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"), # source_or_target_dataset="source", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"oracle2_{u}" # ) # add_dataset( # labels=list(range(19)), # domains = [0,1,2], # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"), # source_or_target_dataset="target", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"met_{u}" # ) # # from steves_utils.wisig.utils import ( # # ALL_NODES_MINIMUM_100_EXAMPLES, # # ALL_NODES_MINIMUM_500_EXAMPLES, # # ALL_NODES_MINIMUM_1000_EXAMPLES, # # ALL_DAYS # # ) # import steves_utils.wisig.utils as wisig # add_dataset( # labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES, # domains = wisig.ALL_DAYS, # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"), # source_or_target_dataset="target", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"wisig_{u}" # ) ################################### # Build the dataset ################################### train_original_source = Iterable_Aggregator(train_original_source, p.seed) val_original_source = Iterable_Aggregator(val_original_source, p.seed) test_original_source = Iterable_Aggregator(test_original_source, p.seed) train_original_target = Iterable_Aggregator(train_original_target, p.seed) val_original_target = 
Iterable_Aggregator(val_original_target, p.seed) test_original_target = Iterable_Aggregator(test_original_target, p.seed) # For CNN We only use X and Y. And we only train on the source. # Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda) val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda) test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda) train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda) val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda) test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda) datasets = EasyDict({ "source": { "original": {"train":train_original_source, "val":val_original_source, "test":test_original_source}, "processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source} }, "target": { "original": {"train":train_original_target, "val":val_original_target, "test":test_original_target}, "processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target} }, }) from steves_utils.transforms import get_average_magnitude, get_average_power print(set([u for u,_ in val_original_source])) print(set([u for u,_ in val_original_target])) s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source)) print(s_x) # for ds in [ # train_processed_source, # val_processed_source, # test_processed_source, # train_processed_target, # val_processed_target, # test_processed_target # ]: # for s_x, s_y, q_x, q_y, _ in ds: # for X in (s_x, q_x): # for x in X: # assert np.isclose(get_average_magnitude(x.numpy()), 1.0) # assert np.isclose(get_average_power(x.numpy()), 1.0) 
################################### # Build the model ################################### # easfsl only wants a tuple for the shape model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape)) optimizer = Adam(params=model.parameters(), lr=p.lr) ################################### # train ################################### jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device) jig.train( train_iterable=datasets.source.processed.train, source_val_iterable=datasets.source.processed.val, target_val_iterable=datasets.target.processed.val, num_epochs=p.n_epoch, num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH, patience=p.patience, optimizer=optimizer, criteria_for_best=p.criteria_for_best, ) total_experiment_time_secs = time.time() - start_time_secs ################################### # Evaluate the model ################################### source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test) target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test) source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val) target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val) history = jig.get_history() total_epochs_trained = len(history["epoch_indices"]) val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val)) confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl) per_domain_accuracy = per_domain_accuracy_from_confusion(confusion) # Add a key to per_domain_accuracy for if it was a source domain for domain, accuracy in per_domain_accuracy.items(): per_domain_accuracy[domain] = { "accuracy": accuracy, "source?": domain in p.domains_source } # Do an independent accuracy assesment JUST TO BE SURE! 
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device) # _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device) # _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device) # _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device) # assert(_source_test_label_accuracy == source_test_label_accuracy) # assert(_target_test_label_accuracy == target_test_label_accuracy) # assert(_source_val_label_accuracy == source_val_label_accuracy) # assert(_target_val_label_accuracy == target_val_label_accuracy) experiment = { "experiment_name": p.experiment_name, "parameters": dict(p), "results": { "source_test_label_accuracy": source_test_label_accuracy, "source_test_label_loss": source_test_label_loss, "target_test_label_accuracy": target_test_label_accuracy, "target_test_label_loss": target_test_label_loss, "source_val_label_accuracy": source_val_label_accuracy, "source_val_label_loss": source_val_label_loss, "target_val_label_accuracy": target_val_label_accuracy, "target_val_label_loss": target_val_label_loss, "total_epochs_trained": total_epochs_trained, "total_experiment_time_secs": total_experiment_time_secs, "confusion": confusion, "per_domain_accuracy": per_domain_accuracy, }, "history": history, "dataset_metrics": get_dataset_metrics(datasets, "ptn"), } ax = get_loss_curve(experiment) plt.show() get_results_table(experiment) get_domain_accuracies(experiment) print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"]) print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"]) json.dumps(experiment) ```
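The strict parameter check performed near the top of the template reduces to plain set arithmetic on the supplied and required keys. A minimal standalone sketch of the same validation logic, using a hypothetical subset of the required parameter names:

```python
# Hypothetical subset of the template's required parameter names,
# used only to illustrate the validation logic.
required_parameters = {"experiment_name", "lr", "seed", "n_way"}

def check_parameters(supplied: dict):
    """Raise if the supplied parameters are not exactly the required set."""
    supplied_keys = set(supplied.keys())
    extra = supplied_keys - required_parameters     # keys we shouldn't have
    missing = required_parameters - supplied_keys   # keys we still need
    if extra or missing:
        raise RuntimeError(f"Parameters are incorrect: extra={extra}, missing={missing}")
    return True

# A complete parameter set passes...
check_parameters({"experiment_name": "demo", "lr": 1e-3, "seed": 0, "n_way": 8})

# ...while a missing key raises, exactly like the template's guard cell.
try:
    check_parameters({"experiment_name": "demo", "lr": 1e-3})
except RuntimeError as e:
    print(e)  # reports the missing keys
```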
# Gaussian Mixture Model vs. KMeans

**Assignment**: Comparison of the KMeans method and GMM for clustering using the Fashion-MNIST dataset. Explore the parameters of GMM.

<hr>

### Table of Contents

- **0. Introduction**
    - Task Description
    - Importing Modules
- **1. Importing the Dataset**
    - Importing
    - Data Pre-Processing
- **2. Implementing our own gridsearch**
- **3. Implementation of KMeans**
    - Simple Implementation From Scratch (without gridsearch)
    - Implementation using SciKit-Learn
- **4. Implementation of GMM**
    - Simple Implementation From Scratch (without gridsearch)
    - Implementation using SciKit-Learn
- **5. Comparing the best results of both KMeans and GMM**
- **6. Exploring the parameters of GMM, i.e. Implementing Efficient Greedy Learning**
- **7. References**

<hr>

## 0. Introduction

### 0.1 Task Description

### 0.2 Importing Modules

```
from keras.datasets import fashion_mnist
from itertools import cycle
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random as rd
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import homogeneity_score
from sklearn.metrics import silhouette_score
from sklearn.metrics import v_measure_score
from sklearn.mixture import GaussianMixture
import warnings

warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
```

## 1. Importing the dataset: Fashion-MNIST

### 1.1.
Importing

```
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

print(f"Shape of x_train: {x_train.shape}")
print(f"Shape of y_train: {y_train.shape}")
print(f"Shape of x_test: {x_test.shape}")
print(f"Shape of y_test: {y_test.shape}")

labelNames = ["T-shirt/Top", "Trouser", "Pullover", "Dress", "Coat",
              "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

# Sample images
length = 3
plt.figure(figsize=(7, 7))
for i in range(length * length):
    temp = rd.randint(0, len(x_train) - 1)  # randint is inclusive on both ends
    image = x_train[temp]
    plt.subplot(length, length, i + 1)
    plt.imshow(image, cmap='gray')
    plt.xticks([])
    plt.yticks([])
    plt.xlabel(labelNames[y_train[temp]])
plt.tight_layout()
plt.show()
```

### 1.2. Data Preprocessing

**Normalization**: Each item in the dataset is a grayscale picture (1 channel) where each pixel is a value lying between $0$ and $255$. We need to rescale each image to the [0,1] range, i.e., normalize our images. This rescaling is done by dividing each pixel value by 255.

```
# Before
print(f"One pixel before normalization: {x_train[0][5][15]}")

# Normalization
x_train = x_train / 255.
x_test = x_test / 255.

# After
print(f"One pixel after normalization: {x_train[0][5][15]}")
```

**Flattening**: Since each image is a 2d picture of 28 pixels by 28, we need to reshape our data so it can be fed into a model. To do so, we reshape each image into a single dimension of size 28\*28, i.e. 784.

```
x_train_processed = x_train.reshape(len(x_train), 784)
x_test_processed = x_test.reshape(len(x_test), 784)

print(f"Shape of x_train_processed: {x_train_processed.shape}")
print(f"Shape of x_test_processed: {x_test_processed.shape}")
```

## 2.
Implementing a Clustering GridSearch

```
class GridSearch():
    def __init__(self, model, hyperparameters, n_jobs):
        """Lightweight grid search over clustering hyperparameters."""
        if model == "kmeans":
            self.model = "kmeans"
        else:
            self.model = "gaussian"
        self.hyperparameters = hyperparameters
        self.n_jobs = n_jobs
        self.training_results = {}

    def fit(self, x_train, y_train):
        """Trains one model per hyperparameter combination and records its scores."""
        model_number = 0
        if self.model == "kmeans":
            for component in self.hyperparameters["n"]:
                for initialization in self.hyperparameters["init"]:
                    for algorithm in self.hyperparameters["algorithm"]:
                        model_number += 1
                        # Note: n_jobs was removed from KMeans in scikit-learn 1.0;
                        # drop it when running on newer versions.
                        model = KMeans(n_clusters=component, init=initialization,
                                       algorithm=algorithm, n_jobs=self.n_jobs,
                                       random_state=0)
                        model.fit(x_train)
                        h_score, v_score, sil_score = self.scoring(model, x_train, y_train)
                        print(f"KMeans #{model_number} trained with {component} components, " +
                              f"{initialization} initialization and {algorithm} algorithm, " +
                              f"yielding homogeneity, v_measure and silhouette scores of {h_score}, " +
                              f"{v_score} and {sil_score} resp.")
                        self.training_results[model_number] = {"n_clusters": component,
                                                               "init": initialization,
                                                               "algorithm": algorithm,
                                                               "model": model,
                                                               "homogeneity_score": h_score,
                                                               "v_measure_score": v_score}
        else:
            for component in self.hyperparameters["n"]:
                for covariance in self.hyperparameters["covariance_type"]:
                    model_number += 1
                    # GaussianMixture does not accept an n_jobs argument
                    model = GaussianMixture(n_components=component,
                                            covariance_type=covariance,
                                            random_state=0)
                    model.fit(x_train)
                    h_score, v_score, sil_score = self.scoring(model, x_train, y_train)
                    print(f"GMM #{model_number} trained with {component} components and cov. type {covariance}, " +
                          f"yielding homogeneity, v_measure and silhouette scores of {h_score}, {v_score} and " +
                          f"{sil_score} resp.")
                    self.training_results[model_number] = {"n_components": component,
                                                           "covariance_type": covariance,
                                                           "model": model,
                                                           "homogeneity_score": h_score,
                                                           "v_measure_score": v_score}
        return self.training_results

    def scoring(self, model, X, y):
        """Returns homogeneity, v-measure and silhouette scores for a fitted model."""
        y_pred = model.predict(X)
        v_score = v_measure_score(y, y_pred)
        h_score = homogeneity_score(y, y_pred)
        sil_score = silhouette_score(X, y_pred, metric='euclidean')
        return h_score, v_score, sil_score

    def predict(self, model_number, x_test, y_test):
        """Scores a previously trained model on the test set and returns its predictions."""
        model = self.training_results[model_number]["model"]
        h_score, v_score, sil_score = self.scoring(model, x_test, y_test)
        print(f"Model #{model_number} yielded homogeneity, v_measure and silhouette scores " +
              f"of {h_score}, {v_score} and {sil_score} resp. on the test set")
        return model.predict(x_test)
```

## 3. Implementation of KMeans

### 3.1. Simple Implementation From Scratch (without gridsearch)

First of all, let's build our own KMeans to see how it works. Using the ``make_blobs`` function from the SciKit-Learn library, we implement a KMeans using the Euclidean distance metric.

```
def generate_data(number_of_points):
    """
    Generates a set of 2-dimensional coordinates using make_blobs.
    The set is generated with a random number of centers (between 2 and 10).
    """
    X, _ = make_blobs(n_samples=number_of_points, centers=rd.randint(2, 10),
                      n_features=2, random_state=0)
    return X

def compare_dict(dict1, dict2):
    """
    Compares two dictionaries, asserting whether they are identical.
    """
    for key in dict1.keys():
        if not np.array_equal(dict1[key], dict2[key]):
            return False
    return True

def euclidian_distance(p1, p2):
    """
    Calculates the Euclidean distance between two points in the R² space.
""" distance = 0 for idx, item in enumerate(p1): distance += (item - p2[idx]) ** 2 return math.sqrt(distance) class k_means(): def __init__(self, nb_of_data_points, k=2): """ Initializes the k-means class. """ self.data = generate_data(nb_of_data_points) self.k = k self.colors = cycle(["g","r","b","c","m","y"]) self.iterated = 0 # Generates an empty dictionary to store each step's centroids self.model_centroids = {} # Generates an empty dictionary to store each step's classification self.model_classifications = {} def fit(self, max_iterations=100): """ Fits the model. """ self.iterated = 0 # Select the initial k centroids centroids = rd.sample(list(self.data), 2) # Iterates to fit the model for iteration in range(max_iterations): print(f"Epoch {iteration}") # Generates an empty dictionary to store the step's classification step_classification = {} for k in range(self.k): step_classification[k] = [] # Calculates the euclidian distance between each data points and each centroids # Records the points in the classification corresponding to its nearest centroid for point in self.data: distances = list(map(lambda x: euclidian_distance(point, x), centroids)) argmin = min(range(len(distances)), key=distances.__getitem__) step_classification[argmin].append(point) # Records the state of the model after the iteration's fitting self.model_centroids[iteration] = centroids self.model_classifications[iteration] = step_classification # If no change has been identified between this iteration and the last, the model will stop. if len(self.model_classifications)>1: if compare_dict(self.model_classifications[iteration], self.model_classifications[iteration-1]): self.iterated = iteration print(f"No significant change has been achieved during epoch {self.iterated}. 
"+\ "Model is considered fitted.") break # Updates the centroids centroids = [] for classification in step_classification.values(): if classification == []: centroids.append(np.zeros(3)) else: centroids.append(np.mean(classification, axis=0)) def plot_data(self): """ Plots the distribution of the data """ # Declares the plot plt.figure(figsize=(6,6)) # Plots the data without colors if the dataset was not iterated over. if self.model_centroids == {} or self.iterated == 0: plt.scatter(self.data[:,0],self.data[:,1]) else: # Plots the centroids first for centroid in self.model_centroids[self.iterated]: plt.scatter(centroid[0], centroid[1], marker="o", color="k", s=50, linewidths=5) # plots the data points for classification in self.model_classifications[self.iterated].values(): color = next(self.colors) for feature in classification: plt.scatter(feature[0], feature[1], marker='x', color=color, s=20, linewidths=2) plt.show() model = k_means(1000, 3) model.plot_data() model.fit() model.plot_data() ``` ### 3.2. Implementation using SciKit-Learn Now that we know how it works, we can apply the KNN model offered by the SciKit-Learn library onto the Fashion MNIST dataset. 
```
hyperparameters = {"n": range(8, 15),
                   "init": ["k-means++", "random"],
                   "algorithm": ["full", "elkan"]}

grid_search = GridSearch("kmeans", hyperparameters, n_jobs=-1)
results = grid_search.fit(x_train_processed, y_train)

print("Predicting fashion items on the test set")
y_pred = grid_search.predict(2, x_test_processed, y_test)

label_number = len(np.unique(y_pred))
# For each cluster label, collect the indexes of the test samples assigned to it
cluster_index = [[] for i in range(label_number)]
for i, label in enumerate(y_pred):
    cluster_index[label].append(i)

for j, _ in enumerate(labelNames):
    plt.figure(figsize=(3, 3))
    clust = j  # cluster number to visualise
    num = 10
    for i in range(1, num):
        plt.subplot(3, 3, i)  # (number of rows, number of columns, item number)
        plt.imshow(x_test_processed[cluster_index[clust][i + 20]].reshape(x_test.shape[1], x_test.shape[2]),
                   cmap=plt.cm.binary)
    plt.show()
```

## 4. Implementation of GMM

### 4.1. Simple Implementation From Scratch (without gridsearch)

### 4.2.
Implementation using SciKit-Learn

```
hyperparameters = {"n": range(10, 21),
                   "covariance_type": ["full", "tied", "diag", "spherical"]}

grid_search = GridSearch("gmm", hyperparameters, n_jobs=-1)
results = grid_search.fit(x_train_processed, y_train)

# Our GridSearch returns a results dict rather than exposing sklearn's
# best_estimator_ API, so we pick the best model by its v-measure score.
best_model_number = max(results, key=lambda k: results[k]["v_measure_score"])
print("Best parameters found by grid search:")
print({k: v for k, v in results[best_model_number].items() if k != "model"})

print("Predicting fashion items on the test set")
y_pred = grid_search.predict(best_model_number, x_test_processed, y_test)

label_number = 10
# For each cluster label, collect the indexes of the test samples assigned to it
cluster_index = [[] for i in range(label_number)]
for i, label in enumerate(y_pred):
    cluster_index[label].append(i)

for j, _ in enumerate(labelNames):
    plt.figure(figsize=(6, 6))
    clust = j  # cluster number to visualise
    num = 10
    for i in range(1, num):
        plt.subplot(3, 3, i)  # (number of rows, number of columns, item number)
        plt.imshow(x_test_processed[cluster_index[clust][i + 20]].reshape(x_test.shape[1], x_test.shape[2]),
                   cmap=plt.cm.binary)
    plt.show()
```

## 5. Comparing the best results of both KMeans and GMM

## 6. Exploring the parameters of GMM, i.e. Implementing Efficient Greedy Learning

Based on ``Verbeek, Jakob & Vlassis, Nikos & Krose, B.. (2003). Efficient Greedy Learning of Gaussian Mixture Models. Neural computation. 15. 469-85. 10.1162/089976603762553004``.

## 7.
References used for this exercise - https://www.researchgate.net/publication/10896453_Efficient_Greedy_Learning_of_Gaussian_Mixture_Models - https://www.kaggle.com/c/ttic-31020-hw5-fmnist-gmm/leaderboard (leaderboard for GMM classification of Fashion-MNIST) - https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html - https://scikit-learn.org/stable/modules/generated/sklearn.metrics.v_measure_score.html - https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html - https://scikit-learn.org/stable/modules/generated/sklearn.metrics.homogeneity_score.html - https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html
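The evaluation metrics referenced above (homogeneity, V-measure, silhouette) can be computed directly with scikit-learn whenever ground-truth labels are available, as with Fashion-MNIST. A minimal, self-contained sketch on synthetic blobs, using plain `KMeans` rather than the notebook's custom `GridSearch` wrapper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import v_measure_score, homogeneity_score, silhouette_score

# Three well-separated blobs stand in for the image features.
X, y_true = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)

y_pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# External metrics: compare the partition to ground truth; they are
# invariant to permuting cluster ids, so no cluster-to-class mapping is needed.
print("homogeneity:", homogeneity_score(y_true, y_pred))
print("v-measure:  ", v_measure_score(y_true, y_pred))
# Internal metric: needs only the features and the predicted partition.
print("silhouette: ", silhouette_score(X, y_pred))
```

Because these scores need no mapping from cluster ids to class names, they are convenient for comparing the KMeans and GMM partitions directly.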
# WindSE Grid Convergence Example

**Jordan Perr-Sauer, CSCI5636, Fall 2021**

```
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```

## WindSE Background

WindSE is a tool developed by NREL based on FEniCS. It solves the Navier-Stokes equations to estimate the power output of a wind farm, given the inflow conditions, the topography of the terrain, and the location and characteristics of the wind turbines. The tool was developed to aid in the optimization of wind farm layouts.

WindSE supports two types of finite elements, `taylor_hood` and `linear`. The `taylor_hood` element uses quadratic elements for the velocity field and linear elements for the pressure field. This type of element is used here. WindSE supports both 2D and 3D models.

## WindSE API

There are two ways to use WindSE. The first is through the command line. Provided an input parameters file (in YAML format), you can run that file, and override certain parameters in it, with the `windse run` command:

```
!windse run -p domain:nx:50 -p domain:ny:50 convergence-2D-3-Turbine-0x.yaml
```

A second way to interface with WindSE is the `windse_driver` Python module. This provides a high-level interface for setting up and solving a WindSE problem using the parameters file.

```
params = df.Initialize("convergence-2D-3-Turbine-0x.yaml")
dom, farm = df.BuildDomain(params)
problem = df.BuildProblem(params,dom,farm)
solver = df.BuildSolver(params,problem)
```

Of course, it is also possible to set up and solve custom problems by interfacing with the WindSE internal classes directly (that is, not using the driver).

# 2D Wind Farm with 3 Turbines

## Run Experiments

First, make sure the output directory is empty. Then, you can run the experiments from the command line:

```python experiment_2d.py```

- Base File: convergence-2D-3-Turbine.yaml
- nx-ny in 10...200 (by 10)

```python experiment_2d_options.py```

Parameters:

- Base File: convergence-2D-3-Turbine.yaml
- nx-ny in {60, 80, ... 200}
- inflow angle in {0.0, $\pi$/16}

## Examine Experiment Output

### Meshes

The meshes are displayed through ParaView. We can observe the automatic cylindrical refinement around the turbine locations.

<img src="static/exp1-mesh-n10.png" width=200 height=200 /> <img src="static/exp1-mesh-n50.png" width=200 height=200 /> <img src="static/exp1-mesh-n100.png" width=200 height=200 />

### Steady State Solution

<img src="static/exp1-velocity-n100.png" width=600 height=400 />

## Convergence of Power Output

Here, the power output is computed for each of the three turbines.

```
import experiment_2d

df = experiment_2d.get_results()
df["sum"] = df["sum"] / 10**3
df["diff"] = df["sum"].diff().abs()

df.plot(x="dofs", y="diff", logx=True, logy=True)
plt.xlabel("Number of DOFs")
plt.ylabel("Difference (Power, kW)")
x = df["dofs"][1:]
plt.plot(x, (x.max()*10**4)/x**2, "b--", label="1/x^2")
plt.plot(x, (x.max()*10**1)/x, "g--", label="1/x")
plt.legend()  # show the reference-slope labels
plt.show()

df = experiment_2d.get_results()
df = df.melt(id_vars=["dofs"], value_vars=["Turbine 1", "Turbine 2", "Turbine 3"])
sns.lineplot(data=df, x="dofs", y="value", hue="variable")
plt.xlabel("Number of DOFs")
plt.ylabel("Power (kW)")
plt.xscale("log")
plt.show()
```

### Two Inflow Angles

Here, we run the same experimental setup between nx=[60,200] and for two different inflow conditions.

<img src="static/exp2-velocity.png" width=300 height=300 /> <img src="static/exp1-velocity-n100.png" width=300 height=300 />

Left: Inflow = $\pi$/16, Right: Inflow = 0.0

```
import experiment_2d_options

df = experiment_2d_options.get_results()
df["sum"] = df["sum"] / 10**3
df["diff"] = df.sort_values("dofs").groupby("inflow")["sum"].diff().abs()
df["inflow"] = df["inflow"].astype("str")

sns.lineplot(data=df, x="dofs", y="diff", hue="inflow")
plt.xscale("log")
plt.yscale("log")
x = df["dofs"][2:]
plt.plot(x, (x.max()*10**6)/x**2, "b--", label="1/x^2")
plt.plot(x, (x.max()*10**1)/x, "g--", label="1/x")
plt.legend()  # show the reference-slope labels

df = experiment_2d_options.get_results()
df["sum"] = df["sum"] / 10**3
df["inflow"] = df["inflow"].astype("str")
df = df.melt(id_vars=["dofs", "inflow"], value_vars=["Turbine 1", "Turbine 2", "Turbine 3"])
sns.lineplot(data=df, x="dofs", y="value", hue="variable", style="inflow")
plt.xlabel("Number of DOFs")
plt.ylabel("Power (kW)")
plt.xscale("log")
plt.show()
```
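The log-log difference plots above can be summarized by an observed order of convergence: given the quantity of interest on three successively refined grids with a constant refinement ratio $r$, the order is $p \approx \log(|P_1 - P_2| / |P_2 - P_3|) / \log(r)$. A small sketch with synthetic power values (not actual WindSE output):

```python
import numpy as np

def observed_order(p_coarse, p_mid, p_fine, r=2.0):
    """Observed convergence order from three solutions on grids
    refined by a constant ratio r (spacings h, h/r, h/r^2)."""
    e_coarse = abs(p_mid - p_coarse)
    e_fine = abs(p_fine - p_mid)
    return np.log(e_coarse / e_fine) / np.log(r)

# Synthetic second-order behaviour: P(h) = P_exact + C * h**2
P_exact, C = 1500.0, 40.0
p1, p2, p3 = (P_exact + C * h**2 for h in (0.4, 0.2, 0.1))
print(observed_order(p1, p2, p3))  # -> 2.0 (up to floating point)
```

Applied to the turbine power sums at nx = 50, 100, 200, this would give a single number to compare against the 1/x and 1/x^2 reference slopes drawn on the plots.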
```
import pandas as pd
from sklearn import feature_selection as skfs
from sklearn import preprocessing as skpp
from sklearn import model_selection as ms
import numpy as np
import datetime as dt
```

# Level 3

```
def dummify(dataframe, *not_to_dummy, dummy_na=True):
    print("""DUMMIFYING: Creating dummy columns from categorical variables.
    This is necessary as scikit-learn models always treat feature columns as continuous variables.
    Therefore, we change categorical variables to binary pseudo-continuous variables ({0})""".format(dt.datetime.now()))

    print("""Dummifying (Step1): We make a deep copy of the dataframe.
    If we did not do this, Python would change our original dataframe, but we don't want this""")
    dummy_tmp1 = dataframe.copy(deep=True)

    print("""Dummifying (Step2): dummy_tmp2: We drop columns from the table that we do not want to be dummified""")
    # list(...) so the tuple of labels is not misread as a single (MultiIndex) label
    dummy_tmp2 = dummy_tmp1.drop(list(not_to_dummy), axis=1, errors='ignore')

    print("""Dummifying (Step3): dummy_tmp3: Transforming all categorical columns of dummy_tmp2 to dummy variables""")
    dummy_tmp3 = pd.get_dummies(dummy_tmp2, dummy_na=dummy_na)

    print("""Dummifying (Step4): Concatenating (aka. joining)""")
    print("""Dummifying (Step4a): dataframe2: Make a deep copy of the dummy_tmp3 table""")
    dataframe2 = dummy_tmp3.copy(deep=True)
    for col_name in not_to_dummy:  # Re-attach the columns that were excluded from dummification
        col = dataframe[col_name]
        dataframe2 = pd.concat([dataframe2, col], axis=1)

    # Print which columns were transformed:
    print('Dummified: {}'.format(dataframe.dtypes[dataframe.dtypes == object].keys()))
    return dataframe2

def variance_threshold_select(dataframe, thresh=0.0, na_replacement=-999):
    print('Transforming: Deleting low variance columns ({0})'.format(dt.datetime.now()))
    dataframe1 = dataframe.copy(deep=True)  # Make a deep copy of the dataframe
    selector = skfs.VarianceThreshold(thresh)  # feature_selection was imported as skfs above
    selector.fit(dataframe1.fillna(na_replacement))  # Fill NA values, as VarianceThreshold cannot handle them
    dataframe2 = dataframe.loc[:, selector.get_support(indices=False)]  # Keep only the columns whose variance exceeds the threshold
    return dataframe2
```

# Level 2

```
def read_csv_file(path='Data/dataset.csv', y_value=''):
    # The path defaults to Data/dataset.csv, but you can change it
    ##########
    # PRINTING information, before the function begins
    print("Your file path is: {}. If you get a file-not-found error, try to change your file path. Don't forget the .csv at the end of the file".format(path))
    if y_value:
        print('You have specified a y-column: {}. You get: x-table, y-variable'.format(y_value))
    else:
        print('You have not specified a y-column. You get a single dataframe')

    ######
    # MAIN
    # dataframe1 is a dataframe. More information on data structures: http://pandas.pydata.org/pandas-docs/stable/dsintro.html
    dataframe1 = pd.read_csv(path)  # Load the CSV from the path argument (previously the argument was ignored)
    if y_value:  # If a y_value was given, create our x and y variables
        y = dataframe1[y_value]  # We choose our y_value column as our y column
        x = dataframe1.drop(y_value, axis=1)  # Drop the y column to create an x table with no y variable in it
        return x, y  # Returns a tuple (x, y). You can get them using x, y = read_csv_file(y_value='your_value')
    else:
        return dataframe1  # Returns a dataframe if no y_value was provided

def preprocess_x(dataframe, threshold=0.0):
    dataframe1 = dataframe.copy(deep=True)  # Deep copy, so the original dataframe is not modified
    dataframe2 = dummify(dataframe1)  # Create dummies from categorical variables (see http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html)
    dataframe3 = variance_threshold_select(dataframe2, thresh=threshold)  # Delete low variance variables
    return dataframe3

def preprocess_y(column):
    le = skpp.LabelEncoder()
    column = le.fit_transform(column)
    return column
```

# Level 1

```
# Standardized Functions
x, y = read_csv_file(y_value='Survived')
x = preprocess_x(x)
y = preprocess_y(y)
x_train, x_test, y_train, y_test = ms.train_test_split(x, y)
```
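The two preprocessing steps wrapped by `dummify` and `variance_threshold_select` can be seen in isolation on a tiny hand-made frame. This is a standalone sketch of `pd.get_dummies` plus `VarianceThreshold`, not a call into the helpers above:

```python
import pandas as pd
from sklearn.feature_selection import VarianceThreshold

df = pd.DataFrame({
    "color": ["red", "blue", "red", "blue"],
    "const": [1, 1, 1, 1],        # zero variance, should be dropped
    "age":   [10, 20, 30, 40],
})

# Step 1: categorical column -> binary indicator columns
dummies = pd.get_dummies(df)

# Step 2: drop columns whose variance does not exceed the threshold
selector = VarianceThreshold(threshold=0.0)
selector.fit(dummies)
reduced = dummies.loc[:, selector.get_support()]

print(list(reduced.columns))  # 'const' is gone; the dummy columns remain
```

The same pattern scales to the Level 2/Level 1 pipeline: dummify first, then variance-filter, so constant dummy columns created for rare categories are removed as well.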
This is toy example code for the normalization model presented in `xx`.

```
# import required libraries
import pymc3 as pm
import numpy as np
from theano import tensor as tt
import theano
from theano import scan
import pandas as pd
from pymc3.math import logsumexp
import matplotlib.pyplot as plt
import pymc3.distributions.transforms as tr
import pickle
import os
import re

print(os.getcwd())  # the current directory is the notebook directory
%matplotlib inline
os.chdir('../')  # change to the model home directory
print(os.getcwd())

# import library
import bldgnorm as bn

# create toy dataset (see bldgnorm/utility.py)
dims,df=bn.create_toy_dataset()
## df contains the data for the model:
# t_in: Averaged weekly indoor air temperature [C].
# e: Averaged weekly heating and cooling power [kW].
# z: Known group cluster index.
# t_out: Outdoor air temperature [C].
# id_w: Week number.
# id_h: Household number.
# start_date: The first day of each week in the data.
# dims contains the dimensional information of data and variables.
# x_val_max and y_val_max are the largest values of x and y for data scaling.
dims # Prior values and input data for the training at the first week (week=0) prior_values, input_values=bn.create_initial_inputs(df=df,dims=dims) # Visualize toydata # x_val=df['t_in'].to_numpy() y_val=df['e'].to_numpy() z_val=df['z'].to_numpy() id_w_val=df['id_w'].to_numpy() id_h_val=df['id_h'].to_numpy() n_W=(np.max(id_w_val)+1).astype('int') print("Different groups are presented with different colors.") fig, ax=plt.subplots(ncols=int(n_W/3), figsize=(9,3)) for plot_w in np.linspace(start=0,stop=(int(n_W/3-1)),num=int(n_W/3)).astype('int'): ax[np.remainder(plot_w,int(n_W/3)).astype('int')].plot(x_val[(z_val==0)&(id_w_val==plot_w)],y_val[(z_val==0)&(id_w_val==plot_w)],'bx') ax[np.remainder(plot_w,int(n_W/3)).astype('int')].plot(x_val[(z_val==1)&(id_w_val==plot_w)],y_val[(z_val==1)&(id_w_val==plot_w)],'gx') ax[np.remainder(plot_w,int(n_W/3)).astype('int')].plot(x_val[(z_val==2)&(id_w_val==plot_w)],y_val[(z_val==2)&(id_w_val==plot_w)],'rx') ax[np.remainder(plot_w,int(n_W/3)).astype('int')].plot(x_val[(z_val==3)&(id_w_val==plot_w)],y_val[(z_val==3)&(id_w_val==plot_w)],'yx') fig.text(0.5, 0.01, 'Temperature[$^{\circ}$C]', ha='center') fig.text(0.05, 0.5, 'Power [kW]', va='center', rotation='vertical') fig, ax=plt.subplots(ncols=int(n_W/3), figsize=(9,3)) for plot_w in np.linspace(start=int(n_W/3),stop=(int(n_W/3*2-1)),num=int(n_W/3)).astype('int'): ax[np.remainder(plot_w,int(n_W/3)).astype('int')].plot(x_val[(z_val==0)&(id_w_val==plot_w)],y_val[(z_val==0)&(id_w_val==plot_w)],'bx') ax[np.remainder(plot_w,int(n_W/3)).astype('int')].plot(x_val[(z_val==1)&(id_w_val==plot_w)],y_val[(z_val==1)&(id_w_val==plot_w)],'gx') ax[np.remainder(plot_w,int(n_W/3)).astype('int')].plot(x_val[(z_val==2)&(id_w_val==plot_w)],y_val[(z_val==2)&(id_w_val==plot_w)],'rx') ax[np.remainder(plot_w,int(n_W/3)).astype('int')].plot(x_val[(z_val==3)&(id_w_val==plot_w)],y_val[(z_val==3)&(id_w_val==plot_w)],'yx') fig.text(0.5, 0.01, 'Temperature[$^{\circ}$C]', ha='center') fig.text(0.05, 0.5, 'Power 
[kW]', va='center', rotation='vertical') fig, ax=plt.subplots(ncols=int(n_W/3), figsize=(9,3)) for plot_w in np.linspace(start=int(n_W/3*2),stop=int(n_W/3*3-1),num=int(n_W/3)).astype('int'): ax[np.remainder(plot_w,int(n_W/3)).astype('int')].plot(x_val[(z_val==0)&(id_w_val==plot_w)],y_val[(z_val==0)&(id_w_val==plot_w)],'bx') ax[np.remainder(plot_w,int(n_W/3)).astype('int')].plot(x_val[(z_val==1)&(id_w_val==plot_w)],y_val[(z_val==1)&(id_w_val==plot_w)],'gx') ax[np.remainder(plot_w,int(n_W/3)).astype('int')].plot(x_val[(z_val==2)&(id_w_val==plot_w)],y_val[(z_val==2)&(id_w_val==plot_w)],'rx') ax[np.remainder(plot_w,int(n_W/3)).astype('int')].plot(x_val[(z_val==3)&(id_w_val==plot_w)],y_val[(z_val==3)&(id_w_val==plot_w)],'yx') fig.text(0.5, 0.01, 'Temperature [$^{\circ}$C]', ha='center') fig.text(0.05, 0.5, 'Power [kW]', va='center', rotation='vertical') # function to train / update models def update_model(df,output_dir,dims=None,new_training=False,n_training=30000,n_sample=10000,repeat_num=0): if new_training: ''' New training. Prior values are initialized. This is technically week=0. For the model update, the previous week's posteriors are used as priors for the week when new_training=False. ''' print("new training") if dims==None: raise ValueError("dims information is required for new training.") prior_values, input_values=bn.create_initial_inputs(df=df,dims=dims) start_date=input_values['start_date'] end_date=(pd.Timestamp(start_date,tz="UTC")+pd.Timedelta(days=7)).strftime('%Y-%m-%d') # save directory for model output save_dir=f'{output_dir}/result_{start_date}_{end_date}/' if not(os.path.isdir(output_dir)): os.mkdir(output_dir) print(f'directory {output_dir} is created') if not(os.path.isdir(save_dir)): os.mkdir(save_dir) print(f'directory {save_dir} is created') # Random seed seed_num=(np.datetime64(start_date).astype('long')+repeat_num*10).astype('int') # Create model object. 
advi=bn.normalization_model(prior_values=prior_values,input_values=input_values) # Inference advi.fit(n = n_training,obj_optimizer = pm.adam(learning_rate=0.001)) # Store the ELBO values of ADVI. curr_hist=advi.hist[-1] # During the repeated model inference, ADVI is initialized in different random seeds. # Therefore, simulation runs with different initial points. # Then, pick the best results by making a comparison with different ADVI ELBO values. if not(os.path.isfile(f'{save_dir}advi_score.pkl')): pickle.dump(curr_hist,open(f'{save_dir}advi_score.pkl','wb')) prev_hist=1e+10 else: prev_hist=pickle.load(open(f'{save_dir}advi_score.pkl','rb')) # if current inference is better (i.e., ELBO value is smaller), store the model outputs. if curr_hist<prev_hist: plt.plot(advi.hist) plt.ylim((np.min(advi.hist)-100),np.max(advi.hist)/10) plt.savefig(f'{save_dir}advi_hist.png') plt.show() samples=advi.approx.sample(n_sample) sample_dict=bn.create_sample_dict(samples) param_post=bn.create_prior_values(sample_dict) z_pred,s_pred=bn.predict_group(input_values=input_values,sample_dict=sample_dict) pred={"z_pred":z_pred,"s_pred":s_pred} pickle.dump(input_values,open(f'{save_dir}input_values.pkl','wb')) pickle.dump(sample_dict,open(f'{save_dir}sample_dict.pkl','wb')) pickle.dump(param_post,open(f'{save_dir}param_post.pkl','wb')) pickle.dump(dims,open(f'{save_dir}dims.pkl','wb')) dims['n_H']=input_values['n_H'] pickle.dump(dims,open(f'{save_dir}dims.pkl','wb')) pickle.dump(pred,open(f'{save_dir}pred.pkl','wb')) else: # Update model (Bayesian udpate) print("Update training") df=df.sort_values(by=['start_date']) df.reset_index(inplace=True,drop=True) start_date=df['start_date'][0].strftime('%Y-%m-%d') end_date=(pd.Timestamp(start_date,tz="UTC")+pd.Timedelta(days=7)).strftime('%Y-%m-%d') prev_date=(pd.Timestamp(start_date,tz="UTC")-pd.Timedelta(days=7)).strftime('%Y-%m-%d') save_dir=f'{output_dir}/result_{start_date}_{end_date}/' read_dir=f'{output_dir}/result_{prev_date}_{start_date}/' 
if not(os.path.isdir(output_dir)): os.mkdir(output_dir) print(f'directory {output_dir} is created') if not(os.path.isdir(save_dir)): os.mkdir(save_dir) print(f'directory {save_dir} is created') if not(os.path.isdir(read_dir)): raise ValueError(f'Previous week data is not available. Check {read_dir}.') # read previous week data. sample_dict=pickle.load(open(f'{read_dir}sample_dict.pkl','rb')) prior_values=param_post=pickle.load(open(f'{read_dir}param_post.pkl','rb')) dims=pickle.load(open(f'{read_dir}dims.pkl','rb')) # create input values for this week. input_values=bn.create_input_values(df=df,dims=dims) n_H=input_values['n_H'] n_H_prev=param_post['mu_sd'].shape[0] if n_H>n_H_prev: print("New house added") H_increased=n_H-n_H_prev param_post['mu_sd']=np.concatenate([param_post['mu_sd'],np.tile([-4.0],H_increased)]) param_post['sd_sd']=np.concatenate([param_post['sd_sd'],np.tile([.5],H_increased)]) advi=bn.normalization_model(prior_values=prior_values,input_values=input_values) advi.fit(n = n_training,obj_optimizer = pm.adam(learning_rate=0.001)) curr_hist=advi.hist[-1] if not(os.path.isfile(f'{save_dir}advi_score.pkl')): pickle.dump(curr_hist,open(f'{save_dir}advi_score.pkl','wb')) prev_hist=1e+10 else: prev_hist=pickle.load(open(f'{save_dir}advi_score.pkl','rb')) if curr_hist<prev_hist: plt.plot(advi.hist) plt.ylim((np.min(advi.hist)-100),np.max(advi.hist)/10) plt.savefig(f'{save_dir}advi_hist.png') plt.show() samples=advi.approx.sample(n_sample) sample_dict=bn.create_sample_dict(samples) param_post=bn.create_prior_values(sample_dict) z_pred,s_pred=bn.predict_group(input_values=input_values,sample_dict=sample_dict) pred={"z_pred":z_pred,"s_pred":s_pred} pickle.dump(input_values,open(f'{save_dir}input_values.pkl','wb')) pickle.dump(sample_dict,open(f'{save_dir}sample_dict.pkl','wb')) pickle.dump(param_post,open(f'{save_dir}param_post.pkl','wb')) pickle.dump(dims,open(f'{save_dir}dims.pkl','wb')) dims['n_H']=input_values['n_H'] 
pickle.dump(dims,open(f'{save_dir}dims.pkl','wb')) pickle.dump(pred,open(f'{save_dir}pred.pkl','wb')) # Repeat inference 3 times (kept small to save time). total_repeat_num=3 for repeat_num in np.arange(total_repeat_num): # train model update_model(output_dir='output/toy_example',new_training=True,df=df[df['id_w']==0],dims=dims,n_training=30000,repeat_num=repeat_num) # Posterior predictive check (visualization) ppc=bn.PostProcessing(output_dir=f'{os.path.abspath("output/toy_example/result_2020-01-05_2020-01-12")}/') ppc.ppc_all() print(np.mean(ppc.sample_dict['pi'],axis=0)) print(ppc.input_values['t_out'][0]) print(ppc.param_post['mu_phi_winter']) print(ppc.param_post['sd_phi_winter']) print(ppc.param_post['mu_phi_summer']) print(ppc.param_post['sd_phi_summer']) import glob # Model update for i in np.linspace(1,8,8).astype('int'): for repeat_num in np.arange(total_repeat_num): update_model(output_dir='output/toy_example',new_training=False,df=df[df['id_w']==i],n_training=30000,repeat_num=repeat_num) dir_list=glob.glob("output/toy_example/*/") dir_list.sort() ppc=bn.PostProcessing(output_dir=f'{os.path.abspath(dir_list[i])}/') ppc.ppc_all() print(np.mean(ppc.sample_dict['pi'],axis=0)) print(ppc.input_values['t_out'][0]) print(ppc.param_post['mu_phi_winter']) print(ppc.param_post['sd_phi_winter']) print(ppc.param_post['mu_phi_summer']) print(ppc.param_post['sd_phi_summer']) ``` Although it took three weekly updates to find the 2 groups present in the summer season, the model finds the 4, 3, and 2 groups for each season. This toy example demonstrates the calculation process of the proposed building normalization model.
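The weekly update scheme above, where each week's posterior becomes the next week's prior, can be illustrated with a conjugate Normal model for which the update has a closed form. This is only a toy analogue in plain numpy with made-up numbers, not the ADVI model itself:

```python
import numpy as np

def update_normal_mean(prior_mu, prior_sd, data, noise_sd):
    """Conjugate update of a Normal prior on a mean, given Normal
    observations with known noise_sd. Returns the posterior (mu, sd)."""
    n = len(data)
    prior_prec = 1.0 / prior_sd**2
    like_prec = n / noise_sd**2
    post_prec = prior_prec + like_prec
    post_mu = (prior_prec * prior_mu + like_prec * np.mean(data)) / post_prec
    return post_mu, 1.0 / np.sqrt(post_prec)

rng = np.random.default_rng(0)
mu, sd = 0.0, 10.0                        # vague initial prior (week 0)
for week in range(3):
    data = rng.normal(2.5, 1.0, size=50)  # each week's observations
    mu, sd = update_normal_mean(mu, sd, data, noise_sd=1.0)

print(mu, sd)  # the posterior tightens around 2.5 as weeks accumulate
```

In the notebook the same role is played by `param_post`, which is pickled at the end of one week and reloaded as `prior_values` at the start of the next.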
# Network inference of categorical variables: non-sequential data ``` import sys import numpy as np from scipy import linalg from sklearn.preprocessing import OneHotEncoder import matplotlib.pyplot as plt %matplotlib inline import inference # setting parameter: np.random.seed(1) n = 20 # number of positions m = 3 # number of values at each position l = int(10*((n*m)**2)) # number of samples g = 2. nm = n*m def itab(n,m): i1 = np.zeros(n) i2 = np.zeros(n) for i in range(n): i1[i] = i*m i2[i] = (i+1)*m return i1.astype(int),i2.astype(int) # generate coupling matrix w0: def generate_interactions(n,m,g): nm = n*m w = np.random.normal(0.0,g/np.sqrt(nm),size=(nm,nm)) i1tab,i2tab = itab(n,m) for i in range(n): i1,i2 = i1tab[i],i2tab[i] w[i1:i2,:] -= w[i1:i2,:].mean(axis=0) for i in range(n): i1,i2 = i1tab[i],i2tab[i] w[i1:i2,i1:i2] = 0. # no self-interactions for i in range(nm): for j in range(nm): if j > i: w[i,j] = w[j,i] return w i1tab,i2tab = itab(n,m) w0 = inference.generate_interactions(n,m,g) #plt.imshow(w0,cmap='rainbow',origin='lower') #plt.clim(-0.5,0.5) #plt.colorbar(fraction=0.045, pad=0.05,ticks=[-0.5,0,0.5]) #plt.show() #print(w0) # 2018.11.07: equilibrium def generate_sequences_vp_tai(w,n,m,l): nm = n*m nrepeat = 50*n nrelax = m b = np.zeros(nm) s0 = np.random.randint(0,m,size=(l,n)) # integer values enc = OneHotEncoder(n_values=m) s = enc.fit_transform(s0).toarray() e_old = np.sum(s*(s.dot(w.T)),axis=1) for irepeat in range(nrepeat): for i in range(n): for irelax in range(nrelax): r_trial = np.random.randint(0,m,size=l) s0_trial = s0.copy() s0_trial[:,i] = r_trial s = enc.fit_transform(s0_trial).toarray() e_new = np.sum(s*(s.dot(w.T)),axis=1) t = np.exp(e_new - e_old) > np.random.rand(l) s0[t,i] = r_trial[t] e_old[t] = e_new[t] if irepeat%(5*n) == 0: print(irepeat,np.mean(e_old)) return enc.fit_transform(s0).toarray() s = generate_sequences_vp_tai(w0,n,m,l) ## 2018.11.07: for non sequencial data def fit_additive(s,n,m): nloop = 10 i1tab,i2tab = itab(n,m) 
nm = n*m nm1 = nm - m w_infer = np.zeros((nm,nm)) for i in range(n): i1,i2 = i1tab[i],i2tab[i] # remove column i x = np.hstack([s[:,:i1],s[:,i2:]]) x_av = np.mean(x,axis=0) dx = x - x_av c = np.cov(dx,rowvar=False,bias=True) c_inv = linalg.pinv(c,rcond=1e-15) #print(c_inv.shape) h = s[:,i1:i2].copy() for iloop in range(nloop): h_av = h.mean(axis=0) dh = h - h_av dhdx = dh[:,:,np.newaxis]*dx[:,np.newaxis,:] dhdx_av = dhdx.mean(axis=0) w = np.dot(dhdx_av,c_inv) #w = w - w.mean(axis=0) h = np.dot(x,w.T) p = np.exp(h) p_sum = p.sum(axis=1) #p /= p_sum[:,np.newaxis] for k in range(m): p[:,k] = p[:,k]/p_sum[:] h += s[:,i1:i2] - p w_infer[i1:i2,:i1] = w[:,:i1] w_infer[i1:i2,i2:] = w[:,i1:] return w_infer w2 = fit_additive(s,n,m) plt.plot([-1,1],[-1,1],'r--') plt.scatter(w0,w2) def fit_multiplicative(s,n,m,l): i1tab,i2tab = itab(n,m) nloop = 10 nm1 = nm - m w_infer = np.zeros((nm,nm)) wini = np.random.normal(0.0,1./np.sqrt(nm),size=(nm,nm1)) for i in range(n): i1,i2 = i1tab[i],i2tab[i] x = np.hstack([s[:,:i1],s[:,i2:]]) y = s.copy() # covariance[ia,ib] cab_inv = np.empty((m,m,nm1,nm1)) eps = np.empty((m,m,l)) for ia in range(m): for ib in range(m): if ib != ia: eps[ia,ib,:] = y[:,i1+ia] - y[:,i1+ib] which_ab = eps[ia,ib,:] !=0. xab = x[which_ab] # ---------------------------- xab_av = np.mean(xab,axis=0) dxab = xab - xab_av cab = np.cov(dxab,rowvar=False,bias=True) cab_inv[ia,ib,:,:] = linalg.pinv(cab,rcond=1e-15) w = wini[i1:i2,:].copy() cost = np.full(nloop,100.) for iloop in range(nloop): h = np.dot(x,w.T) # stopping criterion -------------------- p = np.exp(h) p_sum = p.sum(axis=1) p /= p_sum[:,np.newaxis] cost[iloop] = ((y[:,i1:i2] - p[:,:])**2).mean() if iloop > 1 and cost[iloop] >= cost[iloop-1]: break for ia in range(m): wa = np.zeros(nm1) for ib in range(m): if ib != ia: which_ab = eps[ia,ib,:] !=0. 
eps_ab = eps[ia,ib,which_ab] xab = x[which_ab] # ---------------------------- xab_av = np.mean(xab,axis=0) dxab = xab - xab_av h_ab = h[which_ab,ia] - h[which_ab,ib] ha = np.divide(eps_ab*h_ab,np.tanh(h_ab/2.), out=np.zeros_like(h_ab), where=h_ab!=0) dhdx = (ha - ha.mean())[:,np.newaxis]*dxab dhdx_av = dhdx.mean(axis=0) wab = cab_inv[ia,ib,:,:].dot(dhdx_av) # wa - wb wa += wab w[ia,:] = wa/m w_infer[i1:i2,:i1] = w[:,:i1] w_infer[i1:i2,i2:] = w[:,i1:] return w_infer w_infer = fit_multiplicative(s,n,m,l) plt.plot([-1,1],[-1,1],'r--') plt.scatter(w0,w_infer) #plt.scatter(w0[0:3,3:],w[0:3,:]) ```
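One portability note: `OneHotEncoder(n_values=m)` used in the sampler above relies on a parameter that was deprecated in scikit-learn 0.20 and later removed. The modern equivalent pins the category set per position with `categories`, which also guarantees all m columns per position even when some value never occurs in the sample. A small sketch:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

n, m = 4, 3                                  # positions, states per position
np.random.seed(0)
s0 = np.random.randint(0, m, size=(10, n))   # integer configurations

# One fixed category list per position, mirroring n_values=m in old sklearn
enc = OneHotEncoder(categories=[np.arange(m)] * n)
s = enc.fit_transform(s0).toarray()

print(s.shape)  # (10, n*m): each position expands into m indicator columns
```

Each row of `s` contains exactly one 1 per position, which is the representation assumed by the energy computation `np.sum(s*(s.dot(w.T)),axis=1)` in the sampler.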
##### Copyright 2018 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Running TFLite models <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%202%20-%20TensorFlow%20Lite/Week%201/Examples/TFLite_Week1_Linear_Regression.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%202%20-%20TensorFlow%20Lite/Week%201/Examples/TFLite_Week1_Linear_Regression.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> </table> ## Setup ``` try: %tensorflow_version 2.x except: pass import pathlib import numpy as np import matplotlib.pyplot as plt import tensorflow as tf print('\u2022 Using TensorFlow Version:', tf.__version__) ``` ## Create a Basic Model of the Form y = mx + c ``` # Create a simple Keras model. x = [-1, 0, 1, 2, 3, 4] y = [-3, -1, 1, 3, 5, 7] model = tf.keras.models.Sequential([ tf.keras.layers.Dense(units=1, input_shape=[1]) ]) model.compile(optimizer='sgd', loss='mean_squared_error') model.fit(x, y, epochs=200) ``` ## Generate a SavedModel ``` export_dir = 'saved_model/1' tf.saved_model.save(model, export_dir) ``` ## Convert the SavedModel to TFLite ``` # Convert the model. 
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir) tflite_model = converter.convert() tflite_model_file = pathlib.Path('model.tflite') tflite_model_file.write_bytes(tflite_model) ``` ## Initialize the TFLite Interpreter To Try It Out ``` # Load TFLite model and allocate tensors. interpreter = tf.lite.Interpreter(model_content=tflite_model) interpreter.allocate_tensors() # Get input and output tensors. input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() # Test the TensorFlow Lite model on random input data. input_shape = input_details[0]['shape'] inputs, outputs = [], [] for _ in range(100): input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32) interpreter.set_tensor(input_details[0]['index'], input_data) interpreter.invoke() tflite_results = interpreter.get_tensor(output_details[0]['index']) # Test the TensorFlow model on random input data. tf_results = model(tf.constant(input_data)) output_data = np.array(tf_results) inputs.append(input_data[0][0]) outputs.append(output_data[0][0]) ``` ## Visualize the Model ``` %matplotlib inline plt.plot(inputs, outputs, 'r') plt.show() ``` ## Download the TFLite Model File If you are running this notebook in a Colab, you can run the cell below to download the tflite model to your local disk. **Note**: If the file does not download when you run the cell, try running the cell a second time. ``` try: from google.colab import files files.download(tflite_model_file) except: pass ```
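Since the Keras model above is fitting the line y = 2x - 1, the weights it should learn can be checked in closed form with ordinary least squares. This is a handy sanity reference when comparing the Keras and TFLite outputs in the plot:

```python
import numpy as np

x = np.array([-1, 0, 1, 2, 3, 4], dtype=np.float32)
y = np.array([-3, -1, 1, 3, 5, 7], dtype=np.float32)

# Least-squares fit of y = m*x + c; the data is exactly linear,
# so the solution is m = 2 and c = -1.
m, c = np.polyfit(x, y, deg=1)
print(m, c)  # -> approximately 2.0 and -1.0
```

If the trained Dense layer's kernel and bias are far from these values, the training loop (epochs, learning rate) is the first thing to inspect before blaming the TFLite conversion.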
## Facial keypoints detection

In this task you will create a facial keypoint detector based on a CNN regressor.

![title](example.png)

### Load and preprocess data

Script `get_data.py` unpacks the data: images and labelled points. 6000 images are located in the `images` folder and the keypoint coordinates are in the `gt.csv` file. Run the cell below to unpack the data.

```
from get_data import unpack
unpack('facial-keypoints-data.zip')
```

Now you have to read the `gt.csv` file and the images from the `images` dir. File `gt.csv` contains a header and ground truth points for every image in the `images` folder. It has 29 columns. The first column is a filename and the next 28 columns are `x` and `y` coordinates for 14 facepoints. We will apply the following preprocessing:

1. Scale all images to resolution $100 \times 100$ pixels.
2. Scale all coordinates to the range $[-0.5; 0.5]$. To obtain that, divide all x's by the width (number of columns) of the image, divide all y's by the height (number of rows) of the image, and subtract 0.5 from all values.

Function `load_imgs_and_keypoints` should return a tuple of two numpy arrays: `imgs` of shape `(N, 100, 100, 3)`, where `N` is the number of images, and `points` of shape `(N, 28)`.

```
### Useful routines for preparing data
from numpy import array, zeros
from os.path import join
from skimage.color import gray2rgb
from skimage.io import imread
from skimage.transform import resize

def load_imgs_and_keypoints(dirname='facial-keypoints'):
    # Write your code for loading images and points here
    pass

imgs, points = load_imgs_and_keypoints()

# Example of output
%matplotlib inline
from skimage.io import imshow
imshow(imgs[0])
points[0]
```

### Visualize data

Let's prepare a function to visualize points on an image. The function takes two arguments, an image and a vector of point coordinates, and draws the points on the image (just like the first image in this notebook).

```
import matplotlib.pyplot as plt
# Circle may be useful for drawing points on a face
# See the matplotlib documentation for more info
from matplotlib.patches import Circle

def visualize_points(img, points):
    # Write here a function which takes an image and normalized
    # coordinates and visualizes the points on the image
    pass

visualize_points(imgs[1], points[1])
```

### Train/val split

Run the following code to obtain a train/validation split for training the neural network.

```
from sklearn.model_selection import train_test_split
imgs_train, imgs_val, points_train, points_val = train_test_split(imgs, points, test_size=0.1)
```

### Simple data augmentation

For better training we will use simple data augmentation: flipping an image and its points. Implement the function `flip_img`, which flips an image and its points. Make sure that the points are flipped correctly! For instance, points on the right eye should now be points on the left eye (i.e. you have to mirror coordinates and swap corresponding points on the left and right sides of the face). Visualize an example of an original and a flipped image.

```
def flip_img(img, points):
    # Write your code for flipping here
    pass

f_img, f_points = flip_img(imgs[1], points[1])
visualize_points(f_img, f_points)
```

Time to augment our training sample. Apply the flip to every image in the training sample. As a result you should obtain two arrays, `aug_imgs_train` and `aug_points_train`, which contain the original images and points along with the flipped ones.

```
# Write your code here

visualize_points(aug_imgs_train[2], aug_points_train[2])
visualize_points(aug_imgs_train[3], aug_points_train[3])
```

### Network architecture and training

Now let's define the neural network regressor. It will have 28 outputs, 2 numbers per point. The precise architecture is up to you. We recommend adding 2-3 (`Conv2D` + `MaxPooling2D`) pairs, then `Flatten` and 2-3 `Dense` layers. Don't forget about ReLU activations. We also recommend adding `Dropout` to every `Dense` layer (with p from 0.2 to 0.5) to prevent overfitting.

```
from keras.models import Sequential
from keras.layers import (
    Conv2D, MaxPooling2D, Flatten, Dense, Dropout
)

model = Sequential()
# Define your model here
```

Time to train! Since we are training a regressor, make sure that you use mean squared error (mse) as the loss. Feel free to experiment with the optimization method (SGD, Adam, etc.) and its parameters.

```
# ModelCheckpoint can be used for saving the model during training.
# Saved models are useful for finetuning your model.
# See the keras documentation for more info
from keras.callbacks import ModelCheckpoint
from keras.optimizers import SGD, Adam

# Choose an optimizer, compile the model and run training
```

### Visualize results

Now visualize the neural network results on several images from the validation sample. Make sure that your network outputs different points for different images (i.e. it doesn't output some constant).

```
# Example of output
```
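The loading and flipping functions are left as exercises, but the coordinate arithmetic they need is small enough to sketch. The left/right pair swapping is deliberately omitted here because it depends on the point ordering in `gt.csv`; only the normalization to $[-0.5; 0.5]$ and the horizontal mirroring of already-normalized points are shown:

```python
import numpy as np

def normalize_points(points_px, width, height):
    """Map pixel coordinates (x0, y0, x1, y1, ...) into [-0.5, 0.5]."""
    pts = points_px.astype(np.float64)
    pts[0::2] = pts[0::2] / width - 0.5   # x coordinates
    pts[1::2] = pts[1::2] / height - 0.5  # y coordinates
    return pts

def flip_points(points):
    """Mirror normalized points horizontally: x -> -x, y unchanged.
    A full flip_img must also swap left/right keypoint pairs
    (e.g. left eye <-> right eye), which depends on the gt.csv order."""
    pts = points.copy()
    pts[0::2] = -pts[0::2]
    return pts

p = normalize_points(np.array([0, 0, 100, 100]), width=100, height=100)
print(p)               # [-0.5 -0.5  0.5  0.5]
print(flip_points(p))  # [ 0.5 -0.5 -0.5  0.5]
```

Because the x coordinates are centered at zero, mirroring is a plain sign flip, which is why the normalization to $[-0.5; 0.5]$ makes the augmentation step easy.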
github_jupyter
``` suppressMessages(library("readxl")) suppressMessages(library("ggplot2")) suppressMessages(library("SummarizedExperiment")) suppressMessages(library("dplyr")) suppressMessages(library("rafalib")) suppressMessages(library("limma")) suppressMessages(library("e1071")) suppressMessages(library("xtable")) # FC - Frontal Cortex FC_Data <- read_excel("../data/TMT_Summary_Data.xlsx", sheet=1, skip=2) TMT_Summary_Data <- FC_Data nGenes <- nrow(TMT_Summary_Data) nCols <- ncol(TMT_Summary_Data) # Convert zeros to NA TMT_Summary_Data[TMT_Summary_Data == 0] <- NA # Remove NAs from the dataset nanIdx <- is.na(TMT_Summary_Data[,7:nCols]) numNans <- rowSums(nanIdx) filtered_TMT_Summary_Data <- TMT_Summary_Data[numNans < 1,] # Save the data save(filtered_TMT_Summary_Data, file='../data/filtered_TMT_Summary_Data_FC.RData') # Import TMT data into a Summarized Experiment filtered_TMT_Summary_Data_SE <- filtered_TMT_Summary_Data colnames(filtered_TMT_Summary_Data_SE) <- NULL rowData <- DataFrame(Accession_ID=filtered_TMT_Summary_Data_SE[,1], Gene=filtered_TMT_Summary_Data_SE[,2], Description=filtered_TMT_Summary_Data_SE[,3]) colData <- DataFrame(Disease=c("Alzheimers","Alzheimers","Control","Control","Parkinsons","Parkinsons","Comorbid","Comorbid"), row.names=c("AD1","AD2","CTL1","CTL2","PD1","PD2","ADPD1","ADPD2")) exp <- SummarizedExperiment(assays=list(batch1=(filtered_TMT_Summary_Data_SE[,7:14]), batch2=(filtered_TMT_Summary_Data_SE[,15:22]), batch3=(filtered_TMT_Summary_Data_SE[,23:30]), batch4=(filtered_TMT_Summary_Data_SE[,31:38]), batch5=(filtered_TMT_Summary_Data_SE[,39:46])), rowData=rowData, colData=colData) # Display samples in box-plots. Color by attribute and color by batch number. 
Raw_TMT_Summary_Data <- filtered_TMT_Summary_Data[,7:nCols] batchColors = rep(c('red','red','black','black','blue','blue','green','green'), 5) p <- ggplot(stack(Raw_TMT_Summary_Data), aes(x = ind, y = values)) + geom_boxplot(aes(fill=values)) + geom_point(aes(color=ind)) p + scale_color_manual(values= batchColors) + theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Patient sample') + ylab(expression(paste(Log[10],'-fold expression'))) + ggtitle('Frontal Cortex Tissue Samples - Unprocessed Data') + ggsave(file='../Figures/Fig1a-FrontalCortex.png') # Quantile normalization of data Quantile_TMT_Summary_Data <- as.data.frame(normalizeBetweenArrays(filtered_TMT_Summary_Data[,7:nCols], method='quantile')) save(Quantile_TMT_Summary_Data, file='../data/Quantile_TMT_Summary_Data_FC.RData') p2 <- ggplot(stack(Quantile_TMT_Summary_Data), aes(x = ind, y = values)) + geom_boxplot(aes(fill=values)) + geom_point(aes(color=ind)) p2 + scale_color_manual(values= batchColors) + theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Patient sample') + ylab(expression(paste(Log[10],'-fold expression'))) + ggtitle('Frontal Cortex Tissue Samples - Quantile Normalized Data') + ggsave(file='../Figures/Fig1b-FrontalCortex.png') # Plot summary of statistics summary(Quantile_TMT_Summary_Data) qData <- Quantile_TMT_Summary_Data # Extract samples for each disease-state Ctl <- as.data.frame(sapply(1:10, function(i) select(qData, sprintf("CTL%d",i)))) PD <- as.data.frame(sapply(1:10, function(i) select(qData, sprintf("PD%d",i)))) AD <- as.data.frame(sapply(1:10, function(i) select(qData, sprintf("AD%d",i)))) AD_PD <- as.data.frame(sapply(1:10, function(i) select(qData, sprintf("ADPD%d",i)))) Ctl_vs_PD <- cbind(Ctl, PD) Ctl_vs_AD <- cbind(Ctl, AD) Ctl_vs_AD_PD <- cbind(Ctl, AD_PD) # PCA analysis for all classes PCA <- prcomp(t(qData)) png('../Figures/Fig2-Frontal-Cortex.png') plot(PCA$x[,1], PCA$x[,2], pch=16, col=batchColors, xlab='PCA 1', ylab='PCA 2') dev.off() # 
Perform PCA analysis on Ctl versus disease-state PCA_CtlvsPD <- prcomp(t(Ctl_vs_PD)) PCA_CtlvsAD <- prcomp(t(Ctl_vs_AD)) PCA_CtlvsADPD <- prcomp(t(Ctl_vs_AD_PD)) # Plot PCA CtlvsPD_Colors <- c(rep('black',10), rep('blue', 10)) CtlvsAD_Colors <- c(rep('black',10), rep('red', 10)) CtlvsPDAD_Colors <- c(rep('black',10), rep('green', 10)) png('../Figures/Fig3a-Frontal-Cortex_CtlvsPD.png') pairs(PCA_CtlvsPD$x[,1:3], col=CtlvsPD_Colors, pch=16, oma=c(3,3,3,15)) par(xpd = TRUE) legend('bottomright',fill=c('black','blue'), legend = c('Ctl','PD')) dev.off() png('../Figures/Fig3b-Frontal-Cortex_CtlvsAD.png') pairs(PCA_CtlvsAD$x[,1:3], col=CtlvsAD_Colors, pch=16, oma=c(3,3,3,15)) par(xpd = TRUE) legend('bottomright',fill=c('black','red'), legend = c('Ctl','AD')) dev.off() png('../Figures/Fig3c-Frontal-Cortex_CtlvsADPD.png') pairs(PCA_CtlvsADPD$x[,1:3], col=CtlvsPDAD_Colors, pch=16, oma=c(3,3,3,15)) par(xpd = TRUE) legend('bottomright',fill=c('black','green'), legend = c('Ctl','PD_AD')) dev.off() # plot pairwise classification CP <- data.frame(x=PCA_CtlvsPD$x[,1:2], y=as.factor(c(rep("Ctl",10),rep("PD",10)))) CA <- data.frame(x=PCA_CtlvsAD$x[,1:2], y=as.factor(c(rep("Ctl",10),rep("AD",10)))) CAP <- data.frame(x=PCA_CtlvsADPD$x[,1:2], y=as.factor(c(rep("Ctl",10),rep("ADPD",10)))) svmfit=svm(y~., data=CP, kernel="radial", gamma=0.1, cost=10) png('../Figures/Fig4a-Frontal-Cortex_CtlvsPD.png') plot(svmfit, CP, pch=16) dev.off() table(true=CP[, "y"], pred=predict(svmfit, newdata=CP[,])) svmfit=svm(y~., data=CA, kernel="radial", gamma=0.1, cost=10) png('../Figures/Fig4b-Frontal-Cortex_CtlvsAD.png') plot(svmfit, CA) dev.off() table(true=CA[, "y"], pred=predict(svmfit, newdata=CA[,])) svmfit=svm(y~., data=CAP, kernel="radial", gamma=0.1, cost=10) png('../Figures/Fig4c-Frontal-Cortex_CtlvsADPD.png') plot(svmfit, CAP) dev.off() table(true=CAP[, "y"], pred=predict(svmfit, newdata=CAP[,])) xtable(table(true=CP[, "y"], pred=predict(svmfit, newdata=CP[,]))) xtable(table(true=CA[, "y"], 
pred=predict(svmfit, newdata=CA[,]))) xtable(table(true=CAP[, "y"], pred=predict(svmfit, newdata=CAP[,]))) ```
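The quantile normalization step performed above by limma's `normalizeBetweenArrays(method='quantile')` can be illustrated in Python for comparison. This is a simplified NumPy sketch: ties are broken by position here, whereas limma averages tied quantiles.

```python
import numpy as np

def quantile_normalize(X):
    """Force every column (sample) of X onto the same empirical distribution:
    the row-wise mean of the column-sorted values, reassigned by rank."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # per-column ranks
    target = np.sort(X, axis=0).mean(axis=1)           # shared target distribution
    return target[ranks]

X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
Xn = quantile_normalize(X)
```

After normalization every column holds exactly the same set of values, which is why the per-sample box plots line up in the quantile-normalized figure.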
``` import os import math import sys import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline ``` ### A few words about resources in Spark Spark manages resources through the Driver, while the resources themselves are the distributed Executors. ![](cwrMN.png) An `Executor` is a worker process that runs the individual tasks of an overall Spark job. Executors are started as processes, carry out their tasks and send the results back to the `driver`. The `Driver` is the process that controls a running Spark job. It converts your code into tasks and schedules those tasks onto the `executors`. The workflow they carry out for your code can be described as follows: - the driver requests resources from the cluster manager - executors are launched if the requested resources are available - the driver plans the work and transforms your code into an execution plan - executors execute their tasks and report the results back to the driver - every task must complete and the driver must receive its result (report), otherwise the driver has to restart the task - spark.stop() finishes the job and releases the resources. 
![](1y2zm.png) #### Comparing how Hadoop and Spark work `Hadoop workflow:` ![](Image-1.png) ![](Image-2.png) ![](Image-3.png) ![](Image-4.png) ----------------------------------------- `Spark workflow (client mode):` ![](Image-5.png) ![](Image-6.png) `Spark workflow (cluster mode):` ![](Image-7.png) ![](Image-8.png) **Cluster mode requires integration with YARN** For this, Spark needs the following settings to interact with Hadoop: - set the HADOOP_CONF_DIR environment variable - set the SPARK_HOME environment variable - in SPARK_HOME/conf/spark-defaults.conf set the spark.master parameter to yarn: `spark.master | yarn` - copy SPARK_HOME/jars to hdfs so that the nodes can share them: `hdfs dfs -put *.jar /user/spark/share/lib` Example `.bashrc`: ```bash export HADOOP_CONF_DIR=/<path of hadoop dir>/etc/hadoop export YARN_CONF_DIR=/<path of hadoop dir>/etc/hadoop export SPARK_HOME=/<path of spark dir> export LD_LIBRARY_PATH=/<path of hadoop dir>/lib/native:$LD_LIBRARY_PATH ``` Add these settings to `spark-defaults.conf`: ```bash spark.master yarn spark.yarn.jars hdfs://path_to_master:9000/user/spark/share/lib/*.jar spark.executor.memory 1g spark.driver.memory 512m spark.yarn.am.memory 512m ``` ## Computing sizing parameters for Spark ``` # set the cluster parameters # how many executors you want to consider # (any number from 1 to +inf; it is only used as the index range for the calculations below) max_executor = 300 # number of nodes in the cluster (recall that 1 node = 1 worker process) # unit: count nodes = 10 nodes = nodes - 1 # number of CPU cores in the cluster (sum over all nodes) (in this example you use 1/3 of the cluster) # unit: count cpu = 80 cpu = int((cpu - 1) / 3) # amount of RAM in the cluster (in this example you use 1/3 of the cluster) # unit: megabytes memo = 1048576 memo = int((memo - 1024) / 3) # set the memory overhead coefficient (the default value here) overhead_coef = 0.1 # degree of parallelism - the base number of partitions in an RDD parallelizm_per_core = 5 # partitioning factor in hdfs partition_factor = 3 ``` ### Keep in mind! When distributing resources for Spark, remember that Spark performs operations like these: ![](SXQHj.jpg) ``` # create a DataFrame with the parameters (the number of executors is an increasing sequence) df = pd.DataFrame(dict(executors=np.arange(1, max_executor))) # compute the amount of memory per executor # you could use the formula NUM_CORES * ((EXECUTOR_MEMORY + MEMORY_OVERHEAD) / EXECUTOR_CORES) (result in GB per executor) # but here is another way to compute it df['total_memo_per_executor'] = np.floor((memo / df.executors) * 0.9) # overhead is 10% of the executor memory by convention df['total_memooverhead_per_executor'] = df['total_memo_per_executor'] * 0.10 # unused memory left on the node df['unused_memo_per_node'] = memo - (df.executors * df['total_memo_per_executor']) # how many CPU cores will be taken df['total_core_per_executor'] = np.floor(cpu / df.executors) # unused cores left on the node df['unused_core_per_nodes'] = cpu - (df.executors * df['total_core_per_executor']) # memory available to each executor df['overhead_memo'] = (df['total_memo_per_executor'] * overhead_coef) df['executor_memo'] = df['total_memo_per_executor'] - df['overhead_memo'] # number of cores per executor df['executor_cores'] = np.floor(cpu / df.executors) # minus 1 for the driver df['executor_instance'] = (df.executors * df['executor_cores']) - 1 # computed here, or set manually in the code (you can start with 2 for parallelizm_per_core) df['parallelism'] = df['executor_instance'] * df['executor_cores'] * parallelizm_per_core df['num_partitions'] = df['executor_instance'] * df['executor_cores'] * partition_factor # % of memory used on the cluster df['used_memo_persentage'] = (1- ((df['overhead_memo'] + df['executor_memo']) 
/ memo)) * 100 # % of CPU cores used on the cluster df['used_cpu_persentage'] = ((cpu - df['unused_core_per_nodes']) / cpu) * 100 df.head(10) # plot the distribution of resources (df[(df['used_memo_persentage'] > 0) & \ (df['used_cpu_persentage'] > 0) & \ (df['used_memo_persentage'] <= 100) & \ (df['used_cpu_persentage'] <= 100) & \ (df['executor_instance'] > 0) & \ (df['parallelism'] > 0) & \ (df['num_partitions'] > 0) ])[['executors', 'executor_cores', 'executor_instance', 'used_cpu_persentage' ]].plot(kind='box', figsize=(6,6), title='Distribution of values per parameter') # plot the distribution of resources fig, ax = plt.subplots() tdf1 = (df[(df['used_memo_persentage'] > 0) & \ (df['used_cpu_persentage'] > 0) & \ (df['used_memo_persentage'] <= 100) & \ (df['used_cpu_persentage'] <= 100) & \ (df['executor_instance'] > 0) & \ (df['parallelism'] > 0) & \ (df['num_partitions'] > 0) ])[['executors', 'executor_cores', 'executor_instance', 'used_cpu_persentage', 'used_memo_persentage' ]].plot(ax = ax, figsize=(10, 6)) tdf2 = (df[(df['used_memo_persentage'] > 0) & \ (df['used_cpu_persentage'] > 0) & \ (df['used_memo_persentage'] <= 100) & \ (df['used_cpu_persentage'] <= 100) & \ (df['executor_instance'] > 0) & \ (df['parallelism'] > 0) & \ (df['num_partitions'] > 0) ])[[ 'executor_memo' ]] ax2 = ax.twinx() rspine = ax2.spines['right'] rspine.set_position(('axes', 1.15)) ax2.set_frame_on(True) ax2.patch.set_visible(False) fig.subplots_adjust(right=0.7) tdf2.plot(ax=ax2, color='black') ax2.legend(bbox_to_anchor=(1.5, 0.5)) # set the amount of memory and CPU to use # for example: in the calculation you used 1/3 of the available resources (since 3 people share the cluster) # now select all of the resources available to you df_opt = df[(df['used_memo_persentage'] == df['used_memo_persentage'].max())] df_opt.head() # you can see that the max() by RAM exceeds the number of CPUs, so you cannot use these parameters # let's filter on other criteria instead (to get sensible parameters) # executor_instance # parallelism # num_partitions # maximize the use of the allocated resources. df_opt = df[(df['executor_instance'] > 0) & \ (df['parallelism'] > 0) & \ (df['num_partitions'] > 0)] df_opt = df_opt[(df_opt['used_memo_persentage'] == df_opt['used_memo_persentage'].max())] df_opt # store the parameters in variables sparkYarnExecutorMemoryOverhead = "{}Mb".format(df_opt['overhead_memo'].astype('int').values[0]) sparkExecutorsMemory = "{}Mb".format(df_opt['executor_memo'].astype('int').values[0]) sparkDriverMemory = "{}Mb".format(df_opt['executor_memo'].astype('int').values[0]) sparkDriverMaxResultSize = "{}Mb".format(df_opt['executor_memo'].astype('int').values[0] if df_opt['executor_memo'].astype('int').values[0] <= 4080 else 4080) sparkExecutorCores = "{}".format(df_opt['executor_cores'].astype('int').values[0]) sparkDriverCores = "{}".format(df_opt['executor_cores'].astype('int').values[0]) defParallelism = "{}".format(df_opt['parallelism'].astype('int').values[0]) sparkDriverMemory # A few more parameters # with sparkDynamicAllocationEnabled set to "true" resources are allocated dynamically up to the full available amount sparkDynamicAllocationEnabled = "true" sparkShuffleServiceEnabled = "true" # idle timeout before resources are released sparkDynamicAllocationExecutorIdleTimeout = "60s" # timeout for releasing cached data sparkDynamicAllocationCachedExecutorIdleTimeout = "600s" # cap the amount of resources per spark session sparkDynamicAllocationMaxExecutors = "{}".format(df_opt['executors'].astype('int').values[0]) sparkDynamicAllocationMinExecutors = "{}".format(int(df_opt['executors'].values[0] /10)) # set the serializer sparkSerializer = "org.apache.spark.serializer.KryoSerializer" # retry ports for the connection in case some are already taken sparkPortMaxRetries = "10" # zip the spark directory and upload it to hdfs to speed up job startup sparkYarnArchive = "hdfs:///nes/spark.zip" # configure the GC 
sparkExecutorExtraJavaOptions = "-XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark -XX:InitiatingHeapOccupancyPercent=35 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:OnOutOfMemoryError='kill -9 %p'" sparkDriverExtraJavaOptions = "-XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark -XX:InitiatingHeapOccupancyPercent=35 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:OnOutOfMemoryError='kill -9 %p'" yarnNodemanagerVmem_check_enabled = "false" yarnNodemanagerPmem_check_enabled = "false" ```
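To make the hand-off from the sizing table to an actual session explicit, the computed values can be packed into the `(key, value)` pairs that pyspark's `SparkConf.setAll()` expects. This is a plain-Python sketch: the property names are real Spark configuration keys, but the numbers are placeholders, not tuned recommendations.

```python
def spark_conf_pairs(total_memory_mb, driver_memory_mb, executor_cores,
                     parallelism, max_executors, overhead_coef=0.1):
    """Turn the sizing computed above into (key, value) pairs for
    SparkConf.setAll(). As in the notebook, overhead_coef of the executor
    memory goes to YARN overhead and the rest to the executor heap."""
    overhead_mb = int(total_memory_mb * overhead_coef)
    return [
        ("spark.executor.memory", "{}m".format(total_memory_mb - overhead_mb)),
        ("spark.yarn.executor.memoryOverhead", "{}m".format(overhead_mb)),
        ("spark.driver.memory", "{}m".format(driver_memory_mb)),
        ("spark.executor.cores", str(executor_cores)),
        ("spark.default.parallelism", str(parallelism)),
        ("spark.dynamicAllocation.enabled", "true"),
        ("spark.dynamicAllocation.maxExecutors", str(max_executors)),
        ("spark.serializer", "org.apache.spark.serializer.KryoSerializer"),
    ]

# Placeholder numbers, standing in for the df_opt values computed above
pairs = dict(spark_conf_pairs(4096, 4096, 5, 200, 30))
```

With pyspark installed, `SparkConf().setAll(spark_conf_pairs(...))` could then be passed to `SparkSession.builder.config(conf=...)`.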
<table width=60% > <tr style="background-color: white;"> <td><img src='https://www.creativedestructionlab.com/wp-content/uploads/2018/05/xanadu.jpg'></td> </tr> </table> --- <img src='https://raw.githubusercontent.com/XanaduAI/strawberryfields/master/doc/_static/strawberry-fields-text.png'> --- <br> <center> <h1> Gaussian boson sampling tutorial </h1></center> To get a feel for how Strawberry Fields works, let's try coding a quantum program, Gaussian boson sampling. ## Background information: Gaussian states A Gaussian state is one that can be described by a [Gaussian function](https://en.wikipedia.org/wiki/Gaussian_function) in the phase space. For example, a single mode Gaussian state, squeezed in the $x$ quadrature by the squeezing operator $S(r)$, can be described by the following [Wigner quasiprobability distribution](https://en.wikipedia.org/wiki/Wigner_quasiprobability_distribution): $$W(x,p) = \frac{2}{\pi}e^{-2\sigma^2(x-\bar{x})^2 - 2(p-\bar{p})^2/\sigma^2}$$ where $\sigma$ represents the **squeezing**, and $\bar{x}$ and $\bar{p}$ are the mean **displacement**, respectively. For multimode states containing $N$ modes, this can be generalised; Gaussian states are uniquely defined by a [multivariate Gaussian function](https://en.wikipedia.org/wiki/Multivariate_normal_distribution), defined in terms of the **vector of means** ${\mu}$ and a **covariance matrix** $\sigma$. ### The position and momentum basis For example, consider a single mode in the position and momentum quadrature basis (the default for Strawberry Fields). Assuming a Gaussian state with displacement $\alpha = \bar{x}+i\bar{p}$ and squeezing $\xi = r e^{i\phi}$ in the phase space, it has a vector of means and a covariance matrix given by: $$ \mu = (\bar{x},\bar{p}),~~~~~~\sigma = SS^\dagger=R(\phi/2)\begin{bmatrix}e^{-2r} & 0 \\0 & e^{2r} \\\end{bmatrix}R(\phi/2)^T$$ where $S$ is the squeezing operator, and $R(\phi)$ is the standard two-dimensional rotation matrix. 
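The covariance matrix formula above is easy to check numerically. A small NumPy sketch (units chosen so that the vacuum covariance is the identity):

```python
import numpy as np

def squeezed_cov(r, phi):
    """Covariance matrix sigma = R(phi/2) diag(e^{-2r}, e^{2r}) R(phi/2)^T
    of a single-mode squeezed state in the (x, p) quadrature basis."""
    c, s = np.cos(phi / 2), np.sin(phi / 2)
    R = np.array([[c, -s], [s, c]])           # two-dimensional rotation matrix
    return R @ np.diag([np.exp(-2 * r), np.exp(2 * r)]) @ R.T

sigma = squeezed_cov(0.5, 0.0)
```

Squeezing is a symplectic transformation, so it preserves the phase-space area: `np.linalg.det(sigma)` equals 1 for any `r` and `phi`, even though the variances themselves are rescaled by $e^{\mp 2r}$.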
For multiple modes, in Strawberry Fields we use the convention $$ \mu = (\bar{x}_1,\bar{x}_2,\dots,\bar{x}_N,\bar{p}_1,\bar{p}_2,\dots,\bar{p}_N)$$ and therefore, considering $\phi=0$ for convenience, the multimode covariance matrix is simply $$\sigma = \text{diag}(e^{-2r_1},\dots,e^{-2r_N},e^{2r_1},\dots,e^{2r_N})\in\mathbb{C}^{2N\times 2N}$$ If a continuous-variable state *cannot* be represented in the above form (for example, a single photon Fock state or a cat state), then it is non-Gaussian. ### The annihilation and creation operator basis If we are instead working in the creation and annihilation operator basis, we can use the transformation of the single mode squeezing operator $$ S(\xi) \left[\begin{matrix}\hat{a}\\\hat{a}^\dagger\end{matrix}\right] = \left[\begin{matrix}\cosh(r)&-e^{i\phi}\sinh(r)\\-e^{-i\phi}\sinh(r)&\cosh(r)\end{matrix}\right] \left[\begin{matrix}\hat{a}\\\hat{a}^\dagger\end{matrix}\right]$$ resulting in $$\sigma = SS^\dagger = \left[\begin{matrix}\cosh(2r)&-e^{i\phi}\sinh(2r)\\-e^{-i\phi}\sinh(2r)&\cosh(2r)\end{matrix}\right]$$ For multiple Gaussian states with non-zero squeezing, the covariance matrix in this basis simply generalises to $$\sigma = \text{diag}(S_1S_1^\dagger,\dots,S_NS_N^\dagger)\in\mathbb{C}^{2N\times 2N}$$ ## Introduction to Gaussian boson sampling <div class="alert alert-info"> “If you need to wait exponential time for \[your single photon sources to emit simultaneously\], then there would seem to be no advantage over classical computation. This is the reason why so far, boson sampling has only been demonstrated with 3-4 photons. 
When faced with these problems, until recently, all we could do was shrug our shoulders.” - [Scott Aaronson](https://www.scottaaronson.com/blog/?p=1579) </div> While [boson sampling](https://en.wikipedia.org/wiki/Boson_sampling) allows the experimental implementation of a quantum sampling problem that is computationally hard classically, one of the main issues it faces in experimental setups is **scalability**, due to its dependence on an array of simultaneously emitting single photon sources. Currently, most physical implementations of boson sampling make use of a process known as [Spontaneous Parametric Down-Conversion](http://en.wikipedia.org/wiki/Spontaneous_parametric_down-conversion) to generate the single photon source inputs. Unfortunately, this method is non-deterministic - as the number of modes in the apparatus increases, the average time required until every photon source emits a simultaneous photon increases *exponentially*. In order to simulate a *deterministic* single photon source array, several variations on boson sampling have been proposed; the best known being scattershot boson sampling ([Lund, 2014](https://link.aps.org/doi/10.1103/PhysRevLett.113.100502)). However, a recent boson sampling variation by [Hamilton et al.](https://link.aps.org/doi/10.1103/PhysRevLett.119.170501) negates the need for single photon Fock states altogether, by showing that **incident Gaussian states** - in this case, single mode squeezed states - can produce problems in the same computational complexity class as boson sampling. Even more significantly, this negates the scalability problem with single photon sources, as single mode squeezed states can be easily simultaneously generated experimentally. Aside from changing the input states from single photon Fock states to Gaussian states, the Gaussian boson sampling scheme appears quite similar to that of boson sampling: 1. 
$N$ single mode squeezed states $\left|{\xi_i}\right\rangle$, with squeezing parameters $\xi_i=r_ie^{i\phi_i}$, enter an $N$ mode linear interferometer with unitary $U$. <br> 2. The output of the interferometer is denoted $\left|{\psi'}\right\rangle$. Each output mode is then measured in the Fock basis, $\bigotimes_i n_i\left|{n_i}\middle\rangle\middle\langle{n_i}\right|$. Without loss of generality, we can absorb the squeezing parameter $\phi$ into the interferometer, and set $\phi=0$ for convenience. The covariance matrix **in the creation and annihilation operator basis** at the output of the interferometer is then given by: $$\sigma_{out} = \frac{1}{2} \left[ \begin{matrix}U&0\\0&U^*\end{matrix} \right]\sigma_{in} \left[ \begin{matrix}U^\dagger&0\\0&U^T\end{matrix} \right]$$ Using phase space methods, [Hamilton et al.](https://link.aps.org/doi/10.1103/PhysRevLett.119.170501) showed that the probability of measuring a Fock state is given by $$\left|\left\langle{n_1,n_2,\dots,n_N}\middle|{\psi'}\right\rangle\right|^2 = \frac{\left|\text{Haf}[(U\bigoplus_i\tanh(r_i)U^T)]_{st}\right|^2}{n_1!n_2!\cdots n_N!\sqrt{|\sigma_{out}+I/2|}},$$ i.e. the sampled single photon probability distribution is proportional to the **Hafnian** of a submatrix of $U\bigoplus_i\tanh(r_i)U^T$, dependent upon the output covariance matrix. <div class="alert alert-success" style="border: 0px; border-left: 3px solid #119a68; color: black; background-color: #daf0e9"> <p style="color: #119a68;">**The Hafnian**</p> The Hafnian of a matrix is defined by <br><br> $$\text{Haf}(A) = \frac{1}{N!2^N}\sum_{\sigma\in S_{2N}}\prod_{i=1}^N A_{\sigma(2i-1)\sigma(2i)}$$ <br> $S_{2N}$ is the set of all permutations of $2N$ elements. In graph theory, the Hafnian calculates the number of perfect <a href="https://en.wikipedia.org/wiki/Matching_(graph_theory)">matchings</a> in an **arbitrary graph** with adjacency matrix $A$. 
<br> Compare this to the permanent, which calculates the number of perfect matchings on a *bipartite* graph - the Hafnian turns out to be a generalisation of the permanent, with the relationship $$\begin{align} \text{Per}(A) = \text{Haf}\left(\left[\begin{matrix} 0&A\\ A^T&0 \end{matrix}\right]\right) \end{align}$$ As any algorithm that could calculate (or even approximate) the Hafnian could also calculate the permanent - a #P problem - it follows that calculating or approximating the Hafnian must also be a classically hard problem. </div> ### Equally squeezed input states In the case where all the input states are squeezed equally with squeezing factor $\xi=r$ (i.e. so $\phi=0$), we can simplify the denominator into a much nicer form. It can be easily seen that, due to the unitarity of $U$, $$\left[ \begin{matrix}U&0\\0&U^*\end{matrix} \right] \left[ \begin{matrix}U^\dagger&0\\0&U^T\end{matrix} \right] = \left[ \begin{matrix}UU^\dagger&0\\0&U^*U^T\end{matrix} \right] =I$$ Thus, we have $$\begin{align} \sigma_{out} +\frac{1}{2}I &= \sigma_{out} + \frac{1}{2} \left[ \begin{matrix}U&0\\0&U^*\end{matrix} \right] \left[ \begin{matrix}U^\dagger&0\\0&U^T\end{matrix} \right] = \left[ \begin{matrix}U&0\\0&U^*\end{matrix} \right] \frac{1}{2} \left(\sigma_{in}+I\right) \left[ \begin{matrix}U^\dagger&0\\0&U^T\end{matrix} \right] \end{align}$$ where we have substituted in the expression for $\sigma_{out}$. 
Taking the determinants of both sides, the two block diagonal matrices containing $U$ are unitary, and thus have determinant 1, resulting in $$\left|\sigma_{out} +\frac{1}{2}I\right| =\left|\frac{1}{2}\left(\sigma_{in}+I\right)\right|=\left|\frac{1}{2}\left(SS^\dagger+I\right)\right| $$ By expanding out the right hand side, and using various trig identities, it is easy to see that this simply reduces to $\cosh^{2N}(r)$ where $N$ is the number of modes; thus the Gaussian boson sampling problem in the case of equally squeezed input modes reduces to $$\left|\left\langle{n_1,n_2,\dots,n_N}\middle|{\psi'}\right\rangle\right|^2 = \frac{\left|\text{Haf}[(UU^T\tanh(r))]_{st}\right|^2}{n_1!n_2!\cdots n_N!\cosh^N(r)},$$ ## The Gaussian boson sampling circuit The multimode linear interferometer can be decomposed into two-mode beamsplitters (`BSgate`) and single-mode phase shifters (`Rgate`) (<a href="https://doi.org/10.1103/physrevlett.73.58">Reck, 1994</a>), allowing for an almost trivial translation into a continuous-variable quantum circuit. For example, in the case of a 4 mode interferometer, with arbitrary $4\times 4$ unitary $U$, the continuous-variable quantum circuit for Gaussian boson sampling is given by <img src="https://s3.amazonaws.com/xanadu-img/gaussian_boson_sampling.svg" width=70%/> In the above, * the single mode squeeze states all apply identical squeezing $\xi=r$, * the detectors perform Fock state measurements (i.e. measuring the photon number of each mode), * the parameters of the beamsplitters and the rotation gates determines the unitary $U$. For $N$ input modes, we must have a minimum of $N$ columns in the beamsplitter array ([Clements, 2016](https://arxiv.org/abs/1603.08788)). 
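Before moving on to the simulation, the Hafnian-permanent relationship quoted earlier can be sanity-checked with brute-force implementations straight from the definitions (factorially slow, so for tiny matrices only):

```python
import numpy as np
from itertools import permutations
from math import factorial

def haf(M):
    """Hafnian from the definition (M must be symmetric with even size)."""
    n = len(M)
    m = n // 2
    total = 0.0
    for s in permutations(range(n)):
        prod = 1.0
        for j in range(m):
            prod *= M[s[2 * j], s[2 * j + 1]]
        total += prod
    # each perfect matching is counted m! * 2^m times; renormalize
    return total / (factorial(m) * 2 ** m)

def per(A):
    """Permanent from the definition."""
    n = len(A)
    return sum(np.prod([A[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
embedded = np.block([[np.zeros((2, 2)), A], [A.T, np.zeros((2, 2))]])
# Per(A) = 1*4 + 2*3 = 10, and the Hafnian of the embedded matrix agrees
```

This mirrors the identity $\text{Per}(A) = \text{Haf}\left(\begin{smallmatrix}0 & A\\ A^T & 0\end{smallmatrix}\right)$ from the box above: the block embedding turns the bipartite matching count into an ordinary matching count.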
## Simulating boson sampling in Strawberry Fields ``` import numpy as np import strawberryfields as sf from strawberryfields.ops import * from strawberryfields.utils import random_interferometer ``` Strawberry Fields makes this easy; there is an `Interferometer` quantum operation, and a utility function that allows us to generate the matrix representing a random interferometer. ``` U = random_interferometer(4) ``` The lack of Fock states and non-linear operations means we can use the Gaussian backend to simulate Gaussian boson sampling. In this example program, we are using input states with squeezing parameter $\xi=1$, and the randomly chosen interferometer generated above. ``` eng, q = sf.Engine(4) with eng: # prepare the input squeezed states S = Sgate(1) All(S) | q # interferometer Interferometer(U) | q state = eng.run('gaussian') ``` We can see the decomposed beamsplitters and rotation gates, by calling `eng.print_applied()`: ``` eng.print_applied() ``` <div class="alert alert-success" style="border: 0px; border-left: 3px solid #119a68; color: black; background-color: #daf0e9"> <p style="color: #119a68;">**Available decompositions**</p> Check out our <a href="https://strawberryfields.readthedocs.io/en/stable/conventions/decompositions.html">documentation</a> to see the CV decompositions available in Strawberry Fields. </div> ## Analysis Let's now verify the Gaussian boson sampling result, by comparing the output Fock state probabilities to the Hafnian, using the relationship $$\left|\left\langle{n_1,n_2,\dots,n_N}\middle|{\psi'}\right\rangle\right|^2 = \frac{\left|\text{Haf}[(UU^T\tanh(r))]_{st}\right|^2}{n_1!n_2!\cdots n_N!\cosh^N(r)}$$ ### Calculating the Hafnian For the right hand side numerator, we first calculate the submatrix $[(UU^T\tanh(r))]_{st}$: ``` B = (np.dot(U, U.T) * np.tanh(1)) ``` In Gaussian boson sampling, we determine the submatrix by taking the rows and columns corresponding to the measured Fock state. 
For example, to calculate the submatrix in the case of the output measurement $\left|{1,1,0,0}\right\rangle$, ``` B[:,[0,1]][[0,1]] ``` To calculate the Hafnian in Python, we can use the direct definition $$\text{Haf}(A) = \frac{1}{n!2^n} \sum_{\sigma \in S_{2n}} \prod_{j=1}^n A_{\sigma(2j - 1), \sigma(2j)}$$ Notice that this function counts each term in the definition multiple times, and renormalizes to remove the multiple counts by dividing by a factor $\frac{1}{n!2^n}$. **This function is extremely slow!** ``` from itertools import permutations from scipy.special import factorial def Haf(M): n=len(M) m=int(n/2) haf=0.0 for i in permutations(range(n)): prod=1.0 for j in range(m): prod*=M[i[2*j],i[2*j+1]] haf+=prod return haf/(factorial(m)*(2**m)) ``` ## Comparing to the SF result In Strawberry Fields, both Fock and Gaussian states have the method `fock_prob()`, which returns the probability of measuring that particular Fock state. #### Let's compare the case of measuring the output state $\left|0,1,0,1\right\rangle$: ``` B = (np.dot(U,U.T) * np.tanh(1))[:, [1,3]][[1,3]] np.abs(Haf(B))**2 / np.cosh(1)**4 state.fock_prob([0,1,0,1]) ``` #### For the measurement result $\left|2,0,0,0\right\rangle$: ``` B = (np.dot(U,U.T) * np.tanh(1))[:, [0,0]][[0,0]] np.abs(Haf(B))**2 / (2*np.cosh(1)**4) state.fock_prob([2,0,0,0]) ``` #### For the measurement result $\left|1,1,0,0\right\rangle$: ``` B = (np.dot(U,U.T) * np.tanh(1))[:, [0,1]][[0,1]] np.abs(Haf(B))**2 / np.cosh(1)**4 state.fock_prob([1,1,0,0]) ``` #### For the measurement result $\left|1,1,1,1\right\rangle$, this corresponds to the full matrix $B$: ``` B = (np.dot(U,U.T) * np.tanh(1)) np.abs(Haf(B))**2 / np.cosh(1)**4 state.fock_prob([1,1,1,1]) ``` #### For the measurement result $\left|0,0,0,0\right\rangle$, this corresponds to a **null** submatrix, which has a Hafnian of 1: ``` 1/np.cosh(1)**4 state.fock_prob([0,0,0,0]) ``` As you can see, like in the boson sampling tutorial, they agree with almost negligible 
difference. <div class="alert alert-success" style="border: 0px; border-left: 3px solid #119a68; color: black; background-color: #daf0e9"> <p style="color: #119a68;">**Exercises**</p> Repeat this notebook with <ol> <li> A Fock backend such as NumPy, instead of the Gaussian backend</li> <li> Different beamsplitter and rotation parameters</li> <li> Input states with *differing* squeezed values $r_i$. You will need to modify the code to take into account the fact that the output covariance matrix determinant must now be calculated! </ol> </div>
# Dynamic factors and coincident indices Factor models generally try to find a small number of unobserved "factors" that influence a substantial portion of the variation in a larger number of observed variables, and they are related to dimension-reduction techniques such as principal components analysis. Dynamic factor models explicitly model the transition dynamics of the unobserved factors, and so are often applied to time-series data. Macroeconomic coincident indices are designed to capture the common component of the "business cycle"; such a component is assumed to simultaneously affect many macroeconomic variables. Although the estimation and use of coincident indices (for example the [Index of Coincident Economic Indicators](http://www.newyorkfed.org/research/regional_economy/coincident_summary.html)) pre-dates dynamic factor models, in several influential papers Stock and Watson (1989, 1991) used a dynamic factor model to provide a theoretical foundation for them. Below, we follow the treatment found in Kim and Nelson (1999), of the Stock and Watson (1991) model, to formulate a dynamic factor model, estimate its parameters via maximum likelihood, and create a coincident index. ## Macroeconomic data The coincident index is created by considering the comovements in four macroeconomic variables (versions of these variables are available on [FRED](https://research.stlouisfed.org/fred2/); the ID of the series used below is given in parentheses): - Industrial production (IPMAN) - Real aggregate income (excluding transfer payments) (W875RX1) - Manufacturing and trade sales (CMRMTSPL) - Employees on non-farm payrolls (PAYEMS) In all cases, the data is at the monthly frequency and has been seasonally adjusted; the time-frame considered is 1972 - 2005. 
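Schematically, a single-factor dynamic factor model of this kind expresses each observed series as loading on one unobserved factor with autoregressive dynamics (the specific lag orders shown here are an illustrative assumption; the exact specification follows Kim and Nelson, 1999):

$$y_{it} = \lambda_i f_t + u_{it}, \qquad f_t = \phi_1 f_{t-1} + \phi_2 f_{t-2} + \eta_t, \qquad u_{it} = \rho_{i1} u_{i,t-1} + \rho_{i2} u_{i,t-2} + \varepsilon_{it}$$

where $y_{it}$ are the (standardized) observed series, $f_t$ is the common factor whose filtered estimate serves as the coincident index, and the idiosyncratic disturbances $u_{it}$ follow their own low-order autoregressions.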
```
%matplotlib inline

import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

np.set_printoptions(precision=4, suppress=True, linewidth=120)

from pandas_datareader.data import DataReader

# Get the datasets from FRED
start = '1979-01-01'
end = '2014-12-01'
indprod = DataReader('IPMAN', 'fred', start=start, end=end)
income = DataReader('W875RX1', 'fred', start=start, end=end)
sales = DataReader('CMRMTSPL', 'fred', start=start, end=end)
emp = DataReader('PAYEMS', 'fred', start=start, end=end)
# dta = pd.concat((indprod, income, sales, emp), axis=1)
# dta.columns = ['indprod', 'income', 'sales', 'emp']
```

**Note**: in a recent update on FRED (8/12/15) the time series CMRMTSPL was truncated to begin in 1997; this is probably a mistake due to the fact that CMRMTSPL is a spliced series, so the earlier period is from the series HMRMT and the latter period is defined by CMRMT. This has since (02/11/16) been corrected, however the series could also be constructed by hand from HMRMT and CMRMT, as shown below (process taken from the notes in the Alfred xls file).
```
# HMRMT = DataReader('HMRMT', 'fred', start='1967-01-01', end=end)
# CMRMT = DataReader('CMRMT', 'fred', start='1997-01-01', end=end)
# HMRMT_growth = HMRMT.diff() / HMRMT.shift()
# sales = pd.Series(np.zeros(emp.shape[0]), index=emp.index)

# # Fill in the recent entries (1997 onwards)
# sales[CMRMT.index] = CMRMT

# # Backfill the previous entries (pre 1997)
# idx = sales.loc[:'1997-01-01'].index
# for t in range(len(idx)-1, 0, -1):
#     month = idx[t]
#     prev_month = idx[t-1]
#     sales.loc[prev_month] = sales.loc[month] / (1 + HMRMT_growth.loc[prev_month].values)

dta = pd.concat((indprod, income, sales, emp), axis=1)
dta.columns = ['indprod', 'income', 'sales', 'emp']

dta.loc[:, 'indprod':'emp'].plot(subplots=True, layout=(2, 2), figsize=(15, 6));
```

Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated. As a result, they suggest estimating the model using the first differences (of the logs) of the variables, demeaned and standardized.
```
# Create log-differenced series
dta['dln_indprod'] = (np.log(dta.indprod)).diff() * 100
dta['dln_income'] = (np.log(dta.income)).diff() * 100
dta['dln_sales'] = (np.log(dta.sales)).diff() * 100
dta['dln_emp'] = (np.log(dta.emp)).diff() * 100

# De-mean and standardize
dta['std_indprod'] = (dta['dln_indprod'] - dta['dln_indprod'].mean()) / dta['dln_indprod'].std()
dta['std_income'] = (dta['dln_income'] - dta['dln_income'].mean()) / dta['dln_income'].std()
dta['std_sales'] = (dta['dln_sales'] - dta['dln_sales'].mean()) / dta['dln_sales'].std()
dta['std_emp'] = (dta['dln_emp'] - dta['dln_emp'].mean()) / dta['dln_emp'].std()
```

## Dynamic factors

A general dynamic factor model is written as:

$$ \begin{align} y_t & = \Lambda f_t + B x_t + u_t \\ f_t & = A_1 f_{t-1} + \dots + A_p f_{t-p} + \eta_t \qquad \eta_t \sim N(0, I)\\ u_t & = C_1 u_{t-1} + \dots + C_q u_{t-q} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \Sigma) \end{align} $$

where $y_t$ are observed data, $f_t$ are the unobserved factors (evolving as a vector autoregression), $x_t$ are (optional) exogenous variables, and $u_t$ is the error, or "idiosyncratic", process ($u_t$ is also optionally allowed to be autocorrelated). The $\Lambda$ matrix is often referred to as the matrix of "factor loadings". The variance of the factor error term is set to the identity matrix to ensure identification of the unobserved factors.

This model can be cast into state space form, and the unobserved factor estimated via the Kalman filter. The likelihood can be evaluated as a byproduct of the filtering recursions, and maximum likelihood estimation used to estimate the parameters.

## Model specification

The specific dynamic factor model in this application has 1 unobserved factor which is assumed to follow an AR(2) process.
The innovations $\varepsilon_t$ are assumed to be independent (so that $\Sigma$ is a diagonal matrix) and the error term associated with each equation, $u_{i,t}$, is assumed to follow an independent AR(2) process. Thus the specification considered here is:

$$ \begin{align} y_{i,t} & = \lambda_i f_t + u_{i,t} \\ u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \\ f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\\ \end{align} $$

where $i$ is one of: `[indprod, income, sales, emp]`.

This model can be formulated using the `DynamicFactor` model built-in to Statsmodels. In particular, we have the following specification:

- `k_factors = 1` - (there is 1 unobserved factor)
- `factor_order = 2` - (it follows an AR(2) process)
- `error_var = False` - (the errors evolve as independent AR processes rather than jointly as a VAR - note that this is the default option, so it is not specified below)
- `error_order = 2` - (the errors are autocorrelated of order 2: i.e. AR(2) processes)
- `error_cov_type = 'diagonal'` - (the innovations are uncorrelated; this is again the default)

Once the model is created, the parameters can be estimated via maximum likelihood; this is done using the `fit()` method.

**Note**: recall that we have de-meaned and standardized the data; this will be important in interpreting the results that follow.

**Aside**: in their empirical example, Kim and Nelson (1999) actually consider a slightly different model in which the employment variable is allowed to also depend on lagged values of the factor - this model does not fit into the built-in `DynamicFactor` class, but can be accommodated by using a subclass to implement the required new parameters and restrictions - see Appendix A, below.

## Parameter estimation

Multivariate models can have a relatively large number of parameters, and it may be difficult to escape from local minima to find the maximized likelihood.
In an attempt to mitigate this problem, I perform an initial maximization step (from the model-defined starting parameters) using the modified Powell method available in SciPy (see the minimize documentation for more information). The resulting parameters are then used as starting parameters in the standard LBFGS optimization method.

```
# Get the endogenous data
endog = dta.loc['1979-02-01':, 'std_indprod':'std_emp']

# Create the model
mod = sm.tsa.DynamicFactor(endog, k_factors=1, factor_order=2, error_order=2)
initial_res = mod.fit(method='powell', disp=False)
res = mod.fit(initial_res.params, disp=False)
```

## Estimates

Once the model has been estimated, there are two components that we can use for analysis or inference:

- The estimated parameters
- The estimated factor

### Parameters

The estimated parameters can be helpful in understanding the implications of the model, although in models with a larger number of observed variables and / or unobserved factors they can be difficult to interpret.

One reason for this difficulty is identification issues between the factor loadings and the unobserved factors. One easy-to-see identification issue is the sign of the loadings and the factors: an equivalent model to the one displayed below would result from reversing the signs of all factor loadings and the unobserved factor.

Here, one of the easy-to-interpret implications in this model is the persistence of the unobserved factor: we find that it exhibits substantial persistence.

```
print(res.summary(separate_params=False))
```

### Estimated factors

While it can be useful to plot the unobserved factors, it is less useful here than one might think for two reasons:

1. The sign-related identification issue described above.
2. Since the data was differenced, the estimated factor explains the variation in the differenced data, not the original data.

It is for these reasons that the coincident index is created (see below).
With these reservations, the unobserved factor is plotted below, along with the NBER indicators for US recessions. It appears that the factor is successful at picking up some degree of business cycle activity.

```
fig, ax = plt.subplots(figsize=(13,3))

# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, res.factors.filtered[0], label='Factor')
ax.legend()

# Retrieve and also plot the NBER recession indicators
rec = DataReader('USREC', 'fred', start=start, end=end)
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
```

## Post-estimation

Although here we will be able to interpret the results of the model by constructing the coincident index, there is a useful and generic approach for getting a sense for what is being captured by the estimated factor. By taking the estimated factors as given, regressing each of the observed variables (one at a time) on the factors and a constant, and recording the coefficients of determination ($R^2$ values), we can get a sense of the variables for which each factor explains a substantial portion of the variance and the variables for which it does not.

In models with more variables and more factors, this can sometimes lend interpretation to the factors (for example sometimes one factor will load primarily on real variables and another on nominal variables).

In this model, with only four endogenous variables and one factor, it is easy to digest a simple table of the $R^2$ values, but in larger models it is not. For this reason, a bar plot is often employed; from the plot we can easily see that the factor explains most of the variation in the industrial production index and a large portion of the variation in sales and employment, but it is less helpful in explaining income.
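The per-variable $R^2$ recipe just described is easy to compute by hand. Here is a minimal sketch using synthetic data (the loadings and noise level are made up for illustration; with the fitted model, the regressor would be `res.factors.filtered[0]` and the regressands the columns of `endog`):

```python
import numpy as np

# Synthetic stand-ins for the filtered factor and the observed series
rng = np.random.default_rng(0)
nobs = 300
factor = rng.standard_normal(nobs)

# Hypothetical loadings for four observed series, plus idiosyncratic noise
loadings = np.array([0.9, 0.2, 0.7, 0.5])
obs = factor[:, None] * loadings + 0.5 * rng.standard_normal((nobs, 4))

# Regress each observed series on a constant and the factor,
# and record the coefficient of determination (R^2)
X = np.column_stack([np.ones(nobs), factor])
r2 = []
for i in range(obs.shape[1]):
    beta, *_ = np.linalg.lstsq(X, obs[:, i], rcond=None)
    resid = obs[:, i] - X @ beta
    r2.append(1 - resid.var() / obs[:, i].var())

print(np.round(r2, 2))
```

Series with heavier loadings on the factor should show larger $R^2$ values, which is exactly the pattern the bar plot summarizes.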
```
res.plot_coefficients_of_determination(figsize=(8,2));
```

## Coincident Index

As described above, the goal of this model was to create an interpretable series which could be used to understand the current status of the macroeconomy. This is what the coincident index is designed to do. It is constructed below. For readers interested in an explanation of the construction, see Kim and Nelson (1999) or Stock and Watson (1991). In essence, what is done is to reconstruct the mean of the (differenced) factor. We will compare it to the coincident index published by the Federal Reserve Bank of Philadelphia (USPHCI on FRED).

```
usphci = DataReader('USPHCI', 'fred', start='1979-01-01', end='2014-12-01')['USPHCI']
usphci.plot(figsize=(13,3));

dusphci = usphci.diff()[1:].values

def compute_coincident_index(mod, res):
    # Estimate W(1)
    spec = res.specification
    design = mod.ssm['design']
    transition = mod.ssm['transition']
    ss_kalman_gain = res.filter_results.kalman_gain[:,:,-1]
    k_states = ss_kalman_gain.shape[0]

    W1 = np.linalg.inv(np.eye(k_states) - np.dot(
        np.eye(k_states) - np.dot(ss_kalman_gain, design),
        transition
    )).dot(ss_kalman_gain)[0]

    # Compute the factor mean vector
    factor_mean = np.dot(W1, dta.loc['1972-02-01':, 'dln_indprod':'dln_emp'].mean())

    # Normalize the factors
    factor = res.factors.filtered[0]
    factor *= np.std(usphci.diff()[1:]) / np.std(factor)

    # Compute the coincident index
    coincident_index = np.zeros(mod.nobs+1)
    # The initial value is arbitrary; here it is set to
    # facilitate comparison
    coincident_index[0] = usphci.iloc[0] * factor_mean / dusphci.mean()
    for t in range(0, mod.nobs):
        coincident_index[t+1] = coincident_index[t] + factor[t] + factor_mean

    # Attach dates
    coincident_index = pd.Series(coincident_index, index=dta.index).iloc[1:]

    # Normalize to use the same base year as USPHCI
    coincident_index *= (usphci.loc['1992-07-01'] / coincident_index.loc['1992-07-01'])

    return coincident_index
```

Below we plot the calculated coincident index along with the
US recessions and the comparison coincident index USPHCI.

```
fig, ax = plt.subplots(figsize=(13,3))

# Compute the index
coincident_index = compute_coincident_index(mod, res)

# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, label='Coincident index')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')

# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
```

## Appendix 1: Extending the dynamic factor model

Recall that the previous specification was described by:

$$ \begin{align} y_{i,t} & = \lambda_i f_t + u_{i,t} \\ u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \\ f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\\ \end{align} $$

Written in state space form, the previous specification of the model had the following observation equation:

$$ \begin{bmatrix} y_{\text{indprod}, t} \\ y_{\text{income}, t} \\ y_{\text{sales}, t} \\ y_{\text{emp}, t} \\ \end{bmatrix} = \begin{bmatrix} \lambda_\text{indprod} & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \lambda_\text{income} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \lambda_\text{sales} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ \lambda_\text{emp} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix} f_t \\ f_{t-1} \\ u_{\text{indprod}, t} \\ u_{\text{income}, t} \\ u_{\text{sales}, t} \\ u_{\text{emp}, t} \\ u_{\text{indprod}, t-1} \\ u_{\text{income}, t-1} \\ u_{\text{sales}, t-1} \\ u_{\text{emp}, t-1} \\ \end{bmatrix} $$

and transition equation:

$$ \begin{bmatrix} f_t \\ f_{t-1} \\ u_{\text{indprod}, t} \\ u_{\text{income}, t} \\ u_{\text{sales}, t} \\ u_{\text{emp}, t} \\ u_{\text{indprod}, t-1} \\ u_{\text{income}, t-1} \\ u_{\text{sales}, t-1} \\ u_{\text{emp}, t-1} \\ \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \\ 0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \\ 0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \\ 0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix} f_{t-1} \\ f_{t-2} \\ u_{\text{indprod}, t-1} \\ u_{\text{income}, t-1} \\ u_{\text{sales}, t-1} \\ u_{\text{emp}, t-1} \\ u_{\text{indprod}, t-2} \\ u_{\text{income}, t-2} \\ u_{\text{sales}, t-2} \\ u_{\text{emp}, t-2} \\ \end{bmatrix} + R \begin{bmatrix} \eta_t \\ \varepsilon_{t} \end{bmatrix} $$

The `DynamicFactor` model handles setting up the state space representation and, in the `DynamicFactor.update` method, it fills in the fitted parameter values into the appropriate locations.

The extended specification is the same as in the previous example, except that we also want to allow employment to depend on lagged values of the factor. This creates a change to the $y_{\text{emp},t}$ equation.
Now we have:

$$ \begin{align} y_{i,t} & = \lambda_i f_t + u_{i,t} \qquad & i \in \{\text{indprod}, \text{income}, \text{sales} \}\\ y_{i,t} & = \lambda_{i,0} f_t + \lambda_{i,1} f_{t-1} + \lambda_{i,2} f_{t-2} + \lambda_{i,3} f_{t-3} + u_{i,t} \qquad & i = \text{emp} \\ u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \\ f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\\ \end{align} $$

Now, the corresponding observation equation should look like the following:

$$ \begin{bmatrix} y_{\text{indprod}, t} \\ y_{\text{income}, t} \\ y_{\text{sales}, t} \\ y_{\text{emp}, t} \\ \end{bmatrix} = \begin{bmatrix} \lambda_\text{indprod} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \lambda_\text{income} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \lambda_\text{sales} & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ \lambda_\text{emp,1} & \lambda_\text{emp,2} & \lambda_\text{emp,3} & \lambda_\text{emp,4} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix} f_t \\ f_{t-1} \\ f_{t-2} \\ f_{t-3} \\ u_{\text{indprod}, t} \\ u_{\text{income}, t} \\ u_{\text{sales}, t} \\ u_{\text{emp}, t} \\ u_{\text{indprod}, t-1} \\ u_{\text{income}, t-1} \\ u_{\text{sales}, t-1} \\ u_{\text{emp}, t-1} \\ \end{bmatrix} $$

Notice that we have introduced two new state variables, $f_{t-2}$ and $f_{t-3}$, which means we need to update the transition equation:

$$ \begin{bmatrix} f_t \\ f_{t-1} \\ f_{t-2} \\ f_{t-3} \\ u_{\text{indprod}, t} \\ u_{\text{income}, t} \\ u_{\text{sales}, t} \\ u_{\text{emp}, t} \\ u_{\text{indprod}, t-1} \\ u_{\text{income}, t-1} \\ u_{\text{sales}, t-1} \\ u_{\text{emp}, t-1} \\ \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix} f_{t-1} \\ f_{t-2} \\ f_{t-3} \\ f_{t-4} \\ u_{\text{indprod}, t-1} \\ u_{\text{income}, t-1} \\ u_{\text{sales}, t-1} \\ u_{\text{emp}, t-1} \\ u_{\text{indprod}, t-2} \\ u_{\text{income}, t-2} \\ u_{\text{sales}, t-2} \\ u_{\text{emp}, t-2} \\ \end{bmatrix} + R \begin{bmatrix} \eta_t \\ \varepsilon_{t} \end{bmatrix} $$

This model cannot be handled out-of-the-box by the `DynamicFactor` class, but it can be handled by creating a subclass that alters the state space representation in the appropriate way.

First, notice that if we had set `factor_order = 4`, we would almost have what we wanted. In that case, the last line of the observation equation would be:

$$ \begin{bmatrix} \vdots \\ y_{\text{emp}, t} \\ \end{bmatrix} = \begin{bmatrix} \vdots & & & & & & & & & & & \vdots \\ \lambda_\text{emp,1} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix} f_t \\ f_{t-1} \\ f_{t-2} \\ f_{t-3} \\ \vdots \end{bmatrix} $$

and the first line of the transition equation would be:

$$ \begin{bmatrix} f_t \\ \vdots \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & a_3 & a_4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \vdots & & & & & & & & & & & \vdots \\ \end{bmatrix} \begin{bmatrix} f_{t-1} \\ f_{t-2} \\ f_{t-3} \\ f_{t-4} \\ \vdots \end{bmatrix} + R \begin{bmatrix} \eta_t \\ \varepsilon_{t} \end{bmatrix} $$

Relative to what we want, we have the following differences: 1.
In the above situation, the $\lambda_{\text{emp}, j}$ are forced to be zero for $j > 0$, and we want them to be estimated as parameters. 2. We only want the factor to transition according to an AR(2), but under the above situation it is an AR(4).

Our strategy will be to subclass `DynamicFactor`, and let it do most of the work (setting up the state space representation, etc.) where it assumes that `factor_order = 4`. The only things we will actually do in the subclass will be to fix those two issues.

First, here is the full code of the subclass; it is discussed below. It is important to note at the outset that none of the methods defined below could have been omitted. In fact, the methods `__init__`, `start_params`, `param_names`, `transform_params`, `untransform_params`, and `update` form the core of all state space models in Statsmodels, not just the `DynamicFactor` class.

```
from statsmodels.tsa.statespace import tools

class ExtendedDFM(sm.tsa.DynamicFactor):
    def __init__(self, endog, **kwargs):
        # Setup the model as if we had a factor order of 4
        super(ExtendedDFM, self).__init__(
            endog, k_factors=1, factor_order=4, error_order=2, **kwargs)

        # Note: `self.parameters` is an ordered dict with the
        # keys corresponding to parameter types, and the values
        # the number of parameters of that type.
        # Add the new parameters
        self.parameters['new_loadings'] = 3

        # Cache a slice for the location of the 4 factor AR
        # parameters (a_1, ..., a_4) in the full parameter vector
        offset = (self.parameters['factor_loadings'] +
                  self.parameters['exog'] +
                  self.parameters['error_cov'])
        self._params_factor_ar = np.s_[offset:offset+2]
        self._params_factor_zero = np.s_[offset+2:offset+4]

    @property
    def start_params(self):
        # Add three new loading parameters to the end of the parameter
        # vector, initialized to zeros (for simplicity; they could
        # be initialized any way you like)
        return np.r_[super(ExtendedDFM, self).start_params, 0, 0, 0]

    @property
    def param_names(self):
        # Add the corresponding names for the new loading parameters
        # (the name can be anything you like)
        return super(ExtendedDFM, self).param_names + [
            'loading.L%d.f1.%s' % (i, self.endog_names[3]) for i in range(1,4)]

    def transform_params(self, unconstrained):
        # Perform the typical DFM transformation (w/o the new parameters)
        constrained = super(ExtendedDFM, self).transform_params(
            unconstrained[:-3])

        # Redo the factor AR constraint, since we only want an AR(2),
        # and the previous constraint was for an AR(4)
        ar_params = unconstrained[self._params_factor_ar]
        constrained[self._params_factor_ar] = (
            tools.constrain_stationary_univariate(ar_params))

        # Return all the parameters
        return np.r_[constrained, unconstrained[-3:]]

    def untransform_params(self, constrained):
        # Perform the typical DFM untransformation (w/o the new parameters)
        unconstrained = super(ExtendedDFM, self).untransform_params(
            constrained[:-3])

        # Redo the factor AR unconstraint, since we only want an AR(2),
        # and the previous unconstraint was for an AR(4)
        ar_params = constrained[self._params_factor_ar]
        unconstrained[self._params_factor_ar] = (
            tools.unconstrain_stationary_univariate(ar_params))

        # Return all the parameters
        return np.r_[unconstrained, constrained[-3:]]

    def update(self, params, transformed=True, complex_step=False):
        # Perform the transformation, if required
        if not transformed:
            params = self.transform_params(params)
        params[self._params_factor_zero] = 0

        # Now perform the usual DFM update, but exclude our new parameters
        super(ExtendedDFM, self).update(params[:-3], transformed=True,
                                        complex_step=complex_step)

        # Finally, set our new parameters in the design matrix
        self.ssm['design', 3, 1:4] = params[-3:]
```

So what did we just do?

#### `__init__`

The important step here was specifying the base dynamic factor model which we were operating with. In particular, as described above, we initialize with `factor_order=4`, even though we will only end up with an AR(2) model for the factor. We also performed some general setup-related tasks.

#### `start_params`

`start_params` are used as initial values in the optimizer. Since we are adding three new parameters, we need to pass those in. If we hadn't done this, the optimizer would use the default starting values, which would be three elements short.

#### `param_names`

`param_names` are used in a variety of places, but especially in the results class. Below we get a full result summary, which is only possible when all the parameters have associated names.

#### `transform_params` and `untransform_params`

The optimizer selects possible parameter values in an unconstrained way. That's not usually desired (since variances can't be negative, for example), and `transform_params` is used to transform the unconstrained values used by the optimizer to constrained values appropriate to the model. Variance terms are typically squared (to force them to be positive), and AR lag coefficients are often constrained to lead to a stationary model. `untransform_params` is used for the reverse operation (and is important because starting parameters are usually specified in terms of values appropriate to the model, and we need to convert them to parameters appropriate to the optimizer before we can begin the optimization routine).
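To make the constrained/unconstrained distinction concrete, here is a toy sketch (not the exact mapping Statsmodels uses) for one variance parameter and one AR(1) coefficient:

```python
import numpy as np

def transform(unconstrained):
    # Square the variance parameter so it is always positive,
    # and squash the AR(1) coefficient into (-1, 1) so the
    # implied process is stationary.
    var, ar = unconstrained
    return np.array([var**2, ar / np.sqrt(1 + ar**2)])

def untransform(constrained):
    # Inverse mapping: recover the unconstrained values the optimizer
    # works with (valid here for positive unconstrained variances).
    var, ar = constrained
    return np.array([np.sqrt(var), ar / np.sqrt(1 - ar**2)])

params = np.array([2.0, 0.5])        # arbitrary unconstrained values
constrained = transform(params)      # always a valid (variance, AR) pair
roundtrip = untransform(constrained) # recovers the original values
print(constrained, roundtrip)
```

Whatever values the optimizer proposes, `transform` maps them to a positive variance and a stationary AR coefficient, and the two functions are inverses of each other on this domain.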
Even though we don't need to transform or untransform our new parameters (the loadings can in theory take on any values), we still need to modify this function for two reasons:

1. The version in the `DynamicFactor` class is expecting 3 fewer parameters than we have now. At a minimum, we need to handle the three new parameters.
2. The version in the `DynamicFactor` class constrains the factor lag coefficients to be stationary as though it was an AR(4) model. Since we actually have an AR(2) model, we need to re-do the constraint. We also set the last two autoregressive coefficients to be zero here.

#### `update`

The most important reason we need to specify a new `update` method is because we have three new parameters that we need to place into the state space formulation. In particular we let the parent `DynamicFactor.update` class handle placing all the parameters except the three new ones into the state space representation, and then we put the last three in manually.

```
# Create the model
extended_mod = ExtendedDFM(endog)
initial_extended_res = extended_mod.fit(maxiter=1000, disp=False)
extended_res = extended_mod.fit(initial_extended_res.params, method='nm', maxiter=1000)
print(extended_res.summary(separate_params=False))
```

Although this model increases the likelihood, it is not preferred by the AIC and BIC measures which penalize the additional three parameters. Furthermore, the qualitative results are unchanged, as we can see from the updated $R^2$ chart and the new coincident index, both of which are practically identical to the previous results.
```
extended_res.plot_coefficients_of_determination(figsize=(8,2));

fig, ax = plt.subplots(figsize=(13,3))

# Compute the index
extended_coincident_index = compute_coincident_index(extended_mod, extended_res)

# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, '-', linewidth=1, label='Basic model')
ax.plot(dates, extended_coincident_index, '--', linewidth=3, label='Extended model')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
ax.set(title='Coincident indices, comparison')

# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
```
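The AIC/BIC comparison mentioned above follows directly from the log-likelihood; here is a small sketch with made-up numbers (in practice these are available as `res.aic` and `res.bic`), where the three extra parameters outweigh a small likelihood gain:

```python
import numpy as np

def aic(llf, k_params):
    # Akaike information criterion: 2 * k - 2 * log-likelihood
    return 2 * k_params - 2 * llf

def bic(llf, k_params, nobs):
    # Bayesian information criterion: k * ln(n) - 2 * log-likelihood
    return k_params * np.log(nobs) - 2 * llf

# Hypothetical log-likelihoods: the extended model fits slightly better,
# but carries three extra parameters (lower criterion values are preferred)
basic = (aic(-450.0, 18), bic(-450.0, 18, 430))
extended = (aic(-448.0, 21), bic(-448.0, 21, 430))
print(basic, extended)
```

With these illustrative numbers, the extended model has the higher (worse) AIC and BIC despite its higher likelihood, which is the same qualitative pattern reported for the fitted models above.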
## Plotting the temperature

In this exercise, you'll examine the temperature columns from the weather dataset to assess whether the data seems trustworthy. First you'll print the summary statistics, and then you'll visualize the data using a box plot. When deciding whether the values seem reasonable, keep in mind that the temperature is measured in degrees Fahrenheit, not Celsius!

Instructions

1. Read `weather.csv` into a DataFrame named `weather`.
2. Select the temperature columns (`TMIN`, `TAVG`, `TMAX`) and print their summary statistics using the `.describe()` method.
3. Create a box plot to visualize the temperature columns.
4. Display the plot.

```
# Import packages
import pandas as pd
import matplotlib.pyplot as plt

# Read 'weather.csv' into a DataFrame named 'weather'
weather = pd.read_csv('weather.csv')

# Describe the temperature columns
print(weather[['TMIN', 'TAVG', 'TMAX']].describe())

# Create a box plot of the temperature columns
weather[['TMIN', 'TAVG', 'TMAX']].plot(kind='box')

# Display the plot
plt.show()
```

## Plotting the temperature difference

In this exercise, you'll continue to assess whether the dataset seems trustworthy by plotting the difference between the maximum and minimum temperatures. What do you notice about the resulting histogram? Does it match your expectations, or do you see anything unusual?

Instructions

1. Create a new column in the `weather` DataFrame named `TDIFF` that represents the difference between the maximum and minimum temperatures.
2. Print the summary statistics for `TDIFF` using the `.describe()` method.
3. Create a histogram with 20 bins to visualize `TDIFF`.
4. Display the plot.
```
# Create a 'TDIFF' column that represents temperature difference
weather['TDIFF'] = weather['TMAX'] - weather['TMIN']

# Describe the 'TDIFF' column
print(weather['TDIFF'].describe())

# Create a histogram with 20 bins to visualize 'TDIFF'
weather['TDIFF'].plot(kind='hist', bins=20)

# Display the plot
plt.show()
```

## Counting bad weather conditions

The `weather` DataFrame contains 20 columns that start with `'WT'`, each of which represents a bad weather condition. For example:

- `WT05` indicates "Hail"
- `WT11` indicates "High or damaging winds"
- `WT17` indicates "Freezing rain"

For every row in the dataset, each `WT` column contains either a `1` (meaning the condition was present that day) or `NaN` (meaning the condition was not present). In this exercise, you'll quantify "how bad" the weather was each day by counting the number of `1` values in each row.

Instructions

1. Copy the columns `WT01` through `WT22` from `weather` to a new DataFrame named `WT`.
2. Calculate the sum of each row in `WT`, and store the results in a new `weather` column named `bad_conditions`.
3. Replace any missing values in `bad_conditions` with a `0`. (This has been done for you.)
4. Create a histogram to visualize `bad_conditions`, and then display the plot.

```
# Copy 'WT01' through 'WT22' to a new DataFrame
WT = weather.loc[:, 'WT01':'WT22']

# Calculate the sum of each row in 'WT'
weather['bad_conditions'] = WT.sum(axis='columns')

# Replace missing values in 'bad_conditions' with '0'
weather['bad_conditions'] = weather['bad_conditions'].fillna(0).astype('int')

# Create a histogram to visualize 'bad_conditions'
weather['bad_conditions'].plot(kind='hist')

# Display the plot
plt.show()
```

## Rating the weather conditions

In the previous exercise, you counted the number of bad weather conditions each day. In this exercise, you'll use the counts to create a rating system for the weather.
The counts range from 0 to 9, and should be converted to ratings as follows:

- Convert `0` to `'good'`
- Convert `1` through `4` to `'bad'`
- Convert `5` through `9` to `'worse'`

Instructions

1. Count the unique values in the `bad_conditions` column and sort the index. (This has been done for you.)
2. Create a dictionary called `mapping` that maps the `bad_conditions` integers to strings as specified above.
3. Convert the `bad_conditions` integers to strings using the `mapping` and store the results in a new column called `rating`.
4. Count the unique values in `rating` to verify that the integers were properly converted to strings.

```
# Count the unique values in 'bad_conditions' and sort the index
print(weather['bad_conditions'].value_counts().sort_index())

# Create a dictionary that maps integers to strings
mapping = {0:'good', 1:'bad', 2:'bad', 3:'bad', 4:'bad',
           5:'worse', 6:'worse', 7:'worse', 8:'worse', 9:'worse'}

# Convert the 'bad_conditions' integers to strings using the 'mapping'
weather['rating'] = weather['bad_conditions'].map(mapping)

# Count the unique values in 'rating'
print(weather['rating'].value_counts())
```

## Changing the data type to category

Since the `rating` column only has a few possible values, you'll change its data type to category in order to store the data more efficiently. You'll also specify a logical order for the categories, which will be useful for future exercises.

Instructions

1. Create a list object called `cats` that lists the weather ratings in a logical order: `'good'`, `'bad'`, `'worse'`.
2. Change the data type of the `rating` column from object to category. Make sure to use the `cats` list to define the category ordering.
3. Examine the head of the `rating` column to confirm that the categories are logically ordered.
```
# Create a list of weather ratings in logical order
cats = ['good', 'bad', 'worse']

# Change the data type of 'rating' to category
# weather['rating'] = weather['rating'].astype('category', ordered=True, categories=cats)
weather['rating'] = pd.Categorical(weather['rating'], ordered=True, categories=cats)

# Examine the head of 'rating'
weather['rating'].head()
```

## Preparing the DataFrames

In this exercise, you'll prepare the traffic stop and weather rating DataFrames so that they're ready to be merged:

1. With the `ri` DataFrame, you'll move the `stop_datetime` index to a column since the index will be lost during the merge.
2. With the `weather` DataFrame, you'll select the `DATE` and `rating` columns and put them in a new DataFrame.

Instructions

1. Reset the index of the `ri` DataFrame.
2. Examine the head of `ri` to verify that `stop_datetime` is now a DataFrame column, and the index is now the default integer index.
3. Create a new DataFrame named `weather_rating` that contains only the `DATE` and `rating` columns from the `weather` DataFrame.
4. Examine the head of `weather_rating` to verify that it contains the proper columns.
```
# Import ri dataset
ri = pd.read_csv('police.csv')

### Fix dataset (check 1_preparing_the_data_for_analysis)
# Concatenate 'stop_date' and 'stop_time' (separated by a space)
ri['stop_datetime'] = pd.to_datetime(ri['stop_date'].str.cat(ri['stop_time'], sep=' '))

# Set stop_datetime as index
ri.set_index('stop_datetime', inplace=True)

# Drop the 'county_name' and 'state' columns
ri.drop(['county_name', 'state'], axis='columns', inplace=True)

# Drop all rows that are missing 'driver_gender'
ri.dropna(subset=['driver_gender'], inplace=True)

# Drop all rows that are missing 'is_arrested'
ri.dropna(subset=['is_arrested'], inplace=True)

# Change the data type of 'is_arrested' to 'bool'
ri['is_arrested'] = ri['is_arrested'].astype(bool)

# Reset the index of 'ri'
ri.reset_index(inplace=True)

# Examine the head of 'ri'
ri.head()

# Create a DataFrame from the 'DATE' and 'rating' columns
weather_rating = weather[['DATE', 'rating']]

# Examine the head of 'weather_rating'
weather_rating.head()
```

## Merging the DataFrames

In this exercise, you'll merge the `ri` and `weather_rating` DataFrames into a new DataFrame, `ri_weather`. The DataFrames will be joined using the `stop_date` column from `ri` and the `DATE` column from `weather_rating`. Thankfully the date formatting matches exactly, which is not always the case! Once the merge is complete, you'll set `stop_datetime` as the index, which is the column you saved in the previous exercise.

Instructions

1. Examine the shape of the `ri` DataFrame.
2. Merge the `ri` and `weather_rating` DataFrames using a left join.
3. Examine the shape of `ri_weather` to confirm that it has two more columns but the same number of rows as `ri`.
4. Replace the index of `ri_weather` with the `stop_datetime` column.
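The left join described in the instructions can be sketched on toy frames to see why the row count is preserved (the dates and ratings below are illustrative, not the real data):

```python
import pandas as pd

# Every left-hand row is kept; matching right-hand rows fill in 'DATE' and 'rating'.
stops = pd.DataFrame({'stop_date': ['2005-01-02', '2005-01-02', '2005-01-04']})
ratings = pd.DataFrame({'DATE': ['2005-01-02', '2005-01-04'],
                        'rating': ['good', 'worse']})
merged = pd.merge(left=stops, right=ratings,
                  left_on='stop_date', right_on='DATE', how='left')
print(merged.shape)  # (3, 3) -- same number of rows as 'stops', two extra columns
```

This mirrors the shape check in step 3: a left join on a one-row-per-date lookup table adds columns without adding or dropping rows.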
```
# Examine the shape of 'ri'
print(ri.shape)

# Merge 'ri' and 'weather_rating' using a left join
ri_weather = pd.merge(left=ri, right=weather_rating,
                      left_on='stop_date', right_on='DATE', how='left')

# Examine the shape of 'ri_weather'
print(ri_weather.shape)

# Set 'stop_datetime' as the index of 'ri_weather'
ri_weather.set_index('stop_datetime', inplace=True)
```

## Comparing arrest rates by weather rating

Do police officers arrest drivers more often when the weather is bad? Find out below!

- First, you'll calculate the overall arrest rate.
- Then, you'll calculate the arrest rate for each of the weather ratings you previously assigned.
- Finally, you'll add violation type as a second factor in the analysis, to see if that accounts for any differences in the arrest rate.

Since you previously defined a logical order for the weather categories, `good < bad < worse`, they will be sorted that way in the results.

Instructions

1. Calculate the overall arrest rate by taking the mean of the `is_arrested` Series.
2. Calculate the arrest rate for each weather `rating` using a `.groupby()`.
3. Calculate the arrest rate for each combination of `violation` and `rating`. How do the arrest rates differ by group?

```
# Calculate the overall arrest rate
ri_weather['is_arrested'].mean()

# Calculate the arrest rate for each 'rating'
ri_weather.groupby('rating').is_arrested.mean()

# Calculate the arrest rate for each 'violation' and 'rating'
ri_weather.groupby(['violation', 'rating']).is_arrested.mean()
```

## Selecting from a multi-indexed Series

The output of a single `.groupby()` operation on multiple columns is a Series with a MultiIndex. Working with this type of object is similar to working with a DataFrame:

- The outer index level is like the DataFrame rows.
- The inner index level is like the DataFrame columns.

In this exercise, you'll practice accessing data from a multi-indexed Series using the `.loc[]` accessor.

Instructions

1. Save the output of the `.groupby()` operation from the last exercise as a new object, `arrest_rate`. (This has been done for you.)
2. Print the `arrest_rate` Series and examine it.
3. Print the arrest rate for moving violations in bad weather.
4. Print the arrest rates for speeding violations in all three weather conditions.

```
# Save the output of the groupby operation from the last exercise
arrest_rate = ri_weather.groupby(['violation', 'rating']).is_arrested.mean()

# Print the 'arrest_rate' Series
arrest_rate

# Print the arrest rate for moving violations in bad weather
arrest_rate.loc['Moving violation', 'bad']

# Print the arrest rates for speeding violations in all three weather conditions
arrest_rate.loc['Speeding']
```

## Reshaping the arrest rate data

In this exercise, you'll start by reshaping the `arrest_rate` Series into a DataFrame. This is a useful step when working with any multi-indexed Series, since it enables you to access the full range of DataFrame methods.

Then, you'll create the exact same DataFrame using a pivot table. This is a great example of how pandas often gives you more than one way to reach the same result!

Instructions

1. Unstack the `arrest_rate` Series to reshape it into a DataFrame.
2. Create the exact same DataFrame using a pivot table! Each of the three `.pivot_table()` parameters should be specified as one of the `ri_weather` columns.

```
# Unstack the 'arrest_rate' Series into a DataFrame
arrest_rate.unstack()

# Create the same DataFrame using a pivot table
ri_weather.pivot_table(index='violation', columns='rating', values='is_arrested')
```
# Data collection for the development of a sampling method for market exploration

```
import numpy as np  # library to handle data in a vectorized manner
import pandas as pd  # library for data analysis

pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)

import json  # library to handle JSON files

!conda install -c conda-forge geopy --yes  # uncomment this line if you haven't completed the Foursquare API lab
from geopy.geocoders import Nominatim  # convert an address into latitude and longitude values

import requests  # library to handle requests
from pandas.io.json import json_normalize  # transform JSON file into a pandas dataframe

# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors

# import k-means from clustering stage
from sklearn.cluster import KMeans

#!conda install -c conda-forge folium=0.5.0 --yes  # uncomment this line if you haven't completed the Foursquare API lab
import folium  # map rendering library

print('Libraries imported.')
```

# Collecting data for the first Hungarian town: Pécs

```
# Get the coordinates of Pécs; around this point a circle with a 500-meter radius
# is explored to find all of the venues
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="my_user_agent")
city = "Pécs"
country = "Hungary"
loc = geolocator.geocode(city + ',' + country)
print("latitude is : ", loc.latitude, "\nlongtitude is: ", loc.longitude)

# Creation of a map of Pécs using latitude and longitude values in order to check that the coordinates are right
latitude_p = 46.076322
longitude_p = 18.2280746
map_p = folium.Map(location=[latitude_p, longitude_p], zoom_start=10)
map_p

# Defining Foursquare API parameters
CLIENT_ID = 'UM25MYXVTAUXQPNQ4BQMTUDPAXBWLGAGQJYIIP2D1SUS23YG'  # your Foursquare ID
CLIENT_SECRET = 'WLCDIOWXOA0G405WB4B2HGJVIPUCX5KZ5W2CTTDA2KMXQBHF'  # your Foursquare Secret
VERSION = '20201225'  # Foursquare API version
LIMIT = 100  # a default Foursquare API limit value
print('Your credentials:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)

# Necessary URL (placeholders so .format() substitutes the Pécs coordinates)
radius = 500
url_p = 'https://api.foursquare.com/v2/venues/explore?client_id={}&client_secret={}&ll={},{}&v={}&radius={}&limit={}'.format(
    CLIENT_ID, CLIENT_SECRET, latitude_p, longitude_p, VERSION, radius, LIMIT)
url_p

# Using the requests.get function to obtain the data of the venues as a JSON file
results_p = requests.get(url_p).json()
results_p

# function that extracts the category of the venue
def get_category_type(row):
    try:
        categories_list = row['categories']
    except:
        categories_list = row['venue.categories']
    if len(categories_list) == 0:
        return None
    else:
        return categories_list[0]['name']

# preparation and cleaning of the JSON file
venues_p = results_p['response']['groups'][0]['items']
nearby_venues_p = json_normalize(venues_p)  # flatten JSON

# filter columns
filtered_columns_p = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues_p = nearby_venues_p.loc[:, filtered_columns_p]

# filter the category for each row
nearby_venues_p['venue.categories'] = nearby_venues_p.apply(get_category_type, axis=1)

# clean columns
nearby_venues_p.columns = [col.split(".")[-1] for col in nearby_venues_p.columns]
nearby_venues_p.head()

# selection of unique venues
nearby_venues_p.categories.unique()

# Shape of the dataframe:
nearby_venues_p.shape

# Counting the unique values:
nearby_venues_p['categories'].value_counts()

# number of venues: Bakery, Coffee Shop, Restaurant, Distribution Center, Breakfast Spot, Chocolate Shop, Dessert Shop, Café
# summarizing all restaurants:
nearby_venues_p[nearby_venues_p['categories'].str.contains("Restaurant")]

# Those venues were manually selected which were possible targets to sell food and drink ingredients
data_p = {
    'Town': ['Pécs'],
    'Bakery': [3],
    'Coffee Shop': [7],
    'Restaurant': [3],
    'Distribution Center': [1],
    'Breakfast Shop': [2],
    'Chocolate Shop': [1],
    'Dessert Shop': [1],
    'Café': [3],
}

# Visualizing the venues in a dataframe
df_p = pd.DataFrame(data_p)
df_p
```

# The steps described above will be repeated seven times with seven different Hungarian towns. The resulting venue data will be merged into a final dataframe (marked df_pgksvzta at the absolute bottom of this notebook). After minimal data cleaning, this dataframe (df_pgksvzta) will be the starting data of the further investigations.

```
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="my_user_agent")
city = "Győr"
country = "Hungary"
loc = geolocator.geocode(city + ',' + country)
print("latitude is : ", loc.latitude, "\nlongtitude is: ", loc.longitude)

# Creation of a map of Győr using latitude and longitude values
latitude_g = 47.687609
longitude_g = 17.6346815
map_g = folium.Map(location=[latitude_g, longitude_g], zoom_start=10)
map_g

url_g = 'https://api.foursquare.com/v2/venues/explore?client_id=UM25MYXVTAUXQPNQ4BQMTUDPAXBWLGAGQJYIIP2D1SUS23YG&client_secret=WLCDIOWXOA0G405WB4B2HGJVIPUCX5KZ5W2CTTDA2KMXQBHF&ll=47.687609,17.6346815&v=20201224&radius=500&limit=100'
url_g
results_g = requests.get(url_g).json()
results_g

# function that extracts the category of the venue
def get_category_type(row):
    try:
        categories_list = row['categories']
    except:
        categories_list = row['venue.categories']
    if len(categories_list) == 0:
        return None
    else:
        return categories_list[0]['name']

venues_g = results_g['response']['groups'][0]['items']
nearby_venues_g = json_normalize(venues_g)  # flatten JSON

# filter columns
filtered_columns_g = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues_g = nearby_venues_g.loc[:, filtered_columns_g]

# filter the category for each row
nearby_venues_g['venue.categories'] = nearby_venues_g.apply(get_category_type, axis=1)

# clean columns
nearby_venues_g.columns = [col.split(".")[-1] for col in nearby_venues_g.columns]
nearby_venues_g.head()

nearby_venues_g.categories.unique()
nearby_venues_g['categories'].value_counts()
nearby_venues_g.shape

# Pie Shop, Café, Coffee Shop, Restaurant, Breakfast Spot, Burger Joint, Bakery, Tea Room, Grocery Store, Dessert Shop, Diner
nearby_venues_g[nearby_venues_g['categories'].str.contains("Restaurant")]

data_g = {
    'Town': ['Győr'],
    'Pie shop': [1],
    'Café': [4],
    'Coffee Shop': [3],
    'Restaurant': [16],
    'Breakfast Spot': [2],
    'Burger Joint': [1],
    'Bakery': [2],
    'Tea Room': [1],
    'Grocery Store': [1],
    'Dessert Shop': [1],
    'Diner': [1]
}

df_p = pd.DataFrame(data_p)
df_p
df_g = pd.DataFrame(data_g)
df_g
df_pg = pd.concat([df_p, df_g])
df_pg

from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="my_user_agent")
city = "Kaposvár"
country = "Hungary"
loc = geolocator.geocode(city + ',' + country)
print("latitude is : ", loc.latitude, "\nlongtitude is: ", loc.longitude)

# Creation of a map of Kaposvár using latitude and longitude values
latitude_k = 46.3564692
longitude_k = 17.7886886
map_k = folium.Map(location=[latitude_k, longitude_k], zoom_start=10)
map_k

# Necessary URL:
radius = 500
url_k = 'https://api.foursquare.com/v2/venues/explore?client_id=UM25MYXVTAUXQPNQ4BQMTUDPAXBWLGAGQJYIIP2D1SUS23YG&client_secret=WLCDIOWXOA0G405WB4B2HGJVIPUCX5KZ5W2CTTDA2KMXQBHF&ll=46.3564692,17.7886886&v=20201224&radius=500&limit=100'
url_k
results_k = requests.get(url_k).json()
results_k

venues_k = results_k['response']['groups'][0]['items']
nearby_venues_k = json_normalize(venues_k)  # flatten JSON

# filter columns
filtered_columns_k = ['venue.name',
                      'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues_k = nearby_venues_k.loc[:, filtered_columns_k]

# filter the category for each row
nearby_venues_k['venue.categories'] = nearby_venues_k.apply(get_category_type, axis=1)

# clean columns
nearby_venues_k.columns = [col.split(".")[-1] for col in nearby_venues_k.columns]
nearby_venues_k.head()

nearby_venues_k.categories.unique()
nearby_venues_k.shape
nearby_venues_k['categories'].value_counts()
nearby_venues_k[nearby_venues_k['categories'].str.contains("Restaurant")]

data_k = {
    'Town': ['Kaposvár'],
    'Café': [5],
    'Restaurant': [5],
    'Pizza Place': [2],
    'Dessert Shop': [1],
    'Supermarket': [1],
    'Grocery Store': [2],
    'Burger Joint': [1],
}
df_k = pd.DataFrame(data_k)
df_k
df_pgk = pd.concat([df_pg, df_k])
df_pgk

from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="my_user_agent")
city = "Szekszárd"
country = "Hungary"
loc = geolocator.geocode(city + ',' + country)
print("latitude is : ", loc.latitude, "\nlongtitude is: ", loc.longitude)

# Creation of a map of Szekszárd using latitude and longitude values
latitude_s = 46.3484884
longitude_s = 18.701663
map_s = folium.Map(location=[latitude_s, longitude_s], zoom_start=10)
map_s

radius = 500
url_s = 'https://api.foursquare.com/v2/venues/explore?client_id=UM25MYXVTAUXQPNQ4BQMTUDPAXBWLGAGQJYIIP2D1SUS23YG&client_secret=WLCDIOWXOA0G405WB4B2HGJVIPUCX5KZ5W2CTTDA2KMXQBHF&ll=46.3484884,18.701663&v=20201224&radius=500&limit=100'
url_s
results_s = requests.get(url_s).json()
results_s

venues_s = results_s['response']['groups'][0]['items']
nearby_venues_s = json_normalize(venues_s)  # flatten JSON

# filter columns
filtered_columns_s = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues_s = nearby_venues_s.loc[:, filtered_columns_s]

# filter the category for each row
nearby_venues_s['venue.categories'] = nearby_venues_s.apply(get_category_type, axis=1)

# clean columns
nearby_venues_s.columns = [col.split(".")[-1] for col in nearby_venues_s.columns]
nearby_venues_s.head()

nearby_venues_s.categories.unique()
nearby_venues_s.shape
nearby_venues_s['categories'].value_counts()
nearby_venues_s[nearby_venues_s['categories'].str.contains("Restaurant")]

data_s = {
    'Town': ['Szekszárd'],
    'Café': [5],
    'Restaurant': [4],
    'Dessert Shop': [2],
    'Pizza Place': [1],
    'Bed & Breakfast': [1],
    'Grocery Store': [1],
    'Café ': [1],
}
df_s = pd.DataFrame(data_s)
df_s
df_pgks = pd.concat([df_pgk, df_s])
df_pgks

from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="my_user_agent")
city = "Veszprém"
country = "Hungary"
loc = geolocator.geocode(city + ',' + country)
print("latitude is : ", loc.latitude, "\nlongtitude is: ", loc.longitude)

# Creation of a map of Veszprém using latitude and longitude values
latitude_v = 47.0928058
longitude_v = 17.9140147
map_v = folium.Map(location=[latitude_v, longitude_v], zoom_start=10)
map_v

radius = 500
url_v = 'https://api.foursquare.com/v2/venues/explore?client_id=UM25MYXVTAUXQPNQ4BQMTUDPAXBWLGAGQJYIIP2D1SUS23YG&client_secret=WLCDIOWXOA0G405WB4B2HGJVIPUCX5KZ5W2CTTDA2KMXQBHF&ll=47.0928058,17.9140147&v=20201224&radius=500&limit=100'
url_v
results_v = requests.get(url_v).json()
results_v

venues_v = results_v['response']['groups'][0]['items']
nearby_venues_v = json_normalize(venues_v)  # flatten JSON

# filter columns
filtered_columns_v = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues_v = nearby_venues_v.loc[:, filtered_columns_v]

# filter the category for each row
nearby_venues_v['venue.categories'] = nearby_venues_v.apply(get_category_type, axis=1)

# clean columns
nearby_venues_v.columns = [col.split(".")[-1] for col in nearby_venues_v.columns]
nearby_venues_v.head()

nearby_venues_v.categories.unique()
nearby_venues_v.shape
nearby_venues_v['categories'].value_counts()

data_v = {
    'Town': ['Veszprém'],
    'Café': [1],
    'Restaurant': [7],
    'Coffee Shop': [4],
    'Pizza Place': [1],
    'Dessert Shop': [2],
    'Grocery Store': [2],
    'Burger Joint': [3],
    'Bakery': [1],
    'Gourmet Shop': [1],
    'Shopping Mall': [2],
}
df_v = pd.DataFrame(data_v)
df_v
df_pgksv = pd.concat([df_pgks, df_v])
df_pgksv

from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="my_user_agent")
city = "Zalaegerszeg"
country = "Hungary"
loc = geolocator.geocode(city + ',' + country)
print("latitude is : ", loc.latitude, "\nlongtitude is: ", loc.longitude)

# Creation of a map of Zalaegerszeg using latitude and longitude values
latitude_z = 46.8415803
longitude_z = 16.8456316
map_z = folium.Map(location=[latitude_z, longitude_z], zoom_start=10)
map_z

radius = 500
url_z = 'https://api.foursquare.com/v2/venues/explore?client_id=UM25MYXVTAUXQPNQ4BQMTUDPAXBWLGAGQJYIIP2D1SUS23YG&client_secret=WLCDIOWXOA0G405WB4B2HGJVIPUCX5KZ5W2CTTDA2KMXQBHF&ll=46.8415803,16.8456316&v=20201226&radius=500&limit=100'
url_z
results_z = requests.get(url_z).json()
results_z

venues_z = results_z['response']['groups'][0]['items']
nearby_venues_z = json_normalize(venues_z)  # flatten JSON

# filter columns
filtered_columns_z = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues_z = nearby_venues_z.loc[:, filtered_columns_z]

# filter the category for each row
nearby_venues_z['venue.categories'] = nearby_venues_z.apply(get_category_type, axis=1)

# clean columns
nearby_venues_z.columns = [col.split(".")[-1] for col in nearby_venues_z.columns]
nearby_venues_z.head()

nearby_venues_z.categories.unique()
nearby_venues_z.shape
nearby_venues_z['categories'].value_counts()
nearby_venues_z[nearby_venues_z['categories'].str.contains("Restaurant")]

data_z = {
    'Town': ['Zalaegerszeg'],
    'Café': [3],
    'Restaurant': [8],
    'Burger Joint': [1],
    'Bakery': [1],
}
df_z = pd.DataFrame(data_z)
df_z
df_pgksvz = pd.concat([df_pgksv, df_z])
df_pgksvz

from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="my_user_agent")
city = "Tatabánya"
country = "Hungary"
loc = geolocator.geocode(city + ',' + country)
print("latitude is : ", loc.latitude, "\nlongtitude is: ", loc.longitude)

# Creation of a map of Tatabánya using latitude and longitude values
latitude_t = 47.5837788
longitude_t = 18.3980308
map_t = folium.Map(location=[latitude_t, longitude_t], zoom_start=10)
map_t

radius = 500
url_t = 'https://api.foursquare.com/v2/venues/explore?client_id=UM25MYXVTAUXQPNQ4BQMTUDPAXBWLGAGQJYIIP2D1SUS23YG&client_secret=WLCDIOWXOA0G405WB4B2HGJVIPUCX5KZ5W2CTTDA2KMXQBHF&ll=47.5837788,18.3980308&v=20201227&radius=500&limit=100'
url_t
results_t = requests.get(url_t).json()
results_t

venues_t = results_t['response']['groups'][0]['items']
nearby_venues_t = json_normalize(venues_t)  # flatten JSON

# filter columns
filtered_columns_t = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues_t = nearby_venues_t.loc[:, filtered_columns_t]

# filter the category for each row
nearby_venues_t['venue.categories'] = nearby_venues_t.apply(get_category_type, axis=1)

# clean columns
nearby_venues_t.columns = [col.split(".")[-1] for col in nearby_venues_t.columns]
nearby_venues_t.head()

nearby_venues_t.categories.unique()
nearby_venues_t.shape
nearby_venues_t[nearby_venues_t['categories'].str.contains("Restaurant")]
nearby_venues_t['categories'].value_counts()

data_t = {
    'Town': ['Tatabánya'],
    'Café': [2],
    'Grocery store': [1],
    'Bakery': [1],
    'Restaurant': [4],
    'Pizza Place': [1],
    'Breakfast spot': [1]
}
df_t = pd.DataFrame(data_t)
df_t
df_pgksvzt = pd.concat([df_pgksvz, df_t])
df_pgksvzt

from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="my_user_agent")
city = "Ajka"
country = "Hungary"
loc = geolocator.geocode(city + ',' + country)
print("latitude is : ", loc.latitude, "\nlongtitude is: ", loc.longitude)

# Creation of a map of Ajka using latitude and longitude values
latitude_a = 47.1056579
longitude_a = 17.5587276
map_a = folium.Map(location=[latitude_a, longitude_a], zoom_start=10)
map_a

radius = 500
url_a = 'https://api.foursquare.com/v2/venues/explore?client_id=UM25MYXVTAUXQPNQ4BQMTUDPAXBWLGAGQJYIIP2D1SUS23YG&client_secret=WLCDIOWXOA0G405WB4B2HGJVIPUCX5KZ5W2CTTDA2KMXQBHF&ll=47.1056579,17.5587276&v=20201226&radius=500&limit=100'
url_a
results_a = requests.get(url_a).json()
results_a

venues_a = results_a['response']['groups'][0]['items']
nearby_venues_a = json_normalize(venues_a)  # flatten JSON

# filter columns
filtered_columns_a = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues_a = nearby_venues_a.loc[:, filtered_columns_a]

# filter the category for each row
nearby_venues_a['venue.categories'] = nearby_venues_a.apply(get_category_type, axis=1)

# clean columns
nearby_venues_a.columns = [col.split(".")[-1] for col in nearby_venues_a.columns]
nearby_venues_a.head()

nearby_venues_a.categories.unique()
nearby_venues_a['categories'].value_counts()

data_a = {
    'Town': ['Ajka'],
    'Café': [2],
    'Dessert Shop': [1],
    'Ice Cream Shop': [1],
}
df_a = pd.DataFrame(data_a)
df_a
df_pgksvzta = pd.concat([df_pgksvzt, df_a])
df_pgksvzta

df_pgksvzta.fillna(0, inplace=True)
df_pgksvzta
df_pgksvzta = df_pgksvzta.sort_values(by='Town')
df_pgksvzta = df_pgksvzta.sort_index(axis=1)

# move the 'Town' column to the front
col_name = 'Town'
first_col = df_pgksvzta.pop(col_name)
df_pgksvzta.insert(0, col_name, first_col)
df_pgksvzta
```
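The hand-assembled URL strings above repeat the credentials in every cell; `requests` can also build the query string from a parameter dict. A sketch under the assumption that the endpoint and parameter names stay as in this notebook — the credentials are placeholders, and no network call is made:

```python
import requests

# Build the Foursquare explore URL from parameters instead of a literal string.
params = {
    'client_id': 'CLIENT_ID_HERE',        # placeholder, not a real credential
    'client_secret': 'CLIENT_SECRET_HERE',
    'll': '46.076322,18.2280746',         # Pécs
    'v': '20201225',
    'radius': 500,
    'limit': 100,
}
req = requests.Request('GET', 'https://api.foursquare.com/v2/venues/explore',
                       params=params).prepare()
print(req.url)  # full URL with a properly encoded query string
```

Building the query this way avoids the copy-paste mismatch between the literal coordinates in the string and the variables passed to `.format()`.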
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

# AutoML 02: Regression with local compute

In this example we use scikit-learn's [diabetes dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html) to showcase how you can use AutoML for a simple regression problem. Make sure you have executed [00.configuration](00.configuration.ipynb) before running this notebook.

In this notebook you will see:

1. Creating an Experiment using an existing Workspace
2. Instantiating AutoMLConfig
3. Training the Model using local compute
4. Exploring the results
5. Testing the fitted model

## Create Experiment

As part of the setup you have already created a <b>Workspace</b>. For AutoML you need to create an <b>Experiment</b>: a named object in a <b>Workspace</b> that is used to run experiments.

```
import logging
import os
import random

from matplotlib import pyplot as plt
from matplotlib.pyplot import imshow
import numpy as np
import pandas as pd
from sklearn import datasets

import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
from azureml.train.automl.run import AutoMLRun

ws = Workspace.from_config()

# choose a name for the experiment
experiment_name = 'automl-local-regression'
# project folder
project_folder = './sample_projects/automl-local-regression'

experiment = Experiment(ws, experiment_name)

output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name

pd.set_option('display.max_colwidth', None)
pd.DataFrame(data=output, index=['']).T
```

## Diagnostics

Opt-in diagnostics for better experience, quality, and security
of future releases.

```
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
```

### Read Data

```
# load the diabetes dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
columns = ['age', 'sex', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```

## Instantiate AutoML Config

Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.

|Property|Description|
|-|-|
|**task**|classification or regression|
|**primary_metric**|This is the metric that you want to optimize.<br> Regression supports the following primary metrics: <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i><br><i>normalized_root_mean_squared_log_error</i>|
|**max_time_sec**|Time limit in seconds for each iteration|
|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data|
|**n_cross_validations**|Number of cross validation splits|
|**X**|(sparse) array-like, shape = [n_samples, n_features]|
|**y**|(sparse) array-like, shape = [n_samples, ] or [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|
|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|

```
automl_config = AutoMLConfig(task = 'regression',
                             max_time_sec = 600,
                             iterations = 10,
                             primary_metric = 'spearman_correlation',
                             n_cross_validations = 5,
                             debug_log = 'automl.log',
                             verbosity = logging.INFO,
                             X = X_train,
                             y = y_train,
                             path = project_folder)
```

## Training the Model

You can call the submit method on the experiment object and pass the run configuration. For local runs the execution is synchronous. Depending on the data and the number of iterations this can run for a while. You will see the currently running iterations printing to the console.

```
local_run = experiment.submit(automl_config, show_output=True)
local_run
```

## Exploring the results

#### Widget for monitoring runs

The widget will sit on "loading" until the first iteration has completed; then you will see an auto-updating graph and table. It refreshes once per minute, so you should see the graph update as child runs complete.

NOTE: The widget displays a link at the bottom. This links to a web UI to explore the individual run details.

```
from azureml.train.widgets import RunDetails
RunDetails(local_run).show()
```

#### Retrieve All Child Runs

You can also use SDK methods to fetch all the child runs and see the individual metrics that we log.

```
children = list(local_run.get_children())
metricslist = {}
for run in children:
    properties = run.get_properties()
    metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
    metricslist[int(properties['iteration'])] = metrics

rundata = pd.DataFrame(metricslist).sort_index(axis=1)
rundata
```

### Retrieve the Best Model

Below we select the best pipeline from our iterations. The *get_output* method on automl_classifier returns the best run and the fitted model for the last *fit* invocation. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*.
```
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
```

#### Best Model based on any other metric

Show the run and model that has the smallest `root_mean_squared_error` (which turned out to be the same as the one with the largest `spearman_correlation` value):

```
lookup_metric = "root_mean_squared_error"
best_run, fitted_model = local_run.get_output(metric=lookup_metric)
print(best_run)
print(fitted_model)
```

#### Model from a specific iteration

Simply show the run and model from the 3rd iteration:

```
iteration = 3
third_run, third_model = local_run.get_output(iteration=iteration)
print(third_run)
print(third_model)
```

### Testing the Fitted Model

Predict on the training and test sets, and calculate residual values.

```
y_pred_train = fitted_model.predict(X_train)
y_residual_train = y_train - y_pred_train
y_pred_test = fitted_model.predict(X_test)
y_residual_test = y_test - y_pred_test

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
from sklearn.metrics import mean_squared_error, r2_score

# set up a multi-plot chart
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [1, 1], 'wspace': 0, 'hspace': 0})
f.suptitle('Regression Residual Values', fontsize=18)
f.set_figheight(6)
f.set_figwidth(16)

# plot residual values of training set
a0.axis([0, 360, -200, 200])
a0.plot(y_residual_train, 'bo', alpha=0.5)
a0.plot([-10, 360], [0, 0], 'r-', lw=3)
a0.text(16, 170, 'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_train, y_pred_train))), fontsize=12)
a0.text(16, 140, 'R2 score = {0:.2f}'.format(r2_score(y_train, y_pred_train)), fontsize=12)
a0.set_xlabel('Training samples', fontsize=12)
a0.set_ylabel('Residual Values', fontsize=12)

# plot histogram
a0.hist(y_residual_train, orientation='horizontal', color='b', bins=10, histtype='step');
a0.hist(y_residual_train, orientation='horizontal', color='b', alpha=0.2, bins=10);

# plot residual values of test set
a1.axis([0, 90, -200, 200])
a1.plot(y_residual_test, 'bo', alpha=0.5)
a1.plot([-10, 360], [0, 0], 'r-', lw=3)
a1.text(5, 170, 'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred_test))), fontsize=12)
a1.text(5, 140, 'R2 score = {0:.2f}'.format(r2_score(y_test, y_pred_test)), fontsize=12)
a1.set_xlabel('Test samples', fontsize=12)
a1.set_yticklabels([])

# plot histogram
a1.hist(y_residual_test, orientation='horizontal', color='b', bins=10, histtype='step');
a1.hist(y_residual_test, orientation='horizontal', color='b', alpha=0.2, bins=10);

plt.show()
```
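The RMSE and R² annotations on each panel come down to two scikit-learn calls; a minimal sketch on toy arrays (the values are made up for illustration):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Compute the same two residual metrics that annotate the plots above.
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
r2 = r2_score(y_true, y_pred)
print('RMSE = {0:.2f}'.format(rmse))      # RMSE = 0.61
print('R2 score = {0:.2f}'.format(r2))    # R2 score = 0.95
```

RMSE is in the units of the target, while R² is unitless, which is why the two numbers are reported side by side on each residual panel.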
---
# **Pneumonia Classification using RESNET**
---

```
import cv2
import os
import random
from PIL import Image, ImageOps
import numpy as np

from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
import keras.preprocessing as preprocessing
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras import regularizers
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from keras.initializers import glorot_uniform
from keras.metrics import Recall
from keras.losses import BinaryCrossentropy
from keras import optimizers
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
import tensorflow as tf
import scipy.misc
from matplotlib.pyplot import imshow
import matplotlib.pyplot as plt
%matplotlib inline

import keras.backend as K
K.set_image_data_format('channels_last')
```

## **1. Initializing a TPU system**

```
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))

try:
    device_name = os.environ['COLAB_TPU_ADDR']
    TPU_ADDRESS = 'grpc://' + device_name
    print('Found TPU at: {}'.format(TPU_ADDRESS))
except KeyError:
    print('TPU not found')
```

## **2. Importing data**

```
from google.colab import drive
drive.mount('/content/gdrive')

def input_fn(x, y, batch_size=0, use_batch=True):
    # Convert the inputs to a TensorFlow dataset.
    dataset = tf.data.Dataset.from_tensor_slices((x, y))
    # Shuffle, repeat, and batch the examples.
dataset = dataset.cache() dataset = dataset.shuffle(1000, reshuffle_each_iteration=True) dataset = dataset.repeat() if use_batch: dataset = dataset.batch(batch_size, drop_remainder=True) return dataset def input_fn_1(x,batch_size=0,use_batch=True): # Convert the inputs to a TensorFlow dataset. dataset = tf.data.Dataset.from_tensor_slices((x)) # Shuffle, repeat, and batch the examples. dataset = dataset.cache() dataset = dataset.shuffle(1000, reshuffle_each_iteration=True) dataset = dataset.repeat() if use_batch: dataset = dataset.batch(batch_size, drop_remainder=True) return dataset def load_data(data_dir, sample_size, use_sample_size, img_dim=64): ds = [] label_dir = os.listdir(data_dir) for label in label_dir: if not(label == '.DS_Store'): label_dir_path = os.path.join(data_dir,label) filenames = os.listdir(label_dir_path) imgs_path = [os.path.join(label_dir_path, img_name) for img_name in filenames if img_name.endswith('.jpeg')] if use_sample_size: imgs_path = imgs_path[:sample_size] print('Number of images of type {}:'.format(label),len(imgs_path)) for img_path in imgs_path: img = Image.open(img_path) img = ImageOps.grayscale(img) img = np.array(img) img = np.reshape(img,(img_dim,img_dim,1)) img = img.astype('float32') img = img/255 ds.append([img,label]) random.seed(1234) random.shuffle(ds) return ds def load_tensorflow_ds(train_val=True, sample_size=255, use_sample_size=True, img_dim=64, train_batch_size=120, val_batch_size=8, test_batch_size=16): if train_val: train_ds = load_data('/content/gdrive/MyDrive/chest_xray/Resized_train_64_all',sample_size, use_sample_size, img_dim) labels = [] images = [] for x in range(len(train_ds)): labels.append(train_ds[x][1]) images.append(train_ds[x][0]) train_size = 5160 X_train = images[:train_size] y_train = labels[:train_size] X_val = images[train_size:] y_val = labels[train_size:] # One-hot-encoding y_train label_encoder = LabelEncoder() integer_encoded = label_encoder.fit_transform(y_train) onehot_encoder = 
OneHotEncoder(sparse=False) integer_encoded = integer_encoded.reshape(len(integer_encoded), 1) y_train = onehot_encoder.fit_transform(integer_encoded) # One-hot-encoding y_val integer_encoded = label_encoder.fit_transform(y_val) integer_encoded = integer_encoded.reshape(len(integer_encoded), 1) y_val = onehot_encoder.fit_transform(integer_encoded) print('Loading train TensorFlow dataset') train_ds = input_fn(X_train, y_train, train_batch_size) print('Loading val TensorFlow dataset') val_ds = input_fn(X_val, y_val, val_batch_size) train_val_list = [len(X_train),np.array(X_train).shape,np.array(y_train).shape] return train_ds,val_ds,train_val_list else: test_ds = load_data('/content/gdrive/MyDrive/chest_xray/Resized_test_64',sample_size, use_sample_size, img_dim) labels = [] images = [] for x in range(len(test_ds)): labels.append(test_ds[x][1]) images.append(test_ds[x][0]) X_test = images y_test = labels # One-hot-encoding y_test label_encoder = LabelEncoder() integer_encoded = label_encoder.fit_transform(y_test) onehot_encoder = OneHotEncoder(sparse=False) integer_encoded = integer_encoded.reshape(len(integer_encoded), 1) y_test = onehot_encoder.fit_transform(integer_encoded) print('Loading test TensorFlow dataset') test_ds = input_fn(X_test, y_test, test_batch_size) test_list = [len(X_test),np.array(X_test).shape,np.array(y_test).shape] return test_ds,test_list,X_test,y_test ``` Loading train, val, and test datasets ``` print('\033[1mLoading train & val ds\033[0m') train_ds,val_ds, train_val_list = load_tensorflow_ds(use_sample_size=False) print('\033[1mLoading test ds\033[0m') test_ds,test_list,X_test,y_test = load_tensorflow_ds(train_val=False,use_sample_size=False) x_test = input_fn_1(X_test,624) print ("Number of training examples:" + str(train_val_list[0])) print ("Number of test examples:" + str(test_list[0])) print ("X_train shape:" + str(train_val_list[1])) print ("y_train shape:" + str(train_val_list[2])) print ("X_test shape:" + str(test_list[1])) print 
("y_test shape:" + str(test_list[2])) plt.figure(figsize=(10, 10)) for i, (image, label) in enumerate(zip(X_test[:9], y_test[:9])): ax = plt.subplot(3, 3, i + 1) plt.imshow(np.squeeze(image), cmap="gray") plt.title(str(label)) plt.axis("off") ``` ## **3. ResNet architecture** ``` def convolutional_block(X, f, filters, stage, block, s = 2): """ Implementation of the convolutional block Returns: X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C) """ # defining name basis conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' # Retrieve Filters F1, F2, F3 = filters # Save the input value X_shortcut = X ##### MAIN PATH ##### # First component of main path X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0),kernel_regularizer=regularizers.l2(0.01))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X) X = Activation('relu')(X) # Second component of main path X = Conv2D(F2, (f, f), strides = (1,1),padding = "same", name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0),kernel_regularizer=regularizers.l2(0.01))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X) X = Activation('relu')(X) # Third component of main path X = Conv2D(F3, (1, 1), strides = (1,1), name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0),kernel_regularizer=regularizers.l2(0.01))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X) ##### SHORTCUT PATH #### X_shortcut = Conv2D(F3, (1, 1), strides = (s,s), name = conv_name_base + '1', kernel_initializer = glorot_uniform(seed=0),kernel_regularizer=regularizers.l2(0.01))(X_shortcut) X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut) # Final step: Add shortcut value to main path, and pass it through a RELU activation X = Add()([X,X_shortcut]) X = 
Activation('relu')(X) ### END CODE HERE ### return X def identity_block(X, f, filters, stage, block): """ Implementation of the identity block Returns: X -- output of the identity block, tensor of shape (n_H, n_W, n_C) """ # defining name basis conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' # Retrieve Filters F1, F2, F3 = filters # Save the input value. You'll need this later to add back to the main path. X_shortcut = X # First component of main path X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0),kernel_regularizer=regularizers.l2(0.01))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X) X = Activation('relu')(X) ### START CODE HERE ### # Second component of main path (≈3 lines) X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0),kernel_regularizer=regularizers.l2(0.01))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X) X = Activation('relu')(X) # Third component of main path (≈2 lines) X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0),kernel_regularizer=regularizers.l2(0.01))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X) # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines) X = Add()([X,X_shortcut]) X = Activation("relu")(X) ### END CODE HERE ### return X def ResNet(input_shape = (64, 64,1), classes = 2, rate = 0.3): """ Implementation of the popular ResNet architecture Returns: model -- a Model() instance in Keras """ # Define the input as a tensor with shape input_shape X_input = Input(input_shape) # Zero-Padding X = ZeroPadding2D((1, 1))(X_input) # Stage 1 X = Conv2D(64, (3, 3), strides = 
(2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = 'bn_conv1')(X) X = Activation('relu')(X) X = MaxPooling2D((3, 3), strides=(2, 2))(X) # Stage 2 X = convolutional_block(X, f = 3, filters = [64, 64, 128], stage = 2, block='a', s = 1) X = identity_block(X, 3, [64, 64, 128], stage=2, block='b') X = identity_block(X, 3, [64, 64, 128], stage=2, block='c') X = identity_block(X, 3, [64, 64, 128], stage=2, block='d') X = tf.keras.layers.Dropout(rate, seed=134)(X) # stage 3 X = convolutional_block(X, f = 3, filters = [64, 64, 128], stage = 3, block='a', s = 1) X = identity_block(X, 3, [64, 64, 128], stage=3, block='b') X = identity_block(X, 3, [64, 64, 128], stage=3, block='c') X = identity_block(X, 3, [64, 64, 128], stage=3, block='d') X = tf.keras.layers.Dropout(rate, seed=234)(X) # Stage 4 X = convolutional_block(X, f = 3, filters = [128, 128, 256], stage = 4, block='a', s = 2) X = identity_block(X, 3, [128, 128, 256], stage=4, block='b') X = identity_block(X, 3, [128, 128, 256], stage=4, block='c') X = identity_block(X, 3, [128, 128, 256], stage=4, block='d') X = tf.keras.layers.Dropout(rate, seed=124)(X) # Stage 5 X = convolutional_block(X, f = 3, filters = [128, 128, 256], stage = 5, block='a', s = 2) X = identity_block(X, 3, [128, 128, 256], stage=5, block='b') X = identity_block(X, 3, [128, 128, 256], stage=5, block='c') X = identity_block(X, 3, [128, 128, 256], stage=5, block='d') X = tf.keras.layers.Dropout(rate, seed=123)(X) # Stage 6 X = convolutional_block(X, f = 3, filters = [256, 256, 512], stage = 6, block='a', s = 2) X = identity_block(X, 3, [256, 256, 512], stage=6, block='b') X = identity_block(X, 3, [256, 256, 512], stage=6, block='c') X = identity_block(X, 3, [256, 256, 512], stage=6, block='d') X = tf.keras.layers.Dropout(rate, seed=14)(X) # Stage 7 X = convolutional_block(X, f = 3, filters = [256, 256, 512], stage = 7, block='a', s = 2) X = identity_block(X, 3, [256, 256, 512], stage=7, 
block='b') X = identity_block(X, 3, [256, 256, 512], stage=7, block='c') X = identity_block(X, 3, [256, 256, 512], stage=7, block='d') X = tf.keras.layers.Dropout(rate, seed=34)(X) # output layer X = Flatten()(X) X = Dense(classes, activation='sigmoid', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X) # Create model model = Model(inputs = X_input, outputs = X, name='ResNet') return model ``` ## **4. Model training** ``` strategy = tf.distribute.TPUStrategy(resolver) with strategy.scope(): model = ResNet() model.compile(optimizer='adam', loss=BinaryCrossentropy(from_logits=True), metrics=Recall()) steps_per_epoch = 5160 // 120 validation_steps = 56 // 8 weight_for_normal = 5216 / (2 * 1341 ) weight_for_pneumonia = 5216 / (2 * 3875) weights = {0:weight_for_normal, 1:weight_for_pneumonia} from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau model_checkpoint = ModelCheckpoint('.mdl_wts.hdf5', save_best_only=True, monitor='val_recall_1', mode='max',verbose=1) history = model.fit(train_ds, epochs=50, steps_per_epoch=steps_per_epoch, validation_data=val_ds, validation_steps=validation_steps, callbacks=[model_checkpoint], class_weight=weights) #model.load_weights('.mdl_wts.hdf5') model.load_weights('/content/gdrive/MyDrive/Resnet_models/model_Resnet_4_weights_recall_988.h5') model.save_weights('/content/gdrive/MyDrive/Resnet_models/model_Resnet_weights_recall_988.h5', overwrite=True) preds = model.evaluate(test_ds,steps=39) print ("Loss = " + str(preds[0])) print ("Test Recall = " + str(preds[1])) from sklearn.metrics import classification_report predictions = model.predict(x_test,steps = 1) #predictions = predictions.reshape(1,-1)[0] predictions = np.argmax(predictions, axis=1) #y_test = np.argmax(y_test, axis=1) #report = classification_report(, predicted) print(classification_report(np.argmax(y_test, axis=1), list(predictions), target_names = ['NORMAL (Class 0)','PNEUMONIA (Class 1)'])) from sklearn.metrics import 
confusion_matrix confusion_matrix(np.argmax(y_test, axis=1),predictions) 255/(255+135) ``` ``` acc = history.history['recall'] val_acc = history.history['val_recall'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs_range = range(len(acc)) plt.figure(figsize=(8, 8)) plt.subplot(1, 2, 1) plt.plot(epochs_range, acc, label='Training Recall') plt.plot(epochs_range, val_acc, label='Validation Recall') plt.legend(loc='lower right') plt.title('Training and Validation Recall') plt.subplot(1, 2, 2) plt.plot(epochs_range, loss, label='Training Loss') plt.plot(epochs_range, val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.show() model.summary() len(model.layers) plot_model(model, to_file='resnet_model.png') SVG(model_to_dot(model).create(prog='dot', format='svg')) imshow(np.squeeze(X_test[0]), cmap='gray') #plt.axis("off") ```
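The hand calculation `255/(255+135)` in the cell above is a ratio read directly off the confusion matrix. Here is a minimal sketch of the per-class metrics such a matrix yields; the 2×2 matrix below is hypothetical, not the notebook's actual results.

```python
import numpy as np

# Hypothetical confusion matrix, rows = true class, cols = predicted class,
# laid out as [[TN, FP], [FN, TP]] with class 1 = PNEUMONIA.
cm = np.array([[255, 135],
               [ 10, 380]])

tn, fp = cm[0]
fn, tp = cm[1]

recall = tp / (tp + fn)       # sensitivity for the PNEUMONIA class
precision = tp / (tp + fp)
specificity = tn / (tn + fp)  # a 255/(255+135)-style ratio for the NORMAL class

print(recall, precision, specificity)
```

Monitoring recall is the sensible choice here, since missing a pneumonia case (a false negative) is costlier than a false alarm.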
## Z Calibration Curves for 3D-DAOSTORM (and sCMOS). In this example we are trying to determine the coefficients $w_o,c,d,A,B,C,D$ in this equation: \begin{equation*} W_{x,y} = w_o \sqrt{1 + \left(\frac{z-c}{d}\right)^2 + A\left(\frac{z-c}{d}\right)^3 + B\left(\frac{z-c}{d}\right)^4 + C\left(\frac{z-c}{d}\right)^5 + D\left(\frac{z-c}{d}\right)^6} \end{equation*} This is a modified form of a typical microscope defocusing curve. $W_x$, $W_y$ are the widths of the localization as measured by 3D-DAOSTORM and $z$ is the localization $z$ offset in $um$. See also [Huang et al, Science, 2008](http://dx.doi.org/10.1126/science.1153529). ### Configuration To perform z-calibration you need a movie of (small) fluorescent beads or single blinking dye molecules on a flat surface such as a coverslip. You then scan the coverslip through the focus of the microscope while recording a movie. In this example we'll simulate blinking dyes on a coverslip. The PSF is created using the pupil function approach and is purely astigmatic. Create an empty directory and change to that directory. ``` import os os.chdir("/home/hbabcock/Data/storm_analysis/jy_testing/") print(os.getcwd()) ``` Generate sample data for this example. ``` import storm_analysis.jupyter_examples.dao3d_zcal as dao3d_zcal dao3d_zcal.configure() ``` ### 3D-DAOSTORM analysis of the calibration movie Set parameters for 3D-DAOSTORM analysis. Note the analysis is done using the `3d` PSF model, a Gaussian with independent widths in X/Y. ``` import storm_analysis.sa_library.parameters as params # Load the parameters. daop = params.ParametersDAO().initFromFile("example.xml") # Set for a single iteration, we don't want multiple iterations of peak finding # as this could cause stretched peaks to get split in half. daop.changeAttr("iterations", 1) # Use a large find max radius. This also reduces peak splitting. daop.changeAttr("find_max_radius", 10) # Use a higher threshold so that we don't get the dimmer localizations. 
daop.changeAttr("threshold", 18) # Don't do tracking or drift correction. daop.changeAttr("radius", 0.0) daop.changeAttr("drift_correction", 0) # Save the changed parameters. daop.toXMLFile("calibration.xml") ``` Analyze the calibration movie with 3D-DAOSTORM ``` import os import storm_analysis.daostorm_3d.mufit_analysis as mfit if os.path.exists("calib.hdf5"): os.remove("calib.hdf5") mfit.analyze("calib.tif", "calib.hdf5", "calibration.xml") ``` Check results with overlay images. ``` # Overlay image at z near zero. import storm_analysis.jupyter_examples.overlay_image as overlay_image overlay_image.overlayImage("calib.tif", "calib.hdf5", 40) ``` ### Z calibration First we will need a file containing the z-offsets for each frame. This file contains two columns: the first indicates whether the data in this frame should be used (0 = No, 1 = Yes), and the second contains the z offset in microns. ``` import numpy # In this simulation the z range went from -0.6 microns to 0.6 microns in 10nm steps. z_range = dao3d_zcal.z_range z_offsets = numpy.arange(-z_range, z_range + 0.001, 0.01) valid = numpy.ones(z_offsets.size) # Limit the z range to +- 0.4um. mask = (numpy.abs(z_offsets) > 0.4) valid[mask] = 0.0 numpy.savetxt("z_offsets.txt", numpy.transpose(numpy.vstack((valid, z_offsets)))) ``` Plot Wx / Wy versus Z curves. ``` import matplotlib import matplotlib.pyplot as pyplot # Change default figure size. matplotlib.rcParams['figure.figsize'] = (8,6) import storm_analysis.daostorm_3d.z_calibration as z_cal [wx, wy, z, pixel_size] = z_cal.loadWxWyZData("calib.hdf5", "z_offsets.txt") pyplot.scatter(z, wx, color = 'r') pyplot.scatter(z, wy, color = 'b') pyplot.show() ``` Now measure Z calibration curves. We'll do a second order fit, i.e. A,B will be fit, but not C,D. Note - The fitting is not super robust, so you may have to play with `fit_order` and `p_start` to get it to work. 
Usually it will work for `fit_order = 0`, but then it might fail for `fit_order = 1` but succeed for `fit_order = 2`. ``` # # The function z_cal.calibrate() will perform all of these steps at once. # fit_order = 2 outliers = 3.0 # Sigma to be considered an outlier. # Initial guess, this is optional, but might be necessary if your setup is # significantly different from what storm-analysis expects. # # It can also help to boot-strap to higher fitting orders. # p_start = [3.2,0.19,0.3] # Fit curves print("Fitting (round 1).") [wx_params, wy_params] = z_cal.fitDefocusingCurves(wx, wy, z, n_additional = 0, z_params = p_start) print(wx_params) p_start = wx_params[:3] # Fit curves. print("Fitting (round 2).") [wx_params, wy_params] = z_cal.fitDefocusingCurves(wx, wy, z, n_additional = fit_order, z_params = p_start) print(wx_params) p_start = wx_params[:3] # Remove outliers. print("Removing outliers.") [t_wx, t_wy, t_z] = z_cal.removeOutliers(wx, wy, z, wx_params, wy_params, outliers) # Redo fit. print("Fitting (round 3).") [wx_params, wy_params] = z_cal.fitDefocusingCurves(t_wx, t_wy, t_z, n_additional = fit_order, z_params = p_start) # Plot fit. z_cal.plotFit(wx, wy, z, t_wx, t_wy, t_z, wx_params, wy_params, z_range = 0.4) # This prints the parameter with the scale expected by 3D-DAOSTORM in the analysis XML file. z_cal.prettyPrint(wx_params, wy_params, pixel_size = pixel_size) ``` Create a parameters file with these calibration values. ``` # Load the parameters. daop = params.ParametersDAO().initFromFile("example.xml") # Update calibration parameters. z_cal.setWxWyParams(daop, wx_params, wy_params, pixel_size) # Do z fitting. daop.changeAttr("do_zfit", 1) # Set maximum allowed distance in wx, wy space that a point can be from the # calibration curve. daop.changeAttr("cutoff", 2.0) # Use a higher threshold as the Gaussian PSF is not a good match for our PSF model, so # we'll get spurious peak splitting if it is too low. 
daop.changeAttr("threshold", 12) # Don't do tracking or drift correction as this movie is the same as the calibration # movie, every frame has a different z value. daop.changeAttr("radius", 0.0) daop.changeAttr("drift_correction", 0) # Save the changed parameters. daop.toXMLFile("measure.xml") ``` ### Analyze test movie with the z-calibration parameters. ``` if os.path.exists("measure.hdf5"): os.remove("measure.hdf5") mfit.analyze("measure.tif", "measure.hdf5", "measure.xml") ``` Plot Wx / Wy versus Z curves for data from the test movie. ``` [wx, wy, z, pixel_size] = z_cal.loadWxWyZData("measure.hdf5", "z_offsets.txt") pyplot.scatter(z, wx, color = 'r') pyplot.scatter(z, wy, color = 'b') pyplot.show() ``` Plot Wx versus Wy with the z calibration curve overlaid. This can be useful for checking that your calibration curve matches your data. ``` # Load Z calibration parameters. m_params = params.ParametersDAO().initFromFile("measure.xml") [wx_params, wy_params] = m_params.getWidthParams() [min_z, max_z] = m_params.getZRange() # Z range is in microns, want nanometers. min_z = min_z * 1.0e+3 max_z = max_z * 1.0e+3 # Calculate fit z curve at high resolution fz_wx_1 = z_cal.zcalib4(wx_params, numpy.arange(min_z, max_z + 1, 10))/dao3d_zcal.pixel_size fz_wy_1 = z_cal.zcalib4(wy_params, numpy.arange(min_z, max_z + 1, 10))/dao3d_zcal.pixel_size # Calculate fit z curve at 100nm resolution. fz_wx_2 = z_cal.zcalib4(wx_params, numpy.arange(min_z, max_z + 1, 100))/dao3d_zcal.pixel_size fz_wy_2 = z_cal.zcalib4(wy_params, numpy.arange(min_z, max_z + 1, 100))/dao3d_zcal.pixel_size # Make figure. 
fig = pyplot.figure(figsize = (8,8)) pyplot.scatter(wx, wy, marker = ".") pyplot.scatter(fz_wx_2, fz_wy_2, marker = "o", s = 120, edgecolor = "black", facecolor = 'none', linewidths = 2) pyplot.plot(fz_wx_1, fz_wy_1, color = "black", linewidth = 2) pyplot.xlim(2,10) pyplot.ylim(2,10) pyplot.xlabel("Wx (pixels)") pyplot.ylabel("Wy (pixels)") pyplot.show() ``` Check how well we did at fitting Z. ``` import storm_analysis.sa_library.sa_h5py as saH5Py # Create numpy arrays with the real and the measured z values. measured_z = numpy.array([]) real_z = numpy.array([]) with saH5Py.SAH5Reader("measure.hdf5") as h5: for fnum, locs in h5.localizationsIterator(fields = ["category", "z"]): # The z fit function will place all the localizations that are too # far from the calibration curve into category 9. mask = (locs["category"] != 9) z = locs["z"][mask] measured_z = numpy.concatenate((measured_z, z)) real_z = numpy.concatenate((real_z, numpy.ones(z.size)*z_offsets[fnum])) # Plot fig = pyplot.figure(figsize = (8,8)) ax = fig.add_subplot(1,1,1) ax.scatter(real_z, measured_z, s = 4) ax.plot([-1.0,1.0],[-1.0,1.0], color = 'black', linewidth = 2) ax.axis("equal") ax.axis([-0.5, 0.5, -0.5, 0.5]) pyplot.xlabel("Actual Z (um)") pyplot.ylabel("Measured Z (um)") ``` Change the tolerance for the distance from the calibration curve and redo the Z fit. ``` import shutil import storm_analysis.sa_utilities.fitz_c as fitz_c import storm_analysis.sa_utilities.std_analysis as std_ana m_params = params.ParametersDAO().initFromFile("measure.xml") [wx_params, wy_params] = m_params.getWidthParams() [min_z, max_z] = m_params.getZRange() # Make a copy of the .hdf5 file as this operation will change it in place. shutil.copyfile("measure.hdf5", "measure_copy.hdf5") m_params.changeAttr("cutoff", 0.2) print("cutoff is", m_params.getAttr("cutoff")) # Re-fit z parameters. 
fitz_c.fitz("measure_copy.hdf5", m_params.getAttr("cutoff"), wx_params, wy_params, min_z, max_z, m_params.getAttr("z_step")) # Mark out of range peaks as category 9. The range is specified by the min_z and max_z parameters. std_ana.zCheck("measure_copy.hdf5", m_params) # Create numpy arrays with the real and the measured z values. measured_z = numpy.array([]) real_z = numpy.array([]) with saH5Py.SAH5Py("measure_copy.hdf5") as h5: for fnum, locs in h5.localizationsIterator(fields = ["category", "z"]): # The z fit function will place all the localizations that are too # far from the calibration curve into category 9. mask = (locs["category"] != 9) z = locs["z"][mask] measured_z = numpy.concatenate((measured_z, z)) real_z = numpy.concatenate((real_z, numpy.ones(z.size)*z_offsets[fnum])) # Plot fig = pyplot.figure(figsize = (8,8)) ax = fig.add_subplot(1,1,1) ax.scatter(real_z, measured_z, s= 4) ax.plot([-1.0,1.0],[-1.0,1.0], color = 'black', linewidth = 2) ax.axis("equal") ax.axis([-0.5, 0.5, -0.5, 0.5]) pyplot.xlabel("Actual Z (um)") pyplot.ylabel("Measured Z (um)") ```
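For reference, the modified defocusing curve at the top of this example can be written out directly. This is an illustrative reimplementation, not storm-analysis's own `z_cal.zcalib4()`, and the parameter values below come from the `p_start` initial guess used earlier ($w_o = 3.2$, $c = 0.19$, $d = 0.3$), not from a fitted calibration.

```python
import numpy as np

def defocus_width(z, w0, c, d, A=0.0, B=0.0, C=0.0, D=0.0):
    """Modified defocusing curve W(z) = w0 * sqrt(1 + t^2 + A*t^3 + B*t^4 + C*t^5 + D*t^6)
    with t = (z - c) / d, matching the equation at the top of this example.
    Illustrative only; the real fit is done by storm_analysis's z_calibration module.
    """
    t = (z - c) / d
    return w0 * np.sqrt(1.0 + t**2 + A * t**3 + B * t**4 + C * t**5 + D * t**6)

# At z == c the width collapses to w0; one focal depth d away from c,
# with all higher-order terms zero, it grows by a factor of sqrt(2).
w = defocus_width(np.array([0.19, 0.49]), w0=3.2, c=0.19, d=0.3)
print(w)
```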
# Functions ### Python functions are first class, which means you can assign them as variables and pass them into functions just like any other variable. ``` def double(x): """Multiplies its input by 2""" return x * 2 def apply_to_one(f): """Calls the function f with 1 """ return f(1) double_function = double apply_to_one(double) ``` ### It is also easy to create short anonymous functions ``` apply_to_one(lambda x: x + 4) ``` ### Parameters can be given default arguments ``` def my_print(message = "a default message"): print(message) my_print("hello") my_print() ``` ### You can specify arguments by name ``` def full_name(first = "Whats his name", last = "Something"): print(f"{first} {last}") full_name("Joel", "Grus") full_name("Joel") full_name (last="Grus") ``` # Strings ### Single and double quotes ``` single_quotes = 'this is a string' double_quotes = "this is a string" ``` ### Raw strings ``` tab_string = r"\t" len(tab_string) ``` ### Multiline strings ``` multi_line_string = """First line. Second line. Third line. 
""" print(multi_line_string) ``` ### f-strings ``` first = "Joel" last = "Grus" print(f"{first} {last}") ``` # Exceptions ``` print(0 / 0) try: print(0 / 0) except ZeroDivisionError: print("cannot divide by zero") ``` # Lists ### Types of lists ``` integer_list = [1, 2, 3] heterogeneous_list = ["string", 0.1, True] list_of_lists = [integer_list, heterogeneous_list, []] print(f"length of integer_list: {len(integer_list)}") print(f"sum of integer_list: {sum(integer_list)}") ``` ### Subset lists ``` x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] zero = x[0] one = x[1] nine = x[-1] eight = x[-2] first_three = x[:3] three_to_end = x[3:] last_three = x[-3:] without_first_and_last = x[1:-1] copy_of_x = x[:] every_third = x[::3] five_to_three = x[5:2:-1] ``` ### Lists are mutable ``` print(x) x[0] = -1 print(x) ``` ### In operator ``` true = 1 in [1, 2, 3] false = 0 in [1, 2, 3] print(true, false) # only use when list is small ``` ### Add to lists ``` x = [1, 2, 3] y = x + [4, 5, 6] y.extend([7, 8, 9]) # extend modifies y in place and returns None y.append(10) print(y) ``` ### Unpack lists ``` x, y = [1, 2] _, z = ["dont care about this", 3] print(x) print(y) print(z) ``` # Tuples ### Tuples are immutable ``` my_list = [1, 2] my_tuple = (1, 2) other_tuple = 3, 4 my_list[1] = 3 try: my_tuple[1] = 3 except TypeError: print("cannot modify a tuple") ``` ### Use tuples to return multiple values from functions ``` def sum_and_product(x, y): return (x + y), (x * y) sp = sum_and_product(2, 3) print (sp) s, p = sum_and_product(2, 3) print(s) print(p) ``` ### Multiple assignment ``` x, y = 1, 2 ``` # Dictionaries ### Initiate dictionary ``` empty_dict = {} empty_dict2 = dict() grades = { "Joel": 80, "Tim": 95 } ``` ### Retrieve value using the key ``` joels_grade = grades["Joel"] ``` ### KeyError when value does not exist ``` try: kates_grade = grades["Kate"] except KeyError: print("no grade for Kate!") ``` ### Check for existence ``` joel_has_grade = "Joel" in grades kate_has_grade = "Kate" in grades print(joel_has_grade, 
kate_has_grade) ``` ### Get method circumvents the exception ``` joels_grade = grades.get("Joel", 0) kates_grade = grades.get("Kate", 0) no_ones_grade = grades.get("No One") print(joels_grade, kates_grade, no_ones_grade) ``` ### Add to dictionary ``` grades["Tim"] = 99 grades["Kate"] = 100 grades ``` ### Dictionaries to represent data ``` tweet = { "user": "joelgrus", "text": "Data Science is Awesome", "retweet_count": 100, "hashtags": ["#data", "#science"] } tweet_keys = tweet.keys() tweet_values = tweet.values() tweet_items = tweet.items() print(tweet_keys) print("") print(tweet_values) print("") print(tweet_items) ``` ### Pythonic way to check for keys, values ``` "user" in tweet "joelgrus" in tweet.values() #slow but only way ``` ### defaultdict ``` document = "My name is Joel Gruss! Please to meet you. I have come from far far away." word_counts = {} for word in document.split(): if word in word_counts: word_counts[word] += 1 else: word_counts[word] = 1 print(word_counts) word_counts = {} for word in document.split(): try: word_counts[word] += 1 except KeyError: word_counts[word] = 1 print(word_counts) word_counts = {} for word in document.split(): previous_count = word_counts.get(word, 0) word_counts[word] = previous_count + 1 print(word_counts) from collections import defaultdict word_counts = defaultdict(int) for word in document.split(): word_counts[word] += 1 print(word_counts) dd_list = defaultdict(list) dd_list[2].append(1) dd_list dd_dict = defaultdict(dict) dd_dict["Joel"]["City"] = "Seattle" dd_dict dd_pair = defaultdict(lambda: [0,0]) dd_pair[2][1] = 1 dd_pair ``` # Counters ### Easy way to count frequency ``` from collections import Counter c = Counter([0, 1, 2, 0]) c document = "My name is Joel Gruss! Please to meet you. I have come from far far away." 
word_counts = Counter(document.split()) word_counts ``` ### Most common method ``` word_counts.most_common(1) ``` # Sets ### Data structure for only distinct elements ``` primes_below_10 = {2, 3, 5, 7} s = set() s.add(1) s.add(2) len(s) 3 in s ``` ### In operator is very fast in sets ``` stopwords_list = ["a", "an", "at"] + ["hundreds_of_other_words"] + ["yet", "you"] "zip" in set(stopwords_list) ``` ### Find distinct items in a collection ``` item_list = [1, 2, 3, 1, 2, 3] num_items = len(item_list) item_set = set(item_list) num_distinct_items = len(item_set) distinct_item_list = list(item_set) ``` # Control Flow ### if statements ``` if 1 > 2: message = "if only 1 were greater than 2" elif 1 > 3: message = "else if" else: message = "when all else fails" x = 2 parity = "even" if x % 2 == 0 else "odd" parity ``` ### while loop ``` x = 0 while x < 10: print(f"{x} is less than 10") x += 1 ``` ### for loop ``` for x in range(10): print(f"{x} is less than 10") for x in range(10): if x == 3: continue if x == 5: break print(x) ``` # Truthiness ``` one_is_less_than_two = 1 < 2 true_equals_false = True == False print(one_is_less_than_two) print(true_equals_false) x = None assert x == None assert x is None ``` ### Falsy values ``` if not False: print('falsy value!') if not None: print('falsy value!') if not 0: print('falsy value!') if not 0.0: print('falsy value!') if not []: print('falsy value!') if not {}: print('falsy value!') if not set(): print('falsy value!') s = "" if s: first_char = s[0] else: first_char = "" first_char x = None safe_x = x if x is not None else 0 safe_x ``` ### all and any ``` all([True, True]) all([True, False]) any([True, False]) all([]) any([]) ``` # Sorting ``` x = [4, 3, 2, 1] y = sorted(x) print(x, y) x.sort() print(x) x = sorted([-4, 1, -2, 3], key=abs, reverse=True) x from collections import Counter document = "My name is Joel Gruss! Please to meet you. I have come from far far away." 
word_counts = Counter(document.split()) word_counts wc = sorted(word_counts.items(), key=lambda word_and_count: word_and_count[1], reverse=True) wc ``` # List Comprehensions ``` even_numbers = [x for x in range(5) if x % 2 == 0] even_numbers squares = [x**2 for x in range(5)] squares even_squares = [x**2 for x in range(5) if x**2 % 2 ==0] even_squares square_dict = {x: x**2 for x in range(5) if x % 2 ==0} square_dict square_set = set(x**2 for x in [1, -1]) square_set zeros = [0 for _ in range(5)] zeros pairs = [(x, y) for x in range(2) for y in range(2) ] pairs ``` # Automated Testing and assert ``` assert 1 + 1 == 2, "1 + 1 should equal 2" from typing import Tuple def minimum_of_elements(xs: Tuple[int]) -> int: return min(xs) assert minimum_of_elements([10, 30 , 4]) == 4 assert minimum_of_elements([-1 , 3 , 5]) == -1 def minimum_of_elements(xs: Tuple[int]) -> int: assert xs, "empty list has no minimum" return min(xs) minimum_of_elements([]) ``` # Object-Oriented Programming ### Create a class ``` class CountingClicker: """Class maintains a count, can be clicked to increment the count, allows you to read_count, and can be reset back to zero. """ def __init__(self, count=0): self.count = count def __repr__(self): return f"CountingClicker(count={self.count})" def click(self, num_times = 1): """Click the clicker some number of times""" self.count += num_times def read(self): return self.count def reset(self): self.count = 0 clicker = CountingClicker() assert clicker.read() == 0 clicker.click(2) assert clicker.read() == 2 clicker.reset() assert clicker.read() == 0 ``` ### Create a subclass which inherits a class ``` class NoRestClicker(CountingClicker): """Class maintains a count, can be clicked to increment the count, allows you to read_count, but cannot be reset back to zero. 
""" def reset(self): pass clicker = NoRestClicker() assert clicker.read() == 0 clicker.click(2) assert clicker.read() == 2 clicker.reset() assert clicker.read() == 2 ``` # Iterables and Generators ``` def generate_range(n): i = 0 while i < n: yield i i += 1 for i in generate_range(10): print(f"i: {i}") def natural_numbers(): """returns 1, 2 ,3, ...""" n = 1 while True: yield n n += 1 data = natural_numbers() evens = (x for x in data if x % 2 ==0) even_squares = (x ** 2 for x in evens) even_squares_ending_in_six = (x for x in even_squares if x % 10 == 6) next(even_squares_ending_in_six), next(even_squares_ending_in_six) ``` ### Iterate over values and indices ``` names = ["Alice", "Bob", "Charlie", "Debbie"] for i, name in enumerate(names): print(f"name {i} is {name}") ``` # Randomness ``` import random random.seed(10) # ensure we get the same results always four_uniform_randoms = [random.random() for _ in range(4)] four_uniform_randoms random.randrange(10) # random from range [0,...,9] random.randrange(3, 6) # random from range [3, 4, 5] up_to_ten = [i for i in range(11)] up_to_ten random.shuffle(up_to_ten) up_to_ten random.choice(["Alice", "Bob", "Charlie", "Debbie"]) random.sample(up_to_ten, 2) # without replacement [random.choice(up_to_ten) for _ in range(4)] # with replacement ``` # Regular Expressions ``` import re re_examples = [ not re.match("a", "cat"), # 'cat' doesn't start with 'a' re.search("a", "cat"), # 'cat' has an 'a' in it not re.search("c", "dog"), # 'dog' doesn't have a 'c' in it 3 == len(re.split("[ab]", "carbs")), # Split on a or b to ['c','r','s'] "R-D-" == re.sub("[0-9]", "-", "R2D2") # Replace digits with dashes ] assert all(re_examples) == True ``` # Zip and Argument Unpacking ### Zip ``` list1 = ['a', 'b', 'c'] list2 = [1, 2, 3] pairs = [pair for pair in zip(list1, list2)] pairs letters, numbers = zip(*pairs) letters, numbers ``` ### Argument unpacking ``` def add(a, b): return a + b add(1, 2) try: add([1, 2]) except TypeError: 
print("add expects two inputs") add(*[1, 2]) # returns 3 ``` # args and kwargs ``` def doubler(f): """Function mulitplies f * 2""" def g(x): return 2 * f(x) return g def f1(x): return x + 1 g = doubler(f1) assert g(3) == 8 assert g(-1) == 0 ``` ### what happends when more than one argument? ``` def f2(x, y): return x + y g = doubler(f2) try: g(1, 2) except TypeError: print("as defined g only takes one argument") ``` ### solution ``` def magic(*args, **kwargs): print("unnamed args:", args) print("named args:", kwargs) magic(1, 2, key="word", key2="word2") def other_way_magic(x, y, z): return x + y + z x_y_list = [1, 2] z_dict = {"z": 3} assert other_way_magic(*x_y_list, **z_dict) == 6 def doubler(f): """Function mulitplies f * 2""" def g(*args, **kwargs): return 2 * f(*args, **kwargs) return g def f2(x, y): return x + y g = doubler(f2) g(1, 2) ``` # Type Annotations ### Python does not care about types of objects ``` def add(a, b): return a + b assert add(10, 5) == 15 # numbers assert add([1, 2], [3]) == [1, 2, 3] # lists assert add("hi ", "there") == "hi there" # strings try: add(5, "ten") except: print("Cannot add int and string") ``` ### Type annotations use mypy ``` def add(a: int, b: int) -> int: return a + b assert add(10, 5) == 15 # this ok assert add("hi ", "there") == "hi there" # this not ok from typing import List def total(xs: List[float]) -> float: return sum(xs) from typing import Optional values: List[int] = [] best_so_far: Optional[float] = None from typing import Dict, Iterable, Tuple counts: Dict[str, int] = {'data': 1, 'sciecne': 2} evens: Iterable[int] = (x for x in range(10) if x % 2 == 0) triple: Tuple[int, float, int] = (10, 2.3, 5) from typing import Callable def twice(repeater: Callable[[str, int], str], s: str) -> str: return repeater(s, 2) def repeater(s: str, n: int) -> str: n_copies = [s for _ in range(n)] return ', '.join(n_copies) twice(repeater, "hi") Number = float Numbers = List[Number] def total(xs: Numbers) -> Number: return 
sum(xs) ```
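A small follow-up sketch (not from the original notebook): Python stores annotations but does not enforce them at runtime; a checker like mypy is what reports the mismatches.

```python
def add(a: int, b: int) -> int:
    return a + b

# Annotations are just metadata stored on the function object:
print(add.__annotations__)  # {'a': <class 'int'>, 'b': <class 'int'>, 'return': <class 'int'>}

# The interpreter itself performs no checking; this runs fine,
# even though mypy would reject it:
print(add("hi ", "there"))  # hi there
```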
<table class="table table-bordered"> <tr> <th style="text-align:center; width:25%"><img src='https://www.np.edu.sg/PublishingImages/Pages/default/odp/ICT.jpg' style="width: 250px; height: 125px; "></th> <th style="text-align:center;"><h1>Deep Learning</h1><h2>Practical 3a - Data Processing Using Pandas</h2><h3>AY2020/21 Semester</h3></th> </tr> </table> ## Objectives After completing this practical exercise, students should be able to: 1. [Understand the basics of Pandas for data processing tasks](#demo) 2. [Exercise: Practise data processing on a different dataset](#exc) ## 1. Pandas <a id='demo' /> This is a short introduction on Pandas Package. For more details, please refer to a 10 minutes Pandas tutorial at: https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html. We will be using two csv files (`nba.csv` and `Players.csv`) in this Practical. You can download both files from MEL and save them at the same folder as this Practical File (.ipynb). ``` import numpy as np import pandas as pd #Load the csv file to a DataFrame variable df df = pd.read_csv('nba.csv') print(df) # Display the type of data for each column df.dtypes # Display the first few rows of data df.head() # Drop the NA records df.dropna(inplace = True) df.head() # Display the index df.index # Display the columns df.columns # shows a quick statistic summary of your data (only for the numerical data columns) df.describe() # select mutiple columns (numerical data columns) df2=df.loc[:,['Number','Age','Weight','Salary']] print(df2.head()) # convert the DataFrame to a Numpy Array array= df2.values # option 1 array= df2.to_numpy() # option 2: New in version 0.24.0. print(array) print(array.shape) # convert Numpy Array to DataFrame df3=pd.DataFrame(array) df3.head() # convert Numpy Array to DataFrame with column names indicated df3=pd.DataFrame(array, columns =['Number','Age','Weight','Salary']) df3.head() # export DataFrame to a csv file df3.to_csv('nba_new.csv') ``` ## 2. 
Exercise <a id='exc' /> Load the data from `Players.csv` and complete the tasks below using what you learned in this practical. ``` # Task 1: Load the csv file 'Players.csv' to a DataFrame variable df # Task 2: Clean up the data (if required) and select all the numeric columns & assign to a new DataFrame df2 # Task 3: Convert df2 to a Numpy Array # Task 4: Convert Numpy Array to DataFrame df3 with column names indicated # Task 5: Export df3 to a new csv file (Players_new.csv) ```
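One possible (unofficial) solution sketch for the exercise, using a small inline DataFrame as a stand-in for `Players.csv` (the real file's column names may differ, so treat `height`/`weight` as hypothetical):

```python
import numpy as np
import pandas as pd

# Stand-in for pd.read_csv('Players.csv'); column names are hypothetical
df = pd.DataFrame({
    'Player': ['A', 'B', 'C'],
    'height': [180.0, None, 175.0],
    'weight': [80.0, 90.0, 70.0],
})

# Task 2: drop incomplete rows, then keep only the numeric columns
df2 = df.dropna().select_dtypes(include=[np.number])

# Task 3: convert to a NumPy array
array = df2.to_numpy()

# Task 4: back to a DataFrame with the column names indicated
df3 = pd.DataFrame(array, columns=df2.columns)

# Task 5: export (commented out to keep this sketch side-effect free)
# df3.to_csv('Players_new.csv')
print(df3.shape)  # (2, 2)
```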
# Linear Regression https://jakevdp.github.io/PythonDataScienceHandbook/ (a useful book I forgot to add last time) # Plan for today: 1. How do we compare different solutions to a regression problem? 2. How do we fit the parameters of a linear model? 3. How can we recover nonlinear models with a linear model? 4. What do we do if we have many features? 5. The overfitting problem ``` # To start, let's look at a simple example import numpy as np from matplotlib import pyplot as plt %matplotlib inline plt.rc('font', **{'size':18}) ``` Let's generate a dataset. Our data has one feature $x$ and one target variable $y$; we then add to the target a little noise drawn from the normal distribution $N(0, 10)$: $$y = f(x) = 3 x + 6 + N(0, 10), x \in [0, 30]$$ ``` random = np.random.RandomState(4242) X = np.linspace(0, 30, 60) y = X * 3 + 6 y_noisy = y + random.normal(scale=10, size=X.shape) plt.figure(figsize=(15,5)) plt.plot(X, y, label='Y = 3x + 6', color='tab:red', lw=0.5); plt.scatter(X, y_noisy, label='Y + $\epsilon$', color='tab:blue', alpha=0.5, s=100) plt.title('True law (red line) vs Observations (blue points)') plt.xlabel('X, feature') plt.ylabel('Y, target') plt.legend(); ``` Regression is the task of recovering the law (function) $f(x)$ from a set of observations $(x, y)$. Given a new value of $x$ that I have never seen before, can I predict the value of $y$ for it? ``` plt.figure(figsize=(15,5)) plt.scatter(X, y_noisy, label='Y + $\epsilon$', color='tab:blue', alpha=0.5, s=100) plt.scatter(40, -5, marker='x', s=200, label='(X=40, Y=?)', color='tab:red') plt.plot([40,40], [-7, 250], ls='--', color='tab:red'); plt.text(35, 150, 'Y = ???'); plt.xlabel('X, feature') plt.ylabel('Y, target') plt.legend(loc=2); ``` The linear regression model proposes to draw a straight line through this cloud of points, i.e. to look for the function $f(x)$ in the form $f(x) = ax + b$, which reduces the problem to finding the two coefficients $a$ and $b$.
This, however, raises two important questions: 1. Suppose we somehow found two lines $(a_1, b_1)$ and $(a_2, b_2)$. How do we tell which of the two is better? And what does "better" even mean? 2. How do we find the coefficients `a` and `b`? ## 1. Which line is better? ``` # plot1 was never defined in the original notebook; this minimal helper matches how it is called below def plot1(axes, a, b, color): axes.scatter(X, y_noisy, c='tab:red', alpha=0.5, s=100) axes.plot(X, X * a + b, color=color, label='y = {}x + {}'.format(a, b)) axes.legend() plt.figure(figsize=(20,20)) plot1(plt.subplot(221), 2, 4, 'tab:blue') plot1(plt.subplot(222), 2.5, 15, 'tab:green') plot1(plt.subplot(223), 3, 6, 'tab:orange') axes = plt.subplot(224) axes.scatter(X, y_noisy, c='tab:red', alpha=0.5, s=100) y_hat = X * 2 + 4 axes.plot(X, y_hat, color='tab:blue', label='$f_1(x)=2x+4$') y_hat = X * 2.5 + 15 axes.plot(X, y_hat, color='tab:green', label='$f_2(x)=2.5x+15$') y_hat = X * 3 + 6 axes.plot(X, y_hat, color='tab:orange', label='$f_3(x)=3x+6$'); axes.legend(); ``` It seems that $f_1$ (the blue line) can be ruled out right away, but how do we choose between the remaining two? The intuitive answer is: compute the prediction error. That is, for every point in the set $X$ (for which we know the value of $y$), we can use the function $f(x)$ to compute the corresponding $y_{pred}$, and then compare $y$ and $y_{pred}$. ``` plt.figure(figsize=(10,10)) plt.scatter(X, y_noisy, s=100, c='tab:blue', alpha=0.1) y_hat = X * 3 + 6 plt.plot(X, y_hat, label='$y_{pred} = 3x+6$') plt.scatter(X[2:12], y_noisy[2:12], s=100, c='tab:blue', label='y') for _x, _y in zip(X[2:12], y_noisy[2:12]): plt.plot([_x, _x], [_y, 3*_x+6], c='b') plt.legend(); ``` How should we measure this difference? There are many ways: $$ MSE(\hat{f}, x) = \frac{1}{N} \sum_{i=1}^{N}\left(\ y_i - \hat{f}(x_i)\ \right) ^ 2 $$ $$ MAE(\hat{f}, x) = \frac{1}{N} \sum_{i=1}^{N}\left|\ y_i - \hat{f}(x_i)\ \right| $$ $$ RMSLE(\hat{f}, x) = \sqrt{\frac{1}{N} \sum_{i=1}^{N}\left(\ \log(y_i + 1) - \log(\hat{f}(x_i) + 1)\ \right)^2} $$ $$ MAPE (\hat{f}, x) = \frac{100}{N} \sum_{i=1}^{N}\left| \frac{y_i - \hat{f}(x_i)}{y_i} \right| $$ and others.
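As a sketch, the four error measures above can be implemented directly with NumPy (assuming, for RMSLE, nonnegative values, and for MAPE, no zeros in $y$):

```python
import numpy as np

def mse(y, y_pred):
    return np.mean((y - y_pred) ** 2)

def mae(y, y_pred):
    return np.mean(np.abs(y - y_pred))

def rmsle(y, y_pred):
    # assumes y and y_pred are nonnegative
    return np.sqrt(np.mean((np.log1p(y) - np.log1p(y_pred)) ** 2))

def mape(y, y_pred):
    # assumes y contains no zeros
    return 100 * np.mean(np.abs((y - y_pred) / y))

y_true = np.array([1.0, 2.0, 4.0])
y_hat = np.array([1.0, 2.0, 2.0])
print(mse(y_true, y_hat))  # 4/3 ≈ 1.33
```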
--- **Question 1.** Why not measure the error like this: $$ ERROR(\hat{f}, x) = \frac{1}{N} \sum_{i=1}^{N}\left(\ y_i - \hat{f}(x_i)\ \right) $$ --- **Question 2.** How do `MSE`, `MAE`, `RMSLE` and `MAPE` differ? Is it true that the model that is best under one measure is always best under the others? --- For now we will stick with MSE. Let's now compare our lines using MSE ``` def mse(y1, y2, prec=2): return np.round(np.mean((y1 - y2)**2),prec) def plot2(axes, a, b, color='b', X=X, y=y_noisy): axes.plot(X, y, 'r.') y_hat = X * a + b axes.plot(X, y_hat, color=color, label='y = {}x + {}'.format(a,b)) axes.set_title('MSE = {:.2f}'.format(mse(y_hat, y))); axes.legend() plt.figure(figsize=(20,12)) plot2(plt.subplot(221), 2.5, 15, 'g') plot2(plt.subplot(222), 3, 6, 'orange') plot2(plt.subplot(223), 2, 4, 'b') ``` Clearly, the smaller the MSE, the smaller the prediction error, so we should choose the model with the smallest MSE. In our case that is $f_3(x) = 3x+6$. Great, we have answered the first question, how to choose one line among many; now let's try to answer the second. # 2. How do we find the parameters of the line? Let's fix what we know at this point. 1. We have data in the form of a set of pairs $X$ and $y$: $\{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$ 2. We want to find a function $\hat{f}(x)$ that minimizes $$MSE(\hat{f}, x) = \frac{1}{N} \sum_{i=1}^{N}\left(\ y_i - \hat{f}(x_i)\ \right) ^ 2 \rightarrow \text{min}$$ 3. We will look for $\hat{f}(x)$ under the assumption that it is a linear function: $$\hat{f}(x) = ax + b$$ ---- Substituting $\hat{f}(x)$ into the expression for MSE, we get: $$ \frac{1}{N} \sum_{i=1}^{N}\left(\ y_i - ax_i - b\ \right) ^ 2 \rightarrow \text{min}_{a,b} $$ This can be done in at least two ways: 1. Analytically: rewrite the expression in vector form, take the first derivative, set it to 0, and solve for the parameters. 2.
Numerically: compute the partial derivatives with respect to a and b and use gradient descent. A detailed analytical derivation can be found, for example, here https://youtu.be/Y_Ac6KiQ1t0?list=PL221E2BBF13BECF6C (after watching it, it will also become clear where MSE comes from). For what follows, it will be useful for us to do it head-on (without going too deep into the reasons) ----- The vector $y$ has dimension $n \times 1$, and so does the vector $x$. Let's apply the following trick: turn the vector $x$ into a matrix $X$ of size $n \times 2$ whose first column consists entirely of 1s. Then, denoting $\theta = [b, a]$, we get the expression for MSE in vector form: $$ \frac{1}{n}(y - X \theta)^{T}(y - X \theta) \rightarrow min_{\theta} $$ taking the derivative with respect to $\theta$ and setting it to 0, we get: $$ y = X \theta $$ since the matrix $X$ is not square and has no inverse, we multiply both sides by $X^T$ on the left $$ X^T y = X^T X \theta $$ the matrix $X^T X$ is, with rare exceptions (which ones?), invertible, so we finally obtain the expression for $\theta$: $$ \theta = (X^T X)^{-1} X^T y $$ let's now carry out these steps on our data (the formula giving the expression for $\theta$ is called the Normal equation) ``` print(X.shape, y.shape) print('----------') print('First few values of X: ', np.round(X[:5],2)) print('First few values of Y: ', np.round(y[:5],2)) X_new = np.ones((60, 2)) X_new[:, 1] = X y_new = y.reshape(-1,1) print(X_new.shape, y_new.shape) print('----------') print('First few values of X:\n', np.round(X_new[:5],2)) print('First few values of Y:\n', np.round(y_new[:5],2)) theta = np.linalg.inv((X_new.T.dot(X_new))).dot(X_new.T).dot(y_new) print(theta) ``` So we have recovered the function $f(x) = 3 x + 6$ (which, entirely by chance, coincides with $f_3(x)$). Excellent, that was a beautiful victory! What's next? Next, we are interested in two questions: 1. What if the original function came from a nonlinear source (e.g. $y = 3 x^2 +1$)? 2. What do we do if we have not one feature but many? (i.e.
the matrix $X$ has size $n \times (m+1)$ rather than $n \times 2$, where $m$ is the number of features) # 3. What if we need to recover a nonlinear relationship? ``` plt.figure(figsize=(10,10)) x = np.linspace(-3, 5, 60).reshape(-1,1) y = 3*x**2 + 1 + random.normal(scale=5, size=x.shape) y_model = 3*x**2 + 1 plt.scatter(x, y, label='$y = 3x^2 + 1 +$ N(0,5)') plt.plot(x, y_model, label='$y = 3x^2 + 1$') plt.legend(); ``` Let's use the linear regression implementation from **sklearn** for this ``` from sklearn.linear_model import LinearRegression model = LinearRegression() model.fit(x, y) print('y = {} X + {}'.format(np.round(model.coef_[0][0],2), np.round(model.intercept_[0], 2))) ``` Note that we did not add a column of 1s to the matrix X, because the LinearRegression class has a fit_intercept parameter (True by default). Let's now see what this looks like ``` plt.figure(figsize=(20,15)) ax1 = plt.subplot(221) ax1.scatter(x, y, ) ax1.plot(x, y_model, label=f'True source: $y = 3x^2 + 1$\nMSE={mse(y, y_model)}') ax1.legend(); y_pred = model.coef_[0][0] * x + model.intercept_[0] ax2 = plt.subplot(222) ax2.scatter(x, y,) ax2.plot(x, y_pred, label='Predicted curve: $y = {} x + {}$\nMSE={}'.format(np.round(model.coef_[0][0],2), np.round(model.intercept_[0],2), mse(y, y_pred)), c='r') ax2.legend(); ``` It seems that in this case predicting with a "straight line" is not the best idea, so what do we do? If linear features do not give the desired result, we should add nonlinear ones!
For example, let's look for the parameters $a$ and $b$ of a function of this form: $$ f(x) = ax^2 + b $$ ``` x_new = x**2 model.fit(x_new, y) print('y = {} x^2 + {}'.format(np.round(model.coef_[0][0],2), np.round(model.intercept_[0], 2))) plt.figure(figsize=(20,15)) ax1 = plt.subplot(221) ax1.scatter(x, y, ) ax1.plot(x, y_model, label='True source: $y = 3x^2 + 1$\nMSE={}'.format(mse(y, y_model))) ax1.legend(); y_pred = model.coef_[0][0] * x_new + model.intercept_[0] ax2 = plt.subplot(222) ax2.scatter(x, y,) ax2.plot(x, y_pred, label='Predicted curve: $y = {} x^2 + {}$\nMSE={}'.format(np.round(model.coef_[0][0],2), np.round(model.intercept_[0],2), mse(y, y_pred)), c='r') ax2.legend(); ``` A few remarks 1. The resulting function is even better in terms of MSE than the source (the reason is the noise) 2. The regression is still called linear (strictly speaking, it is linear not in the features X, but in the parameters $\theta$). The regression is called linear because the predicted value is a **linear combination of features**; the algorithm has no idea that we squared something or raised it to some other power. 3. How did I know that the feature to add was exactly the quadratic one? (I didn't, I just guessed; we will see later how to do this) ### 3.1 Task: use the Normal equation to fit the parameters a and b Normal equation: $$ \theta = (X^T X)^{-1} X^T y $$ Clarification: the parameters a and b should be fitted for a function of the form $f(x) = ax^2 + b$ # 4. What if we have not one feature but many? Great, now we know what to do when we have one feature and one target variable (for example, predicting weight from height, the price of an apartment from its area, or taxi pickup time from the time of day). But what do we do when there are several factors? For that, let's look at the Normal equation once more: $$ \theta_{m\times 1} = (X^T_{m\times n} X_{n\times m})^{-1} X^T_{m\times n} y_{n\times 1} $$ Having computed $\theta$, how will we make predictions for a new observation $x$?
$$ y = x_{1\times m} \times \theta_{m\times 1} $$ And what if we now have not one feature but several (say $m$), how do the dimensions change? The dimension of X becomes $n\times (m+1)$: $n$ rows and $(m+1)$ columns (the dimension of $y$ does not change). Plugging this into the Normal equation, we see that the dimension of $\theta$ has changed and become $(m+1)\times 1$, and we make predictions exactly as before: $y = x \times \theta$, or, expanding $\theta$: $$ y = \theta_0 + x^{[1]}\theta_1 + x^{[2]}\theta_2 + \ldots + x^{[m]}\theta_m $$ here the superscripts are feature indices (column numbers in the matrix $X$), not observation indices, and not arithmetic powers. ----- Great, so we can build a linear regression with several features too; what's next? Next we need to answer (yet another) two questions: 1. How, after all, do we choose which features to generate? 2. How do we do this with functions from **sklearn**? We will answer both questions, but first let's walk through an example to demonstrate the **interpretability** of a linear model. # Example. Predicting real estate prices.
The **interpretability** of a linear model means that **increasing the value of a feature by 1** leads to **an increase of the target variable** by the corresponding value of **theta** (that feature's coefficient in the linear model): $$ f(x_i) = \theta_0 + \theta_1 x_i^{[1]} + \ldots + \theta_j x_i^{[j]} + \ldots + \theta_m x_i^{[m]} $$ Increase the value of feature $j$ of observation $x_i$: $$ \bar{x}_i^{[j]} = x_i^{[j]} + 1 $$ the change in the function value will be: $$ \Delta(f(x)) = f(\bar{x}_i) - f(x_i) = \theta_j $$ ``` import pandas as pd from sklearn.metrics import mean_squared_log_error # the data can be taken from here ---> https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data # house_data = pd.read_csv('train.csv', index_col=0) # trunc_data = house_data[['LotArea', '1stFlrSF', 'BedroomAbvGr', 'SalePrice']] # trunc_data.to_csv('train_house.csv') house_data = pd.read_csv('train_house.csv', index_col=0) house_data.head() X = house_data[['LotArea', '1stFlrSF', 'BedroomAbvGr']].values y = house_data['SalePrice'].values model = LinearRegression() model.fit(X, y) y_pred = model.predict(X) print('Linear coefficients: ', list(model.coef_), 'Intercept: ', model.intercept_) print('RMSLE: ', np.sqrt(mean_squared_log_error(y, y_pred))) print('MSE: ', mse(y, y_pred)) for y_t, y_p in zip(y[:5], y_pred[:5]): print(y_t, np.round(y_p, 3), np.round(mean_squared_log_error([y_t], [y_p]), 6)) plt.figure(figsize=(7,7)) plt.scatter(y, y_pred); plt.plot([0, 600000], [0, 600000], c='r'); plt.text(200000, 500000, 'Overestimated\narea') plt.text(450000, 350000, 'Underestimated\narea') plt.xlabel('True value') plt.ylabel('Predicted value'); ``` Back to our questions: 1. How, after all, do we choose which features to generate? 2. How do we do this with functions from **sklearn**? ## 5. Feature generation.
``` X = np.array([0.76923077, 1.12820513, 1.48717949, 1.84615385, 2.20512821, 2.56410256, 2.92307692, 3.28205128, 3.64102564, 4.]).reshape(-1,1) y = np.array([9.84030322, 26.33596415, 16.68207941, 12.43191433, 28.76859577, 32.31335979, 35.26001044, 31.73889375, 45.28107096, 46.6252025]).reshape(-1,1) plt.scatter(X, y); ``` Let's try a simple model with one feature: $$ f(x) = ax + b $$ ``` lr = LinearRegression() lr.fit(X, y) y_pred = lr.predict(X) plt.scatter(X, y); plt.plot(X, y_pred); plt.title('MSE: {}'.format(mse(y, y_pred))); ``` Let's add quadratic features ``` from sklearn.linear_model import LinearRegression from sklearn.preprocessing import PolynomialFeatures from sklearn.metrics import mean_squared_error poly = PolynomialFeatures(degree=2) X_2 = poly.fit_transform(X) print(X_2[:3]) lr = LinearRegression() lr.fit(X_2, y) y_pred_2 = lr.predict(X_2) plt.scatter(X, y); plt.plot(X, y_pred_2); plt.title('MSE: {}'.format(mse(y, y_pred_2))); ``` Let's add cubic ones ``` poly = PolynomialFeatures(degree=3) X_3 = poly.fit_transform(X) print(X_3[:3]) lr = LinearRegression() lr.fit(X_3, y) y_pred_3 = lr.predict(X_3) plt.scatter(X, y); plt.plot(X, y_pred_3); plt.title('MSE: {}'.format(mse(y, y_pred_3))); ``` We need to go deeper.. ``` def plot3(ax, degree): poly = PolynomialFeatures(degree=degree) _X = poly.fit_transform(X) lr = LinearRegression() lr.fit(_X, y) y_pred = lr.predict(_X) ax.scatter(X, y); ax.plot(X, y_pred, label='MSE={}'.format(mse(y,y_pred))); ax.set_title('Polynom degree: {}'.format(degree)); ax.legend() plt.figure(figsize=(30,15)) plot3(plt.subplot(231), 4) plot3(plt.subplot(232), 5) plot3(plt.subplot(233), 6) plot3(plt.subplot(234), 7) plot3(plt.subplot(235), 8) plot3(plt.subplot(236), 9) ``` ### Moving to a multidimensional nonlinear space #### How can regression stay linear when the relationship is nonlinear? - $\mathbf{y}$ may depend on $\mathbf{x}$ in a way that is not quite linear.
- Let's move to a new space $\phi(\mathbf{x})$, where $\phi(\cdot)$ is a nonlinear function of $\mathbf{x}$. - Our examples only use polynomials; in general the nonlinear transformation can be anything: an exponential, a logarithm, trigonometric functions, etc. - Take a linear combination of these nonlinear functions $$f(\mathbf{x}) = \sum_{j=1}^k w_j \phi_j(\mathbf{x}).$$ - Take some basis of functions (for example, the quadratic basis) $$\boldsymbol{\phi} = [1, x, x^2].$$ - Now our function has the form $$f(\mathbf{x}_i) = \sum_{j=1}^m w_j \phi_{i, j} (x_i).$$ Well then, does that mean a degree-9 polynomial is the best we can do here? Or maybe not... ``` a = 5 b = 10 n_points = 40 x_min = 0.5 x_max = 4 x = np.linspace(x_min, x_max, n_points)[:, np.newaxis] completely_random_number = 33 rs = np.random.RandomState(completely_random_number) noise = rs.normal(0, 5, (n_points, 1)) y = a + b * x + noise idx = np.arange(3,40,4) plt.figure(figsize=(15,5)) plt.subplot(1,2,1) plt.scatter(x,y, s=80, c ='tab:blue', edgecolors='k', linewidths=0.3); plt.scatter(x[idx],y[idx], s=80, c='tab:red'); plt.subplot(1,2,2) plt.scatter(x[idx],y[idx], s=80, c ='tab:red', edgecolors='k', linewidths=0.3); x_train = x[idx] y_train = y[idx] lr_linear = LinearRegression(fit_intercept=True) lr_linear.fit(x_train, y_train) y_linear = lr_linear.predict(x_train) # Cubic cubic = PolynomialFeatures(degree=3) x_cubic = cubic.fit_transform(x_train) lr_3 = LinearRegression(fit_intercept=False) lr_3.fit(x_cubic, y_train) y_cubic = lr_3.predict(x_cubic) # 9'th fit poly = PolynomialFeatures(degree=9) x_poly = poly.fit_transform(x_train) lr_9 = LinearRegression(fit_intercept=False) lr_9.fit(x_poly, y_train) y_poly = lr_9.predict(x_poly) xx = np.linspace(0.75,4,50).reshape(-1,1) xx_poly = poly.fit_transform(xx) yy_poly = lr_9.predict(xx_poly) # PREDICTION ON WHOLE DATA # linear prediction y_pred_linear = lr_linear.predict(x) # cubic prediction x_cubic_test = cubic.transform(x) y_pred_cubic =
lr_3.predict(x_cubic_test) # poly 9 prediction x_poly_test = poly.transform(x) y_pred_poly = lr_9.predict(x_poly_test) def plot4(ax, x, y, y_regression, test_idx=None): ax.scatter(x,y, s=80, c ='tab:red', edgecolors='k', linewidths=0.3, label='Test'); ax.plot(x,y_regression); if test_idx is not None: ax.scatter(x[test_idx], y[test_idx], s=80, c ='tab:blue', edgecolors='k', linewidths=0.3, label ='Train'); ax.legend() ax.set_title('MSE = {}'.format(np.round(mse(y, y_regression), 2))); # PLOT PICTURES plt.figure(figsize=(24,12)) plot4(plt.subplot(231), x_train,y_train,y_linear) plot4(plt.subplot(232), x_train,y_train,y_cubic) plot4(plt.subplot(233), x_train,y_train,y_poly) plot4(plt.subplot(234), x,y,y_pred_linear, test_idx=idx) plot4(plt.subplot(235), x,y,y_pred_cubic, test_idx=idx) plot4(plt.subplot(236), x[3:],y[3:],y_pred_poly[3:], test_idx=idx-3) print('FIRST ROW is TRAIN data set, SECOND ROW is WHOLE data') ``` #### Question: Why does the behavior of the function in the last column differ between the TRAIN and TEST data? (regions of increase, decrease, and curvature) **Answer:** ``` mse_train = [] mse_test = [] for degree in range(1,10): idx_train = [3, 7, 11, 15, 19, 23, 27, 31, 35, 39] idx_test = [ 0, 1, 2, 4, 5, 6, 8, 9, 10, 12, 13, 14, 16, 17, 18, 20, 21, 22, 24, 25, 26, 28, 29, 30, 32, 33, 34, 36, 37, 38] x_train, x_test = x[idx_train], x[idx_test] y_train, y_test = y[idx_train], y[idx_test] poly = PolynomialFeatures(degree=degree) lr = LinearRegression(fit_intercept=True) x_train = poly.fit_transform(x_train) x_test = poly.transform(x_test) lr.fit(x_train, y_train) y_pred_train = lr.predict(x_train) y_pred_test = lr.predict(x_test) mse_train.append(mse(y_train, y_pred_train)) mse_test.append(mse(y_test, y_pred_test)) plt.figure(figsize=(15,10)) plt.plot(list(range(1,6)), mse_train[:5], label='Train error') plt.plot(list(range(1,6)), mse_test[:5], label='Test error') plt.legend(); ``` ![Bias-Variance tradeoff](biasvariance.png) 1.
http://scott.fortmann-roe.com/docs/BiasVariance.html 2. http://www.inf.ed.ac.uk/teaching/courses/mlsc/Notes/Lecture4/BiasVariance.pdf
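For reference, here is one way Task 3.1 above could be sketched with the Normal equation; the data is regenerated so the block is self-contained (the exact noise draw differs from the notebook's):

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.linspace(-3, 5, 60)
y = 3 * x**2 + 1 + rng.normal(scale=5, size=x.shape)

# Design matrix [1, x^2], so theta = [b, a] for f(x) = a x^2 + b
X = np.column_stack([np.ones_like(x), x**2])
theta = np.linalg.inv(X.T @ X) @ (X.T @ y)
b, a = theta
print(a, b)  # a should be close to 3, b close to 1
```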
``` import numpy as np from pandas import Series, DataFrame import pandas as pd from sklearn import preprocessing, tree from sklearn.metrics import accuracy_score # sklearn.cross_validation was removed in scikit-learn 0.20; use model_selection instead from sklearn.model_selection import train_test_split, KFold from sklearn.neighbors import KNeighborsClassifier df=pd.read_json('../01_Preprocessing/First.json').sort_index() df.head(2) def mydist(x, y): return np.sum((x-y)**2) def jaccard(a, b): intersection = float(len(set(a) & set(b))) union = float(len(set(a) | set(b))) return 1.0 - (intersection/union) # http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html dist=['braycurtis','canberra','chebyshev','cityblock','correlation','cosine','euclidean','dice','hamming','jaccard','kulsinski','matching','rogerstanimoto','russellrao','sokalsneath','yule'] algorithm=['ball_tree', 'kd_tree', 'brute'] len(dist) ``` ## On country (only MS) ``` df.columns oldDf=df.copy() df=df[['countryCoded','degreeCoded','engCoded', 'fieldGroup','fund','gpaBachelors','gre', 'highLevelBachUni', 'paper']] df=df[df.degreeCoded==0] del df['degreeCoded'] bestAvg=[] for alg in algorithm: for dis in dist: k_fold = KFold(n_splits=5) scores = [] try: clf = KNeighborsClassifier(n_neighbors=3, weights='distance',algorithm=alg, metric=dis) except Exception as err: # print(alg,dis,'err') continue for train_indices, test_indices in k_fold.split(df): xtr = df.iloc[train_indices,(df.columns != 'countryCoded')] ytr = df.iloc[train_indices]['countryCoded'] xte = df.iloc[test_indices, (df.columns != 'countryCoded')] yte = df.iloc[test_indices]['countryCoded'] clf.fit(xtr, ytr) ypred = clf.predict(xte) acc=accuracy_score(list(yte),list(ypred)) scores.append(acc*100) print(alg,dis,np.average(scores)) bestAvg.append(np.average(scores)) print('>>>>>>>Best: ',np.max(bestAvg)) ``` ## On Fund (only MS) ``` bestAvg=[] for alg in algorithm: for dis in dist: k_fold = KFold(n_splits=5) scores = [] try: clf = KNeighborsClassifier(n_neighbors=3, weights='distance',algorithm=alg, metric=dis) except Exception as err: continue for train_indices, test_indices in k_fold.split(df): xtr = df.iloc[train_indices, (df.columns != 'fund')] ytr = df.iloc[train_indices]['fund'] xte = df.iloc[test_indices, (df.columns != 'fund')] yte = df.iloc[test_indices]['fund'] clf.fit(xtr, ytr) ypred = clf.predict(xte) acc=accuracy_score(list(yte),list(ypred)) score=acc*100 scores.append(score) if (len(bestAvg)>1) : if(score > np.max(bestAvg)) : bestClf=clf bestAvg.append(np.average(scores)) print(alg,dis,np.average(scores)) print('>>>>>>>Best: ',np.max(bestAvg)) ``` ### Best : ('brute', 'cityblock', 76.894261294261298) ``` me=[1,2,0,2.5,False,False,1.5] n=bestClf.kneighbors([me]) n for i in n[1]: print(xtr.iloc[i]) ```
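The old `sklearn.cross_validation.KFold(n=..., n_folds=...)` API was removed; the modern replacement is `sklearn.model_selection.KFold(n_splits=...)` plus `k_fold.split(df)`. The index logic behind it can be sketched by hand (a simplified, unshuffled version, not scikit-learn's actual implementation):

```python
import numpy as np

def kfold_indices(n, n_splits):
    """Yield (train_idx, test_idx) pairs, mimicking an unshuffled KFold."""
    fold_sizes = np.full(n_splits, n // n_splits)
    fold_sizes[: n % n_splits] += 1   # spread the remainder over the first folds
    indices = np.arange(n)
    start = 0
    for size in fold_sizes:
        test_idx = indices[start : start + size]
        train_idx = np.concatenate([indices[:start], indices[start + size :]])
        yield train_idx, test_idx
        start += size

folds = list(kfold_indices(10, 5))
print(len(folds))    # 5
print(folds[0][1])   # [0 1]
```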
# Koopman Training and Validation for 2D Tail-Actuated Robotic Fish This file uses experimental measurements using a 2D Tail-Actuated Robotic Fish to train an approximate Koopman operator. Using the initial conditions of each experiment, the data-driven solution is then used to predict the system forward and compared against the real experimental measurements. All fitness plots are generated at the end. ## Import Data ``` %%capture # suppress cell output !git clone https://github.com/giorgosmamakoukas/DataSet.git # Import data from user location !mv DataSet/* ./ # Move 'DataSet' folder to main directory ``` ## Import User Functions ``` # This file includes all user-defined functions from math import atan, sqrt, sin, cos from numpy import empty, sign, dot,zeros from scipy import io, linalg def Psi_k(s,u): #Creates a vector of basis functions using states s and control u x, y, psi, v_x, v_y, omega = s # Store states in local variables if (v_y == 0) and (v_x == 0): atanvXvY = 0; # 0/0 gives NaN psi37 = 0; psi40 = 0; psi52 = 0; psi56 = 0; else: atanvXvY = atan(v_y/v_x); psi37 = v_x * pow(v_y,2) * omega/sqrt(pow(v_x,2)+pow(v_y,2)); psi40 = pow(v_x,2) * v_y * omega / sqrt(pow(v_x,2) + pow(v_y,2)) * atanvXvY; psi52 = pow(v_x,2) * v_y * omega / sqrt(pow(v_x,2) + pow(v_y,2)); psi56 = v_x * pow(v_y,2) * omega * atanvXvY / sqrt(pow(v_x,2) + pow(v_y,2)); Psi = empty([62,1]); # declare memory to store psi vector # System States Psi[0,0] = x; Psi[1,0] = y; Psi[2,0] = psi; Psi[3,0] = v_x; Psi[4,0] = v_y; Psi[5,0] = omega; # f(t): terms that appear in dynamics Psi[6,0] = v_x * cos(psi) - v_y * sin(psi); Psi[7,0] = v_x * sin(psi) + v_y * cos(psi); Psi[8,0] = v_y * omega; Psi[9,0] = pow(v_x,2); Psi[10,0] = pow(v_y,2); Psi[11,0] = v_x * omega; Psi[12,0] = v_x * v_y; Psi[13,0] = sign(omega) * pow(omega,2); # df(t)/dt: terms that appear in derivative of dynamics Psi[14,0] = v_y * omega * cos(psi); Psi[15,0] = pow(v_x,2) * cos(psi); Psi[16,0] = pow(v_y,2) * cos(psi); Psi[17,0] = v_x * 
omega * sin(psi); Psi[18,0] = v_x * v_y * sin(psi); Psi[19,0] = v_y * omega * sin(psi); Psi[20,0] = pow(v_x,2) * sin(psi); Psi[21,0] = pow(v_y,2) * sin(psi); Psi[22,0] = v_x * omega * cos(psi); Psi[23,0] = v_x * v_y * cos(psi); Psi[24,0] = v_x * pow(omega,2); Psi[25,0] = v_x * v_y * omega; Psi[26,0] = v_x * pow(v_y,2); Psi[27,0] = v_y * sign(omega) * pow(omega,2); Psi[28,0] = pow(v_x,3); Psi[29,0] = v_y * pow(omega,2); Psi[30,0] = v_x * omega * sqrt(pow(v_x,2) + pow(v_y,2)); Psi[31,0] = v_y * omega * sqrt(pow(v_x,2) + pow(v_y,2)) * atanvXvY; Psi[32,0] = pow(v_x,2) * v_y; Psi[33,0] = v_x * sign(omega) * pow(omega,2); Psi[34,0] = pow(v_y,3); Psi[35,0] = pow(v_x,3) * atanvXvY; Psi[36,0] = v_x * pow(v_y,2) * atanvXvY; Psi[37,0] = psi37; Psi[38,0] = pow(v_x,2) * v_y * pow(atanvXvY,2); Psi[39,0] = pow(v_y,3) * pow(atanvXvY,2); Psi[40,0] = psi40; Psi[41,0] = pow(v_y,2) * omega; Psi[42,0] = v_x * v_y * sqrt(pow(v_x,2) + pow(v_y,2)); Psi[43,0] = pow(v_y,2) * sqrt(pow(v_x,2) + pow(v_y,2)) * atanvXvY; Psi[44,0] = pow(v_x,2) * omega; Psi[45,0] = pow(v_x,2) * sqrt(pow(v_x,2) + pow(v_y,2)) * atanvXvY; Psi[46,0] = v_x * v_y * sign(omega) * omega; Psi[47,0] = pow(omega, 3); Psi[48,0] = v_y * omega * sqrt(pow(v_x,2) + pow(v_y,2)); Psi[49,0] = pow(v_x,3); Psi[50,0] = v_x * pow(v_y,2); Psi[51,0] = pow(v_x,2) * v_y * atanvXvY; Psi[52,0] = psi52; Psi[53,0] = v_x * omega * sqrt(pow(v_x,2) + pow(v_y,2)) * atanvXvY; Psi[54,0] = pow(v_x,3) * pow(atanvXvY,2); Psi[55,0] = v_x * pow(v_y,2) * pow(atanvXvY,2); Psi[56,0] = psi56; Psi[57,0] = pow(v_y, 3) * atanvXvY; Psi[58,0] = v_x * pow(omega,2); Psi[59,0] = v_y * sign(omega) * pow(omega,2); # add control inputs Psi[60,0] = u[0]; Psi[61,0] = u[1]; return Psi def A_and_G(s_1, s_2, u): # Uses measurements s(t_k) & s(t_{k+1}) to calculate A and G A = dot(Psi_k(s_2, u), Psi_k(s_1, u).transpose()); G = dot(Psi_k(s_1, u), Psi_k(s_1, u).transpose()); return A, G def TrainKoopman(): # Train an approximate Koopman operator ######## 1. 
IMPORT DATA ######## mat = io.loadmat('InterpolatedData_200Hz.mat', squeeze_me=True) positions = mat['Lengths'] - 1 # subtract 1 to convert MATLAB indices to python x = mat['x_int_list'] y = mat['y_int_list'] psi = mat['psi_int_list'] v_x = mat['v1_int_list'] v_y = mat['v2_int_list'] omega = mat['omega_int_list'] u1 = mat['u1_list'] u2 = mat['u2_list'] ######## 2. INITIALIZE A and G matrices A = zeros((62, 62)) # 62 is the size of the Ψ basis functions G = zeros((62, 62)) ######## 3. TRAINING KOOPMAN ######## for i in range(x.size-1): # print('{:.2f} % completed'.format(i/x.size*100)) if i in positions: continue # skip the pair that would span two different trials (note: incrementing the loop variable here would have no effect) # Create pair of state measurements s0 = [x[i], y[i], psi[i], v_x[i], v_y[i], omega[i]] sn = [x[i+1], y[i+1], psi[i+1], v_x[i+1], v_y[i+1], omega[i+1]] Atemp, Gtemp = A_and_G(s0,sn,[u1[i],u2[i]]) A = A+Atemp; G = G+Gtemp; Koopman_d = dot(A,linalg.pinv(G)) # scipy.linalg.pinv2 was removed in SciPy 1.9; pinv uses the same SVD-based approach # Koopman_d = dot(A,numpy.linalg.pinv(G)) # io.savemat('SavedData.mat', {'A' : A, 'G': G, 'Kd': Koopman_d}) # save variables to Matlab file return Koopman_d ``` ## Train Koopman & Test Fitness ``` # This file trains and tests the accuracy of the approximate Koopman operator ######## 0. IMPORT PYTHON FUNCTIONS ######## import matplotlib.pyplot as plt from numpy import arange, insert, linspace ######## 1. IMPORT EXPERIMENTAL DATA ######## mat = io.loadmat('InterpolatedData_200Hz.mat', squeeze_me=True) positions = mat['Lengths'] - 1 # subtract 1 to convert MATLAB indices to python # positions includes indices with last measurement of each experiment x = mat['x_int_list'] y = mat['y_int_list'] psi = mat['psi_int_list'] v_x = mat['v1_int_list'] v_y = mat['v2_int_list'] omega = mat['omega_int_list'] u1 = mat['u1_list'] u2 = mat['u2_list'] positions = insert(positions, 0, -1) # insert -1 as index that precedes the 1st experiment ######## 2.
PREDICT DATA USING TRAINED KOOPMAN ######## Kd = TrainKoopman() # Train Koopman for exp_i in range(0, positions.size -2): # for each experiment indx = positions[exp_i]+1 # beginning index of each trial Psi_predicted = empty((positions[exp_i+1]-(indx), 62)) s0 = [x[indx], y[indx], psi[indx], v_x[indx], v_y[indx], omega[indx]] Psi_predicted[0,:] = Psi_k(s0, [u1[indx], u2[indx]]).transpose() # Initialize with the same initial conditions as the experiment for j in range(0, positions[exp_i+1]-1-(indx)): Psi_predicted[j+1, :] = dot(Kd,Psi_predicted[j, :]) ######## 3. PLOT EXPERIMENTAL VS PREDICTED DATA ######## ylabels = ['x (m)', 'y (m)', 'ψ (rad)', r'$\mathregular{v_x (m/s)}$', r'$\mathregular{v_y (m/s)}$', 'ω (rad/s)'] exp_data = [x, y, psi, v_x, v_y, omega] time = linspace(0, 1./200*(j+2), j+2) # create time vector fig = plt.figure() for states_i in range(6): plt.subplot('23'+str(states_i+1)) # 2 rows, 3 columns plt.plot(time, Psi_predicted[:, states_i]) plt.plot(time, exp_data[states_i][indx:positions[exp_i+1]]) plt.ylabel(ylabels[states_i]) plt.gca().legend(('Predicted','Experimental')) Amp_values = [15, 20, 25, 30] Bias_values = [-20, -30, -40, -50, 0, 20, 30, 40, 50] titles = 'Amp: ' + str(Amp_values[(exp_i)//18]) + ' Bias: ' + str(Bias_values[(exp_i % 18) //2]) fig.suptitle(titles) plt.show(block=False) ```
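The `A_and_G` update above is the standard least-squares (EDMD-style) estimate: `A` accumulates Ψ(s_{k+1})Ψ(s_k)ᵀ, `G` accumulates Ψ(s_k)Ψ(s_k)ᵀ, and `Kd = A·G⁺` minimizes the one-step prediction error in the lifted space. A minimal sketch of the same update on an invented two-state linear system (`K_true` and the identity observables are placeholders for illustration, not the robot data or the 62-term basis):

```python
import numpy as np

rng = np.random.default_rng(0)
K_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])  # invented dynamics: s(t_{k+1}) = K_true @ s(t_k)

A = np.zeros((2, 2))
G = np.zeros((2, 2))
for _ in range(200):
    psi0 = rng.standard_normal((2, 1))  # Psi_k(s_1, u), here just the state itself
    psi1 = K_true @ psi0                # Psi_k(s_2, u)
    A += psi1 @ psi0.T                  # same accumulation as A_and_G
    G += psi0 @ psi0.T

K_est = A @ np.linalg.pinv(G)           # Kd = A G^+
```

Because the synthetic data exactly satisfy psi1 = K_true·psi0, `K_est` matches `K_true` up to rounding; with noisy measurements the same formula gives the least-squares fit.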
``` # we assume that the pycnn module is in your path. # we also assume that LD_LIBRARY_PATH includes a pointer to where libcnn_shared.so is. from pycnn import * ``` ## An LSTM/RNN overview: A (1-layer) RNN can be thought of as a sequence of cells, $h_1,...,h_k$, where the index $i$ indicates the time dimension. Each cell $h_i$ has an input $x_i$ and an output $r_i$. In addition to $x_i$, cell $h_i$ also receives $r_{i-1}$ as input. In a deep (multi-layer) RNN, we don't have a sequence, but a grid. That is, we have several layers of sequences: * $h_1^3,...,h_k^3$ * $h_1^2,...,h_k^2$ * $h_1^1,...,h_k^1$ Let $r_i^j$ be the output of cell $h_i^j$. Then: The input to $h_i^1$ is $x_i$ and $r_{i-1}^1$. The input to $h_i^2$ is $r_i^1$ and $r_{i-1}^2$, and so on. ## The LSTM (RNN) Interface RNN / LSTM / GRU follow the same interface. We have a "builder" which is in charge of creating and defining the parameters for the sequence. ``` model = Model() NUM_LAYERS=2 INPUT_DIM=50 HIDDEN_DIM=10 builder = LSTMBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model) # or: # builder = SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model) ``` Note that when we create the builder, it adds the internal RNN parameters to the `model`. We do not need to care about them, but they will be optimized together with the rest of the network's parameters. ``` s0 = builder.initial_state() x1 = vecInput(INPUT_DIM) s1=s0.add_input(x1) y1 = s1.output() # here, we add x1 to the RNN, and the output we get from the top is y1 (a HIDDEN_DIM-dim vector) y1.npvalue().shape s2=s1.add_input(x1) # we can add another input y2=s2.output() ``` If our LSTM/RNN were one layer deep, y2 would be equal to the hidden state. However, since it is 2 layers deep, y2 is only the hidden state (= output) of the last layer.
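The grid wiring described above — layer $j$ at time $i$ consumes $r_i^{j-1}$ from the layer below and $r_{i-1}^j$ from the previous time step — can be sketched in plain numpy. This mirrors only the wiring of a deep simple RNN; the random weights and tanh update are illustrative placeholders, not pycnn's internal parametrization:

```python
import numpy as np

rng = np.random.default_rng(0)
INPUT_DIM, HIDDEN_DIM, NUM_LAYERS = 4, 3, 2

# per-layer parameters: input-to-hidden W, hidden-to-hidden U, bias b
layers = []
for j in range(NUM_LAYERS):
    in_dim = INPUT_DIM if j == 0 else HIDDEN_DIM
    layers.append((rng.standard_normal((HIDDEN_DIM, in_dim)),
                   rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)),
                   np.zeros(HIDDEN_DIM)))

def step(prev, x):
    # layer j sees this step's output of layer j-1 (inp)
    # and its own output from the previous step (h_prev)
    out, inp = [], x
    for (W, U, b), h_prev in zip(layers, prev):
        h = np.tanh(W @ inp + U @ h_prev + b)
        out.append(h)
        inp = h  # feeds the layer above at the same time step
    return out

h = [np.zeros(HIDDEN_DIM) for _ in range(NUM_LAYERS)]  # zero initial state
for _ in range(2):  # two inputs, like s1 and s2 below
    h = step(h, rng.standard_normal(INPUT_DIM))
y2 = h[-1]        # analogue of .output(): top layer only
all_layers = h    # analogue of .h(): one vector per layer
```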
If we want access to all the hidden states (the outputs of both the first and the last layers), we can use the `.h()` method, which returns a list of expressions, one for each layer: ``` print s2.h() ``` The same interface that we have seen so far for the LSTM also holds for the Simple RNN: ``` # create a simple rnn builder rnnbuilder=SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model) # initialize a new graph, and a new sequence rs0 = rnnbuilder.initial_state() # add inputs rs1 = rs0.add_input(x1) ry1 = rs1.output() print "all layers:", rs1.h() print rs1.s() ``` To summarize, when calling `.add_input(x)` on an `RNNState`, the state creates a new RNN/LSTM column, passing it: 1. the state of the current RNN column 2. the input `x` The state is then returned, and we can call its `output()` method to get the output `y`, which is the output at the top of the column. We can access the outputs of all the layers (not only the last one) using the `.h()` method of the state. **`.s()`** The internal state of the RNN may be more involved than just the outputs $h$. This is the case for the LSTM, which keeps an extra "memory" cell that is used when calculating $h$, and which is also passed to the next column. To access the entire hidden state, we use the `.s()` method. The output of `.s()` depends on the type of RNN being used. For the simple-RNN, it is the same as `.h()`. For the LSTM, it is more involved. ``` rnn_h = rs1.h() rnn_s = rs1.s() print "RNN h:", rnn_h print "RNN s:", rnn_s lstm_h = s1.h() lstm_s = s1.s() print "LSTM h:", lstm_h print "LSTM s:", lstm_s ``` As we can see, the LSTM has two extra state expressions (one for each hidden layer) before the outputs h. ## Extra options in the RNN/LSTM interface **Stack LSTM** The RNNs are shaped as a stack: we can remove the top and continue from the previous state.
This is done either by remembering the previous state and continuing it with a new `.add_input()`, or by accessing the previous state of a given state using its `.prev()` method. **Initializing a new sequence with a given state** When we call `builder.initial_state()`, we are assuming the state has random/zero initialization. If we want, we can specify a list of expressions that will serve as the initial state. The expected format is the same as the result of a call to `.final_s()`. TODO: this is not supported yet. ``` s2=s1.add_input(x1) s3=s2.add_input(x1) s4=s3.add_input(x1) # let's continue s3 with a new input. s5=s3.add_input(x1) # we now have two different sequences: # s0,s1,s2,s3,s4 # s0,s1,s2,s3,s5 # the two sequences share parameters. assert(s5.prev() == s3) assert(s4.prev() == s3) s6=s3.prev().add_input(x1) # we now have an additional sequence: # s0,s1,s2,s6 s6.h() s6.s() ``` ## Character-level LSTM Now that we know the basics of RNNs, let's build a character-level LSTM language model. We have a sequence LSTM that, at each step, gets a character as input, and needs to predict the next character.
``` import random from collections import defaultdict from itertools import count import sys LAYERS = 2 INPUT_DIM = 50 HIDDEN_DIM = 50 characters = list("abcdefghijklmnopqrstuvwxyz ") characters.append("<EOS>") int2char = list(characters) char2int = {c:i for i,c in enumerate(characters)} VOCAB_SIZE = len(characters) model = Model() srnn = SimpleRNNBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, model) lstm = LSTMBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, model) model.add_lookup_parameters("lookup", (VOCAB_SIZE, INPUT_DIM)) model.add_parameters("R", (VOCAB_SIZE, HIDDEN_DIM)) model.add_parameters("bias", (VOCAB_SIZE)) # return compute loss of RNN for one sentence def do_one_sentence(rnn, sentence): # setup the sentence renew_cg() s0 = rnn.initial_state() R = parameter(model["R"]) bias = parameter(model["bias"]) lookup = model["lookup"] sentence = ["<EOS>"] + list(sentence) + ["<EOS>"] sentence = [char2int[c] for c in sentence] s = s0 loss = [] for char,next_char in zip(sentence,sentence[1:]): s = s.add_input(lookup[char]) probs = softmax(R*s.output() + bias) loss.append( -log(pick(probs,next_char)) ) loss = esum(loss) return loss # generate from model: def generate(rnn): def sample(probs): rnd = random.random() for i,p in enumerate(probs): rnd -= p if rnd <= 0: break return i # setup the sentence renew_cg() s0 = rnn.initial_state() R = parameter(model["R"]) bias = parameter(model["bias"]) lookup = model["lookup"] s = s0.add_input(lookup[char2int["<EOS>"]]) out=[] while True: probs = softmax(R*s.output() + bias) probs = probs.vec_value() next_char = sample(probs) out.append(int2char[next_char]) if out[-1] == "<EOS>": break s = s.add_input(lookup[next_char]) return "".join(out[:-1]) # strip the <EOS> # train, and generate every 5 samples def train(rnn, sentence): trainer = SimpleSGDTrainer(model) for i in xrange(200): loss = do_one_sentence(rnn, sentence) loss_value = loss.value() loss.backward() trainer.update() if i % 5 == 0: print loss_value, print generate(rnn) ``` Notice that: 
1. We pass the same rnn-builder to `do_one_sentence` over and over again. We must re-use the same rnn-builder, as this is where the shared parameters are kept. 2. We `renew_cg()` before each sentence -- because we want to have a new graph (new network) for this sentence. The parameters will be shared through the model and the shared rnn-builder. ``` sentence = "a quick brown fox jumped over the lazy dog" train(srnn, sentence) sentence = "a quick brown fox jumped over the lazy dog" train(lstm, sentence) ``` The models seem to learn the sentence quite well. Somewhat surprisingly, the Simple-RNN model learns quicker than the LSTM! How can that be? The answer is that we are cheating a bit. The sentence we are trying to learn has each letter-bigram exactly once. This means a simple trigram model can memorize it very well. Try it out with more complex sequences. ``` train(srnn, "these pretzels are making me thirsty") ```
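The bigram claim is easy to verify: counting the character bigrams of the training sentence (a quick sanity check, not part of the original tutorial) shows that each one occurs exactly once, so the previous two characters always determine the next one:

```python
from collections import Counter

sentence = "a quick brown fox jumped over the lazy dog"
bigrams = Counter(sentence[i:i+2] for i in range(len(sentence) - 1))
print(max(bigrams.values()))  # 1 -- every bigram occurs exactly once
```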
``` import pandas as pd import numpy as np from datascience import * # Table.interactive() import matplotlib # from ipywidgets import interact, Dropdown %matplotlib inline import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') ``` # Project 2: Topic ## Table of Contents <a href='#section 0'>Background Knowledge: Topic</a> 1. <a href='#section 1'> The Data Science Life Cycle</a> a. <a href='#subsection 1a'>Formulating a question or problem</a> b. <a href='#subsection 1b'>Acquiring and cleaning data</a> c. <a href='#subsection 1c'>Conducting exploratory data analysis</a> d. <a href='#subsection 1d'>Using prediction and inference to draw conclusions</a> <br><br> ### Background Knowledge <a id='section 0'></a> Anecdote / example that is applicable to their everyday lives. Where have they seen this topic before? What can catch their interest? Add a relevant image if available: <#img src="..." width = 700/> # The Data Science Life Cycle <a id='section 1'></a> ## Formulating a question or problem <a id='subsection 1a'></a> It is important to ask questions that will be informative and that will avoid misleading results. There are many different questions we could ask about Covid-19; for example, many researchers use data to predict the outcomes of intervention techniques such as social distancing. <div class="alert alert-warning"> <b>Question:</b> Take some time to formulate questions you have about this **TOPIC** and the data you would need to answer the questions. In addition, add the link of an article you found interesting, with a description and why it interested you. </div> You can find [resources](...)(**ADD LINK**) here to choose from. Your questions: *here* Data you would need: *here* Article: *link* ## Acquiring and cleaning data <a id='subsection 1b'></a> We'll be looking at the Data from (...). You can find the raw data [here](...)(**ADD LINK**).
We've cleaned up the datasets a bit, and we will be investigating the (**ADD DESCRIPTION OF DATA**). The following table, `...`, contains the (**DESCRIPTION**). **ADD CODE BOOK** ``` # data = Table().read_table("...csv") # data.show(10) # In this cell, we are RELABELING the columns ... # BASIC CLEANING WE MAY WANT THEM TO DO ``` <div class="alert alert-warning"> <b>Question:</b> It's important to evaluate our data source. What do you know about the source? What motivations do they have for collecting this data? What data is missing? </div> *Insert answer* <div class="alert alert-warning"> <b>Question:</b> Do you see any missing (nan) values? Why might they be there? </div> *Insert answer here* <div class="alert alert-warning"> <b>Question:</b> We want to learn more about the dataset. First, how many total rows are in this table? What does each row represent? </div> ``` total_rows = ... ``` *Insert answer here* ## Conducting exploratory data analysis <a id='subsection 1c'></a> Visualizations help us to understand what the dataset is telling us. **OVERVIEW OF WHAT CHARTS WE WILL REVIEW / WHAT QUESTION WE ARE WORKING TOWARD**. ### Part 1: One Branch of analysis (Understanding ratios / patterns in the data / grouping / filtering) <div class="alert alert-warning"> <b>Question:</b> ... </div> <div class="alert alert-warning"> <b>Question:</b> Next, visualize ... </div> <div class="alert alert-warning"> <b>Question:</b> Compare ... </div> <div class="alert alert-warning"> <b>Question:</b> Now make another bar chart... </div> ``` ... ``` <div class="alert alert-warning"> <b>Question:</b> What are some possible reasons for the disparities between charts? Hint: Think about ... </div> *Insert answer here.* ### Part 2: Other Data / Second Branch of analysis (Understanding ratios / patterns in the data / grouping / filtering) Is there additional data we need to understand / answer our question?
Are we starting another branch of analysis / focusing on different columns / topics than the last section? ``` # possible other data # other_data = Table().read_table("...csv") # other_data.show(10) ``` <div class="alert alert-warning"> <b>Question:</b> Grouping </div> <div class="alert alert-warning"> <b>Question:</b> Compare </div> <div class="alert alert-warning"> <b>Question:</b> Adding to existing tables </div> <div class="alert alert-warning"> <b>Question:</b> Compare & visualize </div> <div class="alert alert-warning"> <b>Question:</b> What differences do you see from the visualizations? What do they imply for the broader world? </div> *Insert answer here.* ## Using prediction and inference to draw conclusions <a id='subsection 1a'></a> Now that we have some experience making these visualizations, let's go back to **BACKGROUND INFO / PERSONAL EXPERIENCES / KNOWLEDGE**. We know that... From the previous section, we also know that we need to take into account ... Now we will read in two tables, Covid by State and Population by State, in order to look at the percentage of cases and the growth of cases. ``` # possible other data # other_data = Table().read_table("...csv") # other_data.show(10) ``` #### MAPPING / MORE COMPLICATED VISUAL / SUMMARIZING VISUAL TO PULL TOGETHER CONCEPTS THROUGHOUT THE WHOLE PROJECT #### WHAT NARRATIVE DO WE WANT TO END ON? FINAL POINT FOR THEIR OWN PRESENTATIONS Look at the VISUAL (...) and try to explain it using your knowledge and other sources. Tell a story. (Presentation) Tell us what you learned FROM THE PROJECT. ### BRING BACK ETHICS & CONTEXT Tell us something interesting about this data Source: .... Notebook Authors: Alleanna Clark, Ashley Quiterio, Karla Palos Castellanos
# Solving Equations ## A Simple Linear Equation in One Variable \begin{equation}x + 16 = -25\end{equation} \begin{equation}x + 16 - 16 = -25 - 16\end{equation} \begin{equation}x = -25 - 16\end{equation} \begin{equation}x = -41\end{equation} ``` x = -41 # verify the solution x + 16 == -25 ``` ## An Equation with a Coefficient \begin{equation}3x - 2 = 10 \end{equation} \begin{equation}3x - 2 + 2 = 10 + 2 \end{equation} \begin{equation}3x = 12 \end{equation} \begin{equation}x = 4\end{equation} ``` x = 4 # substitute x = 4 3 * x - 2 == 10 ``` ## An Equation with a Fractional Coefficient \begin{equation}\frac{x}{3} + 1 = 16 \end{equation} \begin{equation}\frac{x}{3} = 15 \end{equation} \begin{equation}\frac{3}{1} \cdot \frac{x}{3} = 15 \cdot 3 \end{equation} \begin{equation}x = 45 \end{equation} ``` x = 45 x/3 + 1 == 16 ``` ## An Example That Requires Combining Like Terms \begin{equation}3x + 2 = 5x - 1 \end{equation} \begin{equation}3x + 3 = 5x \end{equation} \begin{equation}3 = 2x \end{equation} \begin{equation}\frac{3}{2} = x \end{equation} \begin{equation}x = \frac{3}{2} \end{equation} \begin{equation}x = 1\frac{1}{2} \end{equation} ``` x = 1.5 3*x + 2 == 5*x - 1 ``` ## One-Variable Equation Practice \begin{equation}\textbf{4(x + 2)} + \textbf{3(x - 2)} = 16 \end{equation} \begin{equation}4x + 8 + 3x - 6 = 16 \end{equation} \begin{equation}7x + 2 = 16 \end{equation} \begin{equation}7x = 14 \end{equation} \begin{equation}\frac{7x}{7} = \frac{14}{7} \end{equation} \begin{equation}x = 2 \end{equation} ``` x = 2 4 * (x + 2) + 3 * (x - 2) == 16 ``` # Linear Equations ## Example \begin{equation}2y + 3 = 3x - 1 \end{equation} \begin{equation}2y + 4 = 3x \end{equation} \begin{equation}2y = 3x - 4 \end{equation} \begin{equation}y = \frac{3x - 4}{2} \end{equation} ``` import numpy as np from tabulate import tabulate x = np.array(range(-10, 11)) # 21 data points from -10 to 10 y = (3 * x - 4) / 2 # the corresponding function values print(tabulate(np.column_stack((x,y)), headers=['x', 'y'])) from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "last_expr" # %matplotlib inline from matplotlib import pyplot as plt plt.plot(x, y, color="grey", marker = "o")
plt.xlabel('x') plt.ylabel('y') plt.grid() plt.show() ``` ## Intercepts ``` plt.plot(x, y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() # draw the coordinate axes plt.axvline() plt.show() ``` - The intercept on the x-axis \begin{equation}0 = \frac{3x - 4}{2} \end{equation} \begin{equation}\frac{3x - 4}{2} = 0 \end{equation} \begin{equation}3x - 4 = 0 \end{equation} \begin{equation}3x = 4 \end{equation} \begin{equation}x = \frac{4}{3} \end{equation} \begin{equation}x = 1\frac{1}{3} \end{equation} - The intercept on the y-axis \begin{equation}y = \frac{3\cdot0 - 4}{2} \end{equation} \begin{equation}y = \frac{-4}{2} \end{equation} \begin{equation}y = -2 \end{equation} ``` plt.plot(x, y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.annotate('x',(1.333, 0)) # mark the intercept points plt.annotate('y',(0,-2)) plt.show() ``` > The usefulness of the intercepts is clear: two points determine a line, so connecting the intercepts lets us draw the function's graph. ## Slope \begin{equation}slope = \frac{\Delta{y}}{\Delta{x}} \end{equation} \begin{equation}m = \frac{y_{2} - y_{1}}{x_{2} - x_{1}} \end{equation} \begin{equation}m = \frac{7 - -2}{6 - 0} \end{equation} \begin{equation}m = \frac{7 + 2}{6 - 0} \end{equation} \begin{equation}m = 1.5 \end{equation} ``` plt.plot(x, y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() m = 1.5 xInt = 4 / 3 yInt = -2 mx = [0, xInt] my = [yInt, yInt + m * xInt] plt.plot(mx, my, color='red', lw=5) # highlight in red plt.show() plt.grid() # zoomed-in view plt.axhline() plt.axvline() m = 1.5 xInt = 4 / 3 yInt = -2 mx = [0, xInt] my = [yInt, yInt + m * xInt] plt.plot(mx, my, color='red', lw=5) plt.show() ``` ### Slope-Intercept Form of a Line \begin{equation}y = mx + b \end{equation} \begin{equation}y = \frac{3x - 4}{2} \end{equation} \begin{equation}y = 1\frac{1}{2}x + -2 \end{equation} ``` m = 1.5 yInt = -2 x = np.array(range(-10, 11)) y2 = m * x + yInt # slope-intercept form plt.plot(x, y2, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.annotate('y', (0, yInt)) plt.show() ``` # Systems of Linear Equations > What a solution means: the point where the lines intersect \begin{equation}x + y = 16 \end{equation}
\begin{equation}10x + 25y = 250 \end{equation} ``` l1p1 = [16, 0] # line 1, point 1 l1p2 = [0, 16] # line 1, point 2 l2p1 = [25,0] # line 2, point 1 l2p2 = [0,10] # line 2, point 2 plt.plot(l1p1,l1p2, color='blue') plt.plot(l2p1, l2p2, color="orange") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.show() ``` ### Solving a System of Linear Equations (Elimination) \begin{equation}x + y = 16 \end{equation} \begin{equation}10x + 25y = 250 \end{equation} \begin{equation}-10(x + y) = -10(16) \end{equation} \begin{equation}10x + 25y = 250 \end{equation} \begin{equation}-10x + -10y = -160 \end{equation} \begin{equation}10x + 25y = 250 \end{equation} \begin{equation}15y = 90 \end{equation} \begin{equation}y = \frac{90}{15} \end{equation} \begin{equation}y = 6 \end{equation} \begin{equation}x + 6 = 16 \end{equation} \begin{equation}x = 10 \end{equation} ``` x = 10 y = 6 print ((x + y == 16) & ((10 * x) + (25 * y) == 250)) ``` # Exponents, Roots, and Logarithms ## Exponents \begin{equation}2^{2} = 2 \cdot 2 = 4\end{equation} \begin{equation}2^{3} = 2 \cdot 2 \cdot 2 = 8\end{equation} ``` x = 5**3 print(x) ``` ## Roots \begin{equation}?^{2} = 9 \end{equation} \begin{equation}\sqrt{9} = 3 \end{equation} \begin{equation}\sqrt[3]{64} = 4 \end{equation} ``` import math x = math.sqrt(9) # square root print (x) cr = round(64 ** (1.
/ 3)) # cube root print(cr) ``` ### Roots as Fractional Exponents \begin{equation} 8^{\frac{1}{3}} = \sqrt[3]{8} = 2 \end{equation} \begin{equation} 9^{\frac{1}{2}} = \sqrt{9} = 3 \end{equation} ``` print (9**0.5) print (math.sqrt(9)) ``` ## Logarithms > A logarithm is the inverse of exponentiation \begin{equation}4^{?} = 16 \end{equation} \begin{equation}log_{4}(16) = 2 \end{equation} ``` x = math.log(16, 4) print(x) ``` ### Base-10 Logarithms \begin{equation}log(64) = 1.8061 \end{equation} ### Natural Logarithms \begin{equation}log_{e}(64) = ln(64) = 4.1589 \end{equation} ``` print(math.log10(64)) print (math.log(64)) ``` ## Working with Exponents (Combining Like Terms) \begin{equation}2y = 2x^{4} ( \frac{x^{2} + 2x^{2}}{x^{3}} ) \end{equation} \begin{equation}2y = 2x^{4} ( \frac{3x^{2}}{x^{3}} ) \end{equation} \begin{equation}2y = 2x^{4} ( 3x^{-1} ) \end{equation} \begin{equation}2y = 6x^{3} \end{equation} \begin{equation}y = 3x^{3} \end{equation} ``` x = np.array(range(-10, 11)) y3 = 3 * x ** 3 print(tabulate(np.column_stack((x, y3)), headers=['x', 'y'])) plt.plot(x, y3, color="magenta") # y3 is a curve plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() ``` ### An Example of Exponential Growth \begin{equation}y = 2^{x} \end{equation} ``` y4 = 2.0**x plt.plot(x, y4, color="magenta") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() ``` ### Compound Interest > Deposit 100 at 5% annual interest: what is the balance after 20 years, with compounding? \begin{equation}y1 = 100 + (100 \cdot 0.05) \end{equation} \begin{equation}y1 = 100 \cdot 1.05 \end{equation} \begin{equation}y2 = 100 \cdot 1.05 \cdot 1.05 \end{equation} \begin{equation}y2 = 100 \cdot 1.05^{2} \end{equation} \begin{equation}y20 = 100 \cdot 1.05^{20} \end{equation} ``` year = np.array(range(1, 21)) # years balance = 100 * (1.05 ** year) # balance plt.plot(year, balance, color="green") plt.xlabel('Year') plt.ylabel('Balance') plt.show() ``` # Polynomials \begin{equation}12x^{3} + 2x - 16 \end{equation} Three terms: - 12x<sup>3</sup> - 2x - -16 - Two coefficients (12 and 2) and one constant, -16 - The variable x - The exponent <sup>3</sup> ## Standard Form Arranged by increasing powers of x (ascending order) \begin{equation}3x + 4xy^{2} - 3 + x^{3} \end{equation} Highest-power term first (descending order)
\begin{equation}x^{3} + 4xy^{2} + 3x - 3 \end{equation} ## Simplifying Polynomials \begin{equation}x^{3} + 2x^{3} - 3x - x + 8 - 3 \end{equation} \begin{equation}3x^{3} - 4x + 5 \end{equation} ``` from random import randint x = randint(1,100) # verify the simplification with an arbitrary value (x**3 + 2*x**3 - 3*x - x + 8 - 3) == (3*x**3 - 4*x + 5) ``` ## Adding Polynomials \begin{equation}(3x^{3} - 4x + 5) + (2x^{3} + 3x^{2} - 2x + 2) \end{equation} \begin{equation}3x^{3} + 2x^{3} + 3x^{2} - 4x -2x + 5 + 2 \end{equation} \begin{equation}5x^{3} + 3x^{2} - 6x + 7 \end{equation} ``` x = randint(1,100) (3*x**3 - 4*x + 5) + (2*x**3 + 3*x**2 - 2*x + 2) == 5*x**3 + 3*x**2 - 6*x + 7 ``` ## Subtracting Polynomials \begin{equation}(2x^{2} - 4x + 5) - (x^{2} - 2x + 2) \end{equation} \begin{equation}(2x^{2} - 4x + 5) + (-x^{2} + 2x - 2) \end{equation} \begin{equation}2x^{2} + -x^{2} + -4x + 2x + 5 + -2 \end{equation} \begin{equation}x^{2} - 2x + 3 \end{equation} ``` from random import randint x = randint(1,100) (2*x**2 - 4*x + 5) - (x**2 - 2*x + 2) == x**2 - 2*x + 3 ``` ## Multiplying Polynomials 1. Multiply each term of the first polynomial by the second polynomial 2.
Combine like terms in the result \begin{equation}(x^{4} + 2)(2x^{2} + 3x - 3) \end{equation} \begin{equation}2x^{6} + 3x^{5} - 3x^{4} + 4x^{2} + 6x - 6 \end{equation} ``` x = randint(1,100) (x**4 + 2)*(2*x**2 + 3*x - 3) == 2*x**6 + 3*x**5 - 3*x**4 + 4*x**2 + 6*x - 6 ``` ## Dividing Polynomials ### A Simple Example \begin{equation}(4x + 6x^{2}) \div 2x \end{equation} \begin{equation}\frac{4x + 6x^{2}}{2x} \end{equation} \begin{equation}\frac{4x}{2x} + \frac{6x^{2}}{2x}\end{equation} \begin{equation}2 + 3x\end{equation} ``` x = randint(1,100) (4*x + 6*x**2) / (2*x) == 2 + 3*x ``` ### Long Division \begin{equation}(x^{2} + 2x - 3) \div (x - 2) \end{equation} \begin{equation} x - 2 |\overline{x^{2} + 2x - 3} \end{equation} \begin{equation} \;\;\;\;x \end{equation} \begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation} \begin{equation} \;x^{2} -2x \end{equation} \begin{equation} \;\;\;\;x \end{equation} \begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation} \begin{equation}- (x^{2} -2x) \end{equation} \begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;4x -3} \end{equation} \begin{equation} \;\;\;\;\;\;\;\;x + 4 \end{equation} \begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation} \begin{equation}- (x^{2} -2x) \end{equation} \begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;4x -3} \end{equation} \begin{equation}- (\;\;\;\;\;\;\;\;\;\;\;\;4x -8) \end{equation} \begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;5} \end{equation} \begin{equation}x + 4 + \frac{5}{x-2} \end{equation} ``` x = randint(3,100) (x**2 + 2*x -3)/(x-2) == x + 4 + (5/(x-2)) ``` # Factors 16 can be written as - 1 x 16 - 2 x 8 - 4 x 4 In other words, 1, 2, 4, and 8 are factors of 16. ## Expressing a Polynomial as a Product of Polynomials \begin{equation}-6x^{2}y^{3} \end{equation} \begin{equation}(2xy^{2})(-3xy) \end{equation} As another example \begin{equation}(x + 2)(2x^{2} - 3y + 2) = 2x^{3} + 4x^{2} - 3xy + 2x - 6y + 4 \end{equation} then **x+2** and **2x<sup>2</sup> - 3y + 2** are both factors of **2x<sup>3</sup> + 4x<sup>2</sup> - 3xy + 2x - 6y + 4** ## Greatest Common Factor | 16 | 24 | |--------|--------| | 1 x 16 | 1 x 24 | | 2 x
8 | 2 x 12 | | 2 x **8** | 3 x **8** | | 4 x 4 | 4 x 6 | 8 is the greatest common divisor of 16 and 24 \begin{equation}15x^{2}y\;\;\;\;\;\;\;\;9xy^{3}\end{equation} What is the greatest common factor of these two monomials? ## Greatest Common Factor First look at the coefficients; both contain **3** - 3 x 5 = 15 - 3 x 3 = 9 Next look at the ***x*** terms: x<sup>2</sup> and x. Finally look at the ***y*** terms: y and y<sup>3</sup>. The greatest common factor is \begin{equation}3xy\end{equation} ## Greatest Common Factor As we can see, the greatest common factor always includes - the greatest common divisor of the coefficients - the smallest exponent of each variable Verify with polynomial division: \begin{equation}\frac{15x^{2}y}{3xy}\;\;\;\;\;\;\;\;\frac{9xy^{3}}{3xy}\end{equation} \begin{equation}3xy(5x) = 15x^{2}y\end{equation} \begin{equation}3xy(3y^{2}) = 9xy^{3}\end{equation} ``` x = randint(1,100) y = randint(1,100) print((3*x*y)*(5*x) == 15*x**2*y) print((3*x*y)*(3*y**2) == 9*x*y**3) ``` ## Factoring Using the Coefficients' Greatest Common Divisor \begin{equation}6x + 15y \end{equation} \begin{equation}6x + 15y = 3(2x) + 3(5y) \end{equation} \begin{equation}6x + 15y = 3(2x) + 3(5y) = \mathbf{3(2x + 5y)} \end{equation} ``` x = randint(1,100) y = randint(1,100) (6*x + 15*y) == (3*(2*x) + 3*(5*y)) == (3*(2*x + 5*y)) ``` ## Factoring Using the Greatest Common Factor \begin{equation}15x^{2}y + 9xy^{3}\end{equation} \begin{equation}3xy(5x) = 15x^{2}y\end{equation} \begin{equation}3xy(3y^{2}) = 9xy^{3}\end{equation} \begin{equation}15x^{2}y + 9xy^{3} = \mathbf{3xy(5x + 3y^{2})}\end{equation} ``` x = randint(1,100) y = randint(1,100) (15*x**2*y + 9*x*y**3) == (3*x*y*(5*x + 3*y**2)) ``` ## Factoring Using the Difference of Squares \begin{equation}x^{2} - 9\end{equation} \begin{equation}x^{2} - 3^{2}\end{equation} \begin{equation}(x - 3)(x + 3)\end{equation} ``` x = randint(1,100) (x**2 - 9) == (x - 3)*(x + 3) ``` ## Factoring Perfect Squares \begin{equation}x^{2} + 10x + 25\end{equation} \begin{equation}(x + 5)(x + 5)\end{equation} \begin{equation}(x + 5)^{2}\end{equation} In general \begin{equation}(a + b)^{2} = a^{2} + b^{2}+ 2ab \end{equation} ``` a = randint(1,100) b = randint(1,100) a**2 + b**2 + (2*a*b) == (a + b)**2 ``` # Quadratic Equations \begin{equation}y = 2(x - 1)(x + 2)\end{equation} \begin{equation}y = 2x^{2} + 2x - 4\end{equation} ``` x = np.array(range(-9, 9)) y = 2 * x **2 + 2 * x - 4 plt.plot(x, y, color="grey") # plot the quadratic curve (a parabola)
plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() ``` > What shape will this one be? \begin{equation}y = -2x^{2} + 6x + 7\end{equation} ``` x = np.array(range(-8, 12)) y = -2 * x ** 2 + 6 * x + 7 plt.plot(x, y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() ``` ## The Vertex of a Parabola \begin{equation}y = ax^{2} + bx + c\end{equation} ***a***, ***b***, ***c*** are coefficients. The parabola it produces has a vertex, which is either the highest point or the lowest point. ``` def plot_parabola(a, b, c): vx = (-1*b)/(2*a) # vertex vy = a*vx**2 + b*vx + c minx = int(vx - 10) # range maxx = int(vx + 11) x = np.array(range(minx, maxx)) y = a * x ** 2 + b * x + c miny = y.min() maxy = y.max() plt.plot(x, y, color="grey") # plot the curve plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() sx = [vx, vx] # draw the axis of symmetry sy = [miny, maxy] plt.plot(sx, sy, color='magenta') plt.scatter(vx,vy, color="red") # plot the vertex plot_parabola(2, 2, -4) plt.show() plot_parabola(-2, 3, 5) plt.show() ``` ## Where a Parabola Crosses the x-Axis (Solutions of a Quadratic Equation) \begin{equation}y = 2(x - 1)(x + 2)\end{equation} \begin{equation}2(x - 1)(x + 2) = 0\end{equation} \begin{equation}x = 1\end{equation} \begin{equation}x = -2\end{equation} ``` # plot the curve plot_parabola(2, 2, -4) # plot the intersection points x1 = -2 x2 = 1 plt.scatter([x1,x2],[0,0], color="green") plt.annotate('x1',(x1, 0)) plt.annotate('x2',(x2, 0)) plt.show() ``` ## The Quadratic Formula \begin{equation}ax^{2} + bx + c = 0\end{equation} \begin{equation}x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}\end{equation} ``` def plot_parabola_from_formula (a, b, c): plot_parabola(a, b, c) # plot the curve x1 = (-b + (b*b - 4*a*c)**0.5)/(2 * a) x2 = (-b - (b*b - 4*a*c)**0.5)/(2 * a) plt.scatter([x1, x2], [0, 0], color="green") # plot the solutions plt.annotate('x1', (x1, 0)) plt.annotate('x2', (x2, 0)) plt.show() plot_parabola_from_formula (2, -16, 2) ``` # Functions \begin{equation}f(x) = x^{2} + 2\end{equation} \begin{equation}f(3) = 11\end{equation} ``` def f(x): return x**2 + 2 f(3) x = np.array(range(-100, 101)) plt.xlabel('x') plt.ylabel('f(x)') plt.grid() plt.plot(x, f(x), color='purple') plt.show() ``` ##
The Domain of a Function \begin{equation}f(x) = x + 1, \{x \in \rm I\!R\}\end{equation} \begin{equation}g(x) = (\frac{12}{2x})^{2}, \{x \in \rm I\!R\;\;|\;\; x \ne 0 \}\end{equation} Simplified form: \begin{equation}g(x) = (\frac{12}{2x})^{2},\;\; x \ne 0\end{equation} ``` def g(x): if x != 0: return (12/(2*x))**2 x = range(-100, 101) y = [g(a) for a in x] print(g(0.1)) plt.xlabel('x') plt.ylabel('g(x)') plt.grid() plt.plot(x, y, color='purple') # plot the excluded point; if we used values close to 0, the function's shape near that point would become invisible plt.plot(0, g(1), color='purple', marker='o', markerfacecolor='w', markersize=8) plt.show() ``` \begin{equation}h(x) = 2\sqrt{x}, \{x \in \rm I\!R\;\;|\;\; x \ge 0 \}\end{equation} ``` def h(x): if x >= 0: return 2 * np.sqrt(x) x = range(-100, 101) y = [h(a) for a in x] plt.xlabel('x') plt.ylabel('h(x)') plt.grid() plt.plot(x, y, color='purple') # mark the boundary plt.plot(0, h(0), color='purple', marker='o', markerfacecolor='purple', markersize=8) plt.show() ``` \begin{equation}j(x) = x + 2,\;\; x \ge 0 \text{ and } x \le 5\end{equation} \begin{equation}\{x \in \rm I\!R\;\;|\;\; 0 \le x \le 5 \}\end{equation} ``` def j(x): if x >= 0 and x <= 5: return x + 2 x = range(-100, 101) y = [j(a) for a in x] plt.xlabel('x') plt.ylabel('j(x)') plt.grid() plt.plot(x, y, color='purple') # the two boundary points plt.plot(0, j(0), color='purple', marker='o', markerfacecolor='purple', markersize=8) plt.plot(5, j(5), color='purple', marker='o', markerfacecolor='purple', markersize=8) plt.show() ``` ### A Step Function \begin{equation} k(x) = \begin{cases} 0, & \text{if } x = 0, \\ 1, & \text{if } x = 100 \end{cases} \end{equation} ``` def k(x): if x == 0: return 0 elif x == 100: return 1 x = range(-100, 101) y = [k(a) for a in x] plt.xlabel('x') plt.ylabel('k(x)') plt.grid() plt.scatter(x, y, color='purple') plt.show() ``` ### The Range of a Function \begin{equation}p(x) = x^{2} + 1\end{equation} \begin{equation}\{p(x) \in \rm I\!R\;\;|\;\; p(x) \ge 1 \}\end{equation} ``` def p(x): return x**2 + 1 x = np.array(range(-100, 101)) plt.xlabel('x') plt.ylabel('p(x)') plt.grid() plt.plot(x, p(x),
color='purple') plt.show() ```
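The quadratic formula from the earlier section can be cross-checked with `numpy.roots`, which computes the roots of a polynomial from its coefficient list; the coefficients 2, -16, 2 below match the final `plot_parabola_from_formula` call:

```python
import numpy as np

a, b, c = 2, -16, 2
roots = np.roots([a, b, c])  # roots of a*x**2 + b*x + c = 0
x1 = (-b + (b*b - 4*a*c)**0.5) / (2*a)  # quadratic formula
x2 = (-b - (b*b - 4*a*c)**0.5) / (2*a)
print(sorted(roots), sorted([x1, x2]))  # the two pairs agree
```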
# Modeling and Simulation in Python Chapter 20 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0) ``` # Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import functions from the modsim.py module from modsim import * ``` ### Dropping pennies I'll start by getting the units we need from Pint. ``` m = UNITS.meter s = UNITS.second ``` And defining the initial state. ``` init = State(y=381 * m, v=0 * m/s) ``` Acceleration due to gravity is about 9.8 m / s$^2$. ``` g = 9.8 * m/s**2 ``` When we call `odeint`, we need an array of timestamps where we want to compute the solution. I'll start with a duration of 10 seconds. ``` t_end = 10 * s ``` Now we make a `System` object. ``` system = System(init=init, g=g, t_end=t_end) ``` And define the slope function. ``` def slope_func(state, t, system): """Compute derivatives of the state. state: position, velocity t: time system: System object containing `g` returns: derivatives of y and v """ y, v = state unpack(system) dydt = v dvdt = -g return dydt, dvdt ``` It's always a good idea to test the slope function with the initial conditions. ``` dydt, dvdt = slope_func(init, 0, system) print(dydt) print(dvdt) ``` Now we're ready to call `run_ode_solver` ``` results, details = run_ode_solver(system, slope_func, max_step=0.5*s) details.message ``` Here are the results: ``` results ``` And here's position as a function of time: ``` def plot_position(results): plot(results.y, label='y') decorate(xlabel='Time (s)', ylabel='Position (m)') plot_position(results) savefig('figs/chap09-fig01.pdf') ``` ### Onto the sidewalk To figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value. 
``` t_crossings = crossings(results.y, 0) ``` For this example there should be just one crossing, the time when the penny hits the sidewalk. ``` t_sidewalk = t_crossings[0] * s ``` We can compare that to the exact result. Without air resistance, we have $v = -g t$ and $y = 381 - g t^2 / 2$ Setting $y=0$ and solving for $t$ yields $t = \sqrt{\frac{2 y_{init}}{g}}$ ``` sqrt(2 * init.y / g) ``` The estimate is accurate to about 10 decimal places. ## Events Instead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**. Here's an event function that returns the height of the penny above the sidewalk: ``` def event_func(state, t, system): """Return the height of the penny above the sidewalk. """ y, v = state return y ``` And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate. ``` results, details = run_ode_solver(system, slope_func, events=event_func) details ``` The message from the solver indicates the solver stopped because the event we wanted to detect happened. Here are the results: ``` results ``` With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred: ``` t_sidewalk = get_last_label(results) * s ``` Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end. We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array: ``` details.t_events[0][0] * s ``` The result is accurate to about 15 decimal places. We can also check the velocity of the penny when it hits the sidewalk: ``` v_sidewalk = get_last_value(results.v) * m / s ``` And convert to kilometers per hour. 
``` km = UNITS.kilometer h = UNITS.hour v_sidewalk.to(km / h) ``` If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h. So it's a good thing there is air resistance. ## Under the hood Here is the source code for `crossings` so you can see what's happening under the hood: ``` %psource crossings ``` The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html). And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. ### Exercises **Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate): "If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed." Use `run_ode_solver` to answer this question. Here are some suggestions about how to proceed: 1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons. 2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun. 3. Express your answer in days, and plot the results as millions of kilometers versus days. If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them. 
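Since `run_ode_solver` wraps `scipy.integrate.solve_ivp` (linked above), the event-termination mechanism can also be sketched directly in SciPy. A minimal standalone version of the falling-penny problem with a terminal event, no modsim dependency, units dropped for brevity:

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.8                 # m/s**2
y0, v0 = 381.0, 0.0     # initial height (m) and velocity (m/s)

def slope(t, state):
    y, v = state
    return [v, -g]

def hit_sidewalk(t, state):
    return state[0]           # height above the sidewalk

hit_sidewalk.terminal = True   # stop the solver when the event fires
hit_sidewalk.direction = -1    # only trigger on downward crossings

sol = solve_ivp(slope, [0, 30], [y0, v0], events=hit_sidewalk)
t_fall = sol.t_events[0][0]
print(t_fall)                  # about 8.818 s
print(np.sqrt(2 * y0 / g))     # analytic result agrees
```

Setting `terminal = True` on the event function is what makes `solve_ivp` stop at the crossing instead of integrating through it.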
You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).

```
init = State(r=149.6e9, v=0)   # initial Earth-Sun distance in meters

G = 6.67e-11       # gravitational constant
Msun = 1.989e30    # mass of the Sun, kg
t_end = 100e6      # seconds

rSun = 695500e3    # radius of the Sun, in meters (SI units throughout)
rEarth = 6378e3    # radius of the Earth, in meters

system = System(init=init, G=G, Msun=Msun, t_end=t_end, r_final=rSun + rEarth)

def slope_func(state, t, system):
    """Compute derivatives of the state.

    state: distance, velocity
    t: time
    system: System object containing `G` and `Msun`

    returns: derivatives of r and v
    """
    r, v = state
    unpack(system)

    drdt = v
    dvdt = -(G*Msun) / r**2

    return drdt, dvdt

drdt, dvdt = slope_func(init, 0, system)
print(drdt)
print(dvdt)

def event_func(state, t, system):
    """Return the distance between the surface of the Earth and the surface of the Sun."""
    r, v = state
    return r - system.r_final

results, details = run_ode_solver(system, slope_func, events=event_func, max_step=100000)
details

def plot_position(results):
    plot(results.r, label='r')
    decorate(xlabel='Time (s)', ylabel='Position (m)')

plot_position(results)
savefig('figs/chap09-fig01.pdf')

results.index[-1]  # time of impact, in seconds

# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
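As a sanity check on the simulated answer, radial free-fall from rest has a closed-form solution: it is a degenerate Kepler orbit with semi-major axis $r_0/2$, giving $t = \frac{\pi}{2}\sqrt{r_0^3/(2 G M_{sun})}$. Plugging in the values used above (pure Python, ignoring the finite radii of the two bodies):

```python
import math

G = 6.67e-11        # gravitational constant, m**3 / (kg s**2)
Msun = 1.989e30     # mass of the Sun, kg
r0 = 149.6e9        # initial Earth-Sun distance, m

# Free-fall time from rest at r0, treating both bodies as point masses
t_fall = (math.pi / 2) * math.sqrt(r0**3 / (2 * G * Msun))
print(t_fall / 86400)   # roughly 65 days
```

The simulation result, converted from seconds to days, should land close to this value.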
Here we build LSTM recurrent neural networks to predict the monthly sales of alcohol given previous sales. ``` import matplotlib.pyplot as plt import pandas as pd import numpy as np from tensorflow import keras from tensorflow.keras import layers from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error from google.colab import drive drive.mount('/content/drive') %cd /content/drive/MyDrive/Alcohol-Sales-Prediction df=pd.read_csv("data.csv") df.head() plt.figure(figsize = (18,9)) plt.plot(df['Date'], df['Sales']) plt.xticks(np.arange(0,df.shape[0],12), rotation = 45) plt.xlabel('Date', fontsize=18) plt.ylabel('Sales', fontsize=18) plt.show() ``` We use the first 264 rows (out of 325 rows) for training. ``` num_train = 264 num_test = 61 ``` First we normalize the data using the Min-Max scaler. ``` training_set = df.iloc[:num_train, 1:2].values scaler = MinMaxScaler(feature_range = (0, 1)) training_set_scaled = scaler.fit_transform(training_set) ``` There is clear seasonality in the monthly data, so we use the sliding window approach with window size 12. ``` X_train = [] y_train = [] for i in range(12, num_train): X_train.append(training_set_scaled[i-12:i, 0]) y_train.append(training_set_scaled[i, 0]) X_train, y_train = np.array(X_train), np.array(y_train) X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1)) ``` y_train contains sales in rows 12 to 263 of the dataset, and each corresponding array in X_train contains the sales in the previous 12 months. ``` X_train.shape ``` Build an LSTM model with 2 hidden layers. ``` LSTM_model = keras.Sequential( [ keras.Input(shape=(X_train.shape[1], 1)), layers.LSTM(12, return_sequences=True), layers.LSTM(12), layers.Dense(1) ] ) LSTM_model.summary() LSTM_model.compile(loss = 'mean_squared_error', optimizer = 'adam') LSTM_model.fit(X_train, y_train, epochs = 100, batch_size = 1, verbose = 2) ``` We evaluate our model on the alcohol sales for the last 61 rows. 
``` test_set = df.iloc[num_train - 12:, 1:2].values test_set_scaled = scaler.transform(test_set) ``` test_set contains the last 73 rows of the dataset. ``` X_test = [] for i in range(12, num_test+12): X_test.append(test_set_scaled[i-12:i, 0]) X_test = np.array(X_test) X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1)) X_test.shape ``` Obtain and denormalize the predictions, then compute the MSE with respect to the test data. ``` LSTM_predict_scaled = LSTM_model.predict(X_test) LSTM_predict = scaler.inverse_transform(LSTM_predict_scaled) print(f"Mean Squared Error between descaled predictions and test values is {mean_squared_error(df.iloc[num_train:, 1:2].values, LSTM_predict)}.") ``` Plot the predicted sales with the actual sales for the test data. ``` plt.figure(figsize = (18,9)) plt.plot(df.iloc[num_train:, 0], df.iloc[num_train:, 1:2].values, color = "red", label = "Actual Value") plt.plot(df.iloc[num_train:, 0], LSTM_predict, color = "blue", label = "Predicted Value") plt.xticks(np.arange(0,num_test,12)) plt.title('LSTM Alcohol Sales Prediction', fontsize=18) plt.xlabel('Date', fontsize=18) plt.ylabel('Sales', fontsize=18) plt.legend() plt.show() ``` We see that the model tends to predict lower values than the actual sales. The model also performs worse for larger sales values; in particular, it becomes less sensitive to changes in input, as shown by the flat regions near the end of the graph. This is due to the increasing trend of the sales data: the test data exceeds the range of values seen during training. Because LSTMs use tanh and sigmoid activations, large inputs can easily saturate the neurons, which leads to relatively constant outputs. Also note that the variance of the alcohol sales increases with the mean. We modify the LSTM model above by using a different normalization technique. For training, we divide each input array by its first element and then subtract 1 (elementwise). 
We do the same thing for the output observations, using the same values as the corresponding inputs for division. ``` X_train_new = [] y_train_new = [] for i in range(12, num_train): X_train_new.append([x / training_set[i-12, 0] - 1 for x in training_set[i-11:i, 0]]) y_train_new.append(training_set[i, 0] / training_set[i-12, 0] - 1) X_train_new, y_train_new = np.array(X_train_new), np.array(y_train_new) X_train_new = np.reshape(X_train_new, (X_train_new.shape[0], X_train_new.shape[1], 1)) ``` We do not include the first element of each input in X_train_new as it is always normalized to 0. ``` X_train_new.shape ``` Build another LSTM model with 2 hidden layers. ``` LSTM_model_new = keras.Sequential( [ keras.Input(shape=(X_train_new.shape[1], 1)), layers.LSTM(11, return_sequences=True), layers.LSTM(11), layers.Dense(1) ] ) LSTM_model_new.summary() LSTM_model_new.compile(loss = 'mean_squared_error', optimizer = 'adam') LSTM_model_new.fit(X_train_new, y_train_new, epochs = 100, batch_size = 1, verbose = 2) ``` Again we evaluate our new model on the alcohol sales for the last 61 rows. X_test_new is computed in the same way as for the training inputs. ``` X_test_new = [] for i in range(12, num_test+12): X_test_new.append([x / test_set[i-12, 0] - 1 for x in test_set[i-11:i, 0]]) X_test_new = np.array(X_test_new) X_test_new = np.reshape(X_test_new, (X_test_new.shape[0], X_test_new.shape[1], 1)) ``` Obtain and descale the predictions using the first elements of the inputs, then compute the MSE with respect to the test data. ``` LSTM_predict_new_scaled = LSTM_model_new.predict(X_test_new) LSTM_predict_new = np.multiply(np.array(LSTM_predict_new_scaled.flatten()) + 1, np.array(test_set[0:-12, 0])) print(f"Mean Squared Error between descaled predictions and test values is {mean_squared_error(df.iloc[num_train:, 1:2].values, LSTM_predict_new)}.") ``` The Mean Squared Error is much lower than that of the original model. 
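As a quick illustration (made-up numbers, not the sales data), the window normalization used here and its inverse round-trip like this:

```python
import numpy as np

window = np.array([100.0, 110.0, 95.0, 120.0])  # hypothetical 4-month window
base = window[0]

# Divide by the window's first element and subtract 1;
# the first element itself is dropped since it always maps to 0
normalized = window[1:] / base - 1
print(normalized)        # 0.1, -0.05, 0.2

# A "prediction" in normalized space is descaled with the same base value
pred_scaled = 0.15
pred = (pred_scaled + 1) * base
print(pred)              # 115.0
```

This is why the descaling step above multiplies the flattened predictions (plus 1) by the first element of each corresponding test window.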
``` plt.figure(figsize = (18,9)) plt.plot(df.iloc[num_train:, 0], df.iloc[num_train:, 1:2].values, color = "red", label = "Actual Value") plt.plot(df.iloc[num_train:, 0], LSTM_predict_new, color = "blue", label = "Predicted Value") plt.xticks(np.arange(0,num_test,12)) plt.title('Modified LSTM Alcohol Sales Prediction', fontsize=18) plt.xlabel('Date', fontsize=18) plt.ylabel('Sales', fontsize=18) plt.legend() plt.show() ``` The modified LSTM model fits the test dataset significantly better than the previous model, and the performance doesn't deteriorate as the sales values increase.
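The saturation effect described for the original model is easy to demonstrate in isolation: tanh flattens out for large inputs, and its gradient collapses, so inputs beyond the training range barely move the activation. A standalone NumPy illustration:

```python
import numpy as np

x = np.array([0.5, 1.0, 3.0, 6.0])
y = np.tanh(x)
print(y)            # approaches 1 quickly: ~0.46, ~0.76, ~0.995, ~0.99999

# The gradient tanh'(x) = 1 - tanh(x)**2 collapses for large x,
# so further increases in the input barely change the output
print(1 - y**2)
```

This is one reason the relative-change normalization helps: it keeps the inputs near zero, where tanh and sigmoid are still responsive.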
# Introduction to Seaborn *** We got a good glimpse of the data. But that's the thing with data science: the more you get involved, the harder it is to stop exploring. Now we want to **analyze** the data in order to extract some insights. We can use the Seaborn library for that. We can use Seaborn to do both **univariate and multivariate analysis**. How? We will see soon. ## So what is Seaborn? (1/2) *** Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics. Some of the features that Seaborn offers are: * Several built-in themes for styling matplotlib graphics * Tools for choosing color palettes to make beautiful plots that reveal patterns in your data * Functions for visualizing univariate and bivariate distributions or for comparing them between subsets of data ## So what is Seaborn? (2/2) *** * Tools that fit and visualize linear regression models for different kinds of independent and dependent variables * Functions that visualize matrices of data and use clustering algorithms to discover structure in those matrices * A function to plot statistical timeseries data with flexible estimation and representation of uncertainty around the estimate * High-level abstractions for structuring grids of plots that let you easily build complex visualizations <div class="alert alert-block alert-success">**You can import Seaborn as below:**</div> ``` import seaborn as sns ``` # **Scikit-learn** --- Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent interface in Python. It features various algorithms like support vector machines, random forests, and k-nearest neighbours, and it also supports Python numerical and scientific libraries like NumPy and SciPy. Some popular groups of models provided by scikit-learn include: **Clustering**: for grouping unlabeled data such as KMeans. 
**Cross Validation:** for estimating the performance of supervised models on unseen data. **Datasets:** for test datasets and for generating datasets with specific properties for investigating model behavior. **Dimensionality Reduction:** for reducing the number of attributes in data for summarization, visualization and feature selection, such as Principal Component Analysis. **Ensemble methods:** for combining the predictions of multiple supervised models. **Feature extraction:** for defining attributes in image and text data. **Feature selection:** for identifying meaningful attributes from which to create supervised models. **Parameter Tuning:** for getting the most out of supervised models. **Manifold Learning:** for summarizing and depicting complex multi-dimensional data. **Supervised Models:** a vast array not limited to generalized linear models, discriminant analysis, Naive Bayes, lazy methods, neural networks, support vector machines and decision trees. ## **Linear Regression for Machine Learning** Linear regression attempts to model the relationship between two variables by fitting a linear equation to observed data. One variable is considered to be an explanatory variable, and the other is considered to be a dependent variable. For example, a modeler might want to relate the weights of individuals to their heights using a linear regression model. Before attempting to fit a linear model to observed data, a modeler should first determine whether or not there is a relationship between the variables of interest. This does not necessarily imply that one variable causes the other (for example, higher GATE scores do not cause higher college grades), but that there is some significant association between the two variables. A linear regression line has an equation of the form **Y = a + bX**, where **X** is the explanatory variable and **Y** is the dependent variable. The slope of the line is b, and a is the intercept (the value of y when x = 0). 
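For the simple line $Y = a + bX$, least squares gives closed forms: $b = \mathrm{cov}(X, Y)/\mathrm{var}(X)$ and $a = \bar{Y} - b\bar{X}$. A small sketch on synthetic data (made-up numbers, not the housing dataset used below):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 50)
Y = 2.0 + 3.0 * X + rng.normal(0, 0.5, 50)   # true line: a=2, b=3, plus noise

# Closed-form least-squares slope and intercept
b = np.cov(X, Y, bias=True)[0, 1] / np.var(X)
a = Y.mean() - b * X.mean()
print(a, b)   # recovered values close to 2 and 3
```

The `LinearRegression` estimator used later in this notebook computes exactly these quantities (generalized to multiple predictors).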
``` # Necessary Imports import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import warnings from sklearn.linear_model import LinearRegression ``` ## Dataset *** Let's start by loading the dataset. We'll be using two `.csv` files: one having only one predictor and the other having multiple predictors. Since the target variable (we'll find out what target variables and predictors are below) is **quantitative/continuous**, this is well suited to regression problems. Let's start by loading the data for univariate analysis. ``` data = pd.read_csv("house_prices.csv") data.info() data.head() ``` In order to learn to make predictions, it is important to learn what a predictor is. ## So what is a predictor? *** How could you tell whether a person went to a tier 1, 2 or 3 business college in India? Simple: for someone determined to pursue an MBA degree, higher CAT scores (or GPA) lead to more college admissions! So the CAT score here is known as a predictor, and the variable of interest is known as the target variable. ## Predictors & Target Variable for our dataset *** Here, our target variable would be as mentioned above ____________ ** What could be the predictors for our target variable? 
** Let's go with the **LotArea**. We want to see if the price of a house is really affected by its area. Intuitively, we all know the outcome, but let's try to understand why we're doing this. ## Plotting our data *** - Starting simple, let's just check what our data looks like in a scatter plot where: - Area is taken along the X-axis - Price is taken along the Y-axis ``` import matplotlib.pyplot as plt plt.scatter(data['LotArea'], data['SalePrice']) plt.title('House pricing') plt.xlabel('Area') plt.ylabel('Price') plt.show() import matplotlib.pyplot as plt plt.scatter(data.LotArea, data.SalePrice) plt.plot(data.LotArea, 30000 + 15*data.LotArea, "r-") plt.title('House pricing') plt.xlabel('Area') plt.ylabel('Price') plt.show() ``` ## Is there a relation ? *** - Looking at the plot above, we can see an upward trend in the house prices as the area of the house increases - We can say that as the area of a house increases, its price increases too. <br/> Now, let's say we want to predict the price of a house whose area is 14000 sq feet; how should we go about it? ## Is there a relation ? *** ``` plt.scatter(data['LotArea'], data['SalePrice']) plt.axvline(x=14000,linewidth='1',color='r') plt.title('House pricing') plt.xlabel('Area') plt.ylabel('Price') plt.show() ``` ## Which line to choose? *** As you saw, there are many lines which would seem to fit reasonably well. 
Consider the following lines: $$ price = 30000 + 15 \cdot area\\ price = 10000 + 17 \cdot area\\ price = 50000 + 12 \cdot area $$ <div class="alert alert-block alert-success">**Let's try and plot them and see if they are a good fit**</div> ``` import matplotlib.pyplot as plt plt.scatter(data.LotArea, data.SalePrice) plt.plot(data.LotArea, 30000 + 15*data.LotArea, "r-") plt.plot(data.LotArea, 10000 + 17*data.LotArea, "y-") plt.plot(data.LotArea, 50000 + 12*data.LotArea, "c-") plt.title(' House pricing') plt.xlabel('Area') plt.ylabel('Price') plt.show() ``` ## Which line to choose? *** As you can see, although all three seem like a good fit, they are quite different from each other, and in the end they will result in very different predictions. For example, for house area = 9600, the predictions for the red, yellow and cyan lines are: ``` # red line: print("red line:", 30000 + 15*9600) # <-- Inserted value 9600 inplace of LotArea # yellow line: print('yellow line:', 10000 + 17*9600) # <-- Inserted value 9600 inplace of LotArea # cyan line: print('cyan line:', 50000 + 12*9600) # <-- Inserted value 9600 inplace of LotArea ``` ## Which line to choose? *** As you can see, the price predictions vary from each other significantly. So how do we choose the best line? Well, we can define a function that measures how near or far the prediction is from the actual value. If we consider the actual and predicted values as points in space, we can just calculate the distance between these two points! This function is defined as: $$(Y_{pred}-Y_{actual})^2$$ The farther apart the points, the greater the distance and the greater the value of the function! 
It is known as the **cost function**, and since it captures the square of the distance, it is known as the **least-squares cost function**. The idea is to **minimize** the cost function to get the best fitting line. ## Introducing *Linear Regression* : *** Linear regression using the least-squares cost function is known as **Ordinary Least Squares Linear Regression**. This allows us to analyze the relationship between two quantitative variables and derive some meaningful insights. ## Notations ! *** We will start to use the following notation, as it helps us represent the problem in a concise way. * $x^{(i)}$ denotes the predictor(s) - in our case it's the Area * $y^{(i)}$ denotes the target variable (Price) A pair ($x^{(i)}$ , $y^{(i)}$) is called a training example. A dataset containing **"m"** training examples or observations, { $x^{(i)}$ , $y^{(i)}$ ; i = 1, . . . , m}, is called a **training set**. In this example, m = 1326 (the number of rows). For example, the 2nd training example, ($x^{(2)}$ , $y^{(2)}$), corresponds to **(9600, 181500)**. ## **Cost Function - Why is it needed ?** * An ideal case would be when all the individual points in the scatter plot fall directly on the line, OR a straight line passes through all the points in our plot, but in reality that rarely happens * We can see that for a particular Area, there is a difference between the Price given by our data point (the correct observation) and by the line (the predicted observation or fitted value) So how can we mathematically capture such differences and represent them? ### **Cost Function - Mathematical Representation** We choose θs so that predicted values are as close to the actual values as possible. We can define a mathematical function to capture the difference between the predicted and actual values. This function is known as the cost function and is denoted by J(θ): $$J(θ) = \frac{1}{2m} \sum _{i=1}^m (h_\theta(X^{(i)})-Y^{(i)})^2$$ θ is the coefficient of 'x' in our linear model. 
It measures the effect a unit change in 'x' has on 'y'. Here, we need to figure out the values of the intercept and coefficients so that the cost function is minimized. We do this with a very important and widely used algorithm: **Gradient Descent** --- ## **Optimizing using gradient descent** --- - Gradient Descent is an iterative method that starts with some “initial random value” for θ, and repeatedly changes θ to make J(θ) smaller, until hopefully it converges to a value of θ that minimizes J(θ) - It repeatedly performs an update on θ as shown: $$ \theta_{j} := \theta_{j}-\alpha \frac{\partial }{\partial \theta_{j}}J(\theta) $$ - Here α is called the learning rate. This is a very natural algorithm that repeatedly takes a step in the direction of steepest decrease of J(θ) ## Gradient Descent Algorithm *** To get the optimal value of θ, perform the following algorithm, known as the **Batch Gradient Descent Algorithm**: - Assume an initial θ - Calculate $h_\theta(x^{(i)})$ for i=1 to m - Calculate J(θ). Stop when the value of J(θ) reaches a global/local minimum - Calculate $\thinspace\sum_{i=1}^{m}(y^{(i)}-h_{\theta}(x^{(i)}))*x_{j}^{(i)}$ for all $\theta_{j}'s$ - Calculate new $\thinspace\theta_{j}'s$ - Go to step 2 ## Linear Regression in `sklearn` *** **`sklearn`** provides an easy API to fit a linear regression and predict values. Let's see how it works. ``` X = data.LotArea.values[:, np.newaxis] # Reshape into a 2-D column vector, as sklearn expects y = data.SalePrice X # Fitting Simple Linear Regression to the Training set regressor = LinearRegression() regressor.fit(X, y) # Predicting the Test set results y_pred = regressor.predict(X) ``` ## Plotting the Best Fitting Line ``` # Train Part plt.scatter(X, y) plt.plot(X, y_pred, "r-") plt.title('Housing Price ') plt.xlabel('Area') plt.ylabel('Price') plt.show() ``` ## Prediction made Easy *** - Visually, we now have a nice approximation of how Area affects the Price - We can also make a prediction, the easy way of course! 
- For example: If we want to buy a house of 14,000 sq. ft, we can simply draw a vertical line from 14,000 up to our approximated trend line and continue that line towards the y-axis ``` # Train Part plt.scatter(X, y) plt.plot(X, y_pred, "r-") plt.title('Housing Price ') plt.xlabel('Area') plt.ylabel('Price') plt.axvline(x=14000,c='g'); ``` - We can see that for a house whose area is ~ 14,000 we need to pay roughly 200,000 to 225,000 ## Multivariate Linear Regression *** - In univariate linear regression we used only two variables: one dependent and one independent. - Now, we will use multiple independent variables instead of one to predict the price, i.e. the dependent variable. - i.e. the equation for multivariate linear regression is modified as below: $$ y = \theta_{0}+\theta_{1}x_{1}+\theta_{2}x_{2}+\cdots +\theta_{n}x_{n} $$ - So, along with Area we will consider other variables such as Pool, etc. ``` #Loading the data NY_Housing = pd.read_csv("house_prices_multivariate.csv") NY_Housing.head() # making independent and dependent variables from the dataset X = NY_Housing.iloc[:,:-1] # Selecting everything except the last column y = NY_Housing.SalePrice # Fitting Multiple Linear Regression from sklearn.linear_model import LinearRegression regressor = LinearRegression() regressor.fit(X, y) print("intercept:", regressor.intercept_) # This is the y-intercept print("coefficients of predictors:", regressor.coef_) # These are the weights or regression coefficients. ``` ## Predicting the price *** Now let's say I want to predict the price of a house with the following specifications. ``` my_house = X.iloc[155] my_house pred_my_house = regressor.predict(my_house.values.reshape(1, -1)) print("predicted value:", pred_my_house[0]) print("actual value:", y[155]) ``` As you can see, the predicted value is not very far from the actual value. Now let's try to predict the price for all the houses in the dataset. 
``` # Predicting the results y_pred = regressor.predict(X) y_pred[:10] ``` <div class="alert alert-block alert-success">Great! Now, let's put the predicted values next to the actual values and see how good a job we have done!</div> ``` prices = pd.DataFrame({"actual": y, "predicted": y_pred}) prices.head(10) ``` ## Measuring the goodness of fit *** We must say we have done a reasonably good job of predicting the house prices. However, as the number of predictions increases, it would be difficult to manually check the goodness of fit. In such a case, we can use the cost function to check the goodness of fit. <div class="alert alert-block alert-success">Let's first start by finding the mean squared error (MSE)</div> ``` from sklearn.metrics import mean_squared_error mean_squared_error(y_pred, y) ``` <div class="alert alert-block alert-warning">**What do you think about the error value?**</div> As you would notice, the error value seems very high (in billions!). Why did this happen? MSE is a relative measure of goodness of fit: its magnitude depends on the units of the target variable. As it turns out, linear regression depends on certain underlying assumptions. Violation of these assumptions produces poor results. Hence, it would be a good idea to understand these assumptions. ## Assumptions in Linear Regression *** There are some key assumptions that are made whilst dealing with linear regression. These are pretty intuitive and very essential to understand, as they play an important role in finding relationships in our dataset too! Let's discuss these assumptions, their importance and mainly **how we validate these assumptions**! ### Assumption - 1 *** 1) **Linear Relationship Assumption:** The relationship between the response (dependent variable) and feature variables (independent variables) should be linear. 
- **Why it is important:** <div class="alert alert-block alert-info">Linear regression only captures the linear relationship, as it's trying to fit a linear model to the data.</div> - **How do we validate it:** <div class="alert alert-block alert-success">The linearity assumption can be tested using scatter plots.</div> ### Assumption - 2 *** 2) **Little or No Multicollinearity Assumption:** It is assumed that there is little or no multicollinearity in the data. - **Why it is important:** <div class="alert alert-block alert-info">It results in unstable parameter estimates, which makes it very difficult to assess the effect of independent variables.</div> - **How to validate it:** Multicollinearity occurs when the features (or independent variables) are not independent from each other. <div class="alert alert-block alert-success">Pair plots of features help validate.</div> ### Assumption - 3 *** 3) **Homoscedasticity Assumption:** Homoscedasticity describes a situation in which the error term (that is, the “noise” or random disturbance in the relationship between the independent variables and the dependent variable) is the same across all values of the independent variables. - **Why it is important:** <div class="alert alert-block alert-info">Generally, non-constant variance arises in the presence of outliers or extreme leverage values.</div> - **How to validate:** <div class="alert alert-block alert-success">Plot the dependent variable vs. the error.</div> *** In an ideal plot the variance around the regression line is the same for all values of the predictor variable. In this plot we can see that the variance around our regression line is nearly the same, and hence it satisfies the condition of homoscedasticity. ![slide%2055.png](attachment:slide%2055.png) <br/><br/> Image Source : https://en.wikipedia.org/wiki/Homoscedasticity ### Assumption - 4 *** 4) **Little or No autocorrelation in residuals:** There should be little or no autocorrelation in the data. 
Autocorrelation occurs when the residual errors are not independent from each other. - **Why it is important:** <div class="alert alert-block alert-info">The presence of correlation in error terms drastically reduces the model's accuracy. This usually occurs in time series models. If the error terms are correlated, the estimated standard errors tend to underestimate the true standard error.</div> - **How to validate:** <div class="alert alert-block alert-success">Residual vs. time plot: look for seasonal or correlated patterns in the residual values.</div> ### Assumption - 5 *** 5) **Normal Distribution of error terms** - **Why it is important:** <div class="alert alert-block alert-info">Due to the Central Limit Theorem, we may assume that there are lots of underlying factors affecting the process, and the sum of these individual errors will tend to behave like a zero-mean normal distribution. In practice, it seems to be so. </div> - **How to validate:** <div class="alert alert-block alert-success">You can look at a Q-Q plot. </div>
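The Q-Q check can be sketched with SciPy's `probplot`, which fits a line to the Q-Q points and reports their correlation `r` (synthetic residuals here, not the housing model's; in a notebook you would pass `plot=plt` to draw the plot):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
residuals = rng.normal(0, 1, 200)   # stand-in for model residuals

# probplot returns the ordered sample vs. theoretical normal quantiles,
# plus a fitted line; r close to 1 means the Q-Q points are nearly straight
(osm, osr), (slope, intercept, r) = stats.probplot(residuals, dist="norm")
print(round(r, 3))

# A skewed variable fails the check: its Q-Q correlation is noticeably lower
skewed = rng.exponential(1.0, 200)
(_, _), (_, _, r_skew) = stats.probplot(skewed, dist="norm")
print(round(r_skew, 3))
```

A near-unity correlation for the normal sample, and a visibly lower one for the skewed sample, is the numerical counterpart of "points on the line" vs. "points curving away" in the plot.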
# Scaling up ML using Cloud AI Platform In this notebook, we take a previously developed TensorFlow model to predict taxifare rides and package it up so that it can be run in Cloud AI Platform. For now, we'll run this on a small dataset. The model that was developed is rather simplistic, and therefore, the accuracy of the model is not great either. However, this notebook illustrates *how* to package up a TensorFlow model to run it within Cloud AI Platform. Later in the course, we will look at ways to make a more effective machine learning model. ## Environment variables for project and bucket Note that: <ol> <li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos </li> <li> Cloud training often involves saving and restoring model files. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available). A common pattern is to prefix the bucket name by the project id, so that it is unique. Also, for cost reasons, you might want to use a single region bucket. </li> </ol> <b>Change the cell below</b> to reflect your Project ID and bucket name. ``` import os PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 # For Python Code # Model Info MODEL_NAME = 'taxifare' # Model Version MODEL_VERSION = 'v1' # Training Directory name TRAINING_DIR = 'taxi_trained' # For Bash Code os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = BUCKET os.environ['REGION'] = REGION os.environ['MODEL_NAME'] = MODEL_NAME os.environ['MODEL_VERSION'] = MODEL_VERSION os.environ['TRAINING_DIR'] = TRAINING_DIR os.environ['TFVERSION'] = '1.14' # TensorFlow version %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION ``` ### Create the bucket to store model and training data for deploying to Google Cloud Machine Learning Engine Component ``` %%bash # The bucket needs to exist for the gsutil commands in next cell to work gsutil mb -p ${PROJECT} gs://${BUCKET} ``` ### Enable the Cloud Machine Learning Engine API The next command works with the Cloud AI Platform API. In order for the command to work, you must enable the API using the Cloud Console UI. Use this [link.](https://console.cloud.google.com/project/_/apis/library) Then search the API list for Cloud Machine Learning and enable the API before executing the next cell. Allow the Cloud AI Platform service account to read/write to the bucket containing training data. ``` %%bash # This command will fail if the Cloud Machine Learning Engine API is not enabled using the link above. echo "Getting the service account email associated with the Cloud AI Platform API" AUTH_TOKEN=$(gcloud auth print-access-token) SVC_ACCOUNT=$(curl -X GET -H "Content-Type: application/json" \ -H "Authorization: Bearer $AUTH_TOKEN" \ https://ml.googleapis.com/v1/projects/${PROJECT}:getConfig \ | python -c "import json; import sys; response = json.load(sys.stdin); \ print (response['serviceAccount'])") # If this command fails, the Cloud Machine Learning Engine API has not been enabled above. 
echo "Authorizing the Cloud AI Platform account $SVC_ACCOUNT to access files in $BUCKET" gsutil -m defacl ch -u $SVC_ACCOUNT:R gs://$BUCKET gsutil -m acl ch -u $SVC_ACCOUNT:R -r gs://$BUCKET # error message (if bucket is empty) can be ignored. gsutil -m acl ch -u $SVC_ACCOUNT:W gs://$BUCKET ``` ## Packaging up the code Take your code and put it into a standard Python package structure. <a href="taxifare/trainer/model.py">model.py</a> and <a href="taxifare/trainer/task.py">task.py</a> contain the TensorFlow code from earlier (explore the <a href="taxifare/trainer/">directory structure</a>). ``` %%bash find ${MODEL_NAME} %%bash cat ${MODEL_NAME}/trainer/model.py ``` ## Find absolute paths to your data Note the absolute paths below. ``` %%bash echo "Working Directory: ${PWD}" echo "Head of taxi-train.csv" head -1 $PWD/taxi-train.csv echo "Head of taxi-valid.csv" head -1 $PWD/taxi-valid.csv ``` ## Running the Python module from the command-line #### Clean model training dir/output dir ``` %%bash # This is so that the trained model is started fresh each time. However, this needs to be done before # tensorboard is started rm -rf $PWD/${TRAINING_DIR} %%bash # Setup python so it sees the task module which controls the model.py export PYTHONPATH=${PYTHONPATH}:${PWD}/${MODEL_NAME} # Currently set for python 2. To run with python 3 # 1. Replace 'python' with 'python3' in the following command # 2.
Edit trainer/task.py to reflect proper module import method python -m trainer.task \ --train_data_paths="${PWD}/taxi-train*" \ --eval_data_paths=${PWD}/taxi-valid.csv \ --output_dir=${PWD}/${TRAINING_DIR} \ --train_steps=1000 --job-dir=./tmp %%bash ls $PWD/${TRAINING_DIR}/export/exporter/ %%writefile ./test.json {"pickuplon": -73.885262,"pickuplat": 40.773008,"dropofflon": -73.987232,"dropofflat": 40.732403,"passengers": 2} %%bash # This model dir is the model exported after training and is used for prediction model_dir=$(ls ${PWD}/${TRAINING_DIR}/export/exporter | tail -1) # predict using the trained model gcloud ai-platform local predict \ --model-dir=${PWD}/${TRAINING_DIR}/export/exporter/${model_dir} \ --json-instances=./test.json ``` ## Monitor training with TensorBoard To activate TensorBoard within the JupyterLab UI navigate to "<b>File</b>" - "<b>New Launcher</b>". Then double-click the 'Tensorboard' icon on the bottom row. TensorBoard 1 will appear in the new tab. Navigate through the three tabs to see the active TensorBoard. The 'Graphs' and 'Projector' tabs offer very interesting information including the ability to replay the tests. You may close the TensorBoard tab when you are finished exploring. #### Clean model training dir/output dir ``` %%bash # This is so that the trained model is started fresh each time. However, this needs to be done before # tensorboard is started rm -rf $PWD/${TRAINING_DIR} ``` ## Running locally using gcloud ``` %%bash # Use Cloud Machine Learning Engine to train the model in local file system gcloud ai-platform local train \ --module-name=trainer.task \ --package-path=${PWD}/${MODEL_NAME}/trainer \ -- \ --train_data_paths=${PWD}/taxi-train.csv \ --eval_data_paths=${PWD}/taxi-valid.csv \ --train_steps=1000 \ --output_dir=${PWD}/${TRAINING_DIR} ``` Use TensorBoard to examine results.
When I ran it (due to random seeds, your results will be different), the ```average_loss``` (Mean Squared Error) on the evaluation dataset was 187, meaning that the RMSE was around 13. ``` %%bash ls $PWD/${TRAINING_DIR} ``` ## Submit training job using gcloud First copy the training data to the cloud. Then, launch a training job. After you submit the job, go to the cloud console (http://console.cloud.google.com) and select <b>AI Platform | Jobs</b> to monitor progress. <b>Note:</b> Don't be concerned if the notebook stalls (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud. Use the Cloud Console link (above) to monitor the job. ``` %%bash # Clear Cloud Storage bucket and copy the CSV files to Cloud Storage bucket echo $BUCKET gsutil -m rm -rf gs://${BUCKET}/${MODEL_NAME}/smallinput/ gsutil -m cp ${PWD}/*.csv gs://${BUCKET}/${MODEL_NAME}/smallinput/ %%bash OUTDIR=gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR} JOBNAME=${MODEL_NAME}_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME # Clear the Cloud Storage Bucket used for the training job gsutil -m rm -rf $OUTDIR gcloud ai-platform jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=${PWD}/${MODEL_NAME}/trainer \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=BASIC \ --runtime-version=$TFVERSION \ -- \ --train_data_paths="gs://${BUCKET}/${MODEL_NAME}/smallinput/taxi-train*" \ --eval_data_paths="gs://${BUCKET}/${MODEL_NAME}/smallinput/taxi-valid*" \ --output_dir=$OUTDIR \ --train_steps=10000 ``` Don't be concerned if the notebook appears stalled (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud. 
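As an aside, the RMSE values quoted in this notebook are just the square root of the reported `average_loss` (MSE); for the local run described above:

```python
import math

average_loss = 187.0  # MSE reported on the evaluation dataset in the run above
rmse = math.sqrt(average_loss)
print(round(rmse, 2))  # 13.67
```

Your own run will report a different `average_loss` due to random seeds, but the same relationship holds.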
<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b> ## Deploy model Find out the actual name of the subdirectory where the model is stored and use it to deploy the model. Deploying the model will take up to <b>5 minutes</b>. ``` %%bash gsutil ls gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR}/export/exporter ``` #### Deploy model: step 1 - remove version info Before an existing cloud model can be removed, it must have any version info removed. If no such model exists, this command will generate an error, but that is OK. ``` %%bash MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR}/export/exporter | tail -1) echo "MODEL_LOCATION = ${MODEL_LOCATION}" gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME} ``` #### Deploy model: step 2 - remove existing model Now that the version info is removed from an existing model, the actual model can be removed. If no model with the given name is deployed, this command will generate an error, but that is OK.
``` %%bash gcloud ai-platform models delete ${MODEL_NAME} ``` #### Deploy model: step 3 - deploy new model ``` %%bash gcloud ai-platform models create ${MODEL_NAME} --regions $REGION ``` #### Deploy model: step 4 - add version info to the new model ``` %%bash MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR}/export/exporter | tail -1) echo "MODEL_LOCATION = ${MODEL_LOCATION}" gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION ``` ## Prediction ``` %%bash gcloud ai-platform predict --model=${MODEL_NAME} --version=${MODEL_VERSION} --json-instances=./test.json from googleapiclient import discovery from oauth2client.client import GoogleCredentials import json credentials = GoogleCredentials.get_application_default() api = discovery.build('ml', 'v1', credentials=credentials, discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json') request_data = {'instances': [ { 'pickuplon': -73.885262, 'pickuplat': 40.773008, 'dropofflon': -73.987232, 'dropofflat': 40.732403, 'passengers': 2, } ] } parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, MODEL_NAME, MODEL_VERSION) response = api.projects().predict(body=request_data, name=parent).execute() print ("response={0}".format(response)) ``` ## Train on larger dataset I have already followed the steps below and the files are already available. <b> You don't need to do the steps in this comment. </b> In the next chapter (on feature engineering), we will avoid all this manual processing by using Cloud Dataflow. 
Go to http://bigquery.cloud.google.com/ and type the query: <pre> SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, 'nokeyindata' AS key FROM [nyc-tlc:yellow.trips] WHERE trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 AND ABS(HASH(pickup_datetime)) % 1000 == 1 </pre> Note that this is now 1,000,000 rows (i.e. 100x the original dataset). Export this to CSV using the following steps (Note that <b>I have already done this and made the resulting GCS data publicly available</b>, so you don't need to do it.): <ol> <li> Click on the "Save As Table" button and note down the name of the dataset and table. <li> On the BigQuery console, find the newly exported table in the left-hand-side menu, and click on the name. <li> Click on "Export Table" <li> Supply your bucket name and give it the name train.csv (for example: gs://cloud-training-demos-ml/taxifare/ch3/train.csv). Note down what this is. Wait for the job to finish (look at the "Job History" on the left-hand-side menu) <li> In the query above, change the final "== 1" to "== 2" and export this to Cloud Storage as valid.csv (e.g. gs://cloud-training-demos-ml/taxifare/ch3/valid.csv) <li> Download the two files, remove the header line and upload it back to GCS. </ol> <p/> <p/> ## Run Cloud training on 1-million row dataset This took 60 minutes and uses as input 1-million rows. The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). 
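The `ABS(HASH(pickup_datetime)) % 1000 == 1` clause in the query above keeps roughly 0.1% of rows by hashing a field and testing the remainder, which (unlike random sampling) is repeatable. A small illustration of the idea, using `zlib.crc32` as a stand-in hash (BigQuery's `HASH` is a different function; only the mod-bucket trick carries over):

```python
import zlib

def keep_row(key, buckets=1000, target=1):
    """Deterministically keep ~1/buckets of rows based on a hashed key."""
    return zlib.crc32(key.encode()) % buckets == target

# Simulate hashing 100,000 distinct timestamp-like keys.
kept = sum(keep_row(f"2015-01-01 00:{i}") for i in range(100_000))
print(kept)  # roughly 100, i.e. about 0.1% of rows
```

Changing `target` from 1 to 2, as the instructions suggest for the validation split, selects a disjoint ~0.1% sample.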
At the end of the training the loss was 32, but the RMSE (calculated on the validation dataset) was stubbornly at 9.03. So, simply adding more data doesn't help. ``` %%bash XXXXX this takes 60 minutes. if you are sure you want to run it, then remove this line. OUTDIR=gs://${BUCKET}/${MODEL_NAME}/${TRAINING_DIR} JOBNAME=${MODEL_NAME}_$(date -u +%y%m%d_%H%M%S) CRS_BUCKET=cloud-training-demos # use the already exported data echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ai-platform jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=${PWD}/${MODEL_NAME}/trainer \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=STANDARD_1 \ --runtime-version=$TFVERSION \ -- \ --train_data_paths="gs://${CRS_BUCKET}/${MODEL_NAME}/ch3/train.csv" \ --eval_data_paths="gs://${CRS_BUCKET}/${MODEL_NAME}/ch3/valid.csv" \ --output_dir=$OUTDIR \ --train_steps=100000 ``` ## Challenge Exercise Modify your solution to the challenge exercise in d_trainandevaluate.ipynb appropriately. Make sure that you implement training and deployment. Increase the size of your dataset by 10x since you are running on the cloud. Does your accuracy improve? ### Clean-up #### Delete Model : step 1 - remove version info Before an existing cloud model can be removed, it must have any version info removed. ``` %%bash gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME} ``` #### Delete model: step 2 - remove existing model Now that the version info is removed from an existing model, the actual model can be removed. ``` %%bash gcloud ai-platform models delete ${MODEL_NAME} ``` Copyright 2016 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
``` #import all the necessary modules import pandas as pd pd.options.display.max_columns=None import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split from sklearn import metrics from sklearn.tree import DecisionTreeClassifier ``` ### Let us see what the data looks like ``` #read the data df=pd.read_csv(r'pima-indians-diabetes.csv') df.head() # get a glance of the data df.describe() ``` #### From the above details we can see that there is a lot of incorrect data in the columns glucose_conc, diastolic_bp, thickness, insulin, bmi, and skin. These values cannot be zero, so we need some way to correct the data. ``` # Let us see the dependent variable class labels. The data here is roughly 67% non-diabetic and 33% diabetic. df.diabetes.value_counts() #check if any blanks are present in the data df.isnull().sum() ``` ### Now that we have a basic knowledge of the data, let us see visually whether the data is linearly separable From the pairplot below we can see that the data is not linearly separable, so a linear model might not give us good accuracy. ``` sns.pairplot(df,hue='diabetes') ``` ### Now let us see how the columns are correlated with each other. We can either inspect the correlation matrix directly or visualize it with a heatmap. ``` #find the correlation corr_mat=df.corr() plt.figure(figsize=(20,20)) g=sns.heatmap(corr_mat,annot=True,cmap="RdYlGn") ``` ### Feature Engineering: Let us engineer the data now so that it can be fed into the model.
``` #convert the outcome to 0 and 1 from True False #df['diabetes']=df['diabetes'].apply(lambda x:1 if x==True else 0) #df.head() #split the data into x and y X=df.drop(['diabetes'],axis=1).values y=df['diabetes'].values X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.30,random_state=32) ``` ### From here we see that almost all the features have a minimum value of 0, which is not actually possible in reality. Let us see how many data points are like this. ``` for col in df.columns: print(f"Zero values for {col} : {df[df[col]==0].shape[0]}") #impute the data and replace the incorrect zero values with their column means from sklearn.impute import SimpleImputer fill_values=SimpleImputer(missing_values=0,strategy="mean") X_train=fill_values.fit_transform(X_train) X_test=fill_values.transform(X_test) #transform only, so test-set statistics do not leak into the imputer #we will try other preprocessing approaches as well, e.g. with a random forest ``` ### Classification using XGBoost ``` #Let us import the classifier import xgboost xgboost_clf=xgboost.XGBClassifier() xgboost_clf.fit(X_train,y_train) xgboost_clf.get_params(deep=True) #checking the feature importance pd.DataFrame({'Column':df.drop(['diabetes'],axis=1).columns,'Feature Importance':xgboost_clf.feature_importances_}).sort_values(by=['Feature Importance'],ascending=False) #predict the data predicted_val=xgboost_clf.predict(X_test) print(f"Accuracy ={metrics.accuracy_score(y_test,predicted_val)}") ``` ### Let us try to increase the accuracy of the model using hyperparameter tuning. We will use RandomizedSearchCV to do the same.
``` #use hyperparameter tuning to improve the accuracy from sklearn.model_selection import RandomizedSearchCV params={ "learning_rate":[0.05,0.10,0.15,0.20,0.25,0.30], "max_depth":[3,4,5,6,8,10,12,15], "min_child_weight":[1,3,5,7], "gamma":[0.0,0.1,0.2,0.3,0.4], "colsample_bytree":[0.3,0.4,0.5,0.7] } random_search=RandomizedSearchCV(xgboost_clf,param_distributions=params,n_iter=5,scoring='roc_auc',n_jobs=-1,cv=5,verbose=3) random_search.fit(X_train,y_train.ravel()) random_search.best_estimator_ #predict the data xgboost_clf=xgboost.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0.0, gpu_id=-1, importance_type='gain', interaction_constraints='', learning_rate=0.1, max_delta_step=0, max_depth=6, min_child_weight=7, monotone_constraints='()', n_estimators=100, n_jobs=8, num_parallel_tree=1, random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, subsample=1, tree_method='exact', validate_parameters=1, verbosity=None) xgboost_clf.fit(X_train,y_train) predicted_val=xgboost_clf.predict(X_test) print(f"Accuracy ={metrics.accuracy_score(y_test,predicted_val)}") from sklearn.model_selection import cross_val_score score=cross_val_score(xgboost_clf,X_train,y_train.ravel(),cv=10) print(score) print("Accuracy : ",score.mean()) ``` ### So the average accuracy of this model is ~76%
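The idea behind the `RandomizedSearchCV` step above: instead of exhausting the full parameter grid, sample a fixed number of random combinations (`n_iter`) and keep the best-scoring one. A dependency-free sketch, with a made-up `toy_score` function standing in for cross-validated AUC:

```python
import random

# A subset of the grid used in the notebook above.
param_grid = {
    "learning_rate": [0.05, 0.10, 0.15, 0.20, 0.25, 0.30],
    "max_depth": [3, 4, 5, 6, 8, 10, 12, 15],
    "min_child_weight": [1, 3, 5, 7],
}

def toy_score(params):
    # Stand-in for cross-validated scoring; any deterministic
    # function of the parameters works for illustration.
    return -abs(params["learning_rate"] - 0.10) - abs(params["max_depth"] - 6)

random.seed(42)
best_params, best_score = None, float("-inf")
for _ in range(5):  # n_iter=5, as in the notebook
    candidate = {k: random.choice(v) for k, v in param_grid.items()}
    score = toy_score(candidate)
    if score > best_score:
        best_params, best_score = candidate, score

print(best_params)
```

With only 5 draws from a grid of 6 x 8 x 4 = 192 combinations, the search trades exhaustiveness for speed, which is why the tuned accuracy can land near, rather than at, the grid optimum.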
``` # Import Dependencies import pandas as pd import numpy as np import matplotlib.pyplot as plt import scipy.stats as sts import time import datetime import json import requests from scipy.stats import linregress # Import API key from config import api_key # citipy to determine city based on latitude and longitude from citipy import citipy # Output csv file output_data_file = "WeatherPy_output.csv" # Range of Latitudes and Longitudes lat_range = (-90, 90) lng_range = (-180, 180) # Create lists to hold response data for lat_lngs and cities lat_lngs = [] cities = [] lats = np.random.uniform(low=-90.000, high=90.000, size=1500) lngs = np.random.uniform(low=-180.000, high=180.000, size=1500) lat_lngs = zip(lats, lngs) for lat_lng in lat_lngs: city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name if city not in cities: cities.append(city) # Number of cities len(cities) import pprint pp = pprint.PrettyPrinter(indent=4) c_id = [] name = [] country = [] long = [] latt = [] cloudiness = [] date = [] humidity = [] max_temp = [] wind_speed = [] url = "http://api.openweathermap.org/data/2.5/weather?" for city in cities: try: query_url = url + "&q=" + city + "&APPID=" + api_key weather_response = requests.get(query_url) weather_json = weather_response.json() c_id.append(str(weather_json['id'])) name.append(str(weather_json['name'])) country.append(str(weather_json['sys']['country'])) long.append(float(round(weather_json['coord']['lon'],2))) latt.append(float(round(weather_json['coord']['lat'],2))) cloudiness.append(float(weather_json['clouds']['all'])) date.append(str(datetime.datetime.fromtimestamp(weather_json['dt']).strftime("%A, %d. %B %Y %I:%M%p"))) humidity.append(float(weather_json['main']['humidity'])) max_temp.append(1.8*(weather_json['main']['temp_max'] - 273.15) + 32) wind_speed.append(float(weather_json['wind']['speed'])) except KeyError: continue # skip cities the API does not recognize instead of aborting the whole loop # Create DataFrame weather_list_df = pd.DataFrame({'City ID':c_id,'City':name,'Country':country,'Lng':long,'Lat':latt, 'Cloudiness':cloudiness, 'Humidity': humidity, 'Date': date, 'Max Temp':max_temp, 'Wind Speed':wind_speed}) weather_list_df.to_csv(output_data_file, index = False) weather_list_df.head() # Latitude vs. Temperature Plot plt.scatter(weather_list_df['Lat'], weather_list_df['Max Temp'], marker="o", facecolors="blue", edgecolors="black") # Set the upper and lower limits of our y axis plt.ylim(0,120) # Set the upper and lower limits of our x axis plt.xlim(-60,60) # Create a title, x label, and y label for our chart plt.title("City Latitude vs. Max Temperature") plt.ylabel("Maximum Temperature (Fahrenheit)") plt.xlabel("Latitude") plt.grid(True) plt.savefig('output_data/Lat_MT.png') # Latitude vs. Humidity Plot plt.scatter(weather_list_df['Lat'], weather_list_df['Humidity'], marker="o", facecolors="blue", edgecolors="black") # Set the upper and lower limits of our y axis plt.ylim(0,200) # Set the upper and lower limits of our x axis plt.xlim(-60,120) # Create a title, x label, and y label for our chart plt.title("City Latitude vs. Humidity") plt.ylabel("Humidity (%)") plt.xlabel("Latitude") plt.grid(True) plt.savefig('output_data/Lat_Hum.png') # Latitude vs. Cloudiness Plot plt.scatter(weather_list_df['Lat'], weather_list_df['Cloudiness'], marker="o", facecolors="blue", edgecolors="black") # Set the upper and lower limits of our y axis plt.ylim(-30,120) # Set the upper and lower limits of our x axis plt.xlim(-60,120) # Create a title, x label, and y label for our chart plt.title("City Latitude vs.
Cloudiness") plt.ylabel("Cloudiness (%)") plt.xlabel("Latitude") plt.grid(True) plt.savefig('output_data/Lat_Cl.png') # Latitude vs. Wind Speed Plot plt.scatter(weather_list_df['Lat'], weather_list_df['Wind Speed'], marker="o", facecolors="blue", edgecolors="black") # Set the upper and lower limits of our y axis plt.ylim(0,30) # Set the upper and lower limits of our x axis plt.xlim(-60,100) # Create a title, x label, and y label for our chart plt.title("City Latitude vs. Wind Speed") plt.ylabel("Wind Speed (mph)") plt.xlabel("Latitude") plt.grid(True) plt.savefig('output_data/Lat_W.png') # Create Northern and Southern Hemisphere DataFrames northern_x_v = weather_list_df[weather_list_df['Lat']>=0] southern_x_v = weather_list_df[weather_list_df['Lat']<0] # Northern Hemisphere - Max Temp vs. Latitude Linear Regression x_values = northern_x_v['Lat'] y_values = northern_x_v['Max Temp'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(0,50),fontsize=15,color="red") plt.xlabel('Latitude') plt.ylabel('Max Temp') plt.title("Northern Hemisphere - Max Temp vs. Latitude") print(f"The r-squared is: {rvalue**2}") plt.savefig('output_data/N_linear_Lat_MT.png') plt.show() # Southern Hemisphere - Max Temp vs. Latitude Linear Regression x_values = southern_x_v['Lat'] y_values = southern_x_v['Max Temp'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(0,50),fontsize=15,color="red") plt.xlabel('Latitude') plt.ylabel('Max Temp') plt.title("Southern Hemisphere - Max Temp vs. Latitude") print(f"The r-squared is: {rvalue**2}") plt.savefig('output_data/S_linear_Lat_MT.png') plt.show() # Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression x_values = northern_x_v['Lat'] y_values = northern_x_v['Cloudiness'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(0,50),fontsize=15,color="red") plt.xlabel('Latitude') plt.ylabel('Cloudiness') plt.title("Northern Hemisphere - Cloudiness (%) vs. Latitude") print(f"The r-squared is: {rvalue**2}") plt.savefig('output_data/N_linear_Lat_Cl.png') plt.show() # Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression x_values = southern_x_v['Lat'] y_values = southern_x_v['Cloudiness'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(0,50),fontsize=15,color="red") plt.xlabel('Latitude') plt.ylabel('Cloudiness') plt.title("Southern Hemisphere - Cloudiness (%) vs. Latitude") print(f"The r-squared is: {rvalue**2}") plt.savefig('output_data/S_linear_Lat_Cl.png') plt.show() # Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression x_values = northern_x_v['Lat'] y_values = northern_x_v['Wind Speed'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(0,8),fontsize=15,color="red") plt.xlabel('Latitude') plt.ylabel('Wind Speed') plt.title("Northern Hemisphere - Wind Speed (mph) vs. Latitude") print(f"The r-squared is: {rvalue**2}") plt.savefig('output_data/N_linear_Lat_W.png') plt.show() # Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression x_values = southern_x_v['Lat'] y_values = southern_x_v['Wind Speed'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(0,5),fontsize=15,color="red") plt.xlabel('Latitude') plt.ylabel('Wind Speed') plt.title("Southern Hemisphere - Wind Speed (mph) vs. Latitude") print(f"The r-squared is: {rvalue**2}") plt.savefig('output_data/S_linear_Lat_W.png') plt.show() ```
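Two of the computations above can be sanity-checked without any libraries: the Kelvin-to-Fahrenheit conversion used when building `max_temp` (note the exact offset is 273.15 K), and the least-squares quantities that `scipy.stats.linregress` returns (where `rvalue` is Pearson's r, so r-squared is `rvalue**2`). A pure-Python sketch of both:

```python
def kelvin_to_fahrenheit(k):
    """Convert Kelvin to Fahrenheit: F = 1.8 * (K - 273.15) + 32."""
    return 1.8 * (k - 273.15) + 32

def simple_linregress(x, y):
    """Least-squares slope, intercept, and Pearson r for 1-D data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

print(kelvin_to_fahrenheit(273.15))  # 32.0 (freezing point of water)

# Exact-fit data y = 2x + 1, so slope 2, intercept 1, r = 1.
slope, intercept, r = simple_linregress([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept, r)  # 2.0 1.0 1.0
```

On real weather data the fit is not exact, so r lies strictly between -1 and 1 and the regression line only summarizes the trend.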
##### Copyright 2018 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Loading text <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/text"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ru/tutorials/load_data/text.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ru/tutorials/load_data/text.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ru/tutorials/load_data/text.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Note: All of the information in this section was translated by the Russian-speaking TensorFlow community on a volunteer basis. Since this translation is not official, we cannot guarantee that it is 100% accurate and consistent with the [official English-language documentation](https://www.tensorflow.org/?hl=en).
If you have a suggestion for how to improve this translation, we would be very glad to see a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. If you would like to help make the TensorFlow documentation better (by doing a translation yourself or reviewing a translation prepared by someone else), write to us at the [docs-ru@tensorflow.org list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ru). This tutorial provides an example of how to use `tf.data.TextLineDataset` to load examples from text files. `TextLineDataset` is designed to create a dataset from a text file in which each example is a line of text from the source file. This is potentially useful for any text data that is naturally line-based (for example, poetry or error logs). In this tutorial we will use three different English translations of the same text, Homer's "Iliad", and train a model to identify the translator given a single line of text. ## Setup ``` from __future__ import absolute_import, division, print_function, unicode_literals try: # %tensorflow_version only exists in Colab.
!pip install tf-nightly except Exception: pass import tensorflow as tf import tensorflow_datasets as tfds import os ``` The three translations are by: - [William Cowper](https://en.wikipedia.org/wiki/William_Cowper) - [text](https://storage.googleapis.com/download.tensorflow.org/data/illiad/cowper.txt) - [Edward, Earl of Derby](https://en.wikipedia.org/wiki/Edward_Smith-Stanley,_14th_Earl_of_Derby) - [text](https://storage.googleapis.com/download.tensorflow.org/data/illiad/derby.txt) - [Samuel Butler](https://en.wikipedia.org/wiki/Samuel_Butler_%28novelist%29) - [text](https://storage.googleapis.com/download.tensorflow.org/data/illiad/butler.txt) The text files used in this tutorial have undergone some typical preprocessing tasks, mostly the removal of material such as the document header and footer, line numbers, and chapter titles. Download these files locally. ``` DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt'] for name in FILE_NAMES: text_dir = tf.keras.utils.get_file(name, origin=DIRECTORY_URL+name) parent_dir = os.path.dirname(text_dir) parent_dir ``` ## Load the text into datasets Iterate through the files, loading each one into its own dataset. Each example needs to be labeled individually, so use `tf.data.Dataset.map` to apply a labeling function to each element. It will iterate over every record in the dataset, returning (`example, label`) pairs. ``` def labeler(example, index): return example, tf.cast(index, tf.int64) labeled_data_sets = [] for i, file_name in enumerate(FILE_NAMES): lines_dataset = tf.data.TextLineDataset(os.path.join(parent_dir, file_name)) labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i)) labeled_data_sets.append(labeled_dataset) ``` Combine these labeled datasets into a single dataset and shuffle it.
```
BUFFER_SIZE = 50000
BATCH_SIZE = 64
TAKE_SIZE = 5000

all_labeled_data = labeled_data_sets[0]
for labeled_dataset in labeled_data_sets[1:]:
  all_labeled_data = all_labeled_data.concatenate(labeled_dataset)

all_labeled_data = all_labeled_data.shuffle(
    BUFFER_SIZE, reshuffle_each_iteration=False)
```

You can use `tf.data.Dataset.take` and `print` to see what the (`example, label`) pairs look like. The `numpy` property shows each tensor's value.

```
for ex in all_labeled_data.take(5):
  print(ex)
```

## Encode text lines as numbers

Machine learning models work on numbers, not words, so the string values need to be converted into lists of numbers. To do that, map each unique word to a unique integer.

### Build a vocabulary

First, build a vocabulary by tokenizing the text into a collection of individual unique words. There are a few ways to do this in both TensorFlow and Python. For this tutorial:

1. Iterate over each example's `numpy` value.
2. Use `tfds.features.text.Tokenizer` to split it into tokens.
3. Collect these tokens into a Python set, to remove duplicates.
4. Get the size of the vocabulary for later use.

```
tokenizer = tfds.features.text.Tokenizer()

vocabulary_set = set()
for text_tensor, _ in all_labeled_data:
  some_tokens = tokenizer.tokenize(text_tensor.numpy())
  vocabulary_set.update(some_tokens)

vocab_size = len(vocabulary_set)
vocab_size
```

### Encode examples

Create an encoder by passing the `vocabulary_set` to `tfds.features.text.TokenTextEncoder`. The encoder's `encode` method takes a string of text and returns a list of integers.

```
encoder = tfds.features.text.TokenTextEncoder(vocabulary_set)
```

You can try this on a single line to see what the output of the encoder looks like.
```
example_text = next(iter(all_labeled_data))[0].numpy()
print(example_text)

encoded_example = encoder.encode(example_text)
print(encoded_example)
```

Now run the encoder over the dataset by wrapping it in `tf.py_function` and passing it to the dataset's `map` method.

```
def encode(text_tensor, label):
  encoded_text = encoder.encode(text_tensor.numpy())
  return encoded_text, label

def encode_map_fn(text, label):
  # py_func doesn't set the shape of the returned tensors.
  encoded_text, label = tf.py_function(encode,
                                       inp=[text, label],
                                       Tout=(tf.int64, tf.int64))

  # `tf.data.Datasets` work best if all components have a shape set
  #  so set the shapes manually:
  encoded_text.set_shape([None])
  label.set_shape([])

  return encoded_text, label

all_encoded_data = all_labeled_data.map(encode_map_fn)
```

## Split the dataset into train and test sets

Use `tf.data.Dataset.take` and `tf.data.Dataset.skip` to create a small test dataset and a larger training set.

Before being passed into the model, the datasets need to be batched. Typically, the examples inside a batch need to have the same size and shape.

```
train_data = all_encoded_data.skip(TAKE_SIZE).shuffle(BUFFER_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE)

test_data = all_encoded_data.take(TAKE_SIZE)
test_data = test_data.padded_batch(BATCH_SIZE)
```

Now, `test_data` and `train_data` are not collections of (`example, label`) pairs, but collections of batches. Each batch is a pair of (*many examples*, *many labels*) represented as arrays.

To illustrate:

```
sample_text, sample_labels = next(iter(test_data))

sample_text[0], sample_labels[0]
```

Since we have introduced a new token encoding (the zero used for padding), the vocabulary size has increased by one.

```
vocab_size += 1
```

## Build the model

```
model = tf.keras.Sequential()
```

The first layer converts the integer representations to dense vector embeddings. See the [Word Embeddings](../../tutorials/sequences/word_embeddings) guide for details.

```
model.add(tf.keras.layers.Embedding(vocab_size, 64))
```

The next layer is a [Long Short-Term Memory](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) layer, which lets the model understand words in the context of other words. A bidirectional wrapper on the LSTM lets it learn about each element's relationship to both the elements that came before and those that came after it.

```
model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)))
```

Finally, we have a series of one or more densely connected layers, the last of which is the output layer. The output layer produces a probability for all the labels. The label with the highest probability is the model's prediction for that example.

```
# One or more dense layers.
# Edit the list in the `for` line to experiment with layer sizes.
for units in [64, 64]:
  model.add(tf.keras.layers.Dense(units, activation='relu'))

# Output layer. The first argument is the number of labels.
model.add(tf.keras.layers.Dense(3, activation='softmax'))
```

Finally, compile the model. For a softmax categorization model, use `sparse_categorical_crossentropy` as the loss function. You can try other optimizers, but `adam` is very commonly used.

```
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

## Train the model

This model, running on this data, produces decent results (about 83% accuracy).

```
model.fit(train_data, epochs=3, validation_data=test_data)

eval_loss, eval_acc = model.evaluate(test_data)

print('\nEval loss: {:.3f}, Eval accuracy: {:.3f}'.format(eval_loss, eval_acc))
```
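The integer encoding performed by `TokenTextEncoder` can be sketched in plain Python, with no TensorFlow required. This is an illustrative re-implementation of the idea (token-to-id mapping plus zero-padding), not the library's actual code, and the sample lines are invented:

```python
# Sketch of token -> integer encoding, similar in spirit to
# tfds.features.text.TokenTextEncoder (ids start at 1; 0 is reserved for padding).
def build_vocab(lines):
    vocab = {}
    for line in lines:
        for token in line.split():
            if token not in vocab:
                vocab[token] = len(vocab) + 1  # reserve 0 for padding
    return vocab

def encode(line, vocab):
    # map each known token to its integer id
    return [vocab[token] for token in line.split() if token in vocab]

def pad(encoded, length):
    # pad with zeros, as padded_batch does for examples of unequal length
    return encoded + [0] * (length - len(encoded))

lines = ["sing o goddess the anger", "the anger of achilles"]
vocab = build_vocab(lines)
print(encode("the anger", vocab))          # two known tokens -> two ids
print(pad(encode("the anger", vocab), 5))  # padded out to a fixed length
```

This also shows why `vocab_size` is incremented by one above: the padding zero is an extra id the embedding layer has to account for.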
# Pandas and some Census data Based on teaching materials from Lam Thuy Vo at NICAR19 to do stuff with a U.S. Census CSV file using Pandas. * [Slides](https://docs.google.com/presentation/d/1ZG-IC33qL6dOk-WfwMuiyfzLb-Th1I_XoGNFPpbq8Ps/) * [GitHub repo](https://github.com/lamthuyvo/python-data-nicar2019) * [Pandas user guide](http://pandas.pydata.org/pandas-docs/stable/user_guide/index.html) **Before running these cells,** you must install Pandas in the environment that is running this Notebook. ``` # import the pandas library and give it a "nickname," pd, to be used when you call a pandas function import pandas as pd # import a CSV file's data as a Pandas dataframe # we assign the dataframe to the variable census_data, but some people would prefer to name it df census_data = pd.read_csv('2016_census_data.csv') # look at first 5 rows of dataset with a function called .head() # be sure to scroll rightwards to see more columns census_data.head(5) # look at last 5 rows of dataset with a function called .tail() # note the row numbers in the first column census_data.tail(5) # view 5 random rows - the number can be more or less than 5, up to you # if you shift-enter in this cell more than once, you'll get different rows each time census_data.sample(5) ``` What we are looking at is called a **dataframe.** What we are doing is **exploring** the dataframe to get an idea of how much data we have and what it looks like. We can also read the column headings to understand what data we have. ``` # we can flip the data so that the column headings are in column 1 and all the rows become columns # it does not stay this way - we are only looking census_data.T # see a list of all column headings census_data.columns # see all the data types in your dataframe census_data.dtypes ``` `object` is used for text (string), and `int64` means the data in that column is an integer. Sometimes the data is not formatted correctly, and you need to change the data type in a column. 
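Changing a column's data type is done with `.astype()`. Here is a small sketch on a tiny hand-made dataframe (invented data, not the Census CSV):

```python
import pandas as pd

# a tiny invented dataframe where numbers were read in as text
df = pd.DataFrame({'tract': ['A', 'B'], 'population': ['1200', '3400']})
print(df.dtypes)  # 'population' shows up as object (text)

# convert the column to integers so math functions work on it
df['population'] = df['population'].astype('int64')
print(df.dtypes)
print(df['population'].sum())
```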
Other data types include `float64`, `bool`, and `datetime64[ns]`.

```
# use the shape command to see the number of rows, columns in the dataframe
census_data.shape

# see first five cells in 'county' column
census_data['county'].head(5)

# see ALL cells in 'county' column - not really - we have too many rows in this dataframe
census_data['county']

# create a Python list with the names of only the columns you want to view
column_names = ['county', 'total_population', 'median_income', 'educational_attainment']

# now view only those columns - use .sample() to get random rows
census_data[column_names].sample(10)

# pandas assigns each row a number as an index automatically
# (we could assign each row a different index, but it's not necessary)
# if I know I want to look at only the data in the row with index 350 -
census_data.iloc[350]
```

It's called "integer-location based indexing," so that's why the function is named `iloc`. [Learn more here.](https://www.shanelynn.ie/select-pandas-dataframe-rows-and-columns-using-iloc-loc-and-ix/)

```
# combining .iloc[] and column-name search
census_data['name'].iloc[350]

# or using that list from before and a different index number
census_data[column_names].iloc[4]

# do math things on ONE column only
# note - if we don't PRINT these, we'll see only the last one
print(census_data['black_alone'].mean())
print(census_data['black_alone'].median())
print(census_data['black_alone'].sum())
```

What we got from the previous cell was the mean, median, and sum (total) of ALL values in the entire column named "black_alone" - for all 4,700 rows!

```
census_data['black_alone'].describe()
```

The meaning of each line in the previous result is explained [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.describe.html). `count` is how many rows were analyzed (4,700). `mean` is the same as the cell before, where we got mean, median, and sum. `mean` is the same as "average."
`std` is the [standard deviation](https://www.robertniles.com/stats/stdev.shtml). `min` is the lowest value in the column - here, 0 means that some Census tracts have zero black residents. The three percentiles show us what proportion of rows contain a value equal to or less than that number - so 25% of the Census tracts have 47 or fewer black residents. Finally, `max` tells us the highest value in this column. ``` # how many Census tracts (out of 4,700) with fewer than 100 black residents? len(census_data[census_data['black_alone'] < 100]) # how many Census tracts (out of 4,700) with fewer than 100 Asian residents? len(census_data[census_data['asian'] < 100]) # we can assign an entire column to a variable, and then use that variable to make comparisons asian_population = census_data['asian'] asian_population.describe() # now we assign a different column to a different variable black_population = census_data['black_alone'] # compare the means of the two columns # python greater than black_population.mean() > asian_population.mean() # reverse asian_population.mean() > black_population.mean() # show unique values in a given column census_data.county.unique() ``` Remember that each of these counties has MANY Census tracts, so the county names appear many times in the `county` column. The previous cell lets us see how many counties are in the dataframe, with no duplications. ``` # sort them alphabetically and print one per line for county in sorted(census_data.county.unique()): print(county) # find rows where a string matches the value in given column # we want to find out how many rows are for Pike County pike = census_data.loc[(census_data['county'] == 'Pike County')] # count how many Pike County rows len(pike) ``` You can see from previous results that there are definitely cells containing the string "Pike County" - so why does `len()` come back with 0? 
It means that there is no match, and that means there must be some **invisible characters** in the string - such as spaces, line endings, or tabs. Python has a method for stripping those characters off the start or end of a string.

```
# data in this column is dirty, so we strip spaces and invisible characters with .str.strip(' \t\n\r')
pike = census_data.loc[(census_data['county'].str.strip(' \t\n\r') == 'Pike County')]

# count how many Pike County rows
len(pike)
```

That's better. And now that we realize the county cells contain "dirty data," we might want to simply clean all of them up at once. It's best to preserve the original column (for safety) and create a new column that has the clean data in it.

```
# to create a new column with clean county names -
# 'county_clean' is the NEW column
census_data['county_clean'] = census_data['county'].str.strip(' \t\n\r')

# create a Python list with just the columns you want
column_names = ['name', 'county', 'state', 'total_population', 'median_income', 'median_home_value']

# using the pike variable from a previous cell -
# sort the rows with highest median_home_value at top, lowest at bottom
pike[column_names].sort_values('median_home_value', ascending=False)
```

In the output above, notice the **total population** for the tract with the highest median home value.

```
# if I want to use my new 'county_clean' column instead -
column_names = ['county_clean', 'total_population', 'median_income', 'median_home_value', 'name']
# note, I changed the column order in that list!!

passaic = census_data.loc[(census_data['county_clean'] == 'Passaic County')]
passaic[column_names].sort_values('median_home_value', ascending=False)
```

**Note** how you can display the columns in ANY ORDER you desire. Just make a new list for `column_names` and use that to show the data. Include only the columns you want to see, in the order you want.

But wait, there's more!
What if you want to save that data in that exact format - maybe to send it to someone else?

```
# save that to a NEW CSV file with a new filename
new_dataset = passaic[column_names].sort_values('median_home_value', ascending=False)
new_dataset.to_csv('passaic_only.csv', encoding='utf8')
```

THAT is seriously powerful. You just extracted **100 rows** from a 4,700-row CSV, threw out 10 of the 16 columns, and put the columns into a different order. Your original CSV remains untouched and intact. You created a new CSV file that you could share with others who do not have Jupyter Notebooks.

In case you're not sure what directory the new file was saved to, enter `pwd` to find out which directory this Jupyter Notebook is running in. (`pwd` is a command that stands for "print working directory.")

```
pwd
```

The output from the previous cell shows you where to find the file *passaic_only.csv* on your computer.

```
# how many rows (Census tracts) does each county have, anyway?
census_data['county_clean'].value_counts()
```

**Note** that the list (in the output from the previous cell) is in order from most rows to fewest rows. You can see which counties have a large number of Census tracts in this dataset (which might not be complete for some counties).

```
# combine all Census tracts for each county and show sum of all tracts' population
census_data.groupby('county_clean')['total_population'].sum()

# sort them by highest to lowest population
counties_pop = census_data.groupby('county_clean')['total_population'].sum()
counties_pop.sort_values(ascending=False)
```

This is just an introduction to some of the exploration you can do with Pandas. There's much more!
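The cleaning and grouping steps covered above can be combined into one small, self-contained sketch. The rows here are invented for illustration, not taken from the real Census file:

```python
import pandas as pd

# invented rows with 'dirty' county names (trailing whitespace), as in the tutorial
df = pd.DataFrame({
    'county': ['Pike County ', 'Pike County\n', 'Passaic County '],
    'total_population': [1000, 2000, 5000],
})

# clean the names into a new column, then sum population per county
df['county_clean'] = df['county'].str.strip(' \t\n\r')
totals = df.groupby('county_clean')['total_population'].sum()
print(totals.sort_values(ascending=False))
```

Because the names are stripped first, both "Pike County" rows fall into the same group instead of two slightly different ones.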
# Basic imports ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline plt.rcParams.update({'figure.max_open_warning': 0}) ``` # Loading datasets ``` cns_df=pd.read_csv("cns_molecules.csv", sep="\t") non_cns_df=pd.read_csv("non_cns_molecules.csv", sep="\t") cns_df_length=len(cns_df) non_cns_df_length=len(non_cns_df) print("cns rows: {}".format(cns_df_length)) print("non cns rows: {}".format(non_cns_df_length)) ``` # New column for both datasets (1= true, 0=false) ``` new_cns_column=[1 for i in range(cns_df_length)] new_non_cns_columns=[0 for i in range(non_cns_df_length)] cns_df["is_cns_molecule"]=new_cns_column non_cns_df["is_cns_molecule"]=new_non_cns_columns ``` ### Merged dataset ``` mixed_df=cns_df mixed_df=mixed_df.append(non_cns_df) ``` ### Shuffle dataset The idiomatic way to do this with Pandas is to use the .sample method of your dataframe to sample all rows without replacement: df.sample(frac=1) The frac keyword argument specifies the fraction of rows to return in the random sample, so frac=1 means return all rows (in random order). Note: If you wish to shuffle your dataframe in-place and reset the index, you could do e.g. df = df.sample(frac=1).reset_index(drop=True) Here, specifying drop=True prevents .reset_index from creating a column containing the old index entries. 
```
mixed_df = mixed_df.sample(frac=1, random_state=0).reset_index(drop=True)

mixed_df.to_csv("molecules.csv", sep="\t", index=False, header=True)

mixed_df

for c in mixed_df.columns.values:
    print(c)
```

# Model

```
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.svm import LinearSVC

clf = LinearSVC(random_state=0, tol=1e-5, dual=False)

data_frame = mixed_df.drop(["m_name"], axis=1)
y = data_frame["is_cns_molecule"]
x = data_frame.drop(["is_cns_molecule"], axis=1)

# collect the indices of all-zero feature columns
# (note: these columns are identified here but never actually dropped)
column_to_drop_index = []
for cnt, i in enumerate(x.columns):
    bln = (mixed_df[i] == 0).all()
    if bln:
        column_to_drop_index.append(cnt)
print(column_to_drop_index)

x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0, test_size=0.2)

clf.fit(x_train, y_train)
predicted = clf.predict(x_test)

matrix = confusion_matrix(y_test, predicted)
# scikit-learn's confusion_matrix puts true labels on rows and predictions on
# columns, ordered [0, 1]: cell (0, 0) counts true negatives and (1, 1)
# counts true positives
matrix_labels = [["True negative", "False positive"],
                 ["False negative", "True positive"]]
for i in range(2):
    for j in range(2):
        print("{} {}".format(matrix_labels[i][j], matrix[i][j]))

print("f1 score: {}%".format(f1_score(y_test, predicted) * 100))
print("accuracy score: {}%".format(accuracy_score(y_test, predicted) * 100))
```
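The loop above only records which feature columns are entirely zero; actually dropping them could be sketched like this, with a tiny hypothetical frame standing in for the molecule data:

```python
import pandas as pd

# hypothetical feature matrix; 'f2' is all zeros and carries no information
x = pd.DataFrame({'f1': [1, 0, 3], 'f2': [0, 0, 0], 'f3': [2, 2, 1]})

# boolean mask of columns where every value is zero
all_zero = (x == 0).all()

# keep only the informative columns
x_reduced = x.loc[:, ~all_zero]
print(list(x_reduced.columns))
```

Working with column names and a boolean mask avoids tracking integer positions, which can go stale if columns are added or reordered.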
``` import spacy from IPython.display import SVG, YouTubeVideo from spacy import displacy ``` # Intro to Clinical NLP ### Instructor: Alec Chapman ### Email: abchapman93@gmail.com Welcome to the NLP module! We'll start this module by watching a short introduction of the instructor and of Natural Language Processing (NLP) in medicine. Then we'll learn how to perform clinical NLP in spaCy and will end by applying an NLP system to several clinical tasks and datasets. ### Introduction videos: - [Meet the Instructor: Dr. Wendy Chapman](https://youtu.be/piJc8RXCZW4) - [Intro to Clinical NLP / Meet the Instructor: Alec Chapman](https://youtu.be/suVOm0CFX7A) Slides: [Intro-to-NLP.pdf](https://github.com/Melbourne-BMDS/mimic34md2020_materials/blob/master/slides/Intro-to-NLP.pdf) ``` # Introduction to the instructor: Wendy Chapman YouTubeVideo("piJc8RXCZW4") YouTubeVideo("suVOm0CFX7A") ``` # Intro to spaCy ``` YouTubeVideo("agmaqyUMAkI") ``` One very popular tool for NLP is [spaCy](https://spacy.io). SpaCy offers many out-of-the-box tools for processing and analyzing text, and the spaCy framework allows users to extend the models for their own purposes. SpaCy consists mostly of **statistical NLP** models. In statistical models, a large corpus of text is processed and mathematical methods are used to identify patterns in the corpus. This process is called **training**. Once a model has been trained, we can use it to analyze new text. But as we'll see, we can also use spaCy to implement sophisticated rules and custom logic. SpaCy comes with several pre-trained models, meaning that we can quickly load a model which has been trained on large amounts of data. This way, we can take advantage of work which has already been done by spaCy developers and focus on our own NLP tasks. Additionally, members of the open-source spaCy community can train and publish their own models. 
<img alt="SpaCy logo" height="100" width="250" src="https://spacy.io/static/social_default-1d3b50b1eba4c2b06244425ff0c49570.jpg">

# Agenda

- We'll start by looking at the basic usage of spaCy
- Next, we'll focus on a specific NLP task, **named entity recognition (NER)**, and see how this works in spaCy, as well as some of the limitations with clinical data
- Since spaCy's built-in statistical models don't accomplish the tasks we need in clinical NLP, we'll use spaCy's pattern matchers to write rules to extract clinical concepts
- We will then download and use a statistical model to extract clinical concepts from text
- Some of these limitations can be addressed by writing our own rules for concept extraction, and we'll practice that with some clinical texts.

We'll then go a little deeper into how spaCy's models are implemented and how we can modify them. Finally, we'll end the day with spaCy models which were designed specifically for use in the biomedical domain.

# spaCy documentation

spaCy has great documentation. As we're going along today, try browsing through their documentation to find examples and instructions. Start by opening up these two pages and navigating through the documentation:

[Basic spaCy usage](https://spacy.io/usage/models)

[API documentation](https://spacy.io/api)

spaCy also has a really good, free online class.
If you want to dig deeper into spaCy after this class, it's a great resource for using this library: https://course.spacy.io/ It's also available on DataCamp (the first two chapters will be assigned for homework): https://learn.datacamp.com/courses/advanced-nlp-with-spacy # Basic usage of spaCy In this notebook, we'll look at the basic fundamentals of spaCy: - Main classes in spaCy - Linguistic attributes - Named entity recognition (NER) ## How to use spaCy At a high-level, here are the steps for using spaCy: - Start by loading a pre-trained NLP model - Process a string of text with the model - Use the attributes in our processed documents for downstream NLP tasks like NER or document classification For example, here's a very short example of how this works. For the sake of demonstration, we'll use this snippet of a business news article: ``` # First, load a pre-trained model nlp = spacy.load("en_core_web_sm") # Process a string of text with the model text = """Taco Bell’s latest marketing venture, a pop-up hotel, opened at 10 a.m. Pacific Time Thursday. The rooms sold out within two minutes. The resort has been called “The Bell: A Taco Bell Hotel and Resort.” It’s located in Palm Springs, California.""" doc = nlp(text) doc # Use the attributes in our processed documents for downstream NLP tasks # Here, we'll visualize the entities in this text identified through NER displacy.render(doc, style="ent") ``` Let's dive a little deeper into how spaCy is structured and what we have to work with. ## SpaCy Architecture The [spaCy documentation](https://spacy.io/api) offers a detailed description of the package's architecture. 
In this notebook, we'll focus on these 5 classes:

- `Language`: The NLP model used to process text
- `Doc`: A sequence of text which has been processed by a `Language` object
- `Token`: A single word or symbol in a Doc
- `Span`: A slice from a Doc
- `EntityRecognizer`: A model which extracts mentions of **named entities** from text

# `nlp`

The `nlp` object in spaCy is the linguistic model which will be used for processing text. We instantiate a `Language` class by providing the name of a pre-trained model which we wish to use. We typically name this object `nlp`, and this will be our primary entry point.

```
nlp = spacy.load("en_core_web_sm")
nlp
```

The `nlp` model we instantiated above is a **small** ("sm"), **English** ("en")-language model trained on **web** ("web") data, but there are currently 16 different models from 9 different languages. See the [spaCy documentation](https://spacy.io/usage/models) for more information on each of the models.

# Documents, spans and tokens

The `nlp` object is what we'll be using to process text. The next few classes represent the output of our NLP model.

## `Doc` class

The `doc` object represents a single document of text. To create a `doc` object, we call `nlp` on a string of text. This runs that text through a spaCy pipeline, which we'll learn more about in a future notebook.

```
text = 'Taco Bell’s latest marketing venture, a pop-up hotel, opened at 10 a.m. Pacific Time Thursday.'
doc = nlp(text)
type(doc)

print(doc)
```

## Tokens and Spans

### Token

A `Token` is a single word, symbol, or whitespace in a `doc`. When we create a `doc` object, the text is broken up into individual tokens. This is called **"tokenization"**.

**Discussion**: Look at the tokens generated from this text snippet. What can you say about the tokenization method? Is it as simple as splitting up into words every time we reach a whitespace?
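For comparison, here is what a naive whitespace split produces. This is plain Python, not spaCy's tokenizer, and it is meant only to highlight the difference:

```python
text = 'Taco Bell’s latest marketing venture, a pop-up hotel, opened at 10 a.m. Pacific Time Thursday.'

# a naive split keeps punctuation glued to words ('venture,', 'hotel,')
naive_tokens = text.split()
print(naive_tokens[:6])

# spaCy's tokenizer does more than this: it separates punctuation and
# contractions into their own tokens
```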
```
token = doc[0]
token

type(token)

doc

for token in doc:
    print(token)
```

### Span

A `Span` is a slice of a document, or a consecutive sequence of tokens.

```
span = doc[1:4]
span

type(span)
```

## Linguistic Attributes

Because spaCy comes with pre-trained linguistic models, when we call `nlp` on a text we have access to a number of linguistic attributes in the `doc` or `token` objects.

### POS Tagging

Parts of speech are categories of words. For example, "nouns", "verbs", and "adjectives" are all examples of parts of speech. Assigning parts of speech to words is useful for downstream NLP tasks such as word sense disambiguation and named entity recognition.

**Discussion**: What do the POS tags below mean?

```
print(f"Token -> POS\n")
for token in doc:
    print(f"{token.text} -> {token.pos_}")

spacy.explain("PROPN")
```

### Lemma

The **lemma** of a word refers to the **root form** of a word. For example, "eat", "eats", and "ate" are all different inflections of the lemma "eat".

```
print(f"Token -> Lemma\n")
for token in doc:
    print(f"{token.text} -> {token.lemma_}")
```

### Dependency Parsing

In dependency parsing, we analyze the structure of a sentence. We won't spend too much time on this, but here is a nice visualization of what a dependency parse looks like. Take a minute to look at the arrows between words and try to figure out what they mean.

```
doc = nlp("The cat sat on the green mat")
displacy.render(doc, style='dep')
```

### Other attributes

Look at spaCy's [Token class documentation](https://spacy.io/api/token) for a full list of additional attributes available for each token in a document.

# NER with spaCy

**"Named Entity Recognition"** is a subtask of NLP where we extract specific named entities from the text. The definition of a "named entity" changes depending on the domain we're working on. We'll look at clinical NER later, but first we'll look at some examples in more general domains.

NER is often performed using news articles as source texts.
In this case, named entities are typically proper nouns, such as:

- People
- Geopolitical entities, like countries
- Organizations

We won't go into the details of how NER is implemented in spaCy. If you want to learn more about NER and the various ways it's implemented, a great resource is [Chapter 17.1 of Jurafsky and Martin's textbook "Speech and Language Processing."](https://web.stanford.edu/~jurafsky/slp3/17.pdf)

Here is an excerpt from an article in the Guardian. We'll process this document with our nlp object and then look at what entities are extracted. One way to do this is using spaCy's `displacy` package, which visualizes the results of a spaCy pipeline.

```
text = """Germany will fight to the last hour to prevent the UK crashing out of the EU without a deal
and is willing to hear any fresh ideas for the Irish border backstop, the country’s ambassador
to the UK has said. Speaking at a car manufacturers’ summit in London, Peter Wittig said Germany
cherished its relationship with the UK and was ready to talk about solutions the new prime minister
might have for the Irish border problem."""
doc = nlp(text)
displacy.render(doc, style="ent")
```

We can use spaCy's `explain` function to see definitions of what an entity type is. Look up any entity types that you're not familiar with:

```
spacy.explain("GPE")
```

The last example comes from a political news article, which is pretty typical for what NER is often trained on and used for. Let's look at another news article, this one with a business focus:

```
# Example 2
text = """Taco Bell’s latest marketing venture, a pop-up hotel, opened at 10 a.m. Pacific Time Thursday.
The rooms sold out within two minutes. The resort has been called “The Bell: A Taco Bell Hotel and Resort.”
It’s located in Palm Springs, California."""
doc = nlp(text)
displacy.render(doc, style="ent")
```

## Discussion

Compare how the NER performs on each of these texts. Can you see any errors? Why do you think it might make those errors?
Once we've processed a text with `nlp`, we can iterate through the entities through the `doc.ents` attribute. Each entity is a spaCy `Span`. You can see the label of the entity through `ent.label_`.

```
for ent in doc.ents:
    print(ent, ent.label_)
```

# spaCy Processing Pipelines

How does spaCy generate information like POS tags and entities? Under the hood, the `nlp` object goes through a number of sequential steps to process the text. This is called a **pipeline** and it allows us to create modular, independent processing steps when analyzing text. The model we loaded comes with a default **pipeline** which helps extract linguistic attributes from the text.

We can see the names of our pipeline components through the `nlp.pipe_names` attribute:

```
nlp.pipe_names
```

The image below shows a visual representation of this. In this default spaCy pipeline,

- We pass the text into the pipeline by calling `nlp(text)`
- The text is split into **tokens** by the `tokenizer`
- POS tags are assigned by the `tagger`
- A dependency parse is generated by the `parser`
- Entities are extracted by the `ner`
- a `Doc` object is returned

These are the steps taken in the default pipeline. However, as we'll see later we can add our own processing **components** and add them to our pipeline to do additional analysis.

<img alt="spaCy processing pipeline" src="https://d33wubrfki0l68.cloudfront.net/16b2ccafeefd6d547171afa23f9ac62f159e353d/48b91/pipeline-7a14d4edd18f3edfee8f34393bff2992.svg">

# Clinical Text

Let's now try using spaCy's built-in NER model on clinical text and see what information we can extract.

```
clinical_text = "76 year old man with hypotension, CKD Stage 3, status post RIJ line placement and Swan. "
doc = nlp(clinical_text)
displacy.render(doc, style="ent")
```

### Discussion

- How did spaCy do with this sentence?
- What do you think caused it to make errors in the classifications?

General purpose NER models are typically made for extracting entities out of news articles.
As we saw before, this includes mainly people, organizations, and geopolitical entities. We can see which labels are available in spaCy's NER model by looking at the NER component. As you can see, not many of these are very useful for clinical text extraction. ### Discussion - What are some entity types we are interested in in clinical domain? - Does spaCy's out-of-the-box NER handle any of these types? # Next Steps Since spaCy's model doesn't extract the information we need by default, we'll need to do some additional work to extract clinical concepts. In the next notebook, we'll look at how spaCy allows **rule-based NLP** through **pattern matching**. [nlp-02-medspacy-concept-extraction.ipynb](nlp-02-medspacy-concept-extraction.ipynb)
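As a plain-Python preview of the rule-based idea, a small set of target phrases can be matched with ordinary regular expressions. This uses the standard `re` module with an invented concept list, not spaCy's `Matcher` API:

```python
import re

# a tiny invented list of target clinical concepts
concepts = ['hypotension', 'ckd stage 3', 'swan']

# build one case-insensitive pattern that matches any of the concepts
pattern = re.compile('|'.join(re.escape(c) for c in concepts), re.IGNORECASE)

clinical_text = "76 year old man with hypotension, CKD Stage 3, status post RIJ line placement and Swan. "
matches = [m.group() for m in pattern.finditer(clinical_text)]
print(matches)
```

Hand-written regexes get brittle quickly (word boundaries, abbreviations, negation), which is part of why the next notebook moves to token-level pattern matching instead.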
``` %matplotlib inline ``` ============================================== Real-time feedback for decoding :: Server Side ============================================== This example demonstrates how to setup a real-time feedback mechanism using StimServer and StimClient. The idea here is to display future stimuli for the class which is predicted less accurately. This allows on-demand adaptation of the stimuli depending on the needs of the classifier. To run this example, open ipython in two separate terminals. In the first, run rt_feedback_server.py and then wait for the message RtServer: Start Once that appears, run rt_feedback_client.py in the other terminal and the feedback script should start. All brain responses are simulated from a fiff file to make it easy to test. However, it should be possible to adapt this script for a real experiment. ``` # Author: Mainak Jas <mainak@neuro.hut.fi> # # License: BSD (3-clause) import time import numpy as np import matplotlib.pyplot as plt from sklearn import preprocessing from sklearn.svm import SVC from sklearn.pipeline import Pipeline from sklearn.cross_validation import train_test_split from sklearn.metrics import confusion_matrix import mne from mne.datasets import sample from mne.realtime import StimServer from mne.realtime import MockRtClient from mne.decoding import Vectorizer, FilterEstimator print(__doc__) # Load fiff file to simulate data data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' raw = mne.io.read_raw_fif(raw_fname, preload=True) # Instantiating stimulation server # The with statement is necessary to ensure a clean exit with StimServer(port=4218) as stim_server: # The channels to be used while decoding picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=True, exclude=raw.info['bads']) rt_client = MockRtClient(raw) # Constructing the pipeline for classification filt = FilterEstimator(raw.info, 1, 40) scaler = 
preprocessing.StandardScaler() vectorizer = Vectorizer() clf = SVC(C=1, kernel='linear') concat_classifier = Pipeline([('filter', filt), ('vector', vectorizer), ('scaler', scaler), ('svm', clf)]) stim_server.start(verbose=True) # Just some initially decided events to be simulated # Rest will be decided on the fly ev_list = [4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4] score_c1, score_c2, score_x = [], [], [] for ii in range(50): # Tell the stim_client about the next stimuli stim_server.add_trigger(ev_list[ii]) # Collecting data if ii == 0: X = rt_client.get_event_data(event_id=ev_list[ii], tmin=-0.2, tmax=0.5, picks=picks, stim_channel='STI 014')[None, ...] y = ev_list[ii] else: X_temp = rt_client.get_event_data(event_id=ev_list[ii], tmin=-0.2, tmax=0.5, picks=picks, stim_channel='STI 014') X_temp = X_temp[np.newaxis, ...] X = np.concatenate((X, X_temp), axis=0) time.sleep(1) # simulating the isi y = np.append(y, ev_list[ii]) # Start decoding after collecting sufficient data if ii >= 10: # Now start doing rtfeedback X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7) y_pred = concat_classifier.fit(X_train, y_train).predict(X_test) cm = confusion_matrix(y_test, y_pred) score_c1.append(float(cm[0, 0]) / np.sum(cm, 1)[0] * 100) score_c2.append(float(cm[1, 1]) / np.sum(cm, 1)[1] * 100) # do something if one class is decoded better than the other if score_c1[-1] < score_c2[-1]: print("We decoded class RV better than class LV") ev_list.append(3) # adding more LV to future simulated data else: print("We decoded class LV better than class RV") ev_list.append(4) # adding more RV to future simulated data # Clear the figure plt.clf() # The x-axis for the plot score_x.append(ii) # Now plot the accuracy plt.plot(score_x[-5:], score_c1[-5:]) plt.hold(True) plt.plot(score_x[-5:], score_c2[-5:]) plt.xlabel('Trials') plt.ylabel('Classification score (% correct)') plt.title('Real-time feedback') plt.ylim([0, 100]) plt.xticks(score_x[-5:]) plt.legend(('LV', 'RV'),
loc='upper left') plt.show() ```
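The feedback loop above derives each class's score from the confusion matrix by dividing the diagonal entries by the per-class (row) totals. That computation in isolation, with a hypothetical 2×2 matrix:

```python
import numpy as np

# Rows of a confusion matrix are true classes, columns are predictions,
# so dividing the diagonal by the row sums gives per-class accuracy.
cm = np.array([[8, 2],
               [3, 7]])
per_class_acc = cm.diagonal() / cm.sum(axis=1) * 100
print(per_class_acc)  # [80. 70.]
```

Note that `cm.sum(axis=1)` (or `np.sum(cm, 1)`) is required here; the Python builtin `sum(cm, 1)` would instead add `1` to the column sums.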
# Pre-trained embeddings for Text ``` import gzip import numpy as np %matplotlib inline import matplotlib.pyplot as plt import pandas as pd glove_path = '../data/embeddings/glove.6B.50d.txt.gz' with gzip.open(glove_path, 'r') as fin: line = fin.readline().decode('utf-8') line def parse_line(line): values = line.decode('utf-8').strip().split() word = values[0] vector = np.asarray(values[1:], dtype='float32') return word, vector embeddings = {} word_index = {} word_inverted_index = [] with gzip.open(glove_path, 'r') as fin: for idx, line in enumerate(fin): word, vector = parse_line(line) # parse a line embeddings[word] = vector # add word vector word_index[word] = idx # add idx word_inverted_index.append(word) # append word word_index['good'] word_inverted_index[219] embeddings['good'] embedding_size = len(embeddings['good']) embedding_size plt.plot(embeddings['good']); plt.subplot(211) plt.plot(embeddings['two']) plt.plot(embeddings['three']) plt.plot(embeddings['four']) plt.title("A few numbers") plt.ylim(-2, 5) plt.subplot(212) plt.plot(embeddings['cat']) plt.plot(embeddings['dog']) plt.plot(embeddings['rabbit']) plt.title("A few animals") plt.ylim(-2, 5) plt.tight_layout() vocabulary_size = len(embeddings) vocabulary_size ``` ## Loading pre-trained embeddings in Keras ``` from keras.models import Sequential from keras.layers import Embedding embedding_weights = np.zeros((vocabulary_size, embedding_size)) for word, index in word_index.items(): embedding_weights[index, :] = embeddings[word] emb_layer = Embedding(input_dim=vocabulary_size, output_dim=embedding_size, weights=[embedding_weights], mask_zero=False, trainable=False) word_inverted_index[0] model = Sequential() model.add(emb_layer) embeddings['cat'] cat_index = word_index['cat'] cat_index model.predict([[cat_index]]) ``` ## Gensim ``` import gensim from gensim.scripts.glove2word2vec import glove2word2vec glove_path = '../data/embeddings/glove.6B.50d.txt.gz' glove_w2v_path = 
'../data/embeddings/glove.6B.50d.txt.vec' glove2word2vec(glove_path, glove_w2v_path) from gensim.models import KeyedVectors glove_model = KeyedVectors.load_word2vec_format( glove_w2v_path, binary=False) glove_model.most_similar(positive=['good'], topn=5) glove_model.most_similar(positive=['two'], topn=5) glove_model.most_similar(positive=['king', 'woman'], negative=['man'], topn=3) ``` ## Visualization ``` import os model_dir = '/tmp/ztdl_models/embeddings/' from shutil import rmtree rmtree(model_dir, ignore_errors=True) os.makedirs(model_dir) n_viz = 4000 emb_layer_viz = Embedding(n_viz, embedding_size, weights=[embedding_weights[:n_viz]], mask_zero=False, trainable=False) model = Sequential([emb_layer_viz]) word_embeddings = emb_layer_viz.weights[0] word_embeddings import keras.backend as K import tensorflow as tf sess = K.get_session() saver = tf.train.Saver([word_embeddings]) saver.save(sess, os.path.join(model_dir, 'model.ckpt'), 1) os.listdir(model_dir) fname = os.path.join(model_dir, 'metadata.tsv') with open(fname, 'w', encoding="utf-8") as fout: for index in range(0, n_viz): word = word_inverted_index[index] fout.write(word + '\n') config = """embeddings {{ tensor_name: "{tensor}" metadata_path: "{metadata}" }}""".format(tensor=word_embeddings.name, metadata='metadata.tsv') print(config) fname = os.path.join(model_dir, 'projector_config.pbtxt') with open(fname, 'w', encoding="utf-8") as fout: fout.write(config) ```
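`most_similar` above ranks words by cosine similarity between their embedding vectors. The metric itself is easy to compute directly; a sketch with toy 3-dimensional vectors (not real GloVe values):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: u.v / (|u| |v|)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 0.0, 1.0])
c = np.array([0.0, 1.0, 0.0])
print(cosine_similarity(a, b))  # 1.0 (identical direction)
print(cosine_similarity(a, c))  # 0.0 (orthogonal)
```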
# Lesson 5 Practice: Tidy Data Use this notebook to follow along with the lesson in the corresponding lesson notebook: [L05-Tidy_Data-Lesson.ipynb](./L05-Tidy_Data-Lesson.ipynb). ## Instructions Follow along with the teaching material in the lesson. Throughout the tutorial, sections labeled as "Tasks" are interspersed and indicated with the icon: ![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/16/Apps-gnome-info-icon.png). You should follow the instructions provided in these sections by performing them in the practice notebook. When the tutorial is completed you can turn in the final practice notebook. For each task, use the cell below it to write and test your code. You may add additional cells for any task as needed or desired. ## Task 1a: Setup <span style="float:right; margin-left:10px; clear:both;">![Task](./media/task-icon.png)</span> Import the following packages: + `pandas` as `pd` + `numpy` as `np` ``` import numpy as np import pandas as pd ``` ## Task 2a: Understand the data Execute the following code to display the sample data frame: ``` # Create the data rows and columns. data = [['John Smith', None, 2], ['Jane Doe', 16, 11], ['Mary Johnson', 3, 1]] # Create the list of labels for the data frame. headers = ['', 'Treatment_A', 'Treatment_B'] # Create the data frame. pd.DataFrame(data, columns=headers) ``` Using the table above, answer the following: What are the variables? What are the observations? What is the observable unit? Are the variables columns? Are the observations rows?
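For reference, one way to reshape the treatment table above into tidy form (one row per person-treatment observation) is `pd.melt`. This is only a sketch, not the graded answer, and the `name`/`treatment`/`result` labels are my own:

```python
import pandas as pd

data = [['John Smith', None, 2],
        ['Jane Doe', 16, 11],
        ['Mary Johnson', 3, 1]]
df = pd.DataFrame(data, columns=['name', 'Treatment_A', 'Treatment_B'])

# One row per observation: a person's result under one treatment.
tidy = pd.melt(df, id_vars='name', var_name='treatment', value_name='result')
print(tidy)
```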
## Task 2b: Explain causes of untidiness Execute the following code to display the sample data frame: ``` data = [['Agnostic',27,34,60,81,76,137], ['Atheist',12,27,37,52,35,70], ['Buddhist',27,21,30,34,33,58], ['Catholic',418,617,732,670,638,1116], ['Don\'t know/refused',15,14,15,11,10,35], ['Evangelical Prot',575,869,1064,982,881,1486], ['Hindu',1,9,7,9,11,34], ['Historically Black Prot',228,244,236,238,197,223], ['Jehovah\'s Witness',20,27,24,24,21,30], ['Jewish',19,19,25,25,30,95]] headers = ['religion','<$10k','$10-20k','$20-30k','$30-40k','$40-50k','$50-75k'] religion = pd.DataFrame(data, columns=headers) religion ``` Explain why the data above is untidy. What are the variables? What are the observations? ## Task 2c: Explain causes of untidiness Execute the following code to display the sample data frame: ``` data = [['AD', 2000, 0, 0, 1, 0, 0, 0, 0, None, None], ['AE', 2000, 2, 4, 4, 6, 5, 12, 10, None, 3], ['AF', 2000, 52, 228, 183, 149, 129, 94, 80, None, 93], ['AG', 2000, 0, 0, 0, 0, 0, 0, 1, None, 1], ['AL', 2000, 2, 19, 21, 14, 24, 19, 16, None, 3], ['AM', 2000, 2, 152, 130, 131, 63, 26, 21, None, 1], ['AN', 2000, 0, 0, 1, 2, 0, 0, 0, None, 0], ['AO', 2000, 186, 999, 1003, 912, 482, 312, 194, None, 247], ['AR', 2000, 97, 278, 594, 402, 419, 368, 330, None, 121], ['AS', 2000, None, None, None, None, 1, 1, None, None, None]] headers = ['country', 'year', 'm014', 'm1524', 'm2534', 'm3544', 'm4554', 'm5564', 'm65', 'mu', 'f014'] demographics = pd.DataFrame(data, columns=headers) demographics ``` Using the dataset above: Explain why the data above is untidy. What are the variables? What are the observations? ## Task 3a: Melt data, use case #1 Using the `pd.melt` function, melt the demographics data introduced in section 2. Be sure to: - Set the column headers correctly. - Order by country. - Print the first 10 lines of the resulting melted dataset.
***Note*** The demographics dataset is provided in Task 2c above. ``` demographics2 = pd.melt(demographics, id_vars = ['country', 'year'], var_name = 'Sex.Age', value_name = "cases" ) demographics2 = demographics2.sort_values("country") demographics2.head(10) ``` ## Task 3b: Practice with a new dataset Download the [PI_DataSet.txt](https://hivdb.stanford.edu/download/GenoPhenoDatasets/PI_DataSet.txt) file from the [HIV Drug Resistance Database](https://hivdb.stanford.edu/pages/genopheno.dataset.html). Store the file in the same directory as the practice notebook for this assignment. ***Note***: Choose the file labeled “10935 phenotype results from 1808 isolates”. Here is the meaning of the data columns: - SeqID: a numeric identifier for a unique HIV isolate protease sequence. Note: disruption of the protease inhibits HIV’s ability to reproduce. - The next 8 columns are identifiers for unique protease inhibitor class drugs. - The values in these columns are the fold resistance over wild type (the HIV strain susceptible to all drugs). - Fold change is the ratio of the drug concentration needed to inhibit the isolate. - The latter columns, with P as a prefix, are the positions of the amino acids in the protease. - '-' indicates consensus. - '.' indicates no sequence. - '#' indicates an insertion. - '~' indicates a deletion. - '*' indicates a stop codon. - A letter indicates a one-letter amino acid substitution. - Two or more amino acid codes indicate a mixture. Import this dataset into your notebook, view the top few rows of the data and respond to these questions: What are the variables? ``` drugresist = pd.read_csv('PI_DataSet.txt', sep='\t') drugresist.head() ``` The variables are SeqID, drug, fold change, position and comparison result. What are the observations? What are the values?
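The position-column legend above maps single characters to sequence events. A small sketch encoding that legend as a lookup table (the dictionary and helper names are my own):

```python
# Legend for the P-prefixed position columns, per the dataset description.
MUTATION_CODES = {
    '-': 'consensus',
    '.': 'no sequence',
    '#': 'insertion',
    '~': 'deletion',
    '*': 'stop codon',
}

def describe_position(value):
    """Translate one position-column value into a description."""
    if value in MUTATION_CODES:
        return MUTATION_CODES[value]
    if len(value) == 1:
        return f'substitution: {value}'
    return f'mixture: {value}'

print(describe_position('-'))   # consensus
print(describe_position('V'))   # substitution: V
print(describe_position('IV'))  # mixture: IV
```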
## Task 3c: Practice with a new dataset Part 2 Use the data retrieved from task 3b to generate a data frame containing a tidied set of values for drug concentration fold change. Be sure to: - Set the column names as ‘SeqID’, ‘Drug’ and ‘Fold_change’. - Order the data frame first by sequence ID and then by Drug name. - Reset the row indexes. - Display the first 10 elements. ``` drugFC = pd.melt(drugresist, id_vars='SeqID', var_name='Drug', value_name='Fold_change') drugFC = drugFC.sort_values(by=['SeqID', 'Drug']).reset_index(drop=True) drugFC.head(10) ```
# Ballot Polling Assertion RLA ``` from __future__ import division, print_function import math import json import warnings import numpy as np import pandas as pd import csv import copy from collections import OrderedDict from IPython.display import display, HTML from cryptorandom.cryptorandom import SHA256 from cryptorandom.sample import sample_by_index from assertion_audit_utils import \ Assertion, Assorter, CVR, TestNonnegMean, check_audit_parameters,\ find_p_values, find_sample_size, new_sample_size, prep_sample, summarize_status,\ write_audit_parameters from dominion_tools import \ prep_dominion_manifest, sample_from_manifest, write_cards_sampled seed = 20546205145833673221 # use, e.g., 20 rolls of a 10-sided die. Seed doesn't have to be numeric replacement = False #risk_function = "kaplan_martingale" #risk_fn = lambda x: TestNonnegMean.kaplan_martingale(x, N=N_cards)[0] risk_function = "kaplan_kolmogorov" risk_fn = lambda x: TestNonnegMean.kaplan_kolmogorov(x, N=N_cards, t=1/3, g=g) g = 0.1 N_cards = 146662 N_cards_max = 147000 #Upper bound on number of ballots cast # Using same files as CVR stratum but treating as no-CVR manifest_file = './Data/N19 ballot manifest with WH location for RLA Upload VBM 11-14.xlsx' manifest_type = 'STYLE' sample_file = './Data/pollingsample.csv' mvr_file = './Data/mvr_prepilot_test.json' log_file = './Data/pollinglog.json' error_rate = 0.002 # contests to audit. 
Edit with details of your contest (e.g., Contest 339 is the DA race) contests = {'339':{'risk_limit': 0.05, 'choice_function':'IRV', 'n_winners':int(1), 'candidates':['15','16','17','18'], 'reported_winners' : ['15'], 'assertion_file' : './Data/SF2019Nov8Assertions.json' } } # read the assertions for the IRV contest for c in contests: if contests[c]['choice_function'] == 'IRV': with open(contests[c]['assertion_file'], 'r') as f: contests[c]['assertion_json'] = json.load(f)['audits'][0]['assertions'] all_assertions = Assertion.make_all_assertions(contests) all_assertions ``` ## Read the ballot manifest ``` manifest = pd.read_excel(manifest_file) # prep dominion_manifest w/o cvr processing cols = ['Tray #', 'Tabulator Number', 'Batch Number', 'Total Ballots', 'VBMCart.Cart number'] assert set(cols).issubset(manifest.columns), "missing columns" manifest_cards = manifest['Total Ballots'].sum() if N_cards < N_cards_max: warnings.warn('The CVR list does not account for every card cast in the contest; adding a phantom batch to the manifest') r = {'Tray #': None, 'Tabulator Number': 'phantom', 'Batch Number': 1, \ 'Total Ballots': N_cards_max-N_cards, 'VBMCart.Cart number': None} manifest = manifest.append(r, ignore_index = True) manifest['cum_cards'] = manifest['Total Ballots'].cumsum() for c in ['Tray #', 'Tabulator Number', 'Batch Number', 'VBMCart.Cart number']: manifest[c] = manifest[c].astype(str) phantom_cards = N_cards_max-N_cards #manifest, manifest_cards, phantom_cards = prep_dominion_manifest(manifest, N_cards, N_cards_max) manifest # read cvrs: not needed?!
check_audit_parameters(risk_function, g, error_rate, contests) n_cvrs = 0 write_audit_parameters(log_file, seed, replacement, risk_function, g, N_cards, n_cvrs, \ manifest_cards, phantom_cards, error_rate, contests) ''' write_audit_parameters(log_file=log_file, seed=seed, replacement=replacement, \ risk_function=risk_function, g=g, N_cards=N_cards, n_cvrs=0, manifest_cards=manifest_cards, \ phantom_cards=phantom_cards, error_rate=error_rate, \ contests=contests) ''' #n_cvrs = 0 #write_audit_parameters(log_file, seed, replacement, risk_function, g, N_cards, n_cvrs, \ # manifest_cards, phantom_cards, error_rate, contests) ``` ## Set up for sampling ## Find initial sample size ``` # TODO ? ballot polling sample_size = 200 ``` ## Draw the first sample ``` prng = SHA256(seed) sample = sample_by_index(N_cards_max, sample_size, prng=prng) n_phantom_sample = np.sum([i>N_cards for i in sample]) #TODO print("The sample includes {} phantom cards.".format(n_phantom_sample)) print(sample) phantom_ballots = sample[sample>N_cards] print(sample) print(np.where(sample==phantom_ballots[0]), np.where(sample==phantom_ballots[1])) manifest_sample_lookup = sample_from_manifest(manifest, sample) #print(manifest_sample_lookup) write_cards_sampled(sample_file, manifest_sample_lookup, print_phantoms=True) ``` ## Read the audited sample data ``` with open(mvr_file) as f: mvr_json = json.load(f) mvr_sample = CVR.from_dict(mvr_json['ballots']) for i in range(10): print(mvr_sample[i]) ``` ## Find measured risks for all assertions ``` p_max = 0 for c in contests.keys(): contests[c]['p_values'] = {} contests[c]['proved'] = {} contest_max_p = 0 for asrtn in all_assertions[c]: a = all_assertions[c][asrtn] d = [a.assorter.assort(i) for i in mvr_sample] #print(d, '\n', a.assorter_mean(mvr_sample), '\n') print(d[89], d[108], '\n') a.p_value = risk_fn(d) #print(a.p_value, '\n') a.proved = (a.p_value <= contests[c]['risk_limit']) contests[c]['p_values'].update({asrtn: a.p_value}) 
contests[c]['proved'].update({asrtn: int(a.proved)}) contest_max_p = np.max([contest_max_p, a.p_value]) contests[c].update({'max_p': contest_max_p}) p_max = np.max([p_max, contests[c]['max_p']]) #print(contests['339']['p_values'], '\n', contests['339']['proved']) print("maximum assertion p-value {}".format(p_max)) done = summarize_status(contests, all_assertions) write_audit_parameters(log_file, seed, replacement, risk_function, g, N_cards, n_cvrs, \ manifest_cards, phantom_cards, error_rate, contests) ```
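The audit's reproducibility rests on drawing the sample from a PRNG seeded with the published seed (`SHA256(seed)` plus `sample_by_index` above). A dependency-free sketch of the same idea, using Python's `random` module as a stand-in for `cryptorandom` (so these draws will not match the audit's actual sample):

```python
import random

def sample_card_indices(n_cards, sample_size, seed):
    """Draw a reproducible sample of 1-based card indices without replacement."""
    prng = random.Random(seed)
    return prng.sample(range(1, n_cards + 1), sample_size)

# Same seed in, same sample out: anyone can re-draw and verify the selection.
s1 = sample_card_indices(147000, 5, seed=20546205145833673221)
s2 = sample_card_indices(147000, 5, seed=20546205145833673221)
print(s1 == s2)  # True
```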
<div class="alert alert-info"> Launch in Binder [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/esowc/UNSEEN-open/master?filepath=doc%2FNotebooks%2Fexamples%2FCalifornia_Fires.ipynb) <!-- Or launch an [Rstudio instance](https://mybinder.org/v2/gh/esowc/UNSEEN-open/master?urlpath=rstudio?filepath=doc%2Fexamples%2FCalifornia_Fires.ipynb) --> </div> # California fires In August 2020, wildfires in California burned more than [a million acres of land](https://edition.cnn.com/2020/10/06/us/gigafire-california-august-complex-trnd/index.html). This year's fire season was also unique in the number of houses destroyed. Here we retrieve average August temperatures over California within ERA5 1979-2020 and show how anomalous August 2020 was. ![California Temperature August 2020](../../../graphs/California_anomaly.png) We furthermore create an UNSEEN ensemble and show that these kinds of fire seasons can be expected to occur more often in our present climate, since we find a clear trend in temperature extremes over the last decades. ### Retrieve data <div class="alert alert-info"> Note In this notebook you cannot use the Python functions under Retrieve and Preprocess (they are here only for documentation on the entire workflow; see [retrieve](../1.Download/1.Retrieve.ipynb) if you want to download your own dataset). The resulting preprocessed dataset is provided so you can perform statistical analysis on the dataset and rerun the evaluation and examples provided. </div> The main functions to retrieve all forecasts (SEAS5) and reanalysis (ERA5) are `retrieve_SEAS5` and `retrieve_ERA5`. We want to download 2m temperature for August over California. By default, the hindcast years of 1981-2016 are downloaded for SEAS5. We include the years 1981-2020. The folder indicates where the files will be stored, in this case outside of the UNSEEN-open repository, in a 'California_example' directory.
For more explanation, see [retrieve](../1.Download/1.Retrieve.ipynb). ``` import os import sys sys.path.insert(0, os.path.abspath('../../../')) os.chdir(os.path.abspath('../../../')) import src.cdsretrieve as retrieve import src.preprocess as preprocess import numpy as np import xarray as xr retrieve.retrieve_SEAS5( variables=['2m_temperature', '2m_dewpoint_temperature'], target_months=[8], area=[70, -130, 20, -70], years=np.arange(1981, 2021), folder='E:/PhD/California_example/SEAS5/') retrieve.retrieve_ERA5(variables=['2m_temperature', '2m_dewpoint_temperature'], target_months=[8], area=[70, -130, 20, -70], folder='E:/PhD/California_example/ERA5/') ``` ### Preprocess In the preprocessing step, we first merge all downloaded files into one xarray dataset, then take the spatial average over the domain and a temporal average over August. Read the docs on [preprocessing](../2.Preprocess/2.Preprocess.ipynb) for more info. ``` SEAS5_California = preprocess.merge_SEAS5(folder ='E:/PhD/California_example/SEAS5/', target_months = [8]) ``` And for ERA5: ``` ERA5_California = xr.open_mfdataset('E:/PhD/California_example/ERA5/ERA5_????.nc',combine='by_coords') ``` We calculate the [standardized anomaly of the 2020 event](../California_august_temperature_anomaly.ipynb) and select the 2m temperature over the region where 2 standard deviations from the 1979-2010 average was exceeded. This is a simple average; an area-weighted average would be more appropriate, since grid cell area decreases with latitude (see [preprocess](../2.Preprocess/2.Preprocess.ipynb)). ``` ERA5_anomaly = ERA5_California['t2m'] - ERA5_California['t2m'].sel(time=slice('1979','2010')).mean('time') ERA5_sd_anomaly = ERA5_anomaly / ERA5_California['t2m'].std('time') ERA5_California_events = ( ERA5_California['t2m'].sel( # Select 2 metre temperature longitude = slice(-125,-100), # Select the longitude latitude = slice(45,20)). # And the latitude where(ERA5_sd_anomaly.sel(time = '2020').squeeze('time') > 2).
##Mask the region where 2020 sd >2. mean(['longitude', 'latitude'])) #And take the mean ``` Plot the August temperatures over the defined California domain: ``` ERA5_California_events.plot() ``` Select the same domain for SEAS5 and extract the events. ``` SEAS5_California_events = ( SEAS5_California['t2m'].sel( longitude = slice(-125,-100), # Select the longitude latitude = slice(45,20)). where(ERA5_sd_anomaly.sel(time = '2020').squeeze('time') > 2). mean(['longitude', 'latitude'])) ``` And here we store the data in the Data section so the rest of the analysis in R can be reproduced. ``` SEAS5_California_events.to_dataframe().to_csv('Data/SEAS5_California_events.csv') ERA5_California_events.to_dataframe().to_csv('Data/ERA5_California_events.csv') ``` ### Evaluate <div class="alert alert-info"> Note From here onward we use R and not python! We switch to R since we believe R has a better functionality in extreme value statistics. </div> ``` setwd('../../..') getwd() SEAS5_California_events <- read.csv("Data/SEAS5_California_events.csv", stringsAsFactors=FALSE) ERA5_California_events <- read.csv("Data/ERA5_California_events.csv", stringsAsFactors=FALSE) ## Convert Kelvin to Celsius SEAS5_California_events$t2m <- SEAS5_California_events$t2m - 273.15 ERA5_California_events$t2m <- ERA5_California_events$t2m - 273.15 ## Convert character time to Date format ERA5_California_events$time <- lubridate::ymd(ERA5_California_events$time) SEAS5_California_events$time <- lubridate::ymd(SEAS5_California_events$time) ``` *Is the UNSEEN ensemble realistic?* To answer this question, we perform three statistical tests: independence, model stability and model fidelity tests. These statistical tests are available through the [UNSEEN R package](https://github.com/timokelder/UNSEEN). See [evaluation](../3.Evaluate/3.Evaluate.ipynb) for more info. 
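The masking above keys on the standardized anomaly: (value - climatological mean) / climatological standard deviation. A minimal numpy sketch with made-up August means, where the 1979-2010 baseline is mimicked by the first four years:

```python
import numpy as np

temps = np.array([24.1, 24.8, 23.9, 25.0, 27.5])  # hypothetical August means
clim_mean = temps[:4].mean()   # baseline climatology (first 4 "years")
clim_std = temps[:4].std()
sd_anomaly = (temps - clim_mean) / clim_std

print(sd_anomaly[-1] > 2)  # True: the last year stands out as > 2 sigma
```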
``` require(UNSEEN) ``` #### Timeseries <a id='Timeseries'></a> We plot the timeseries of SEAS5 (UNSEEN) and ERA5 (OBS) for August temperatures over California. You can call the documentation of the function with `?unseen_timeseries`. ``` timeseries = unseen_timeseries( ensemble = SEAS5_California_events, obs = ERA5_California_events, ensemble_yname = "t2m", ensemble_xname = "time", obs_yname = "t2m", obs_xname = "time", ylab = "August California temperature (C)") timeseries ggsave(timeseries, height = 5, width = 6, filename = "graphs/Calif_timeseries.png") ``` The timeseries consists of **hindcast (years 1982-2016)** and **archived forecasts (years 2017-2020)**. The datasets are slightly different: the hindcast contains 25 members whereas operational forecasts contain 51 members, the native resolution is different and the dataset from which the forecasts are initialized is different. **For the evaluation of the UNSEEN ensemble we want to only use the SEAS5 hindcasts for a consistent dataset**. Note, 2017 is not used in either the hindcast or the operational dataset, since it contains forecasts both initialized in 2016 (hindcast) and 2017 (forecast), see [retrieve](../1.Download/1.Retrieve.ipynb). We split SEAS5 into hindcast and operational forecasts: ``` SEAS5_California_events_hindcast <- SEAS5_California_events[ SEAS5_California_events$time < '2017-02-01' & SEAS5_California_events$number < 25,] SEAS5_California_events_forecasts <- SEAS5_California_events[ SEAS5_California_events$time > '2017-02-01',] ``` And we select the same years for ERA5.
``` ERA5_California_events_hindcast <- ERA5_California_events[ ERA5_California_events$time > '1981-02-01' & ERA5_California_events$time < '2017-02-01',] ``` Which results in the following timeseries: ``` unseen_timeseries( ensemble = SEAS5_California_events_hindcast, obs = ERA5_California_events_hindcast, ensemble_yname = "t2m", ensemble_xname = "time", obs_yname = "t2m", obs_xname = "time", ylab = "August California temperature (C)") ``` #### Evaluation tests With the hindcast dataset we evaluate the independence, stability and fidelity. Here, we plot the results for the fidelity test, for more detail on the other tests see the [evaluation section](../3.Evaluate/3.Evaluate.ipynb). The fidelity test shows us how consistent the model simulations of UNSEEN (SEAS5) are with the observed (ERA5). The UNSEEN dataset is much larger than the observed -- hence they cannot simply be compared. For example, what if we had faced a few more or a few less heatwaves purely by chance? This would influence the observed mean, but not so much influence the UNSEEN ensemble because of the large data sample. Therefore we express the UNSEEN ensemble as a range of plausible means, for data samples of the same length as the observed. We do the same for higher order [statistical moments](https://en.wikipedia.org/wiki/Moment_(mathematics)). ``` Eval = fidelity_test( obs = ERA5_California_events_hindcast$t2m, ensemble = SEAS5_California_events_hindcast$t2m, units = 'C', biascor = FALSE, fontsize = 14 ) Eval ggsave(Eval, filename = "graphs/Calif_fidelity.png") ``` The fidelity test shows that the mean of the UNSEEN ensemble is too low compared to the observed -- the blue line falls outside of the model range in a. To correct for this low bias, we can apply an additive bias correction, which only corrects the mean of the simulations. 
Let's apply the additive bias correction: ``` obs = ERA5_California_events_hindcast$t2m ensemble = SEAS5_California_events_hindcast$t2m ensemble_biascor = ensemble + (mean(obs) - mean(ensemble)) fidelity_test( obs = obs, ensemble = ensemble_biascor, units = 'C', biascor = FALSE ) ``` This shows us what we expected: the mean bias is corrected because the model simulations are shifted up (the blue line is still the same, the axis has just shifted along with the histogram), but the other statistical moments are the same. ### Illustrate ``` source('src/evt_plot.r') ``` We apply extreme value theory to analyze the trend in 100-year temperature extremes. There are different extreme value distributions that can be fit to the data. First, we fit a stationary Gumbel and a GEV distribution (including shape parameter) to the observed extremes. Then we fit a nonstationary GEV distribution to the observed temperatures and show that this better describes the data because the p-values of 0.006 and 0.002 are very small (much below 0.05 based on 5% significance with the likelihood ratio test). <!-- How about nonstationarity? Here I fit nonstationary distributions to the observed and to UNSEEN, and test whether those distributions fit better than stationary distributions. With a p value of 0.006, the nonstationary distribution is clearly a better fit. --> ``` ## Fit stationary distributions fit_obs_Gumbel <- fevd(x = obs, type = "Gumbel" ) fit_obs_GEV <- fevd(x = obs, type = "GEV" ) ## And the nonstationary distribution fit_obs_GEV_nonstat <- fevd(x = obs, type = "GEV", location.fun = ~ c(1:36), ##Fitting the gev with a location and scale parameter linearly correlated to the covariate (years) scale.fun = ~ c(1:36), use.phi = TRUE ) #And test the fit ##1. Stationary Gumbel vs stationary GEV lr.test(fit_obs_Gumbel, fit_obs_GEV) ##2.
Stationary GEV vs Nonstationary GEV lr.test(fit_obs_GEV, fit_obs_GEV_nonstat) ``` For the unseen ensemble this analysis is slightly more complicated since we need a covariate that has the same length as the ensemble: ``` #Create the ensemble covariate year_vector = as.integer(format(SEAS5_California_events_hindcast$time, format="%Y")) covariate_ens = year_vector - 1980 # Fit the stationary distribution fit_unseen_GEV <- fevd(x = ensemble_biascor, type = 'GEV', use.phi = TRUE) fit_unseen_Gumbel <- fevd(x = ensemble_biascor, type = 'Gumbel', use.phi = TRUE) # Fit the nonstationary distribution fit_unseen_GEV_nonstat <- fevd(x = ensemble_biascor, type = 'GEV', location.fun = ~ covariate_ens, ##Fitting the gev with a location and scale parameter linearly correlated to the covariate (years) scale.fun = ~ covariate_ens, use.phi = TRUE) ``` And the likelihood ratio test tells us that the nonstationary GEV distribution is the best fit, both p-values < 2.2e-16: ``` #And test the fit ##1. Stationary Gumbel vs stationary GEV lr.test(fit_unseen_Gumbel,fit_unseen_GEV) ##2. Stationary GEV vs Nonstationary GEV lr.test(fit_unseen_GEV, fit_unseen_GEV_nonstat) ``` We plot unseen trends in 100-year extremes. For more info on the methods see [this paper](https://doi.org/10.31223/osf.io/hyxeq) ``` p1 <- unseen_trends1(ensemble = ensemble_biascor, x_ens = year_vector, x_obs = 1981:2016, rp = 100, obs = obs, covariate_ens = covariate_ens, covariate_obs = c(1:36), GEV_type = 'GEV', ylab = 'August temperature (c)') p1 p2 <- unseen_trends2(ensemble = ensemble_biascor, obs = obs, covariate_ens = covariate_ens, covariate_obs = c(1:36), GEV_type = 'GEV', ylab = 'August temperature (c)') p2 ``` **Applications:** We have seen the worst fire season over California this year. Such fires are likely part of a chain of impacts, from droughts to heatwaves to fires, with feedbacks between them. Here we assess August temperatures and show that the 2020 August average temperature was very anomalous. 
We furthermore use SEAS5 forecasts to analyze the trend in rare extremes. Evaluation metrics show that the model simulations have a low bias, which we correct for using an additive bias correction. UNSEEN trend analysis shows a clear trend over time, both in the model and in the observed temperatures. Based on this analysis, temperature extremes that you would expect to occur once in 100 years in 1981 might occur once in 10 years in 2015 -- and even more frequently now! **Note** Our analysis shows the results of a *linear* trend analysis of August temperature averages over 1981-2015. Other time windows, trends other than linear, and other spatial domains could (should?) be investigated, as well as drought estimates in addition to temperature extremes.
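The `lr.test` calls above compare nested models via the likelihood-ratio statistic 2(l1 - l0), referred to a chi-squared distribution with degrees of freedom equal to the number of extra parameters. A stdlib-only Python sketch of that recipe for the df = 2 case used here (the log-likelihood values are invented for illustration):

```python
import math

def lr_test_df2(loglik_null, loglik_alt):
    """Likelihood-ratio test for nested models differing by two parameters.

    For df = 2 the chi-squared survival function has the closed form exp(-x/2).
    """
    statistic = 2.0 * (loglik_alt - loglik_null)
    p_value = math.exp(-statistic / 2.0)
    return statistic, p_value

# Hypothetical fits: stationary GEV vs nonstationary GEV (2 extra parameters).
stat, p = lr_test_df2(loglik_null=-120.0, loglik_alt=-114.0)
print(stat, p < 0.05)  # 12.0 True
```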
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt from keras.preprocessing.text import Tokenizer from sklearn.model_selection import train_test_split from keras.preprocessing import sequence from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from keras.layers.convolutional import Conv1D from keras.layers.convolutional import MaxPooling1D from keras.layers import Dropout from keras.layers.embeddings import Embedding from sklearn.model_selection import GridSearchCV from keras.wrappers.scikit_learn import KerasClassifier import ndac %matplotlib inline #read in the data and classify data = pd.read_csv('dataframes/DF_prest.csv', index_col=0) data, hist = ndac.quantile_classify(data['conc_cf'], data['aa_seq']) # setup 'docs' for use with Tokenizer def aa_seq_doc(aa_sequence): """This function takes in an amino acid sequence (aa sequence) and adds spaces between each amino acid.""" return ' '.join([aa_sequence[i:i+1] for i in range(0, len(aa_sequence))]) data['aa_seq_doc'] = data['aa_seq'].apply(aa_seq_doc) data = data[pd.notnull(data['aa_seq_doc'])] # check shape print('data shape: ', data.shape) # define sequence documents docs = list(data['aa_seq_doc']) # create the tokenizer t = Tokenizer() # fit the tokenizer on the documents t.fit_on_texts(docs) # integer encode documents X = t.texts_to_sequences(docs) y = data['class'].values # fix random seed for reproducibility np.random.seed(27315) # load the dataset but only keep the top n words, zero the rest top_words = len(t.word_index) + 1 # truncate and pad input sequences seq_lengths = [len(seq) for seq in X] max_seq_length = max(seq_lengths) X = sequence.pad_sequences(X, maxlen=max_seq_length) # tune hyperparameters for simple model # model based on "A C-LSTM Neural Network for Text Classification" def create_model(embedding_length=16, num_filters=128, pool_size=2, lstm_nodes=100, drop=0.5, recurrent_drop=0.5, filter_length=3): # create the model 
model = Sequential() model.add(Embedding(top_words, embedding_length, input_length=max_seq_length)) model.add(Conv1D(filters=num_filters, kernel_size=filter_length, padding='same', activation='selu')) model.add(MaxPooling1D(pool_size=pool_size)) model.add(LSTM(lstm_nodes, dropout=drop, recurrent_dropout=recurrent_drop)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) return model model = KerasClassifier(build_fn=create_model, batch_size=64, epochs=30, verbose=0) # define the grid search parameters # model hyperparameters embedding_length = [4, 6, 8] num_filters = [100] filter_length = [8, 10] pool_size = [4] lstm_nodes = [100] param_grid = dict(num_filters=num_filters, pool_size=pool_size, lstm_nodes=lstm_nodes, filter_length=filter_length, embedding_length=embedding_length) grid = GridSearchCV(estimator=model, param_grid=param_grid, cv=3, verbose=10) grid_result = grid.fit(X, y) # summarize results print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) grid_df = pd.DataFrame(grid_result.cv_results_['params']) grid_df['mean'] = grid_result.cv_results_['mean_test_score'] grid_df['stddev'] = grid_result.cv_results_['std_test_score'] # print results to csv file grid_df.to_csv('2018-06-19_aa_grid_search_results.csv') ```
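The spacing trick in `aa_seq_doc` is what makes the Keras `Tokenizer` treat each amino acid as its own token, since the tokenizer splits on whitespace. A dependency-free sketch of the idea (the tiny `fit_word_index` helper is illustrative — Keras assigns ids by token frequency, while this version uses first-seen order):

```python
def aa_seq_doc(aa_sequence):
    """Insert a space between each amino acid so that a
    whitespace tokenizer treats every residue as a token."""
    return ' '.join(aa_sequence[i:i + 1] for i in range(len(aa_sequence)))

def fit_word_index(docs):
    """Toy whitespace tokenizer: map each distinct token to an integer id."""
    index = {}
    for doc in docs:
        for token in doc.lower().split():
            if token not in index:
                index[token] = len(index) + 1  # ids start at 1, as in Keras
    return index

docs = [aa_seq_doc('MKV'), aa_seq_doc('MVV')]
word_index = fit_word_index(docs)
encoded = [[word_index[t] for t in d.lower().split()] for d in docs]
```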
# Building a change detection app using Jupyter Dashboard The Python API, along with the [Jupyter Dashboard](http://jupyter-dashboards-layout.readthedocs.io/) project enables Python developers to quickly build and prototype interactive web apps. This sample illustrates one such app which can be used to detect the changes in vegetation between the two dates. Increases in vegetation are shown in green, and decreases are shown in magenta. This sample uses the fast on-the-fly processing power of raster functions available in the `raster` module of the Python API. <blockquote>To run this sample you need `jupyter_dashboards` package in your conda environment. You can install it as shown below. For information on this, [refer to the install instructions](http://jupyter-dashboards-layout.readthedocs.io/en/latest/getting-started.html#installing-and-enabling)</blockquote> conda install jupyter_dashboards -c conda-forge <img src="../../static/img/02_change_detection_app_01.gif"> ### Connect to ArcGIS Online ``` from arcgis.gis import GIS from arcgis.geocoding import geocode from arcgis.raster.functions import * from arcgis import geometry import pandas as pd # connect as an anonymous user gis = GIS() # search for the landsat multispectral imagery layer landsat_item = gis.content.search("Landsat Multispectral tags:'Landsat on AWS','landsat 8', 'Multispectral', 'Multitemporal', 'imagery', 'temporal', 'MS'", 'Imagery Layer', outside_org=True)[0] landsat = landsat_item.layers[0] df = None ``` ### Create widget controls to accept place of interest We use the `widgets` module from `ipywidgets` to create a text box and command button controls. These controls allow the user to specify a place of interest. 
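`Button.on_click` used in the cells that follow is a simple callback-registry pattern: the widget stores handler functions and invokes each one, passing the widget itself, when the event fires. A dependency-free sketch of that mechanism (`FakeButton` is an illustrative stand-in, not the ipywidgets implementation):

```python
class FakeButton:
    """Minimal stand-in for a clickable widget: registers handlers
    and calls each one (passing the button itself) on click."""
    def __init__(self, description):
        self.description = description
        self._click_handlers = []

    def on_click(self, handler):
        self._click_handlers.append(handler)

    def click(self):
        # what the real widget does when the user clicks it
        for handler in self._click_handlers:
            handler(self)

events = []
gobtn = FakeButton('Go')
gobtn.on_click(lambda b: events.append(b.description))
gobtn.click()
```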
``` import ipywidgets as widgets # text box widget location = widgets.Text(value='Ranchi, India', placeholder='Ranchi, India', description='Location:', disabled=False) # command button widget gobtn = widgets.Button(description='Go', disabled=False, button_style='', tooltip='Go', icon='check') # define what happens when the command button is clicked def on_gobutton_clicked(b): global df global m global oldslider # geocode the place name and set that as the map's extent area = geocode(location.value)[0] m.extent = area['extent'] df = filter_images() gobtn.on_click(on_gobutton_clicked) location_items = [location, gobtn] widgets.HBox(location_items) ``` Create the map widget and load the landsat imagery layer ``` m = gis.map(location.value) m.add_layer(landsat) display(m) ``` ### Create date slider controls We create two slider controls to pick the before and after dates. ``` oldindex = 0 # int(len(df)/2) # before image date slider oldslider = widgets.IntSlider(value=oldindex, min=0,max=10, #len(df) - 1, step=1, description='Older:', disabled=False, continuous_update=True, orientation='horizontal', readout=False, readout_format='f', slider_color='white') old_label = widgets.Label(value='')#str(df.Time.iloc[oldindex].date())) # define the slider behavior def on_old_value_change(change): global df i = change['new'] if df is not None: try: # print(df.Time.iloc[i].date()) old_label.value = str(df.Time.iloc[i].date()) except: pass oldslider.observe(on_old_value_change, names='value') widgets.HBox([oldslider, old_label]) newindex = 0 # len(df) - 1 # after image date slider newslider = widgets.IntSlider(value=newindex, min=0, max=10, #len(df) - 1, step=1, description='Newer:', disabled=False, continuous_update=True, orientation='horizontal', readout=False, readout_format='f', slider_color='white') new_label = widgets.Label(value='') #str(df.Time.iloc[newindex].date())) # define the slider behavior def on_new_value_change(change): global df i = change['new'] if df is not
None: try: # print(df.Time.iloc[i].date()) new_label.value = str(df.Time.iloc[i].date()) except: pass newslider.observe(on_new_value_change, names='value') widgets.HBox([newslider, new_label]) ``` ### Query the time-enabled landsat imagery layer Based on the dates selected from the sliders, we filter the layer for images. ``` def update_sliders(tdf): global oldslider global newslider oldslider.max = len(tdf) - 1 newslider.max = len(tdf) -1 oldindex = int(len(tdf)/2) newindex = int(len(tdf) -1) oldslider.value = oldindex newslider.value = newindex old_label.value = str(tdf.Time.iloc[oldindex].date()) new_label.value = str(tdf.Time.iloc[newindex].date()) def filter_images(): global df area = geocode(location.value, out_sr=landsat.properties.spatialReference)[0] extent = area['extent'] selected = landsat.filter_by(where="(Category = 1) AND (CloudCover <=0.10)", geometry=geometry.filters.intersects(extent)) fs = selected.query(out_fields="AcquisitionDate, GroupName, Best, CloudCover, WRS_Row, WRS_Path, Month, Name", return_geometry=True, return_distinct_values=False, order_by_fields="AcquisitionDate") tdf = fs.sdf df = tdf tdf['Time'] = pd.to_datetime(tdf['AcquisitionDate'], unit='ms') if len(tdf) > 1: update_sliders(tdf) # m.draw(tdf.iloc[oldslider.value].SHAPE) return tdf df = filter_images() ``` ### Perform change detection when the action button is clicked We create a command button and when it is clicked, display the difference in NDVI values in shades of green and magenta.
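The `ndvi(old, '5 4')` calls in the next cell compute the normalized difference vegetation index from Landsat 8 band 5 (near infrared) and band 4 (red), i.e. $NDVI = (NIR - Red)/(NIR + Red)$. A local numpy version for intuition (`ndvi_local` is illustrative; the real computation happens server-side as a raster function):

```python
import numpy as np

def ndvi_local(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); healthy vegetation -> high values."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# dense vegetation reflects strongly in NIR and absorbs red
veg = ndvi_local(0.5, 0.1)    # ~0.667
bare = ndvi_local(0.3, 0.25)  # ~0.09
```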
``` # create the action button diffbtn = widgets.Button(description='Detect changes', disabled=False, button_style='success', tooltip='Show Different Image', icon='check') def on_diffbutton_clicked(b): # m.clear_graphics() first = df.iloc[oldslider.value].OBJECTID last = df.iloc[newslider.value].OBJECTID old = landsat.filter_by('OBJECTID='+str(first)) new = landsat.filter_by('OBJECTID='+str(last)) diff = stretch(composite_band([ndvi(old, '5 4'), ndvi(new, '5 4'), ndvi(old, '5 4')]), stretch_type='stddev', num_stddev=3, min=0, max=255, dra=True, astype='u8') m.add_layer(diff) diffbtn.on_click(on_diffbutton_clicked) diffbtn ``` <blockquote>To run this sample as a dashboard, first run all the cells. Then if you have the `jupyter dashboard` package installed, you should see a `Dashboard View` set of buttons in your cell toolbar. Click on the `Report layout` to specify which cells to hide and then switch to `dashboard preview`.</blockquote> <img src="../../static/img/02_change_detection_guide_01.gif"> ### Conclusion This sample demonstrates how developers can make use of the Python API and the rich Jupyter ecosystem to quickly build web apps.
``` # Exceptions # You have already seen exceptions in previous code. # They occur when something goes wrong, due to incorrect code or input. # When an exception occurs, the program immediately stops. # The following code produces the ZeroDivisionError exception by trying # to divide 7 by 0. num1 = 7 num2 = 0 print(num1/num2) # Exceptions # Different exceptions are raised for different reasons. # Common exceptions: # ImportError: an import fails; # IndexError: a list is indexed with an out-of-range number; # NameError: an unknown variable is used; # SyntaxError: the code can't be parsed properly; # TypeError: a function is called on a value of an inappropriate type; # ValueError: a function is called on a value of the correct type, but # with an inappropriate value. print('7' + 4) # Exception Handling # To handle exceptions, and to call code when an exception occurs, you # can use a try/except statement. # The try block contains code that might throw an exception. # If that exception occurs, the code in the try block stops being # executed, and the code in the except block is run. If no error occurs, # the code in the except block doesn't run. # For example: try: num1 = 1 num2 = 2 print(num1 / num2) print('Done calculation') except ZeroDivisionError: print('An error occurred') print('due to zero division') try: variable = 10 print(10 / 2) except ZeroDivisionError: print('Error') print('Finished') # Exception Handling # A try statement can have multiple different except blocks to handle different # exceptions. # Multiple exceptions can also be put into a single except block using # parentheses, to have the except block handle all of them. 
try: variable = 10 print(variable + 'hello') print(variable / 2) except ZeroDivisionError: print('Divided by zero') except(ValueError, TypeError): print('Error occurred') try: meaning = 42 print(meaning/0) print('the meaning of life') except(ValueError, TypeError): print('ValueError or TypeError occurred') except ZeroDivisionError: print('Divided by zero') # Exception Handling # An except statement without any exception specified will catch all errors. These should be used sparingly, as they can catch unexpected errors and hide programming mistakes. # For example: try: word = 'spam' print(word/0) except: print('An error occurred') # Exception handling is particularly useful when dealing with user input. # finally # To ensure some code runs no matter what errors occur, you can use a # finally statement. The finally statement is placed at the bottom of a # try/except statement. Code within a finally statement always runs after # execution of the code in the try, and possibly in the except, blocks. try: print('Hello') print(1/0) except ZeroDivisionError: print('Divided by zero') finally: print('This code will run no matter what') try: print(1) except: print(2) finally: print(3) # finally # Code in a finally statement even runs if an uncaught exception occurs # in one of the preceding blocks. try: print(1) print(10 / 0) except ZeroDivisionError: print(unknown_var) finally: print('This is executed last') # Raising Exceptions # You can raise exceptions by using the raise statement. print(1) raise ValueError print(2) # You need to specify the type of the exception raised. try: print(1 / 0) except ZeroDivisionError: raise ValueError # Raising Exceptions # Exceptions can be raised with arguments that give detail about them. # For example: name = '123' raise NameError('Invalid name') # Raising Exceptions # In except blocks, the raise statement can be used without arguments # to re-raise whatever exception occurred. 
try: num = 5 / 0 except: print('An error occurred') raise # Assertions # An assertion is a sanity-check that you can turn on or turn off when you # have finished testing the program. # An expression is tested, and if the result comes up false, an exception is raised. # Assertions are carried out through use of the assert statement. print(1) assert 2 + 2 == 4 print(2) assert 1 + 1 == 3 print(3) # Programmers often place assertions at the start of a function to check # for valid input, and after a function call to check for valid output. print(0) assert 'h'!='w' print(1) assert False print(2) assert True print(3) # Assertions # The assert can take a second argument that is passed to the AssertionError # raised if the assertion fails. temp = -10 assert(temp>=0), 'Colder than absolute zero' # Opening Files # You can use Python to read and write the contents of files. # Text files are the easiest to manipulate. Before a file can be edited, # it must be opened, using the open function. myfile = open('filename.txt') # The argument of the open function is the path to the file. If the file is in the current working directory of the program, you can specify only its name. # Opening Files # You can specify the mode used to open a file by applying a second argument to the open function. # Sending "r" means open in read mode, which is the default. # Sending "w" means write mode, for rewriting the contents of a file. # Sending "a" means append mode, for adding new content to the end of the file. # Adding "b" to a mode opens it in binary mode, which is used for non-text files (such as image and sound files). # For example: open('filename.txt', 'w') open('filename.txt', 'r') open('filename.txt') # binary write mode: open('filename.txt','wb') # Opening Files # Once a file has been opened and used, you should close it. # This is done with the close method of the file object.
file = open('filename.txt','w') # do stuff to the file file.close() # We will read/write content to files in the upcoming lessons. # Reading Files # The contents of a file that has been opened in text mode can be read using the read method. file = open('filename.txt', 'r') cont = file.read() print(cont) file.close() # Reading Files # To read only a certain amount of a file, you can provide a number as an argument to the read function. This determines the number of bytes that should be read. # You can make more calls to read on the same file object to read more of the file byte by byte. With no argument, read returns the rest of the file. file = open('filename.txt', 'r') print(file.read(16)) print(file.read(4)) print(file.read(2)) print(file.read()) file.close() file = open('filename.txt', 'r') for i in range(21): print(file.read(4)) file.close() # Reading Files # After all contents in a file have been read, any attempts to read further from that file will return an empty string, because you are trying to read from the end of the file. file = open('filename.txt', 'r') file.read() print('Re-reading') print(file.read()) print('Finished') file.close() len(open('test.txt').readlines()) # Writing Files # To write to files you use the write method, which writes a string to the file. # For example: file = open('newfile.txt','w') file.write('this has been written to a file') file.close() file = open('newfile.txt', 'r') print(file.read()) file.close() # The "w" mode will create a file, if it does not already exist. # Writing Files # The write method returns the number of bytes written to a file, if successful. msg = 'Hello world!' file = open('newfile.txt', 'w') amount_written = file.write(msg) print(amount_written) file.close() # Working with Files # It is good practice to avoid wasting resources by making sure that files are always closed after they have been used. One way of doing this is to use try and finally.
try: f = open('filename.txt') print(f.read()) finally: f.close() try: f = open('filename.txt') print(f.read()) print(1/0) finally: f.close() # Working with Files # An alternative way of doing this is using with statements. This creates a temporary variable (often called f), which is only accessible in the indented block of the with statement. with open('filename.txt') as f: print(f.read()) try: print(1) print(20/0) print(2) except ZeroDivisionError: print(3) finally: print(4) try: print(1) assert 2 + 2 == 5 except AssertionError: print(3) except: print(4) ```
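The execution order the tutorial promises — the try body up to the error, then the matching except block, then finally — can be verified by recording each step (a minimal sketch; the `order` list is illustrative):

```python
order = []
try:
    order.append('try')
    1 / 0                      # raises ZeroDivisionError; rest of try is skipped
    order.append('unreachable')
except ZeroDivisionError:
    order.append('except')
finally:
    order.append('finally')    # always runs, error or not
```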
``` %cd /content/drive/My Drive/Colab Notebooks import os os.chdir('project') import tensorflow as tf from tensorflow import keras from keras_preprocessing import image from keras_preprocessing.image import ImageDataGenerator from tensorflow.keras.layers import Dense,Dropout,Flatten from tensorflow.keras.models import Sequential import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import pandas as pd TRAINING_DIR='dataset/' training_datagen=ImageDataGenerator( featurewise_center=True, samplewise_center=True, featurewise_std_normalization=True, samplewise_std_normalization=False, zca_whitening=False, zca_epsilon=1e-06, rotation_range=30, width_shift_range=0.2, height_shift_range=0.2, brightness_range=None, shear_range=0.4, zoom_range=0.4, fill_mode='nearest', horizontal_flip=True, vertical_flip=True, rescale=1./255, validation_split=0.2 ) train_generator=training_datagen.flow_from_directory(TRAINING_DIR, target_size=(128,128), shuffle=True, batch_size=128, class_mode='categorical', subset='training') validation_generator=training_datagen.flow_from_directory(TRAINING_DIR, target_size=(128,128), class_mode='categorical', subset='validation') import tensorflow import pandas as pd import numpy as np import os import keras import random import cv2 import math import seaborn as sns from sklearn.metrics import confusion_matrix from sklearn.preprocessing import LabelBinarizer from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt from tensorflow.keras.layers import Dense,GlobalAveragePooling2D,Convolution2D,BatchNormalization from tensorflow.keras.layers import Flatten,MaxPooling2D,Dropout from tensorflow.keras.applications import DenseNet121,ResNet152V2,VGG19,InceptionV3,MobileNetV2 from tensorflow.keras.applications.densenet import preprocess_input from tensorflow.keras.preprocessing import image from tensorflow.keras.preprocessing.image import ImageDataGenerator,img_to_array from tensorflow.keras.models import 
Model from tensorflow.keras.optimizers import Adam from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau import warnings warnings.filterwarnings("ignore") ``` **ResNet CNN Model** ``` model_d=ResNet152V2(weights='imagenet',include_top=False, input_shape=(128,128, 3)) x=model_d.output x= GlobalAveragePooling2D()(x) x= BatchNormalization()(x) x= Dropout(0.5)(x) x= Dense(1024,activation='relu')(x) x= Dense(512,activation='relu')(x) x= BatchNormalization()(x) x= Dropout(0.5)(x) preds=Dense(2,activation='softmax')(x) #FC-layer model=Model(inputs=model_d.input,outputs=preds) model.summary() for layer in model.layers[:-8]: layer.trainable=False for layer in model.layers[-8:]: layer.trainable=True model.compile(optimizer='Adam',loss='categorical_crossentropy',metrics=['accuracy',tf.keras.metrics.Recall(),tf.keras.metrics.Precision()]) anne = ReduceLROnPlateau(monitor='val_accuracy', factor=0.5, patience=5, verbose=1, min_lr=1e-3) checkpoint = ModelCheckpoint('Resnet_1.h5', verbose=1, save_best_only=True) # Fits-the-model history = model.fit_generator(train_generator, validation_data=validation_generator, epochs=50, verbose=1, callbacks=[anne, checkpoint]) plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('Model Accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Model Loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() np.save('Resnet_1.npy',history.history) ``` **VGG19 CNN Model** ``` model_d=VGG19(weights='imagenet',include_top=False, input_shape=(128,128, 3)) x=model_d.output x= GlobalAveragePooling2D()(x) x= BatchNormalization()(x) x= Dropout(0.5)(x) x= Dense(1024,activation='relu')(x) x= Dense(512,activation='relu')(x) x= BatchNormalization()(x) x= Dropout(0.5)(x) preds=Dense(2,activation='softmax')(x) #FC-layer 
model=Model(inputs=model_d.input,outputs=preds) model.summary() for layer in model.layers[:-8]: layer.trainable=False for layer in model.layers[-8:]: layer.trainable=True model.compile(optimizer='Adam',loss='categorical_crossentropy',metrics=['accuracy',tf.keras.metrics.Recall(),tf.keras.metrics.Precision()]) anne = ReduceLROnPlateau(monitor='val_accuracy', factor=0.5, patience=5, verbose=1, min_lr=1e-3) checkpoint = ModelCheckpoint('VGG191.h5', verbose=1, save_best_only=True) # Fits-the-model history = model.fit_generator(train_generator, validation_data=validation_generator, epochs=50, verbose=1, callbacks=[anne, checkpoint]) plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('Model Accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() np.save('sagar_VGG.npy',history.history) ``` **Inception CNN Model** ``` model_d=InceptionV3(weights='imagenet',include_top=False, input_shape=(128,128, 3)) x=model_d.output x= GlobalAveragePooling2D()(x) x= BatchNormalization()(x) x= Dropout(0.5)(x) x= Dense(1024,activation='relu')(x) x= Dense(512,activation='relu')(x) x= BatchNormalization()(x) x= Dropout(0.5)(x) preds=Dense(2,activation='softmax')(x) #FC-layer model=Model(inputs=model_d.input,outputs=preds) model.summary() for layer in model.layers[:-8]: layer.trainable=False for layer in model.layers[-8:]: layer.trainable=True model.compile(optimizer='Adam',loss='categorical_crossentropy',metrics=['accuracy',tf.keras.metrics.Recall(),tf.keras.metrics.Precision()]) anne = ReduceLROnPlateau(monitor='val_accuracy', factor=0.5, patience=5, verbose=1, min_lr=1e-3) checkpoint = ModelCheckpoint('InceptionV3.h5', verbose=1, save_best_only=True) # Fits-the-model history = 
model.fit_generator(train_generator, validation_data=validation_generator, epochs=50, verbose=1, callbacks=[anne, checkpoint]) plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('Model Accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() np.save('Inception.npy',history.history) ```
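All three models above freeze everything except the last 8 layers (`for layer in model.layers[:-8]: layer.trainable = False`). The slicing logic can be shown without Keras (the `Layer` class here is an illustrative stand-in for a Keras layer):

```python
class Layer:
    """Stand-in for a Keras layer with a trainable flag."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

layers = [Layer(f'layer_{i}') for i in range(20)]

# freeze the backbone, fine-tune only the last 8 layers
for layer in layers[:-8]:
    layer.trainable = False
for layer in layers[-8:]:
    layer.trainable = True

num_trainable = sum(layer.trainable for layer in layers)
```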
# Maximising the volume of a box Let us consider a sheet of metal of dimensions $l\times L$ ![](../img/sheet/main.png) We can cut into the sheet of metal a distance $x$ to create folds. ![](../img/cuts/main.png) The volume of a box with these folds will be given by: $V(x) = (l - 2x)\times (L-2x) \times x$ In this notebook we will use calculus (the study of continuous change) and `sympy` to identify the size of the cut that gives the biggest volume. ## Defining a function To help with writing our code, we will start by defining a Python function for our volume. ``` import sympy as sym sym.init_printing() x, l, L = sym.symbols("x, l, L") def V(x=x, l=l, L=L): """ Return the volume of a box as described. """ return (l - 2 * x) * (L - 2 * x) * x V() ``` We can use this function to get our value as a function of $L$ and $l$: ``` V(x=2) ``` Or we can pass values to all our variables and obtain a given volume: ``` V(2, l=12, L=14) ``` ### Exercises - Define the mathematical function $m x ^ 2 - w$ ## Plotting our function Let us start by looking at the volume as a function of $x$ for the following values of $l=20$ and $L=38$: ``` %matplotlib inline sym.plot(V(x,l=20, L=38), (x, 0, 10)); # We only consider x < min(l, L) / 2 ``` We see that our function has two stationary points (where the graph is flat).
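The symbolic `V` can be sanity-checked with plain arithmetic: for the call `V(2, l=12, L=14)` above, the answer should be $(12 - 4)\times(14 - 4)\times 2 = 160$. A float-only sketch (the `volume` helper is illustrative):

```python
def volume(x, l, L):
    """Plain-number version of the symbolic V(x) above."""
    return (l - 2 * x) * (L - 2 * x) * x

v = volume(2, 12, 14)  # (12-4) * (14-4) * 2 = 160
```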
### Exercises - Obtain a plot of $V(x)$ for $0\leq x \leq 20$ - Obtain a plot of the function $f(x) = x ^ 2$ - Obtain a plot of the function $f(x) = 1 / x$ ## Finding stationary points These stationary points correspond to places where the derivative of the function is 0: $$ \frac{dV}{dx}=0 $$ Let us find $\frac{dV}{dx}$ using `sympy`: ``` first_derivative = V().diff(x) first_derivative ``` Let us simplify our output: ``` first_derivative = first_derivative.simplify() first_derivative ``` Now to find the solutions to the equation: $$\frac{dV}{dx}=0$$ ``` stationary_points = sym.solveset(first_derivative, x) stationary_points ``` ### Exercises - Find the stationary points of $f(x)=x^2$ - Find the stationary points of $f(x)=mx^2-w$ ## Qualifying stationary points As we can see in our graph, one of our stationary points is a maximum and the other a minimum. These can be qualified by looking at the second derivative: - If the second derivative at a stationary point is **positive** then the stationary point is a **local minimum**; - If the second derivative at a stationary point is **negative** then the stationary point is a **local maximum**. Let us compute the second derivative using `sympy`: ``` second_derivative = V().diff(x, 2) second_derivative stationary_points second_derivative_values = [(sol, second_derivative.subs({x: sol})) for sol in stationary_points] second_derivative_values ``` We can see that the first solution gives a negative second derivative, thus it's a **local** maximum (as we saw in our plot). ``` optimal_x = second_derivative_values[0][0] optimal_x ``` We can compute the actual value for the running example: ``` particular_values = {"l": 20, "L": 38} particular_optimal_x = optimal_x.subs(particular_values) float(particular_optimal_x), float(V(particular_optimal_x, **particular_values)) ``` ### Exercises - Qualify the stationary points of $f(x)=x^2$ - Qualify the stationary points of $f(x)=mx^2-w$
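The sympy pipeline can be cross-checked by hand. Expanding gives $V(x) = 4x^3 - 2(l+L)x^2 + lLx$, so $V'(x) = 12x^2 - 4(l+L)x + lL$ and $V''(x) = 24x - 4(l+L)$; the smaller root of the quadratic is the local maximum. A numeric sketch using only the quadratic formula (`optimal_cut` is an illustrative helper):

```python
import math

def optimal_cut(l, L):
    """Smaller root of V'(x) = 12x^2 - 4(l+L)x + lL = 0 (the local maximum)."""
    s = l + L
    return (s - math.sqrt(s * s - 3 * l * L)) / 6

x_opt = optimal_cut(20, 38)             # ~4.18, matching the sympy result
curvature = 24 * x_opt - 4 * (20 + 38)  # V''(x_opt): negative => maximum
```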
``` !pip install kaggle !mkdir ~/.kaggle !mkdir ./.kaggle import json token = {"username":"e211097","key":"d1cbc0614124c60ed9eca762e87944cb"} with open('/content/.kaggle/kaggle.json', 'w') as file: json.dump(token, file) !cp /content/.kaggle/kaggle.json ~/.kaggle/kaggle.json !kaggle config set -n path -v{/content} !chmod 600 /root/.kaggle/kaggle.json !wget https://ceb.nlm.nih.gov/proj/malaria/cell_images.zip !unzip cell_images.zip ``` # Importing libraries and define data path ``` import matplotlib.pyplot as plt import os import numpy as np np.random.seed(1000) import pandas as pd import matplotlib.pyplot as plt %matplotlib inline from sklearn.model_selection import train_test_split import keras from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense, BatchNormalization, Dropout from keras.models import Sequential from keras.optimizers import SGD import os import cv2 from PIL import Image ORIGIN_PATH=os.getcwd() path = os.path.abspath("cell_images") SIZE = 64 dataset = [] label = [] DATA_class1=os.path.join(path,"Parasitized") DATA_class2 = os.path.join(path,"Uninfected") # TRAINING_GENERATOR_PATH = os.path.abspath("MalariaTrainData") # TESTING_GENERATOR_PATH = os.path.abspath("MalariaTestData") ``` ## Read images and define a label for each image ``` from skimage.io import imread parasitized_images = os.listdir(DATA_class1) for i, image_name in enumerate(parasitized_images): try: image_path=os.path.join(DATA_class1,image_name) image = cv2.imread(image_path) image = Image.fromarray(image, 'RGB') image = image.resize((SIZE, SIZE)) dataset.append(np.array(image)) label.append(0) except Exception: print("Could not read image {} with name {}".format(i, image_name)) ``` ## Images display ``` plt.figure(figsize = (20, 12)) for index, image_index in enumerate(np.random.randint(len(parasitized_images), size = 5)): plt.subplot(1, 5, index+1) plt.imshow(dataset[image_index]) Uninfected_images = os.listdir(DATA_class2) for i, image_name in 
enumerate(Uninfected_images): try: image_path=os.path.join(DATA_class2,image_name) image = cv2.imread(image_path) image = Image.fromarray(image, 'RGB') image = image.resize((SIZE, SIZE)) dataset.append(np.array(image)) label.append(1) except Exception: print("Could not read image {} with name {}".format(i, image_name)) plt.figure(figsize = (20, 12)) for index, image_index in enumerate(np.random.randint(len(Uninfected_images), size = 5)): plt.subplot(1, 5, index+1) plt.imshow(dataset[image_index]) image_data = np.array(dataset) label = np.array(label) idx = np.arange(image_data.shape[0]) np.random.shuffle(idx) image_data = image_data[idx] label = label[idx] image_data.shape ``` # CNN model ##### After some research, I chose the following structure, which performs better ``` from keras.models import Sequential from keras.layers import Conv2D from keras.layers import MaxPooling2D from keras.layers import Flatten from keras.layers import Dense from keras.layers import Dropout from keras.optimizers import SGD height = 64 width = 64 classes = 2 channels = 3 chanDim = -1 inputShape = (height, width, channels) chanDim = -1 model = Sequential() model.add(Conv2D(32, (3,3), activation = 'relu', input_shape = inputShape)) model.add(MaxPooling2D(2,2)) model.add(BatchNormalization(axis = chanDim)) model.add(Dropout(0.2)) model.add(Conv2D(32, (3,3), activation = 'relu')) model.add(MaxPooling2D(2,2)) model.add(BatchNormalization(axis = chanDim)) model.add(Dropout(0.2)) model.add(Conv2D(32, (3,3), activation = 'relu')) model.add(MaxPooling2D(2,2)) model.add(BatchNormalization(axis = chanDim)) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(512, activation = 'relu')) model.add(BatchNormalization(axis = chanDim)) model.add(Dropout(0.5)) model.add(Dense(classes, activation = 'softmax')) model.compile(loss = 'categorical_crossentropy', optimizer = 'Adam', metrics = ['accuracy']) model.summary() ``` ## Fitting the model ``` from keras.utils import to_categorical from
sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(image_data, to_categorical(np.array(label)), test_size = 0.20, random_state = 0) h = model.fit(X_train, y_train, epochs = 20, batch_size = 32) ``` ## Training accuracy and loss ``` plt.figure(figsize = (18,8)) plt.plot(range(20), h.history['acc'], label = 'Training Accuracy') plt.plot(range(20), h.history['loss'], label = 'Training Loss') plt.xlabel("Number of Epochs") plt.ylabel('Accuracy/Loss Value') plt.title('Training Accuracy and Training Loss') plt.legend(loc = "best") ``` # Evaluate the model ``` score = model.evaluate(X_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` # CNN with data augmentation ``` from keras.preprocessing.image import ImageDataGenerator train_datagen = ImageDataGenerator(rescale = 1/255., horizontal_flip = True, width_shift_range = 0.2, height_shift_range = 0.2, fill_mode = 'nearest', zoom_range = 0.3, rotation_range = 30) val_datagen = ImageDataGenerator(rescale = 1/255.)
train_generator = train_datagen.flow(X_train, y_train, batch_size = 64, shuffle = False) val_generator = val_datagen.flow(X_test, y_test, batch_size = 64, shuffle = False) model1 = Sequential() model1.add(Conv2D(32, (3,3), activation = 'relu', input_shape = inputShape)) model1.add(MaxPooling2D(2,2)) model1.add(BatchNormalization(axis = chanDim)) model1.add(Dropout(0.2)) model1.add(Conv2D(32, (3,3), activation = 'relu')) model1.add(MaxPooling2D(2,2)) model1.add(BatchNormalization(axis = chanDim)) model1.add(Dropout(0.2)) model1.add(Conv2D(32, (3,3), activation = 'relu')) model1.add(MaxPooling2D(2,2)) model1.add(BatchNormalization(axis = chanDim)) model1.add(Dropout(0.2)) model1.add(Flatten()) model1.add(Dense(512, activation = 'relu')) model1.add(BatchNormalization(axis = chanDim)) model1.add(Dropout(0.5)) model1.add(Dense(classes, activation = 'softmax')) model1.compile(loss = 'categorical_crossentropy', optimizer = 'Adam', metrics = ['accuracy']) model1.summary() from keras import optimizers optim = optimizers.Adam(lr = 0.001, decay = 0.001 / 64) model1.compile(loss = 'categorical_crossentropy', optimizer = optim, metrics = ['accuracy']) h1 = model1.fit_generator(train_generator, steps_per_epoch = len(X_train) // 64, epochs = 10) score = model1.evaluate_generator(val_generator, steps = 5) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` # Pre build model (VGG16) ``` from keras.applications import VGG16 vgg16 = VGG16(weights = 'imagenet',include_top = False,input_shape = (96,96,3)) vgg16.summary() for layers in vgg16.layers[:-4]: layers.trainable = False model2 = Sequential() model2.add(vgg16) model2.add(Flatten()) model2.add(Dense(64,activation='relu')) model2.add(Dense(1,activation = 'sigmoid')) model2.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy']) model2.summary() from keras.callbacks import ModelCheckpoint callback = ModelCheckpoint('model_vgg16.h5',monitor='val_acc',mode = 'max',save_best_only=True) calls = 
[callback] augmentor = ImageDataGenerator(rescale=1./255,zoom_range=0.2,shear_range=0.2,horizontal_flip=True,validation_split=0.2) train_generator = augmentor.flow_from_directory(path,batch_size=96, target_size = (96,96),class_mode = 'binary',subset = 'training') test_generator = augmentor.flow_from_directory(path,batch_size=96,target_size=(96,96), class_mode='binary',subset='validation') history = model2.fit_generator(train_generator, steps_per_epoch=10, epochs=1, callbacks = calls, validation_data=test_generator, validation_steps=64, ) score = model2.evaluate_generator(test_generator, steps = 5) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` ### Note that the model has been trained for only one epoch, since training takes a long time; we can expect better accuracy with more epochs.
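The layer-freezing pattern applied to VGG16 above (`for layers in vgg16.layers[:-4]: layers.trainable = False`) generalizes beyond Keras. A framework-free sketch of the same slicing logic; the `Layer` class below is a stand-in for Keras layer objects, not a real API:

```python
# Minimal stand-in for a framework layer object with a trainable flag.
class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True

# A hypothetical stack of seven layers, by analogy with vgg16.layers.
layers = [Layer(f"block{i}") for i in range(1, 8)]

# Freeze everything except the last 4 layers, as done for VGG16 above.
for layer in layers[:-4]:
    layer.trainable = False

frozen = [l.name for l in layers if not l.trainable]
tunable = [l.name for l in layers if l.trainable]
print(frozen)   # the first three layers stay frozen
print(tunable)  # only the last four layers receive gradient updates
```

Freezing the early layers keeps the generic low-level filters learned on ImageNet intact while only the later, more task-specific layers (plus the new classifier head) are updated.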
github_jupyter
# Transfer Learning In this notebook, you'll learn how to use pre-trained networks to solve challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html). ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks using an architecture built from convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU). Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy. With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now. ``` %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms, models ``` Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`. 
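The per-channel normalization described above can be checked with a small NumPy sketch; the gray `img` array below is a made-up placeholder for a real image tensor in [0, 1]:

```python
import numpy as np

# ImageNet per-channel statistics quoted above.
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# Hypothetical 224x224 RGB image, all pixels mid-gray (0.5 in each channel).
img = np.full((224, 224, 3), 0.5)

# Broadcasting subtracts/divides each channel by its own statistic,
# which is what transforms.Normalize does per channel.
normalized = (img - mean) / std

print(normalized[0, 0])  # the three normalized channel values for one pixel
```

Note the channel axis must be last for this broadcast; PyTorch tensors are channel-first, which is why `transforms.Normalize` handles the bookkeeping for you.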
``` data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=64) ``` We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on. ``` model = models.densenet121(pretrained=True) model ``` This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers. 
``` # Freeze parameters so we don't backprop through them for param in model.parameters(): param.requires_grad = False from collections import OrderedDict classifier = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(1024, 500)), ('relu', nn.ReLU()), ('fc2', nn.Linear(500, 2)), ('output', nn.LogSoftmax(dim=1)) ])) model.classifier = classifier ``` With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time. PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')` which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU. 
``` import time for device in ['cpu', 'cuda']: criterion = nn.NLLLoss() # Only train the classifier parameters, feature parameters are frozen optimizer = optim.Adam(model.classifier.parameters(), lr=0.001) model.to(device) for ii, (inputs, labels) in enumerate(trainloader): # Move input and label tensors to the GPU inputs, labels = inputs.to(device), labels.to(device) start = time.time() outputs = model.forward(inputs) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() if ii==3: break print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds") ``` You can write device agnostic code which will automatically use CUDA if it's enabled like so: ```python # at beginning of the script device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") ... # then whenever you get a new Tensor or Module # this won't copy if they are already on the desired device input = data.to(device) model = MyModule(...).to(device) ``` From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily. >**Exercise:** Train a pretrained model to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, which is also a good model to try out first. Make sure you are only training the classifier and the parameters for the features part are frozen. 
``` # Use GPU if it's available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = models.densenet121(pretrained=True) # Freeze parameters so we don't backprop through them for param in model.parameters(): param.requires_grad = False model.classifier = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Dropout(0.2), nn.Linear(256, 2), nn.LogSoftmax(dim=1)) criterion = nn.NLLLoss() # Only train the classifier parameters, feature parameters are frozen optimizer = optim.Adam(model.classifier.parameters(), lr=0.003) model.to(device); epochs = 1 steps = 0 running_loss = 0 print_every = 5 for epoch in range(epochs): for inputs, labels in trainloader: steps += 1 # Move input and label tensors to the default device inputs, labels = inputs.to(device), labels.to(device) logps = model.forward(inputs) loss = criterion(logps, labels) optimizer.zero_grad() loss.backward() optimizer.step() running_loss += loss.item() if steps % print_every == 0: test_loss = 0 accuracy = 0 model.eval() with torch.no_grad(): for inputs, labels in testloader: inputs, labels = inputs.to(device), labels.to(device) logps = model.forward(inputs) batch_loss = criterion(logps, labels) test_loss += batch_loss.item() # Calculate accuracy ps = torch.exp(logps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)).item() print(f"Epoch {epoch+1}/{epochs}.. " f"Train loss: {running_loss/print_every:.3f}.. " f"Test loss: {test_loss/len(testloader):.3f}.. " f"Test accuracy: {accuracy/len(testloader):.3f}") running_loss = 0 model.train() ```
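The accuracy bookkeeping inside the validation loop above (`torch.exp` on the log-probabilities, `topk`, comparison with the labels) can be mirrored in plain NumPy as a sanity check; the log-probabilities below are invented for illustration:

```python
import numpy as np

# Hypothetical log-probabilities for 4 samples over 2 classes (cat/dog),
# standing in for the LogSoftmax output of the classifier above.
logps = np.log(np.array([[0.9, 0.1],
                         [0.2, 0.8],
                         [0.6, 0.4],
                         [0.3, 0.7]]))
labels = np.array([0, 1, 1, 1])

ps = np.exp(logps)              # back to probabilities, like torch.exp(logps)
top_class = ps.argmax(axis=1)   # equivalent of ps.topk(1, dim=1) indices
accuracy = (top_class == labels).mean()
print(accuracy)  # 0.75: sample 2 is predicted 0 but labeled 1
```

The same reshape/compare dance (`labels.view(*top_class.shape)`) in the PyTorch version exists only to align tensor shapes before the elementwise equality test.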
github_jupyter
# Handwritten Digit Detection #### Helia Rasooli #### Zahra Bakhtiar #### Bahareh Behroozi #### Seyyedeh Zahra Fallah MirMousavi Ajdad # MNIST #### The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. ##### http://yann.lecun.com/exdb/mnist/ ``` import warnings from matplotlib import pyplot as plt import pandas as pd import numpy as np from scipy import ndimage #import itertools warnings.filterwarnings("ignore") def calculateDigitsAccuracy(predicted, actual): correct = 0 for i in range(len(predicted)): if predicted[i] == actual[i][0]: correct += 1 return correct / len(actual) def calculateLettersAccuracy(predicted, actual): correct = 0 for i in range(len(predicted)): if predicted[i] == actual[i]: correct += 1 return correct / len(actual) def showImage(data): plt.imshow(np.reshape(data, (28, 28)), cmap='gray_r') plt.show() def showImage_L(data): rotated_img = ndimage.rotate(np.reshape(data, (28, 28)), 90) plt.imshow(rotated_img, cmap='gray_r',origin='lower') plt.show() def showPlot(points, xLabel, yLabel): X = [x for (x, y) in points] Y = [y for (x, y) in points] plt.plot(X, Y) plt.ylabel(yLabel) plt.xlabel(xLabel) plt.show() def compareScores(X, trainScores, testScores, xlabel, ylabel): fig, ax = plt.subplots() for scores, label, style in [(trainScores, 'Train Data', ':ob'), (testScores, 'Test Data', ':or')]: ax.plot(X, scores, style, label=label) best_xy = max([(n, score) for n, score in zip(X, scores)], key=lambda x: x[1]) ax.annotate((best_xy[0], round(best_xy[1], 3)), xy=best_xy, xytext=(best_xy[0] + 5, best_xy[1]), arrowprops=dict(arrowstyle="->")) ax.legend() ax.set(xlabel=xlabel, ylabel=ylabel) fig.show() trainData = pd.read_csv('./MNIST_data/train_data.csv', header=None).values trainLabels = pd.read_csv('./MNIST_data/train_label.csv', header=None).values testData = pd.read_csv('./MNIST_data/test_data.csv', 
header=None).values testLabels = pd.read_csv('./MNIST_data/test_label.csv', header=None).values ``` An example of the digit 6 in the dataset: ``` showImage(trainData[1310]) ``` # K-Nearest Neighbors #### 1. In the KNN algorithm, the output for each test sample depends on the k closest training examples in the feature space. * In k-NN classification, the output is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. * In k-NN regression, the output is the property value for the object. This value is the average of the values of its k nearest neighbors. #### 2. ``` from sklearn import neighbors clf = neighbors.KNeighborsClassifier(n_neighbors=12) clf.fit(trainData, trainLabels) predictedTrain = clf.predict(trainData) predictedTest = clf.predict(testData) trainAcc = calculateDigitsAccuracy(predictedTrain, trainLabels) testAcc = calculateDigitsAccuracy(predictedTest, testLabels) print('train data accuracy:', trainAcc) print('test data accuracy:', testAcc) ``` #### 3. 
``` trainScores = [] testScores = [] X = [x for x in range(5, 15)] for k in X: clf = neighbors.KNeighborsClassifier(n_neighbors=k) clf.fit(trainData, trainLabels) predictedTrain = clf.predict(trainData) predictedTest = clf.predict(testData) trainAcc = calculateDigitsAccuracy(predictedTrain, trainLabels) testAcc = calculateDigitsAccuracy(predictedTest, testLabels) trainScores.append(trainAcc) testScores.append(testAcc) compareScores(X, trainScores, testScores, 'K Neighbors', 'Accuracy') clf = neighbors.KNeighborsClassifier(n_neighbors=20) clf.fit(trainData, trainLabels) nearests = clf.kneighbors([trainData[1042]], return_distance=False) print(nearests) fig, ax = plt.subplots(4, 5, subplot_kw=dict(xticks=[], yticks=[])) for (i, axi) in enumerate(ax.flat): axi.imshow(np.reshape(trainData[nearests[0][i]], (28, 28)), cmap='gray_r') ``` #### 6. * doesn't work well with high dimensional data * doesn't work well with categorical features * computationally expensive and memory-intensive # Decision Tree #### 7. A decision tree is a flowchart-like structure in which each internal node represents a "test" on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (decision taken after computing all attributes). The paths from root to leaf represent classification rules. ``` from sklearn import tree clf = tree.DecisionTreeClassifier(max_depth=22) clf.fit(trainData, trainLabels) predictedTrain = clf.predict(trainData) predictedTest = clf.predict(testData) trainAcc = calculateDigitsAccuracy(predictedTrain, trainLabels) testAcc = calculateDigitsAccuracy(predictedTest, testLabels) print('train data accuracy:', trainAcc) print('test data accuracy:', testAcc) ``` #### 9. 
``` trainScores = [] testScores = [] X = range(5, 30) for depth in X: clf = tree.DecisionTreeClassifier(max_depth=depth) clf.fit(trainData, trainLabels) predictedTrain = clf.predict(trainData) predictedTest = clf.predict(testData) trainAcc = calculateDigitsAccuracy(predictedTrain, trainLabels) testAcc = calculateDigitsAccuracy(predictedTest, testLabels) trainScores.append(trainAcc) testScores.append(testAcc) compareScores(X, trainScores, testScores, 'Max Depth', 'Accuracy') ``` ## Logistic Regression #### 10. Logistic Regression is used when the dependent variable (target) is categorical. It uses the sigmoid hypothesis function, 1 / (1 + e^(-z)), for prediction. Types of logistic regression: * Binary Logistic Regression: The categorical response has only 2 possible outcomes. Example: Spam or Not * Multinomial Logistic Regression: Three or more categories without ordering. Example: Predicting which food is preferred (Veg, Non-Veg, Vegan) * Ordinal Logistic Regression: Three or more categories with ordering. Example: Movie rating from 1 to 5 #### 11. 
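The sigmoid hypothesis behind logistic regression can be verified numerically with a minimal sketch:

```python
import math

def sigmoid(z):
    """Logistic (sigmoid) function: 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))    # 0.5: the decision boundary
print(sigmoid(4))    # close to 1, confident positive class
print(sigmoid(-4))   # close to 0, confident negative class
```

The symmetry `sigmoid(z) + sigmoid(-z) = 1` is what lets the binary classifier treat the two classes as complementary probabilities.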
``` from sklearn.linear_model import LogisticRegression clf = LogisticRegression(solver='lbfgs') clf.fit(trainData, trainLabels) predictedTrain = clf.predict(trainData) predictedTest = clf.predict(testData) trainAcc = calculateDigitsAccuracy(predictedTrain, trainLabels) testAcc = calculateDigitsAccuracy(predictedTest, testLabels) print('train data accuracy:', trainAcc) print('test data accuracy:', testAcc) ``` # LETTER DETECTION ``` trainData_L = [] trainLabels_L = [] testData_L = [] testLabels_L = [] train = [] test = [] train_z = pd.read_csv('./MNIST_data/emnist-letters-train.csv', header=None).values test_z = pd.read_csv('./MNIST_data/emnist-letters-test.csv', header=None).values for i in range(60000): train.append(train_z[i][1:785]) for i in range(10000): test.append(test_z[i]) trainLabel = [[row[i] for row in train] for i in range(1)][0] print(trainLabel[0]) testLabels_L = [[row[i] for row in test] for i in range(1)][0] for i in range(0,(len(train))): if(trainLabel[i] < 20): trainData_L.append(train[i]) trainLabels_L.append(trainLabel[i]) for i in range(0,(len(test))): testData_L.append(test[i][1:785]) print(len(testData_L)) print(len(trainData_L)) print(trainLabels_L[10]) train_z = pd.read_csv('./MNIST_data/emnist-letters-train.csv', header=None).values test_z = pd.read_csv('./MNIST_data/emnist-letters-test.csv', header=None).values trainData_L = [] testData_L = [] trainLabels_L = [] testLabels_L = [] for i in range(60000): if(train_z[i][0] < 20): trainData_L.append(train_z[i][1:785]) trainLabels_L.append(train_z[i][0]) for i in range(10000): testData_L.append(test_z[i][1:785]) testLabels_L.append(test_z[i][0]) ``` An example of the letter 'e' in the dataset: ``` showImage_L(trainData_L[10]) ``` ## Logistic Regression ``` from sklearn.linear_model import LogisticRegression clf = LogisticRegression(solver='lbfgs', max_iter=500, multi_class='auto') clf.fit(trainData_L, trainLabels_L) predictedTrain = [clf.predict(trainData_L)] predictedTest = 
[clf.predict(testData_L)] trainAcc = calculateLettersAccuracy(predictedTrain[0], trainLabels_L) testAcc = calculateLettersAccuracy(predictedTest[0], testLabels_L) print('train data accuracy:', trainAcc) print('test data accuracy:', testAcc) ``` # Handwritten Digit Detection (Using Neural Network: MLP) ``` from sklearn.neural_network import MLPClassifier from sklearn.preprocessing import StandardScaler mlp = MLPClassifier(hidden_layer_sizes=(200),shuffle=True,momentum=0.9, activation='logistic', max_iter = 1000,learning_rate_init=0.001) mlp.fit(trainData, trainLabels) from sklearn.metrics import classification_report predicted = mlp.predict(testData) print(classification_report(testLabels,predicted)) ``` ![mlp.png](attachment:mlp.png) ``` calculateDigitsAccuracy(predicted, testLabels) ``` # Digit Detection using Neural Network ``` import time # Global Variables training_size = 60000 testing_size = 200 alpha = 0.01 iterations = 2000 epochs = 15 labels = 10 # -------------------------------------------------------- def predict(weights, testData): print(testing_size) print(len(testData)) testData = np.hstack((np.ones((testing_size, 1)), testData)) predicted_labels = np.dot(weights, testData.T) # signum activation function predicted_labels = signum(predicted_labels) predicted_labels = np.argmax(predicted_labels, axis=0) return predicted_labels.T def signum(x): x[x > 0] = 1 x[x <= 0] = -1 return x def learning(trainData, trainLabels, weights): epochs_values = [] error_values = [] for k in range(epochs): missclassified = 0 for t, l in zip(trainData, trainLabels): h = np.dot(t, weights) h = signum(h) if h[0] != l[0]: missclassified += 1 gradient = t * (h - l) # reshape gradient gradient = gradient.reshape(gradient.shape[0], -1) weights = weights - (gradient * alpha) error_values.append(missclassified / training_size) epochs_values.append(k) return weights """Find optimal weights for each logistic binary classifier""" def train(trainData, trainLabels): # add 1's as x0 
trainData = np.hstack((np.ones((training_size, 1)), trainData)) # add w0 as 0 initially all_weights = np.zeros((labels, trainData.shape[1])) trainLabels = trainLabels.reshape((training_size, 1)) trainLabels_copy = np.copy(trainLabels) for j in range(labels): print("Training Classifier: ", j+1) trainLabels = np.copy(trainLabels_copy) # initialize all weights to zero weights = np.zeros((trainData.shape[1], 1)) for k in range(training_size): if trainLabels[k, 0] == j: trainLabels[k, 0] = 1 else: trainLabels[k, 0] = -1 weights = learning(trainData, trainLabels, weights) all_weights[j, :] = weights.T return all_weights # -------------------------------------------------------- def run(trainData, trainLabels, testData, testLabels): print("------------------------------------------------------------------------------------") print("Running Experiment using Perceptron Learning Rule for Thresholded Unit") print("------------------------------------------------------------------------------------") print("Training ...") start_time = time.perf_counter() all_weights = train(trainData, trainLabels) print("Training Time: %.2f seconds" % (time.perf_counter() - start_time)) print("Weights Learned!") print("Classifying Test Images ...") start_time = time.perf_counter() predicted_labels = predict(all_weights, testData) print("Prediction Time: %.2f seconds" % (time.perf_counter() - start_time)) print("Test Images Classified!") accuracy = calculateDigitsAccuracy(predicted_labels, testLabels) * 100 print("Accuracy: %f" % accuracy, "%") print("---------------------\n") # -------------------------------------------------------- def main(): # load data trainData = [] trainLabels = [] train_z = pd.read_csv('./MNIST_data/mnist_train.csv', header=None).values for i in range(60000): trainData.append(train_z[i][1:785]) trainLabels.append(train_z[i][0]) testData = pd.read_csv('./MNIST_data/test_data.csv', header=None).values testLabels = pd.read_csv('./MNIST_data/test_label.csv', header=None).values print(len(trainData)) 
trainData = np.array(trainData[0:training_size]) trainLabels = np.array(trainLabels[0:training_size]) testData = np.array(testData[0:testing_size]) testLabels = np.array(testLabels[0:testing_size]) run(trainData, trainLabels, testData, testLabels) # -------------------------------------------------------- main() ``` # Letter Detection using Neural Network ``` import time # Global Variables training_size = 43762 testing_size = 10000 alpha = 0.01 iterations = 2000 epochs = 15 labels = 19 # -------------------------------------------------------- def predict(weights, testData_L): print(testing_size) print(len(testData_L)) testData_L = np.hstack((np.ones((testing_size, 1)), testData_L)) predicted_labels = np.dot(weights, testData_L.T) predicted_labels = signum(predicted_labels) predicted_labels = np.argmax(predicted_labels, axis=0) return predicted_labels.T def signum(x): x[x > 0] = 1 x[x <= 0] = -1 return x def learning(trainData_L, trainLabels_L, weights): epochs_values = [] error_values = [] for k in range(epochs): missclassified = 0 for t, l in zip(trainData_L, trainLabels_L): h = np.dot(t, weights) h = signum(h) if h[0] != l[0]: missclassified += 1 gradient = t * (h - l) # reshape gradient gradient = gradient.reshape(gradient.shape[0], -1) weights = weights - (gradient * alpha) error_values.append(missclassified / training_size) epochs_values.append(k) return weights """Find optimal weights for each logistic binary classifier""" def train(trainData_L, trainLabels_L): # add 1's as x0 trainData_L = np.hstack((np.ones((training_size, 1)), trainData_L)) # add w0 as 0 initially all_weights = np.zeros((labels, trainData_L.shape[1])) trainLabels_L = trainLabels_L.reshape((training_size, 1)) trainLabels_L_copy = np.copy(trainLabels_L) for j in range(labels): print("Training Classifier: ", j+1) trainLabels_L = np.copy(trainLabels_L_copy) # initialize all weights to zero weights = np.zeros((trainData_L.shape[1], 1)) for k in range(training_size): if trainLabels_L[k, 0] == j: trainLabels_L[k, 0] = 1 else: 
trainLabels_L[k, 0] = -1 weights = learning(trainData_L, trainLabels_L, weights) all_weights[j, :] = weights.T return all_weights # -------------------------------------------------------- def run(trainData_L, trainLabels_L, testData_L, testLabels_L): print("------------------------------------------------------------------------------------") print("Running Experiment using Perceptron Learning Rule for Thresholded Unit") print("------------------------------------------------------------------------------------") print("Training ...") start_time = time.perf_counter() all_weights = train(trainData_L, trainLabels_L) print("Training Time: %.2f seconds" % (time.perf_counter() - start_time)) print("Weights Learned!") print("Classifying Test Images ...") start_time = time.perf_counter() predicted_labels = predict(all_weights, testData_L) print("Prediction Time: %.2f seconds" % (time.perf_counter() - start_time)) print("Test Images Classified!") accuracy = calculateLettersAccuracy(predicted_labels, testLabels_L) * 100 print("Accuracy: %f" % accuracy, "%") print("---------------------\n") # -------------------------------------------------------- def main(): # load data train_z = pd.read_csv('./MNIST_data/emnist-letters-train.csv', header=None).values test_z = pd.read_csv('./MNIST_data/emnist-letters-test.csv', header=None).values trainData_L = [] testData_L = [] trainLabels_L = [] testLabels_L = [] for i in range(60000): if(train_z[i][0] < 20): trainData_L.append(train_z[i][1:785]) trainLabels_L.append(train_z[i][0]) for i in range(10000): testData_L.append(test_z[i][1:785]) testLabels_L.append(test_z[i][0]) trainData_L = np.array(trainData_L[:training_size]) trainLabels_L = np.array(trainLabels_L[:training_size]) testData_L = np.array(testData_L[:testing_size]) testLabels_L = np.array(testLabels_L[:testing_size]) run(trainData_L, trainLabels_L, testData_L, testLabels_L) # -------------------------------------------------------- main() ```
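The one-vs-rest prediction scheme used in `predict` above (bias-augmented dot product, signum activation, argmax over the per-class classifiers) can be traced on toy numbers; the weights and sample below are invented for illustration:

```python
import numpy as np

def signum(x):
    # Non-mutating variant of the signum used above.
    x = x.copy()
    x[x > 0] = 1
    x[x <= 0] = -1
    return x

# Toy weights for 3 binary classifiers over 2 features plus a bias term.
weights = np.array([[ 0.0,  1.0, -1.0],    # classifier for class 0
                    [-1.0, -1.0,  1.0],    # classifier for class 1
                    [ 0.5, -1.0, -1.0]])   # classifier for class 2
x = np.array([[1.0, 2.0, 0.5]])            # one sample, with x0 = 1 prepended

scores = weights @ x.T                     # raw one-vs-rest scores, shape (3, 1)
predicted = np.argmax(signum(scores), axis=0)
print(predicted)  # argmax selects a classifier that fired positive
```

Only classifier 0 produces a positive score here (1.5 versus -2.5 and -2.0), so argmax assigns class 0; when several classifiers fire, argmax simply picks the first, which is the same tie-breaking behavior as the notebook's `predict`.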
github_jupyter
``` DOC = '''Supreme Court Oral Argument Predictor (SCOAP) Creates models for predicting outcomes of Supreme Court oral arguments. Pulls justice-specific phrases associated with winning and losing arguments. LICENSE: MIT AUTHOR: theonaunheim@gmail.com COPYRIGHT: 2017, Theo Naunheim VERSION: 0.4.3 MODIFIED: 2017-03-26 DATA DIR: .scoap REQUIRES: Jupyter Notebook and Xpdf/Poppler WARNING: THIS SCRIPT DOWNLOADS AND PROCESSES A LARGE VOLUME OF MATERIAL. IT IS COMPUTATIONALLY EXPENSIVE AND TAKES A NON-NEGLIGIBLE AMOUNT OF TIME AND BANDWIDTH. ''' # Standard library imports import asyncio import copy import itertools import os import re import string import sys import zipfile # Web/data imports import bs4 import numpy as np import pandas as pd import requests # Scikit learn imports import sklearn import sklearn.feature_extraction import sklearn.metrics import sklearn.model_selection import sklearn.linear_model import sklearn.naive_bayes import sklearn.pipeline import sklearn.svm import sklearn.ensemble # Constants and constant-ish things. # Debug flag cuts down amount of data used. DEBUG = False # Website URLs for downloads TRANSCRIPT_INFO = 'https://www.supremecourt.gov/oral_arguments/argument_transcript/' TRANSCRIPT_DOWNLOADS = 'https://www.supremecourt.gov/oral_arguments/' SCDB_CSV_DOWNLOAD_LINK = 'http://scdb.wustl.edu/_brickFiles/2016_01/SCDB_2016_01_justiceCentered_Docket.csv.zip' # Transcript years for dynamic URL creation START_YEAR = 2006 END_YEAR = 2017 # OS-specific path for PDF to text extraction utility. if os.name == 'nt': PDF2TEXT_PATH = r'C:\Program Files\Xpdf\pdftotext.exe' elif os.name == 'posix': PDF2TEXT_PATH = '/usr/bin/pdftotext' else: raise Exception('This script requires Xpdf/Poppler utility pdftotext to run.') # Paths for SCOAP specific data. 
DATA_FOLDER = os.path.join(os.path.expanduser('~'), '.scoap') SCDB_ZIP_NAME = SCDB_CSV_DOWNLOAD_LINK.rpartition('/')[2] SCDB_CSV_NAME = SCDB_ZIP_NAME.rpartition('.')[0] SCDB_ZIP_PATH = os.path.join(DATA_FOLDER, SCDB_ZIP_NAME) SCDB_CSV_PATH = SCDB_ZIP_PATH.rpartition('.')[0] # The current term justices and cases we wish to analyze. CURRENT_JUSTICES = ['Roberts', 'Kennedy', 'Thomas', 'Ginsburg', 'Breyer', 'Alito', 'Sotomayor', 'Kagan'] CURRENT_CASES = ['15-214', '15-1031', '15-497', '15-1189', '16-369', '16-254', '15-118', '15-1248', '16-32', '15-1194', '16-54', '15-9260', '16-149', '16-1256', '15-1500', '15-1391', '15-1406', '15-827', '15-1498', '16-348', '15-1293', '15-1358', '15-8544', '15-797', '15-1204', '15-680', '15-1262', '14-1538', '15-649', '15-866', '15-513', '15-927', '15-423', '15-1251', '15-1111', '14-1055', '15-1191', '15-537', '15-5991', '15-628', '15-8049', '14-9496', '15-777', '15-606', '15-7250',] # Voting relationships for OT15, courtesy of http://www.scotusblog.com/statistics/ VOTING_RELATIONSHIPS = {"KENNEDY" :{"KENNEDY":1.00,"SCALIA":0.82,"THOMAS":0.71,"KAGAN":0.95,"ROBERTS":0.88,"GINSBURG":0.84,"ALITO":0.82,"BREYER":0.91,"SOTOMAYOR":0.79}, "SCALIA" :{"KENNEDY":0.82,"SCALIA":1.00,"THOMAS":0.88,"KAGAN":0.82,"ROBERTS":0.88,"GINSBURG":0.71,"ALITO":0.94,"BREYER":0.82,"SOTOMAYOR":0.65}, "THOMAS" :{"KENNEDY":0.71,"SCALIA":0.88,"THOMAS":1.00,"KAGAN":0.67,"ROBERTS":0.75,"GINSBURG":0.62,"ALITO":0.78,"BREYER":0.67,"SOTOMAYOR":0.64}, "KAGAN" :{"KENNEDY":0.95,"SCALIA":0.82,"THOMAS":0.67,"KAGAN":1.00,"ROBERTS":0.87,"GINSBURG":0.87,"ALITO":0.81,"BREYER":0.92,"SOTOMAYOR":0.81}, "ROBERTS" :{"KENNEDY":0.88,"SCALIA":0.88,"THOMAS":0.75,"KAGAN":0.87,"ROBERTS":1.00,"GINSBURG":0.78,"ALITO":0.84,"BREYER":0.84,"SOTOMAYOR":0.77}, "GINSBURG" :{"KENNEDY":0.84,"SCALIA":0.71,"THOMAS":0.62,"KAGAN":0.87,"ROBERTS":0.78,"GINSBURG":1.00,"ALITO":0.73,"BREYER":0.86,"SOTOMAYOR":0.88}, "ALITO" 
:{"KENNEDY":0.82,"SCALIA":0.94,"THOMAS":0.78,"KAGAN":0.81,"ROBERTS":0.84,"GINSBURG":0.73,"ALITO":1.00,"BREYER":0.77,"SOTOMAYOR":0.64}, "BREYER" :{"KENNEDY":0.91,"SCALIA":0.82,"THOMAS":0.67,"KAGAN":0.92,"ROBERTS":0.84,"GINSBURG":0.86,"ALITO":0.77,"BREYER":1.00,"SOTOMAYOR":0.83}, "SOTOMAYOR":{"KENNEDY":0.79,"SCALIA":0.65,"THOMAS":0.64,"KAGAN":0.81,"ROBERTS":0.77,"GINSBURG":0.88,"ALITO":0.64,"BREYER":0.83,"SOTOMAYOR":1.00}} # Define function. def create_dataframe(): '''Create a skeleton for our df.''' df = pd.DataFrame(columns=['CASE', 'DOCKET', 'ARGUMENT_YEAR', 'ARGUMENT_LINK', 'ARGUMENT_PATH',]) return df # Run function. arg_df = create_dataframe() # Define function. def get_argument_metadata(df, start=START_YEAR - 1, end=END_YEAR + 1): '''This fetches oral argument location metadata.''' # For each year for year in range(start, end): # Create web address and download data address = TRANSCRIPT_INFO + str(year) r = requests.get(address) # Parse data try: soup = bs4.BeautifulSoup(r.text, 'lxml') table = soup.find('table', 'table datatables') for row in table.findAll('tr'): link = row.find('a') case = row.find('span') # Write table info to dataframe. if link: link_text = link.text[:-2].lower() case_text = case.text link_tail = link.attrs['href'].lstrip('../') full_link = TRANSCRIPT_DOWNLOADS + link_tail # Write to frame path = os.path.join(DATA_FOLDER, link_text, 'argument.pdf') df = df.append({'CASE': case_text, 'DOCKET': link_text, 'ARGUMENT_LINK': full_link, 'ARGUMENT_PATH': path, 'ARGUMENT_YEAR': str(year)}, ignore_index=True) except AttributeError: print('Attribute error. Probably an empty page.') return df # Run function. arg_df = get_argument_metadata(arg_df) # Show dataframe for clarity. arg_df.head(3) # Debug to shorten time during testing if DEBUG: arg_df = arg_df.iloc[-10:].copy() # Define function. 
def make_directories(row): '''All cases get their own folder.''' try: path = os.path.join(DATA_FOLDER, row['DOCKET']) os.makedirs(path) except FileExistsError: pass # Apply function. Output unnecessary. _ = arg_df.apply(make_directories, axis=1) # Define function. def download_pdfs(row): '''Get PDFs and put in the folder if necessary.''' # If there's a link and no file, download. if row['ARGUMENT_LINK'] is not np.NaN: if os.path.exists(row['ARGUMENT_PATH']): return False r = requests.get(row['ARGUMENT_LINK'], stream=True) with open(row['ARGUMENT_PATH'], 'wb') as f: for chunk in r.iter_content(chunk_size=1024): if chunk: f.write(chunk) # Apply function. No assignment required. _ = arg_df.apply(download_pdfs, axis=1) arg_df.head(3) # Define functions. async def get_text(pdf_path): '''This function is a coroutine for a single pdftotext instance.''' # Create the subprocess, redirect the standard output into a pipe process = await asyncio.create_subprocess_exec(PDF2TEXT_PATH, pdf_path, '-', stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE) # Read output data = await process.communicate() # Have process exit and return data. await process.wait() # Decode cp1252 for windows try: decoded_data = data[0].decode('cp1252') # And UTF-8 for Linux. except: decoded_data = data[0].decode() return decoded_data async def get_all_text(pdf_paths): '''This gathers the pdftotext results.''' # Create list for return results. result_list = [] # Create a list of tasks input_len = len(pdf_paths) num_chunks = (input_len // 10) + 1 chunked_input = np.array_split(pdf_paths, num_chunks) # Now run each of the chunks in parallel to speed things up. 
for chunk in chunked_input: # Create tasks tasks = [get_text(path) for path in chunk] # Run all the tasks in parallel results = await asyncio.gather(*tasks) # Put the zipped (path, results) in result list for path, result in zip(chunk, results): result_list.append((path, result)) return result_list def add_arguments(df): '''Adds argument text to df.''' # Get unique PDFs unique_pdfs = df['ARGUMENT_PATH'].unique() # Windows only supports proactorloop. if os.name == 'nt': loop = asyncio.ProactorEventLoop() elif os.name == 'posix': loop = asyncio.SelectorEventLoop() else: loop = None asyncio.set_event_loop(loop) # Run our coroutine to extract text. arg_data = loop.run_until_complete(get_all_text(unique_pdfs)) # Loop no longer necessary. loop.close() # Create dataframe for data. tdf = pd.DataFrame.from_records(arg_data, columns=['ARGUMENT_PATH', 'TEXT']) # Join to input df and fill na. df = df.merge(tdf, how='left', on='ARGUMENT_PATH').fillna('') return df # Run function arg_df = add_arguments(arg_df) # Show dataframe for clarity. arg_df.head(3) # Define function. def cut_unnecessary_text(df): '''This function cuts low information text from transcript.''' # First chop off the caption ('PROCEEDINGS' or 'P R O C E E D I N G S') capture_string = r'P\s?R\s?O\s?C\s?E\s?E\s?D\s?I\s?N\s?G\s?S([\s\S]*\Z)' df['TEXT'] = df['TEXT'].str.extract(capture_string, expand=False, flags=re.MULTILINE) # First we specify the patterns we don't want patterns_to_cut = [ # Cut carriage returns and form feeds because f*** those guys. r'[\r\f]', # Remove tables at end ##:## 4 within no more than 100 chars of Alderson (r'\s*' + r'Alderson Reporting Company' + # period because a.m. messes it up. 
r'[\s\S.]{0,75}\d?\d:\d?\d' * 3 + r'[\s\S]*' + r'\Z'), # Remove [2004 - 2005] footer r'1111 14th[\s\S]{0,100}20005', # Remove [2006 - 2016] header/footer unofficial r'Alderson[\s\S]{0,100}Review', # Remove [2006 - 2016] header/footer official r'Alderson[\s\S]{0,100}[oO]fficial', # Remove generic Alderson r'Alderson Reporting Company', # Cut court reporter annotations r'[(\[][\s\S]{0,100}[)\]]', # Cut line numbers, page numbers, all other low-information numbers r'[0-9]', # Cut PAGE r'[Pp][Aa][Gg][Ee]', ] # Replace above patterns with empty space. for pattern in patterns_to_cut: df['TEXT'] = df['TEXT'].str.replace(pat=pattern, repl='', flags=re.MULTILINE) return df # Run function. arg_df = cut_unnecessary_text(arg_df) # Show df for clarity arg_df.head(3) # Define function. def create_heading_columns(df): '''This function finds the section headings for each case.''' # Create Petitioner oral argument heading col pet_arg_pattern = ''.join([r'(', r'ORAL ARGUMENT[\S\s]{,200}', r'(?:PETITIONER|APPELLANT)S?', # "As appointed by this court" is optional r'(?:[\S\s]{,50}THIS COURT)?', r')']) df['PET_ARG_HEADING'] = df['TEXT'].str.extract(pet_arg_pattern, expand=False, flags=re.MULTILINE).fillna('') # Create Respondent oral argument heading col res_arg_pattern = ''.join([r'(ORAL ARGUMENT[\S\s]{,200}', r'(?:RESPONDENT|APPELLEE)S?', # "As appointed by this court" is optional r'(?:[\S\s]{,50}THIS COURT)?', r')']) df['RES_ARG_HEADING'] = df['TEXT'].str.extract(res_arg_pattern, expand=False, flags=re.MULTILINE).fillna('') # Create Petitioner rebuttal heading col pet_reb_pattern = ''.join([r'(REBUTTAL ARGUMENT[\S\s]{,200}', r'(?:PETITIONER|APPELLANT)S?', # "As appointed by this court" is optional r'(?:[\S\s]{,50}THIS COURT)?', r')']) df['PET_REB_HEADING'] = df['TEXT'].str.extract(pet_reb_pattern, expand=False, flags=re.MULTILINE).fillna('') return df # TODO: # IN ##-#### optional ... r'(?:[\S\s]{,10}IN[\S\s]{,5}-)?' # Run function arg_df = create_heading_columns(arg_df).fillna('') # Define function.
def extract_petitioner_arg(df): '''Pulls out petitioner argument using section headers.''' # Create extraction (between pet arg heading and res arg heading) df['PET_ARG_REGEX'] = df.apply(lambda row: ''.join([row['PET_ARG_HEADING'], r'([\S\s]*?)', r'(?:ORAL)']), axis=1) # Extract and create petitioner argument column df['PETITIONER_ARGUMENT'] = df.apply(lambda row: re.findall(row['PET_ARG_REGEX'], row['TEXT'], flags=re.MULTILINE), axis=1) # If no match, empty string. Else, join the matches. df['PETITIONER_ARGUMENT'] = df['PETITIONER_ARGUMENT'].map(lambda matches: ''.join(matches)) return df # Run function. arg_df = extract_petitioner_arg(arg_df) # Define function. def extract_respondent_arg(df): '''Pulls out respondent argument using previously generated section heads.''' # Get respondent argument (between res arg heading and pet reb heading) df['RES_ARG_REGEX'] = df.apply(lambda row: ''.join([row['RES_ARG_HEADING'], r'([\S\s]*?)', r'(?:REBUTTAL)|(?:ORAL)']), axis=1) df['RESPONDENT_ARGUMENT'] = df.apply(lambda row: re.findall(row['RES_ARG_REGEX'], row['TEXT'], flags=re.MULTILINE), axis=1) # If no match, empty string. Else, join the matches. df['RESPONDENT_ARGUMENT'] = df['RESPONDENT_ARGUMENT'].map(lambda matches: ''.join(matches)) return df # Run function. arg_df = extract_respondent_arg(arg_df) # Define function. def extract_petitioner_reb(df): '''Pulls out petitioner rebuttal using previously generated section heads.''' # Get petitioner rebuttal (between pet reb heading and end of transcript) df['PET_REB_REGEX'] = df.apply(lambda row: ''.join([row['PET_REB_HEADING'], r'([\S\s]*?)', r'(?:\Z)']), axis=1) # findall + join, like the extractors above, so a missing match yields '' instead of raising df['PETITIONER_REBUTTAL'] = df.apply(lambda row: ''.join(re.findall(row['PET_REB_REGEX'], row['TEXT'], flags=re.MULTILINE)), axis=1) return df # Run function. arg_df = extract_petitioner_reb(arg_df) # TODO. If transcript omits info (e.g. SAMSUNG/WAXMAN 15-777), no match.
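# The heading-and-span extraction above can be sanity-checked on a toy transcript.
# Everything in this cell is invented for illustration; the real patterns in
# create_heading_columns() also allow APPELLANT/APPELLEE and an optional
# "THIS COURT" clause.
import re

toy = ('ORAL ARGUMENT OF JANE DOE ON BEHALF OF THE PETITIONERS '
       'MS. DOE: May it please the Court. '
       'ORAL ARGUMENT OF JOHN ROE ON BEHALF OF THE RESPONDENTS '
       'MR. ROE: Thank you, Your Honor, and may it please the Court. '
       'REBUTTAL ARGUMENT OF JANE DOE ON BEHALF OF THE PETITIONERS '
       'MS. DOE: Briefly, three points.')
# Simplified version of pet_arg_pattern: grab the petitioner heading.
toy_heading = re.search(r'(ORAL ARGUMENT[\S\s]{,200}PETITIONERS?)', toy).group(1)
# Everything between that heading and the next ORAL heading, mirroring
# extract_petitioner_arg(); a missing heading would simply yield no matches.
toy_arg = re.findall(toy_heading + r'([\S\s]*?)(?:ORAL)', toy)
toy_arg  # [' MS. DOE: May it please the Court. ']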
len(arg_df) # Show dataframe for clarity arg_df.head(3) bak = arg_df.copy() def split_arguments(df): '''Split argument into a series of comments.''' # Must be a double-quoted raw string because of the ' in O'Connor. # The lookahead stops each comment at the next speaker header or at the end of the text. comment_pattern = r"([A-Z.'\s]{5,25}:\s[\s\S]*?)(?=[A-Z'.\s]{5,25}:|\Z)" for column in ['PETITIONER_ARGUMENT', 'RESPONDENT_ARGUMENT', 'PETITIONER_REBUTTAL']: # We only want periods in the middle of names. df[column] = df[column].str.findall(comment_pattern) return df # Run function arg_df = split_arguments(arg_df) def tuplify_cell(cell_value): '''Helper function for tuplify_arguments().''' return_value = [] for comment in cell_value: justice, _, comment = comment.partition(':') return_value.append(tuple([justice.replace('.', '').strip(), comment.strip()])) return return_value def tuplify_arguments(df): '''Turn question strings into (justice, text) tuples.''' for column in ['PETITIONER_ARGUMENT', 'RESPONDENT_ARGUMENT', 'PETITIONER_REBUTTAL']: df[column] = df[column].map(tuplify_cell) return df.fillna('') # Run function arg_df = tuplify_arguments(arg_df) def condense_cell(cell_value): '''Helper function for condense_arguments().''' return_dict = {} for input_tuple in cell_value: justice, comment = input_tuple try: return_dict[justice].append(comment) except KeyError: return_dict[justice] = [comment] return return_dict def condense_arguments(df): '''Turn args into: {'justice': ['comment 1', 'comment 2']}''' for column in ['PETITIONER_ARGUMENT', 'RESPONDENT_ARGUMENT', 'PETITIONER_REBUTTAL']: df[column] = df[column].map(condense_cell) return df arg_df = condense_arguments(arg_df) arg_df.head(3) # Define function creating secondary df. def create_scdb_df(): '''Download data from the SCDB and instantiate dataframe.''' # If we've already downloaded the database, just load.
if os.path.exists(SCDB_CSV_PATH): pass else: # Get data r = requests.get(SCDB_CSV_DOWNLOAD_LINK, stream=True) with open(SCDB_ZIP_PATH, 'wb') as f: for chunk in r.iter_content(chunk_size=1024): if chunk: f.write(chunk) # Unzip context manager with zipfile.ZipFile(SCDB_ZIP_PATH) as zip_file: # Read data context manager with zip_file.open(SCDB_CSV_NAME) as pseudo_file: data = pseudo_file.read() # Write data with context manager and write to csv. with open(SCDB_CSV_PATH, 'wb+') as f: f.write(data) # Now create dataframe from csv. case_df = pd.read_csv(SCDB_CSV_PATH, encoding='latin-1') return case_df # Run df. case_df = create_scdb_df() # Show case dataframe for clarity case_df.head(3) cut_arg_df = arg_df[['DOCKET', 'CASE', 'PETITIONER_ARGUMENT', #'RESPONDENT_ARGUMENT', 'PETITIONER_REBUTTAL']] cut_arg_df.head(3) cut_case_df = case_df[['docket', 'majority', 'partyWinning', 'justiceName']] cut_case_df.columns = ['DOCKET', 'majority', 'partyWinning', 'JUSTICE'] cut_case_df.head(3) # Join case_df and arg_df to create joint dataframe jdf. jdf = pd.merge(cut_arg_df, cut_case_df, how='left', on='DOCKET') # Drop (Reargued) because it creates dupes. contains_reargue = jdf['CASE'].str.contains('Reargue') jdf = jdf[~contains_reargue] # Show joint dataframe for clarity (all tail end will be np.NaN) jdf.head(3) ''' From documentation on 'partyWinning' column: http://scdb.wustl.edu/documentation.php?var=partyWinning 0: no favorable disposition for petitioning party apparent 1: petitioning party received a favorable disposition 2: favorable disposition for petitioning party unclear We want to be able to separate those who won from those who did not win. Consequently, we drop all cases where the decision was ambiguous, or the winner was not apparent. We then convert the 0 to False and the 1 to True, which gives us a True/False 'PETITIONER_WINS' column. PETITIONER_WINS 0 -> False 1 -> True If the petitioner wins, it is because it was the decision of the majority of the court. 
We can accurately describe the nature of this column as 'PETITIONER_WINS_MAJORITY'. ''' jdf = jdf[jdf['partyWinning'] != 2.0].copy() jdf['PETITIONER_WINS_MAJORITY'] = jdf['partyWinning'].astype(bool) ''' From documentation on 'majority' columns: http://scdb.wustl.edu/documentation.php?var=majority 1: dissent 2: majority We want to convert this into a 'VOTED_WITH_MAJORITY' column. To do this we subtract one from each and every value so that dissent becomes 0 and majority becomes 1. majority 0: dissent (result 1 - 1) 1: majority (result from 2 - 1) Then we convert the 0 to False and 1 to True, so that we have a 'VOTED_WITH_MAJORITY' column. VOTED_WITH_MAJORITY 0 -> False 1 -> True ''' jdf['majority_minus_one'] = jdf['majority'] - 1 jdf['VOTED_WITH_MAJORITY'] = jdf['majority_minus_one'].astype(bool) jdf ''' We can determine whether a petitioner won over a specific justice based on: 1. Whether the petitioner won over a majority, and 2. Whether the specific justice was a part of that majority. If the answer to both of these questions is the same (that is, either both the answers are Yes or both the answers are no), then the petitioner won over the justice. 
Logically (an XNOR): P_WINS_MAJ & J_VOTES_MAJ -> P_WINS_J If the petitioner wins the majority and the justice voted with the majority, the petitioner won over the justice P_WINS_MAJ & ~J_VOTES_MAJ -> P_LOSES_J If the petitioner wins the majority and the justice was NOT part of the majority, the petitioner did not win the justice ~P_WINS_MAJ & J_VOTES_MAJ -> P_LOSES_J If the petitioner does NOT win the majority and the justice voted with the majority, the petitioner did not win the justice ~P_WINS_MAJ & ~J_VOTES_MAJ -> P_WINS_J If the petitioner does NOT win the majority and the justice voted against the majority, the petitioner won the justice ''' def determine_vote(row): # If petitioner wins majority if row['PETITIONER_WINS_MAJORITY']: # Pet wins majority AND justice voted with majority if row['VOTED_WITH_MAJORITY']: return True # Pet wins majority AND justice voted against majority else: return False # If petitioner loses majority else: # Pet loses majority AND justice voted with majority if row['VOTED_WITH_MAJORITY']: return False # Pet loses majority AND justice voted against majority else: return True # Apply function. jdf['VOTED_FOR_PETITIONER'] = jdf.apply(determine_vote, axis=1) jdf.head(3) # Demonstration dataframe pd.DataFrame(data={'Justice Votes With Majority': ['Petitioner Wins Justice', 'Petitioner Loses Justice'], 'Justice Votes Against Majority': ['Petitioner Loses Justice', 'Petitioner Wins Justice']}, index=['Petitioner Wins Majority', 'Petitioner Loses Majority']) jdf[['CASE', 'JUSTICE', 'VOTED_FOR_PETITIONER']].dropna().head(9) # Define function. def trim_columns(df): # Trim columns df = df[['DOCKET', 'CASE', 'JUSTICE', 'PETITIONER_ARGUMENT', #'RESPONDENT_ARGUMENT', 'PETITIONER_REBUTTAL', 'VOTED_FOR_PETITIONER']] return df # Run function. jdf = trim_columns(jdf) # Show joint dataframe for clarity. jdf.head(3) # Define filter function. def filter_justice_data(row): '''Converts SCDB: RHJackson to JACKSON, which can be pulled from JUSTICE JACKSON.
Then for JACKSON: {'JUSTICE JACKSON': [1, 2], 'JUSTICE ROBERTS': [2, 3]} Becomes [1,2] for JACKSON's row. ''' # Handle SCDB justice names. Based on capitalization, # which messes up SDOConnor -> Connor # SDOConnor should become OCONNOR, not CONNOR. if row['JUSTICE'] == 'SDOConnor': row['JUSTICE'] = 'SDOconnor' # Find the first lowercase letter and start the surname one character earlier lower_mask = [letter.islower() for letter in row['JUSTICE']] first_lower = lower_mask.index(True) one_prior = first_lower - 1 row['JUSTICE'] = row['JUSTICE'][one_prior:].upper() # Handle text columns for index in ['PETITIONER_ARGUMENT', #'RESPONDENT_ARGUMENT', 'PETITIONER_REBUTTAL',]: # Find if justice name is in any of the keys. # 1 if found in string, 0 if not. # [1, 0, 0] -> True justice_represented = any([key.count(row['JUSTICE']) for key in row[index].keys()]) # If represented, fill with value. if justice_represented: for key in row[index].keys(): if row['JUSTICE'] in key and 'JUSTICE' in key: try: row[index] = row[index][key] except TypeError: # Fallback to edit distance? pass # If no list has been placed in the cell, place an empty one. if type(row[index]) == dict: row[index] = [] # If not represented else: row[index] = [] return row # Apply function. If justice is NA ... not yet decided jdf = (jdf.dropna(subset=['JUSTICE']) .apply(filter_justice_data, axis=1)) # Show joint dataframe for clarity jdf.head(3) # Write argument data df to csv arg_data_csv_path = os.path.join(os.path.expanduser('~'), '.scoap', 'argument_data.csv') jdf.to_csv(arg_data_csv_path, encoding='utf-8') # Create text_df text_df = pd.melt(jdf, id_vars=['JUSTICE', 'DOCKET', 'VOTED_FOR_PETITIONER'], value_vars=['PETITIONER_ARGUMENT', #'RESPONDENT_ARGUMENT', 'PETITIONER_REBUTTAL'], var_name='ARG_TYPE', value_name='TEXT') text_df.head(3) # Define function def reorient_args(row): '''Apply function to make respondent arguments useful. WARNING: HAND-WAVY, UNSCIENTIFIC FEATURE ENGINEERING BELOW. THIS ACTUALLY DECREASES ACCURACY AT PRESENT.
We ham-fistedly force the petitioner argument, respondent argument, and petitioner rebuttal into a single type of entry. Where before we had: JUSTICE, PET_ARG, PETITIONER_WINS JUSTICE, RES_ARG, PETITIONER_WINS JUSTICE, PET_REB, PETITIONER_WINS We will now have: JUSTICE, PET_ARG, QUESTIONEE_WON JUSTICE, RES_ARG, QUESTIONEE_WON JUSTICE, PET_REB, QUESTIONEE_WON The first notable change is that we transform PETITIONER_WINS into QUESTIONEE_WON. Previously, we could only relate text to petitioner wins because the target PET_WINS was framed in terms of the petitioner; it was useless for respondent comments. We get around this by reframing the target as "Did the party to whom the justice directed the comment win?" instead of "Did the petitioner win?". This requires a big assumption: namely, that petitioner arguments, respondent arguments, and petitioner rebuttals are roughly interchangeable. In other words, we are presuming that justices will use similar terms (e.g. "Your argument is bad and you should feel bad") whether they are addressing the petitioner or the respondent. This theoretically costs some prediction quality: negative words directed at a respondent may differ markedly from those directed at a petitioner. However, this trades off against the fact that we have roughly doubled the number of samples.
''' vote_pet = row['VOTED_FOR_PETITIONER'] arg_type = row['ARG_TYPE'] if arg_type in ['RESPONDENT_ARGUMENT']: if vote_pet is True: voted_for_speaker = False else: voted_for_speaker = True if arg_type in ['PETITIONER_ARGUMENT', 'PETITIONER_REBUTTAL']: if vote_pet is True: voted_for_speaker = True else: voted_for_speaker = False return voted_for_speaker # Run function text_df['QUESTIONEE_WON'] = text_df.apply(reorient_args, axis=1) text_df # Define function def create_text_df(df): '''Clean up text.''' # ' '.join([question_1, question_2, question_3]) so each cell is a single string df['TEXT'] = df['TEXT'].map(lambda item: ' '.join(item)) # Create string of punctuation chars to remove (but not '-' or '/') punctuation = string.punctuation.replace('-', '').replace('/', '') # Remove punctuation via a character-class regex (escaped so ] and \ are literal) df['TEXT'] = df['TEXT'].str.replace('[' + re.escape(punctuation) + ']', # Replacement value '') # Remove double dash pattern df['TEXT'] = df['TEXT'].str.replace('--', # Replacement value '') # Get rid of all items without text. df = df.loc[df['TEXT'].str.strip().str.len() > 0, :] return df # Run function text_df = create_text_df(text_df) text_df # Create test/train split for text data split = sklearn.model_selection.train_test_split # Split test and train. train_text_df, test_text_df = split(text_df, test_size=0.2) train_text_df = train_text_df.copy() test_text_df = test_text_df.copy() train_text_df.head(3) train_text_df['JUSTICE'].unique() # Define function def create_pipelines(df): '''Creates pipelines for each justice.''' # Basic setup. Group on the passed-in (training) frame, not the global text_df, to avoid leaking test data. gb = df.groupby('JUSTICE') justices = df['JUSTICE'].unique() dataframes = [gb.get_group(justice) for justice in justices] nb_pipelines = [] sgd_pipelines = [] rf_pipelines = [] # Probably a vectorized way to do this.
for justice, dataframe in zip(justices, dataframes): # Make aliases Pipe = sklearn.pipeline.Pipeline Vectorizer = sklearn.feature_extraction.text.CountVectorizer Transformer = sklearn.feature_extraction.text.TfidfTransformer MultiNB = sklearn.naive_bayes.MultinomialNB SGD = sklearn.linear_model.SGDClassifier RF = sklearn.ensemble.RandomForestClassifier # Reuseable arguments. vectorizer_params = {'ngram_range': (3, 5), 'min_df': 10} transformer_params = {'use_idf': True} ############# Multinomial Naive Bayes classifier nb_pipeline = Pipe([('vectorizer', Vectorizer(**vectorizer_params)), ('transformer', Transformer(**transformer_params)), ('classifier', MultiNB()),]) try: nb_pipeline = nb_pipeline.fit(dataframe['TEXT'], dataframe['QUESTIONEE_WON']) except (ValueError, AttributeError): nb_pipeline = None nb_pipelines.append(nb_pipeline) ############ Gradient descent SGD sgd_pipeline = Pipe([('vectorizer', Vectorizer(**vectorizer_params)), ('transformer', Transformer(**transformer_params)), ('classifier', SGD(loss='log', penalty='l2')),]) try: sgd_pipeline = sgd_pipeline.fit(dataframe['TEXT'], dataframe['QUESTIONEE_WON']) except (ValueError, AttributeError): sgd_pipeline = None sgd_pipelines.append(sgd_pipeline) ############ RF rf_pipeline = Pipe([('vectorizer', Vectorizer(**vectorizer_params)), ('transformer', Transformer(**transformer_params)), ('classifier', RF(n_estimators=100))]) try: rf_pipeline = rf_pipeline.fit(dataframe['TEXT'], dataframe['QUESTIONEE_WON']) except (ValueError, AttributeError): rf_pipeline = None rf_pipelines.append(rf_pipeline) return [item for item in zip(justices, nb_pipelines, sgd_pipelines, rf_pipelines)] # Create test and train pipelines pipelines = create_pipelines(train_text_df) pipelines[0][1] # Define function for creating an argument for add_predictions() def create_model_dict(model_pipelines): '''For convenience we create an associative array of models.''' model_dict = {} for justice, nb_pipe, sgd_pipe, rf_pipe in model_pipelines: # 
Nested dicts model_dict[justice] = {} model_dict[justice]['SGD'] = sgd_pipe model_dict[justice]['NB'] = nb_pipe model_dict[justice]['RF'] = rf_pipe return model_dict # Run function model_dict = create_model_dict(pipelines) # Define function to add predictions to test frame def add_predictions(row, model_dict, model_type): '''Apply() function for adding predictions.''' justice_name = row['JUSTICE'] try: model = model_dict[justice_name][model_type] prediction = model.predict([row['TEXT']])[0] # If no model, predict will not be an attribute. # No justice, no peace (also no model). except (KeyError, AttributeError): return np.NaN return prediction # No .astype(bool) here: it would silently turn the NaN "no model" markers into True. test_text_df['NB_PREDICTION'] = test_text_df.apply(add_predictions, args=(model_dict, 'NB'), axis=1) test_text_df['SGD_PREDICTION'] = test_text_df.apply(add_predictions, args=(model_dict, 'SGD'), axis=1) test_text_df['RF_PREDICTION'] = test_text_df.apply(add_predictions, args=(model_dict, 'RF'), axis=1) # Check output for clarity test_text_df.head(3) # Assess accuracy score = sklearn.metrics.accuracy_score test_text_df = test_text_df.dropna() # Conduct scoring nb_score = score(test_text_df['QUESTIONEE_WON'], test_text_df['NB_PREDICTION']) sgd_score = score(test_text_df['QUESTIONEE_WON'], test_text_df['SGD_PREDICTION']) rf_score = score(test_text_df['QUESTIONEE_WON'], test_text_df['RF_PREDICTION']) # Format as string base_string = ''' \n The Naive Bayes model scored {:.1%}.\n\n The Stochastic Gradient Descent model scored {:.1%}.\n\n The Random Forest model scored {:.1%}.\n\n This can't be real. TODO.
''' print(base_string.format(nb_score, sgd_score, rf_score)) score = sklearn.metrics.roc_auc_score(test_text_df['QUESTIONEE_WON'].values, test_text_df['RF_PREDICTION'].values) score # Define function def get_nb_phrases(nb_pipeline, number): '''Pull relevant phrases from model''' nb_vec = nb_pipeline.named_steps['vectorizer'] nb_clf = nb_pipeline.named_steps['classifier'] nb_names = nb_vec.get_feature_names() # nb_clf.feature_log_prob_[0] is for False (voted against party) # nb_clf.feature_log_prob_[1] is for True (voted for party) nb_probs = nb_clf.feature_log_prob_[1] nb_series = pd.Series({name: prob for name, prob in zip(nb_names, nb_probs)}) # Turn into series top_values_nb = nb_series.sort_values(ascending=False).head(number).copy() top_values_nb.name = 'Top Naive Bayes Log Prob' bottom_values_nb = nb_series.sort_values(ascending=True).head(number).copy() bottom_values_nb.name = 'Bottom Naive Bayes Log Prob' return (top_values_nb, bottom_values_nb) # Define function def get_sgd_phrases(sgd_pipeline, number): '''Pull phrases from model.''' sgd_clf = sgd_pipeline.named_steps['classifier'] sgd_vec = sgd_pipeline.named_steps['vectorizer'] sgd_names = sgd_vec.get_feature_names() # sgd_clf.coef_[0] is for False (voted against party) sgd_probs = sgd_clf.coef_[0] sgd_series = pd.Series({name: prob for name, prob in zip(sgd_names, sgd_probs)}) # Turn into series. top_values_sgd = sgd_series.sort_values(ascending=False).head(number).copy() top_values_sgd.name = 'Top SGD Log Prob' bottom_values_sgd = sgd_series.sort_values(ascending=True).head(number).copy() bottom_values_sgd.name = 'Bottom SGD Log Prob' return(top_values_sgd, bottom_values_sgd) # Define function def get_rf_phrases(rf_pipeline, number): '''Pull phrases from model. 
Importances are both top and bottom items.''' rf_clf = rf_pipeline.named_steps['classifier'] rf_vec = rf_pipeline.named_steps['vectorizer'] rf_names = rf_vec.get_feature_names() # Use rf_clf.feature_importances_ rf_probs = rf_clf.feature_importances_ rf_series = pd.Series({name: prob for name, prob in zip(rf_names, rf_probs)}) # Turn into series. top_values_rf = rf_series.sort_values(ascending=False).head(number).copy() top_values_rf.name = 'Top RF Feature Imp' bottom_values_rf = rf_series.sort_values(ascending=True).head(number).copy() bottom_values_rf.name = 'Bottom RF Feature Imp' return (top_values_rf, bottom_values_rf) # Define function def create_phrase_series(pipelines, number=500): '''Top and bottom phrase DFs. Two columns per justice in each (NB, SGD & RF models).''' # Create list to hold per-justice dicts return_value = [] # Iterate through pipelines to get data we need. for justice, nb_pipeline, sgd_pipeline, rf_pipeline in pipelines: # Skip any empty pipelines (insufficient comments) if any([nb_pipeline is None, sgd_pipeline is None, rf_pipeline is None]): continue # Get actual phrases top_values_nb, bottom_values_nb = get_nb_phrases(nb_pipeline, number) top_values_sgd, bottom_values_sgd = get_sgd_phrases(sgd_pipeline, number) top_values_rf, bottom_values_rf = get_rf_phrases(rf_pipeline, number) # Add to return value return_value.append({'justice': justice, 'TOP_NB': top_values_nb, 'BOTTOM_NB': bottom_values_nb, 'TOP_SGD': top_values_sgd, 'BOTTOM_SGD': bottom_values_sgd, 'TOP_RF': top_values_rf, 'BOTTOM_RF': bottom_values_rf}) # Return list of dicts.
return return_value # Run function justice_data = create_phrase_series(pipelines) # Show sample data for clarity pd.DataFrame(justice_data[6]['BOTTOM_RF']).head(3) # Define function def create_frequency_dfs(justice_data, text_df): '''This function takes bottom phrases and computes frequency.''' # Results bottom_phrase_results = [] for data_dict in justice_data: # Get justice name justice = data_dict['justice'] # Get bottom values bottom_nb_phrases = data_dict['BOTTOM_NB'] bottom_sgd_phrases = data_dict['BOTTOM_SGD'] bottom_rf_phrases = data_dict['BOTTOM_RF'] bottom_phrases = (bottom_nb_phrases.append(bottom_sgd_phrases) .append(bottom_rf_phrases) .drop_duplicates() .index.values) # Create won and lost dataframes. won_df = text_df[(text_df['JUSTICE'] == justice) & (text_df['QUESTIONEE_WON'] == True)] lost_df = text_df[(text_df['JUSTICE'] == justice) & (text_df['QUESTIONEE_WON'] == False)] # To string won_string = won_df['TEXT'].str.lower().str.cat(sep=' ') lost_string = lost_df['TEXT'].str.lower().str.cat(sep=' ') # Calculate bottom phrases for phrase in bottom_phrases: won_count = 0 lost_count = 0 # Get counts won_count += won_string.count(phrase) lost_count += lost_string.count(phrase) all_count = won_count + lost_count if all_count == 0: percentage = np.NaN else: percentage = won_count / all_count # Stick in results (list of dicts) bottom_phrase_results.append({'JUSTICE': justice, 'PHRASE': phrase, 'AT_WINNER_COUNT': won_count, 'AT_LOSER_COUNT': lost_count, 'AT_WINNER_PERCENT': percentage}) # Create bottom dataframe bottom_df = pd.DataFrame(bottom_phrase_results) bottom_df = bottom_df.set_index(['JUSTICE', 'PHRASE']) bottom_df = bottom_df[['AT_WINNER_COUNT', 'AT_LOSER_COUNT', 'AT_WINNER_PERCENT']] bottom_df['AT_LOSER_PERCENT'] = 1 - bottom_df['AT_WINNER_PERCENT'] return bottom_df # Run function bottom_freq_df = create_frequency_dfs(justice_data, text_df) bottom_freq_df.head(5).dropna() # Write bottom_freq_df to file bottom_csv_path = os.path.join(DATA_FOLDER, 
'bottom_phrases.csv') bottom_freq_df.to_csv(bottom_csv_path, encoding='utf-8') # Define function def create_tabulation_df(): '''(Case, justice, arg_vect) x model.''' # Make justice/model multiindex for columns cases = CURRENT_CASES # Really should have standardized this earlier justices = [justice.upper() for justice in CURRENT_JUSTICES] arg_types = ['PETITIONER_ARGUMENT', 'RESPONDENT_ARGUMENT', 'PETITIONER_REBUTTAL'] models = ['NB', 'SGD', 'RF'] cja_index = pd.MultiIndex.from_product([cases, justices, arg_types]) # Make dataframe tabulation_df = pd.DataFrame(index=cja_index, columns=models, data=np.NaN) return tabulation_df # Run function tabulation_df = create_tabulation_df() # Demo for clarity ... should be empty. tabulation_df.head(3) def make_current_df(arg_df): # Create a lookup dataframe. lookup_df = arg_df[arg_df['DOCKET'].isin(CURRENT_CASES)] output_rows = [] input_rows = [row.to_dict() for index, row in lookup_df.iterrows()] for justice in CURRENT_JUSTICES: for row_dict in input_rows: dict_copy = copy.deepcopy(row_dict) dict_copy['JUSTICE'] = justice output_rows.append(dict_copy) return pd.DataFrame.from_dict(output_rows) # Create new df current_df = make_current_df(arg_df) # Run previous functions current_df = current_df.apply(filter_justice_data, axis=1) current_df.head(3) # Define function def create_lookup_series(current_df): '''Place text in df for processing. Each justice gets same data.''' # Flatten and reindex lookup_df = current_df[['DOCKET', 'JUSTICE', 'PETITIONER_ARGUMENT', #'RESPONDENT_ARGUMENT', 'PETITIONER_REBUTTAL']] lookup_df = lookup_df.set_index(['DOCKET', 'JUSTICE']) # Flatten lists lookup_df = lookup_df.applymap(lambda x: ' '.join(x)) # Make lookup series lookup_series = lookup_df.stack() # Sort and dedupe (where do dupes come from?) 
lookup_series.sort_index(inplace=True) lookup_series.drop_duplicates(inplace=True) return lookup_series # Run lookup_series = create_lookup_series(current_df) lookup_series.head(3) # Define function def populate_tabulation_df(row, lookup_series): '''Fill in the dataframe. Meant to be applied.''' try: case, justice, arg_type = row.name # Make this not chained indexing value = lookup_series[case][justice][arg_type] row[['NB', 'SGD', 'RF']] = value, value, value except KeyError: row[['NB', 'SGD', 'RF']] = np.NaN, np.NaN, np.NaN return row # Run function tabulation_df = tabulation_df.apply(populate_tabulation_df, args=(lookup_series,), axis=1) tabulation_df.head(3) # Define function def run_predictions(column, model_dict, tabulation_df): '''This applied function adds results to the result series. It is initially framed in terms of "QUESTIONEE_WINS", which is the output of model.predict(). It is then converted to "PETITIONER_WINS" by flipping the respondent-argument predictions (i.e. if the questionee is the petitioner, because the text is a petitioner argument or petitioner rebuttal, QUESTIONEE_WINS == PETITIONER_WINS; if it is a respondent argument, QUESTIONEE_WINS != PETITIONER_WINS). ''' # There has to be a better way to vectorize with groupby. model_name = column.name # Need copy so we can iterate and change "in place" column_copy = column.copy() # Iterate through items for index, text in column_copy.iteritems(): case, justice, arg_type = index # Get text try: model = model_dict[justice][model_name] # Cannot compare np.NaN if model is None or pd.isnull(text): column.loc[index] = np.NaN continue # If you've already gone over it, it's bool. Therefore skip. if type(text) is np.bool_ or type(text) is bool: continue # Predict prediction = model.predict([text])[0] # Flip the prediction to reframe speaker-wins as petitioner-wins.
if arg_type == 'RESPONDENT_ARGUMENT': prediction = not prediction # Write back to column column.loc[index] = prediction except KeyError: column.loc[index] = np.NaN return column # Run function tabulation_df = tabulation_df.apply(run_predictions, axis=0, args=(model_dict, tabulation_df)) # Demo for clarity tabulation_df.head(3) def modified_sum(row): if row['RESPONDENT_VOTES'] < row['PETITIONER_VOTES']: return 'Petitioner' if row['RESPONDENT_VOTES'] > row['PETITIONER_VOTES']: return 'Respondent' else: return None # Define function def calculate_votes(tabulation_df): # Consensus vector ... vectorize this. consensus = pd.Series(index=tabulation_df.index .droplevel(2) .copy(), dtype='object') consensus.name = 'VOTES' # Iterate through tabulation tdf = tabulation_df.unstack() tdf = tdf.apply(lambda row: pd.value_counts(row.values), axis=1) tdf.columns = ['RESPONDENT_VOTES', 'PETITIONER_VOTES'] tdf = tdf.fillna(0) tdf['VOTE'] = tdf.apply(modified_sum, axis=1) return tdf # Run function votes = calculate_votes(tabulation_df) votes.head(8) def harmonize_empty(votes, VOTING_RELATIONSHIPS): '''If null, make this justice copy another similarly-minded justice.''' voting_df = pd.DataFrame(VOTING_RELATIONSHIPS) # Don't want our imputed picks affecting other imputed picks. imputed_probabilities = [] for index, row in votes.iterrows(): # Parse case, justice = index if row['VOTE'] is None: # Get similarity rankings: ALITO: 7, BREYER: 2, KAGAN: 3 similarity_ranks = voting_df.loc[justice].argsort() # Then rank so we have BREYER: 2, KAGAN: 3, ALITO: 7 similarity_order = similarity_ranks.sort_values() # Similar justice list: [BREYER, KAGAN, ALITO] most_similar = similarity_order.index.values # Go through justice list to get closest.
for sim_justice in most_similar: if sim_justice == 'SCALIA': continue other_justice_prob = votes.loc[(case, sim_justice)]['VOTE'] if other_justice_prob is None: continue else: imputed_probabilities.append({'case': case, 'justice': justice, 'prob': other_justice_prob}) # Stop at the closest justice with a vote. break # Now all imputed_probs are complete. Add back in. for prob in imputed_probabilities: index_tuple = tuple([prob['case'], prob['justice']]) votes.loc[index_tuple, 'VOTE'] = prob['prob'] return None harmonize_empty(votes, VOTING_RELATIONSHIPS) votes.head(8) def get_petitioner_votes(row): '''Helper function for apply.''' vc = row.value_counts() try: petitioner_count = vc['Petitioner'] except KeyError: petitioner_count = 0 return petitioner_count def get_respondent_votes(row): '''Helper function for apply.''' vc = row.value_counts() try: respondent_count = vc['Respondent'] except KeyError: respondent_count = 0 return respondent_count def process_votes(votes): '''Apply function for results.''' # Get rid of superfluous columns result = votes[['VOTE']] # Turn into dataframe. result = result.unstack() # Get rid of superfluous multiindex result.columns = result.columns.droplevel(0) # Add winner and loser counts. result['PET_VOTES'] = result.apply(get_petitioner_votes, axis=1) result['RES_VOTES'] = result.apply(get_respondent_votes, axis=1) # Ties go to the respondent (arbitrary). result['VICTOR'] = result['PET_VOTES'] > result['RES_VOTES'] result['VICTOR'] = result['VICTOR'].map({True: 'Petitioner', False: 'Respondent'}) return result result = process_votes(votes) try: result = result.drop('15-1112') except Exception: pass result # Write results to file. result_csv_path = os.path.join(DATA_FOLDER, 'case_results.csv') result.to_csv(result_csv_path, encoding='utf-8') result['VICTOR'].value_counts() ```
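As a closing note on determine_vote() earlier in the notebook: the truth table it encodes is simply an XNOR of the two boolean columns, so the row-wise apply() could be replaced with a vectorized expression. The frame below is a toy stand-in for jdf, with invented values:

```python
import pandas as pd

# Toy stand-in for jdf (values invented for illustration).
toy = pd.DataFrame({'PETITIONER_WINS_MAJORITY': [True, True, False, False],
                    'VOTED_WITH_MAJORITY': [True, False, True, False]})

# determine_vote() returns True exactly when the two flags agree: an XNOR.
toy['VOTED_FOR_PETITIONER'] = ~(toy['PETITIONER_WINS_MAJORITY']
                                ^ toy['VOTED_WITH_MAJORITY'])
print(toy['VOTED_FOR_PETITIONER'].tolist())  # [True, False, False, True]
```

The vectorized form avoids a Python-level loop over rows, which matters as the justice-case frame grows.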
The Weekday Effect: Right-Side Buying ``` import pandas as pd from datetime import datetime import trdb2py import numpy as np isStaticImg = False width = 960 height = 768 pd.options.display.max_columns = None pd.options.display.max_rows = None trdb2cfg = trdb2py.loadConfig('./trdb2.yaml') # Specific funds # asset = 'jrj.510310' # baselineasset = 'jrj.510310' asset = 'jqdata.000300_XSHG|1d' # Start time; 0 means from the earliest data available tsStart = 0 tsStart = int(trdb2py.str2timestamp('2013-05-01', '%Y-%m-%d')) # End time; -1 means up to the present tsEnd = -1 tsEnd = int(trdb2py.str2timestamp('2020-09-30', '%Y-%m-%d')) # Initial money pool paramsinit = trdb2py.trading2_pb2.InitParams( money=10000, ) # Buy params: buy with all available money (i.e. compound returns) paramsbuy = trdb2py.trading2_pb2.BuyParams( perHandMoney=1, ) paramsbuy1 = trdb2py.trading2_pb2.BuyParams( perHandMoney=1, nextTimes=1, ) # Buy params: split the money into two parts paramsbuy2 = trdb2py.trading2_pb2.BuyParams( moneyParts=2, ) # Sell params: sell everything paramssell = trdb2py.trading2_pb2.SellParams( perVolume=1, ) # Sell params: hold for seven days, then sell paramssell7 = trdb2py.trading2_pb2.SellParams( # perVolume=1, keepTime=7 * 24 * 60 * 60, ) lststart = [1, 2, 3, 4, 5] lsttitle = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday'] def calcweekday2val2(wday, offday): if offday == 1: if wday == 5: return 3 if offday == 2: if wday >= 4: return 4 if offday == 3: if wday >= 3: return 5 if offday == 4: if wday >= 2: return 6 if offday == 5: if wday >= 1: return 7 return offday ``` Hello everyone, I'm 格子衫小C (WeChat public account: 格子衫小C), and today we continue our study of the weekday effect. Last time we covered left-side buying; today we follow up with right-side buying. Right-side buying means waiting for the signal to appear and only then buying. As before, we explore the feasibility of right-side trading in the context of a trading strategy for domestic OTC funds. The target strategy is: 1. If today is Thursday and the close is above the 29-day EMA, buy, and sell unconditionally on the following Wednesday. 2. If today is Monday and the close is below the 29-day EMA, buy, and sell unconditionally on the following Friday. 3. If we already bought on Monday, do nothing on Thursday, and vice versa. The trading rules are: 1. Orders placed before 3 p.m. on a trading day settle at that day's price. 2.
Orders placed after 3 pm settle at the next trading day's price (buy after 3 pm on Friday and you settle at Monday's price; QDII funds usually lag even further). ``` # baseline s0 = trdb2py.trading2_pb2.Strategy( name="normal", asset=trdb2py.str2asset(asset), ) buy0 = trdb2py.trading2_pb2.CtrlCondition( name='buyandhold', ) # paramsbuy = trdb2py.trading2_pb2.BuyParams( # perHandMoney=1, # ) # paramsinit = trdb2py.trading2_pb2.InitParams( # money=10000, # ) s0.buy.extend([buy0]) s0.paramsBuy.CopyFrom(paramsbuy) s0.paramsInit.CopyFrom(paramsinit) p0 = trdb2py.trading2_pb2.SimTradingParams( assets=[trdb2py.str2asset(asset)], startTs=tsStart, endTs=tsEnd, strategies=[s0], title='沪深300', ) pnlBaseline = trdb2py.simTrading(trdb2cfg, p0) trdb2py.showPNL(pnlBaseline, toImg=isStaticImg, width=width, height=height) lstparams = [] lstassetcode = ['jqdata.000300_XSHG|1d'] lstassetname = ['沪深300 目标策略'] # tsStart = 0 # tsEnd = -1 for ai in range(0, len(lstassetcode)): s0 = trdb2py.trading2_pb2.Strategy( name="normal", asset=trdb2py.str2asset(lstassetcode[ai]), ) buy0 = trdb2py.trading2_pb2.CtrlCondition( name='weekday2', vals=[4, calcweekday2val2(4, 4)], ) buy1 = trdb2py.trading2_pb2.CtrlCondition( name='indicatorsp', operators=['up'], strVals=['ema.{}'.format(29)], ) buy2 = trdb2py.trading2_pb2.CtrlCondition( name='weekday2', vals=[1, calcweekday2val2(1, 4)], group=1, ) buy3 = trdb2py.trading2_pb2.CtrlCondition( name='indicatorsp', operators=['down'], strVals=['ema.{}'.format(29)], group=1, ) sell0 = trdb2py.trading2_pb2.CtrlCondition( name='weekday', vals=[3], ) sell1 = trdb2py.trading2_pb2.CtrlCondition( name='ctrlconditionid', vals=[1], strVals=['buy'], ) sell2 = trdb2py.trading2_pb2.CtrlCondition( name='weekday', vals=[5], group=1, ) sell3 = trdb2py.trading2_pb2.CtrlCondition( name='ctrlconditionid', vals=[2], strVals=['buy'], group=1, ) # paramsbuy = trdb2py.trading2_pb2.BuyParams( # perHandMoney=1, # ) # paramsinit = trdb2py.trading2_pb2.InitParams( # money=10000, # ) feebuy = trdb2py.trading2_pb2.FeeParams( percentage=0.0003, ) feesell = trdb2py.trading2_pb2.FeeParams(
percentage=0.0003, ) s0.buy.extend([buy0, buy1, buy2, buy3]) s0.sell.extend([sell0, sell1, sell2, sell3]) s0.paramsBuy.CopyFrom(paramsbuy) s0.paramsSell.CopyFrom(paramssell) s0.paramsInit.CopyFrom(paramsinit) # s0.feeBuy.CopyFrom(feebuy) # s0.feeSell.CopyFrom(feesell) p0 = trdb2py.trading2_pb2.SimTradingParams( assets=[trdb2py.str2asset(lstassetcode[ai])], startTs=tsStart, endTs=tsEnd, strategies=[s0], title=lstassetname[ai], ) lstparams.append(p0) lstpnl1 = trdb2py.simTradings(trdb2cfg, lstparams) trdb2py.showPNLs(lstpnl1 + [pnlBaseline], toImg=isStaticImg, width=width, height=height) dfpnl = trdb2py.buildPNLReport([pnlBaseline] + lstpnl1) # dfpnl1 = dfpnl[dfpnl['totalReturns'] >= 1] dfpnl[['title', 'maxDrawdown', 'maxDrawdownStart', 'maxDrawdownEnd', 'totalReturns', 'sharpe', 'annualizedReturns', 'annualizedVolatility', 'variance']].sort_values(by='totalReturns', ascending=False) ``` Above are the PNL curves of the target strategy and the baseline. Now let's change the trading rule directly to right-side trading, meaning the signal still appears on Thursday but we trade on Friday, and see what happens (honestly, we don't even need to look to know the result will be bad). ``` lstparams = [] lstassetcode = ['jqdata.000300_XSHG|1d'] lstassetname = ['沪深300 右侧买入目标策略'] # tsStart = 0 # tsEnd = -1 for ai in range(0, len(lstassetcode)): s0 = trdb2py.trading2_pb2.Strategy( name="normal", asset=trdb2py.str2asset(lstassetcode[ai]), ) buy0 = trdb2py.trading2_pb2.CtrlCondition( name='weekday2', vals=[4, calcweekday2val2(4, 4)], ) buy1 = trdb2py.trading2_pb2.CtrlCondition( name='indicatorsp', operators=['up'], strVals=['ema.{}'.format(29)], ) buy2 = trdb2py.trading2_pb2.CtrlCondition( name='weekday2', vals=[1, calcweekday2val2(1, 4)], group=1, ) buy3 = trdb2py.trading2_pb2.CtrlCondition( name='indicatorsp', operators=['down'], strVals=['ema.{}'.format(29)], group=1, ) sell0 = trdb2py.trading2_pb2.CtrlCondition( name='weekday', vals=[3], ) sell1 = trdb2py.trading2_pb2.CtrlCondition( name='ctrlconditionid', vals=[1], strVals=['buy'], ) sell2 = trdb2py.trading2_pb2.CtrlCondition( name='weekday', vals=[5], group=1, ) sell3 = trdb2py.trading2_pb2.CtrlCondition( name='ctrlconditionid', vals=[2],
strVals=['buy'], group=1, ) # paramsbuy = trdb2py.trading2_pb2.BuyParams( # perHandMoney=1, # nextTimes=1, # ) # paramsinit = trdb2py.trading2_pb2.InitParams( # money=10000, # ) feebuy = trdb2py.trading2_pb2.FeeParams( percentage=0.0003, ) feesell = trdb2py.trading2_pb2.FeeParams( percentage=0.0003, ) s0.buy.extend([buy0, buy1, buy2, buy3]) s0.sell.extend([sell0, sell1, sell2, sell3]) s0.paramsBuy.CopyFrom(paramsbuy1) s0.paramsSell.CopyFrom(paramssell) s0.paramsInit.CopyFrom(paramsinit) # s0.feeBuy.CopyFrom(feebuy) # s0.feeSell.CopyFrom(feesell) p0 = trdb2py.trading2_pb2.SimTradingParams( assets=[trdb2py.str2asset(lstassetcode[ai])], startTs=tsStart, endTs=tsEnd, strategies=[s0], title=lstassetname[ai], ) lstparams.append(p0) lstpnl2 = trdb2py.simTradings(trdb2cfg, lstparams) trdb2py.showPNLs(lstpnl1 + lstpnl2 + [pnlBaseline], toImg=isStaticImg, width=width, height=height) dfpnl = trdb2py.buildPNLReport([pnlBaseline] + lstpnl1 + lstpnl2) # dfpnl1 = dfpnl[dfpnl['totalReturns'] >= 1] dfpnl[['title', 'maxDrawdown', 'maxDrawdownStart', 'maxDrawdownEnd', 'totalReturns', 'sharpe', 'annualizedReturns', 'annualizedVolatility', 'variance']].sort_values(by='totalReturns', ascending=False) ``` The result is, of course, very bad indeed. Our target strategy is built on the day-of-week effect, and the conclusions we reached earlier were: 1. Thursday has the highest probability of declining. 2. Monday has the highest volatility: it rises the most in an uptrend and falls the most in a downtrend. So the essence of the target strategy is to dodge Thursday in an uptrend and dodge Monday in a downtrend. If we switch to right-side buying (only the buy side considers the EMA; the sell side only looks at the day of the week), the Thursday buy simply becomes a Friday buy, and it would be a miracle if that worked. As an aside: why does the A-share market have a day-of-week effect at all?
Chatting with friends who trade stocks, I found that although most of them have never studied the day-of-week effect carefully, they all share the intuition that prices swing the most on Monday and settle down by Wednesday and Thursday. Their reasoning: Monday has to digest two days of weekend news, so it is the most volatile; after Tuesday calms things down, Wednesday and Thursday are relatively the best times to trade. Presumably the A-share market ends up this way because most participants think along these lines. Judging from the data, though, US and Hong Kong stocks also trade five days a week with a two-day weekend, yet the effect is far less pronounced there. Next, let's test empirically which parameters suit right-side buying best. We sweep the 5-60 day EMAs and Monday through Friday; after a signal we buy on the next day and hold for 2-5 days (since we only buy the next day, the holding period is extended by one day). Per the nested loops below, that is 56 × 4 × 5 × 4 × 5 = 22,400 combinations in total. ``` lstparams = [] for ema in range(5, 61): for sdo in range(2, 6): for sd in range(1, 6): buy0 = trdb2py.trading2_pb2.CtrlCondition( name='weekday2', vals=[sd, calcweekday2val2(sd, sdo)], ) buy1 = trdb2py.trading2_pb2.CtrlCondition( name='indicatorsp', operators=['up'], strVals=['ema.{}'.format(ema)], ) sell0 = trdb2py.trading2_pb2.CtrlCondition( name='weekday', vals=[trdb2py.nextWeekDay(sd, sdo)], ) sell1 = trdb2py.trading2_pb2.CtrlCondition( name='ctrlconditionid', vals=[1], strVals=['buy'], ) for edo in range(1, 5): for ed in range(1, 6): buy2 = trdb2py.trading2_pb2.CtrlCondition( name='weekday2', vals=[ed, calcweekday2val2(ed, edo)], group=1, ) buy3 = trdb2py.trading2_pb2.CtrlCondition( name='indicatorsp', operators=['down'], strVals=['ema.{}'.format(ema)], group=1, ) sell2 = trdb2py.trading2_pb2.CtrlCondition( name='weekday', vals=[trdb2py.nextWeekDay(ed, edo)], group=1, ) sell3 = trdb2py.trading2_pb2.CtrlCondition( name='ctrlconditionid', vals=[2], strVals=['buy'], group=1, ) s0 = trdb2py.trading2_pb2.Strategy( name="normal", asset=trdb2py.str2asset(asset), ) s0.buy.extend([buy0, buy1, buy2, buy3]) s0.sell.extend([sell0, sell1, sell2, sell3]) s0.paramsBuy.CopyFrom(paramsbuy1) s0.paramsSell.CopyFrom(paramssell) s0.paramsInit.CopyFrom(paramsinit) lstparams.append(trdb2py.trading2_pb2.SimTradingParams( assets=[trdb2py.str2asset(asset)], startTs=tsStart, endTs=tsEnd, strategies=[s0], title='ema{} up{}持有{}天 down{}持有{}天 右侧'.format(ema, lsttitle[sd-1], sdo, lsttitle[ed-1], edo), )) lstpnlmix = trdb2py.simTradings(trdb2cfg, lstparams, ignoreTotalReturn=6) trdb2py.showPNLs2(lstpnlmix + [pnlBaseline] + lstpnl1, baseline=pnlBaseline, showNums=5,
toImg=isStaticImg, width=width, height=height) dfpnl = trdb2py.buildPNLReport(lstpnlmix + [pnlBaseline] + lstpnl1) # dfpnl1 = dfpnl[dfpnl['totalReturns'] >= 1] dfpnl[['title', 'maxDrawdown', 'maxDrawdownStart', 'maxDrawdownEnd', 'totalReturns', 'sharpe', 'annualizedReturns', 'annualizedVolatility', 'variance']].sort_values(by='totalReturns', ascending=False) ``` Although some strategies differ little from the target strategy, what tops the list is "Wednesday, hold 4 days" (that is, buy on Thursday and hold for 3 days) plus "Friday, hold 2 days" (buy on Monday and hold for 1 day). This differs from the results we obtained earlier, and these parameters skip far too many days. ``` # start time; 0 means from the very beginning tsStart = 0 # tsStart = int(trdb2py.str2timestamp('2013-05-01', '%Y-%m-%d')) # end time; -1 means up to now tsEnd = -1 # tsEnd = int(trdb2py.str2timestamp('2020-09-30', '%Y-%m-%d')) # baseline s0 = trdb2py.trading2_pb2.Strategy( name="normal", asset=trdb2py.str2asset(asset), ) buy0 = trdb2py.trading2_pb2.CtrlCondition( name='buyandhold', ) # paramsbuy = trdb2py.trading2_pb2.BuyParams( # perHandMoney=1, # ) # paramsinit = trdb2py.trading2_pb2.InitParams( # money=10000, # ) s0.buy.extend([buy0]) s0.paramsBuy.CopyFrom(paramsbuy) s0.paramsInit.CopyFrom(paramsinit) p0 = trdb2py.trading2_pb2.SimTradingParams( assets=[trdb2py.str2asset(asset)], startTs=tsStart, endTs=tsEnd, strategies=[s0], title='沪深300', ) pnlBaselineF = trdb2py.simTrading(trdb2cfg, p0) trdb2py.showPNL(pnlBaselineF, toImg=isStaticImg, width=width, height=height) lstparams = [] lstassetcode = ['jqdata.000300_XSHG|1d'] lstassetname = ['沪深300 目标策略'] tsStart = 0 tsEnd = -1 for ai in range(0, len(lstassetcode)): s0 = trdb2py.trading2_pb2.Strategy( name="normal", asset=trdb2py.str2asset(lstassetcode[ai]), ) buy0 = trdb2py.trading2_pb2.CtrlCondition( name='weekday2', vals=[4, calcweekday2val2(4, 4)], ) buy1 = trdb2py.trading2_pb2.CtrlCondition( name='indicatorsp', operators=['up'], strVals=['ema.{}'.format(29)], ) buy2 = trdb2py.trading2_pb2.CtrlCondition( name='weekday2', vals=[1, calcweekday2val2(1, 4)], group=1, ) buy3 = trdb2py.trading2_pb2.CtrlCondition( name='indicatorsp', operators=['down'], strVals=['ema.{}'.format(29)], group=1, ) sell0 =
trdb2py.trading2_pb2.CtrlCondition( name='weekday', vals=[3], ) sell1 = trdb2py.trading2_pb2.CtrlCondition( name='ctrlconditionid', vals=[1], strVals=['buy'], ) sell2 = trdb2py.trading2_pb2.CtrlCondition( name='weekday', vals=[5], group=1, ) sell3 = trdb2py.trading2_pb2.CtrlCondition( name='ctrlconditionid', vals=[2], strVals=['buy'], group=1, ) # paramsbuy = trdb2py.trading2_pb2.BuyParams( # perHandMoney=1, # ) # paramsinit = trdb2py.trading2_pb2.InitParams( # money=10000, # ) feebuy = trdb2py.trading2_pb2.FeeParams( percentage=0.0003, ) feesell = trdb2py.trading2_pb2.FeeParams( percentage=0.0003, ) s0.buy.extend([buy0, buy1, buy2, buy3]) s0.sell.extend([sell0, sell1, sell2, sell3]) s0.paramsBuy.CopyFrom(paramsbuy) s0.paramsSell.CopyFrom(paramssell) s0.paramsInit.CopyFrom(paramsinit) # s0.feeBuy.CopyFrom(feebuy) # s0.feeSell.CopyFrom(feesell) p0 = trdb2py.trading2_pb2.SimTradingParams( assets=[trdb2py.str2asset(lstassetcode[ai])], startTs=tsStart, endTs=tsEnd, strategies=[s0], title=lstassetname[ai], ) lstparams.append(p0) lstpnl1F = trdb2py.simTradings(trdb2cfg, lstparams) trdb2py.showPNLs(lstpnl1F + [pnlBaselineF], toImg=isStaticImg, width=width, height=height) lstparams = [] lstassetcode = ['jqdata.000300_XSHG|1d'] lstassetname = ['沪深300 ema26右侧'] # start time; 0 means from the very beginning tsStart = 0 # tsStart = int(trdb2py.str2timestamp('2013-05-01', '%Y-%m-%d')) # end time; -1 means up to now tsEnd = -1 # tsEnd = int(trdb2py.str2timestamp('2020-09-30', '%Y-%m-%d')) for ai in range(0, len(lstassetcode)): s0 = trdb2py.trading2_pb2.Strategy( name="normal", asset=trdb2py.str2asset(lstassetcode[ai]), ) buy0 = trdb2py.trading2_pb2.CtrlCondition( name='weekday2', vals=[3, calcweekday2val2(3, 4)], ) buy1 = trdb2py.trading2_pb2.CtrlCondition( name='indicatorsp', operators=['up'], strVals=['ema.{}'.format(26)], ) buy2 = trdb2py.trading2_pb2.CtrlCondition( name='weekday2', vals=[5, calcweekday2val2(5, 2)], group=1, ) buy3 = trdb2py.trading2_pb2.CtrlCondition( name='indicatorsp', operators=['down'],
strVals=['ema.{}'.format(26)], group=1, ) sell0 = trdb2py.trading2_pb2.CtrlCondition( name='weekday', vals=[2], ) sell1 = trdb2py.trading2_pb2.CtrlCondition( name='ctrlconditionid', vals=[1], strVals=['buy'], ) sell2 = trdb2py.trading2_pb2.CtrlCondition( name='weekday', vals=[2], group=1, ) sell3 = trdb2py.trading2_pb2.CtrlCondition( name='ctrlconditionid', vals=[2], strVals=['buy'], group=1, ) # paramsbuy = trdb2py.trading2_pb2.BuyParams( # perHandMoney=1, # nextTimes=1, # ) # paramsinit = trdb2py.trading2_pb2.InitParams( # money=10000, # ) feebuy = trdb2py.trading2_pb2.FeeParams( percentage=0.0003, ) feesell = trdb2py.trading2_pb2.FeeParams( percentage=0.0003, ) s0.buy.extend([buy0, buy1, buy2, buy3]) s0.sell.extend([sell0, sell1, sell2, sell3]) s0.paramsBuy.CopyFrom(paramsbuy1) s0.paramsSell.CopyFrom(paramssell) s0.paramsInit.CopyFrom(paramsinit) # s0.feeBuy.CopyFrom(feebuy) # s0.feeSell.CopyFrom(feesell) p0 = trdb2py.trading2_pb2.SimTradingParams( assets=[trdb2py.str2asset(lstassetcode[ai])], startTs=tsStart, endTs=tsEnd, strategies=[s0], title=lstassetname[ai], ) lstparams.append(p0) lstpnl2t = trdb2py.simTradings(trdb2cfg, lstparams) trdb2py.showPNLs(lstpnl2t + lstpnl1F + [pnlBaselineF], toImg=isStaticImg, width=width, height=height) ``` Once we open up the backtest window, the problem becomes visible. Within the test interval we found a parameter set that looked effective, but on data outside that interval the picture is far from ideal. This is backtest overfitting. It is also why, earlier, we tested that parameter set on a dozen or so other indices, and only kept using it after it held up everywhere. This is not to say right-side trading is hopeless; only that the EMA indicator does not suit day-level right-side trading for the target strategy (though possibly we simply did not sweep a wide enough parameter range). Finally, one more word on overfitting: some readers will surely think we could backtest on the full dataset and perhaps find a parameter set that performs very well on all of it. But that is overfitting by construction. Getting a perfect PNL curve on historical data is not hard, but is that really our goal? Aren't we trying to find parameters that work on future data (or at least on some stretch of the future)? ``` lstema = [26, 24, 27, 28, 25, 23, 24, 26, 28, 15] lstday1 = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3] lstod1 = [4, 4, 4, 4, 4, 4, 5, 5, 2, 5] lstday2 = [5, 5, 5, 5, 5, 5, 5, 5, 4, 4] lstod2 = [2, 2, 2, 2, 2, 2, 2, 2, 4, 3] lstparams = [] # start time; 0 means from the very beginning tsStart = 0 # tsStart = int(trdb2py.str2timestamp('2013-05-01', '%Y-%m-%d')) # end time; -1 means up to now tsEnd = -1 # tsEnd =
int(trdb2py.str2timestamp('2020-09-30', '%Y-%m-%d')) for ii in range(0, len(lstema)): buy0 = trdb2py.trading2_pb2.CtrlCondition( name='weekday2', vals=[lstday1[ii], calcweekday2val2(lstday1[ii], lstod1[ii])], ) buy1 = trdb2py.trading2_pb2.CtrlCondition( name='indicatorsp', operators=['up'], strVals=['ema.{}'.format(lstema[ii])], ) sell0 = trdb2py.trading2_pb2.CtrlCondition( name='weekday', vals=[trdb2py.nextWeekDay(lstday1[ii], lstod1[ii])], ) sell1 = trdb2py.trading2_pb2.CtrlCondition( name='ctrlconditionid', vals=[1], strVals=['buy'], ) buy2 = trdb2py.trading2_pb2.CtrlCondition( name='weekday2', vals=[lstday2[ii], calcweekday2val2(lstday2[ii], lstod2[ii])], group=1, ) buy3 = trdb2py.trading2_pb2.CtrlCondition( name='indicatorsp', operators=['down'], strVals=['ema.{}'.format(lstema[ii])], group=1, ) sell2 = trdb2py.trading2_pb2.CtrlCondition( name='weekday', vals=[trdb2py.nextWeekDay(lstday2[ii], lstod2[ii])], group=1, ) sell3 = trdb2py.trading2_pb2.CtrlCondition( name='ctrlconditionid', vals=[2], strVals=['buy'], group=1, ) s0 = trdb2py.trading2_pb2.Strategy( name="normal", asset=trdb2py.str2asset(asset), ) s0.buy.extend([buy0, buy1, buy2, buy3]) s0.sell.extend([sell0, sell1, sell2, sell3]) s0.paramsBuy.CopyFrom(paramsbuy1) s0.paramsSell.CopyFrom(paramssell) s0.paramsInit.CopyFrom(paramsinit) lstparams.append(trdb2py.trading2_pb2.SimTradingParams( assets=[trdb2py.str2asset(asset)], startTs=tsStart, endTs=tsEnd, strategies=[s0], title='ema{} up{}持有{}天 down{}持有{}天 右侧'.format(lstema[ii], lsttitle[lstday1[ii]-1], lstod1[ii], lsttitle[lstday2[ii]-1], lstod2[ii]), )) lstpnlmixF = trdb2py.simTradings(trdb2cfg, lstparams) trdb2py.showPNLs2(lstpnlmixF + lstpnl1F + [pnlBaselineF], toImg=isStaticImg, width=width, height=height) dfpnl = trdb2py.buildPNLReport(lstpnlmixF + lstpnl1F + [pnlBaselineF]) # dfpnl1 = dfpnl[dfpnl['totalReturns'] >= 1] dfpnl[['title', 'maxDrawdown', 'maxDrawdownStart', 'maxDrawdownEnd', 'totalReturns', 'sharpe', 'annualizedReturns', 
'annualizedVolatility', 'variance']].sort_values(by='totalReturns', ascending=False) ```
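The parameter sweep earlier in this notebook nests five loops (`ema`, `sdo`, `sd`, `edo`, `ed`). A quick stand-alone count of the same loop ranges shows how many strategy variants the nested loops generate (this only counts combinations; it runs no backtests):

```python
from itertools import product

# Loop ranges copied from the sweep: ema 5..60, first-leg hold offset 2..5,
# first-leg signal day 1..5, second-leg hold offset 1..4, second-leg day 1..5.
grid = list(product(range(5, 61), range(2, 6), range(1, 6),
                    range(1, 5), range(1, 6)))
print(len(grid))  # 56 * 4 * 5 * 4 * 5 = 22400
```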
``` ######################################CONSTANTS###################################### METRIC = 'calibration_error' MODE = 'max' HOLDOUT_RATIO = 0.1 RUNS = 100 LOG_FREQ = 100 threshold = 0.98 # threshold for x-axis cutoff COLOR = {'non-active_no_prior': '#1f77b4', 'ts_uniform': 'red',#'#ff7f0e', 'ts_informed': 'green', 'epsilon_greedy_no_prior': 'tab:pink', 'bayesian_ucb_no_prior': 'cyan' } COLOR = {'non-active': '#1f77b4', 'ts': '#ff7f0e', 'epsilon_greedy': 'pink', 'bayesian_ucb': 'cyan' } METHOD_NAME_DICT = {'non-active': 'Non-active', 'epsilon_greedy': 'Epsilon greedy', 'bayesian_ucb': 'Bayesian UCB', 'ts': 'TS'} TOPK_METHOD_NAME_DICT = {'non-active': 'Non-active', 'epsilon_greedy': 'Epsilon greedy', 'bayesian_ucb': 'Bayesian UCB', 'ts': 'MP-TS'} LINEWIDTH = 13.97 ######################################CONSTANTS###################################### import sys sys.path.insert(0, '..') import argparse from typing import Dict, Any import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.ticker as ticker import numpy as np from data_utils import DATASIZE_DICT, FIGURE_DIR, RESULTS_DIR from data_utils import DATASET_NAMES, TOPK_DICT import matplotlib;matplotlib.rcParams['font.family'] = 'serif' RESULTS_DIR = RESULTS_DIR + 'active_learning_topk/' def plot_topk_ece(ax: mpl.axes.Axes, experiment_name: str, topk: int, eval_metric: str, pool_size: int, threshold: float) -> None: benchmark = 'ts' for method in METHOD_NAME_DICT: metric_eval = np.load( RESULTS_DIR + experiment_name + ('%s_%s.npy' % (eval_metric, method))).mean(axis=0) x = np.arange(len(metric_eval)) * LOG_FREQ / pool_size if topk == 1: label = METHOD_NAME_DICT[method] else: label = TOPK_METHOD_NAME_DICT[method] if method == 'non-active': linestyle = "-" else: linestyle = '-' ax.plot(x, metric_eval, linestyle, color=COLOR[method], label=label) if method == benchmark: if max(metric_eval) > threshold: cutoff = list(map(lambda i: i > threshold, metric_eval.tolist()[10:])).index(True) + 10 
cutoff = min(int(cutoff * 1.5), len(metric_eval) - 1) else: cutoff = len(metric_eval) - 1 ax.set_xlim(0, cutoff * LOG_FREQ / pool_size) ax.set_ylim(0, 1.0) xmin, xmax = ax.get_xlim() step = ((xmax - xmin) / 4.0001) ax.xaxis.set_major_formatter(ticker.PercentFormatter(xmax=1)) ax.xaxis.set_ticks(np.arange(xmin, xmax + 0.001, step)) ax.yaxis.set_ticks(np.arange(0, 1.01, 0.20)) ax.tick_params(pad=0.25, length=1.5) return ax def main(eval_metric: str, top1: bool, pseudocount: int, threshold: float) -> None: fig, axes = plt.subplots(ncols=len(TOPK_DICT), dpi=300, sharey=True) idx = 0 for dataset in TOPK_DICT: print(dataset) if top1: topk = 1 else: topk = TOPK_DICT[dataset] experiment_name = '%s_%s_%s_top%d_runs%d_pseudocount%.2f/' % \ (dataset, METRIC, MODE, topk, RUNS, pseudocount) plot_kwargs = {} plot_topk_ece(axes[idx], experiment_name, topk, eval_metric, int(DATASIZE_DICT[dataset] * (1 - HOLDOUT_RATIO)), threshold=threshold) if topk == 1: axes[idx].set_title(DATASET_NAMES[dataset]) else: axes[idx].set_xlabel("#queries") if idx > 0: axes[idx].tick_params(left=False) idx += 1 axes[-1].legend() if topk == 1: axes[0].set_ylabel("MRR, top-1") else: axes[0].set_ylabel("MRR, top-m") fig.tight_layout() fig.set_size_inches(LINEWIDTH, 2.5) fig.subplots_adjust(bottom=0.05, wspace=0.20) if top1: figname = FIGURE_DIR + '%s_%s_%s_top1_pseudocount%d.pdf' % (METRIC, MODE, eval_metric, pseudocount) else: figname = FIGURE_DIR + '%s_%s_%s_topk_pseudocount%d.pdf' % (METRIC, MODE, eval_metric, pseudocount) fig.savefig(figname, bbox_inches='tight', pad_inches=0) for pseudocount in [2, 5, 10]: for eval_metric in ['avg_num_agreement', 'mrr']: for top1 in [True, False]: main(eval_metric, top1, pseudocount, threshold) ```
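The x-axis cutoff in `plot_topk_ece` scans the benchmark curve for the first point (after the first 10 logged values) that exceeds `threshold`, then pads that index by 1.5x, capped at the curve length. The same search can be written more directly with `numpy.argmax` over a boolean mask; the curves below are toy data, not results from the experiments:

```python
import numpy as np

def find_cutoff(metric_eval: np.ndarray, threshold: float, skip: int = 10) -> int:
    """First index after `skip` where the curve exceeds `threshold`, padded by 1.5x."""
    tail = metric_eval[skip:]
    if tail.max() <= threshold:
        return len(metric_eval) - 1  # never crosses: show the whole curve
    first = int(np.argmax(tail > threshold)) + skip  # argmax returns the first True
    return min(int(first * 1.5), len(metric_eval) - 1)

# Toy curve that jumps above the threshold at index 60 of 101 points.
curve = np.concatenate([np.full(60, 0.5), np.full(41, 0.99)])
print(find_cutoff(curve, threshold=0.98))  # first hit at 60 -> min(90, 100) = 90
```

Unlike the `list(map(...)).index(True)` idiom in the notebook, the `tail.max()` guard also handles curves that never cross the threshold without raising `ValueError`.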
``` import os os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"; # The GPU id to use, usually either "0" or "1"; os.environ["CUDA_VISIBLE_DEVICES"]="1"; import numpy as np import tensorflow as tf import random as rn # The below is necessary for starting Numpy generated random numbers # in a well-defined initial state. np.random.seed(42) # The below is necessary for starting core Python generated random numbers # in a well-defined state. rn.seed(12345) # Force TensorFlow to use single thread. # Multiple threads are a potential source of non-reproducible results. # For further details, see: https://stackoverflow.com/questions/42022950/ session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1) from tensorflow.keras import backend as K # The below tf.set_random_seed() will make random number generation # in the TensorFlow backend have a well-defined initial state. # For further details, see: # https://www.tensorflow.org/api_docs/python/tf/set_random_seed tf.set_random_seed(1234) sess = tf.Session(graph=tf.get_default_graph(), config=session_conf) K.set_session(sess) import networkx as nx import pandas as pd import numpy as np import os import random import h5py import matplotlib.pyplot as plt from tqdm import tqdm from scipy.spatial import cKDTree as KDTree from tensorflow.keras.utils import to_categorical import stellargraph as sg from stellargraph.data import EdgeSplitter from stellargraph.mapper import GraphSAGELinkGenerator from stellargraph.layer import GraphSAGE, link_classification from stellargraph.layer.graphsage import AttentionalAggregator from stellargraph.data import UniformRandomWalk from stellargraph.data import UnsupervisedSampler from sklearn.model_selection import train_test_split import tensorflow as tf from tensorflow import keras from sklearn import preprocessing, feature_extraction, model_selection from sklearn.linear_model import LogisticRegressionCV, LogisticRegression from sklearn.metrics import accuracy_score 
from stellargraph import globalvar from numpy.random import seed seed(1) from tensorflow import set_random_seed set_random_seed(2) ``` ## Load Data ``` import numpy as np import h5py from collections import OrderedDict pixel_per_um = 15.3846 # from BioRxiv paper um_per_pixel = 1.0 / pixel_per_um f = h5py.File("../data/osmFISH_Codeluppi_et_al/mRNA_coords_raw_counting.hdf5", 'r') keys = list(f.keys()) pos_dic = OrderedDict() genes = [] # Exclude bad quality data, according to the supplementary material of osmFISH paper blacklists = ['Cnr1_Hybridization4', 'Plp1_Hybridization4', 'Vtn_Hybridization4', 'Klk6_Hybridization5', 'Lum_Hybridization9', 'Tbr1_Hybridization11'] barcodes_df = pd.DataFrame({'Gene':[], 'Centroid_X':[], 'Centroid_Y':[]}) for k in keys: if k in blacklists: continue gene = k.split("_")[0] # Correct wrong gene labels if gene == 'Tmem6': gene = 'Tmem2' elif gene == 'Kcnip': gene = 'Kcnip2' points = np.array(f[k]) * um_per_pixel if gene in pos_dic: pos_dic[gene] = np.vstack((pos_dic[gene], points)) else: pos_dic[gene] = points genes.append(gene) barcodes_df = barcodes_df.append(pd.DataFrame({'Gene':[gene]*points.shape[0], 'Centroid_X':points[:,0], 'Centroid_Y':points[:,1]}),ignore_index=True) # Gene panel taglist tagList_df = pd.DataFrame(sorted(genes),columns=['Gene']) # Spot dataframe from Codeluppi et al. 
barcodes_df.reset_index(drop=True, inplace=True) import matplotlib.pyplot as plt X = -barcodes_df.Centroid_X Y = -barcodes_df.Centroid_Y plt.figure(figsize=(10,10)) plt.scatter(X,Y,s=1) plt.axis('scaled') ``` ## Build Graph ``` # Auxiliary function to compute d_max def plotNeighbor(barcodes_df): barcodes_df.reset_index(drop=True, inplace=True) kdT = KDTree(np.array([barcodes_df.Centroid_X.values,barcodes_df.Centroid_Y.values]).T) d,i = kdT.query(np.array([barcodes_df.Centroid_X.values,barcodes_df.Centroid_Y.values]).T,k=2) plt.hist(d[:,1],bins=200); plt.axvline(x=np.percentile(d[:,1],97),c='r') print(np.percentile(d[:,1],97)) d_th = np.percentile(d[:,1],97) return d_th # Compute d_max for generating spatial graph d_th = plotNeighbor(barcodes_df) # Auxiliary function to build spatial gene expression graph def buildGraph(barcodes_df, d_th, tagList_df): G = nx.Graph() features =[] barcodes_df.reset_index(drop=True, inplace=True) gene_list = tagList_df.Gene.values # Generate node categorical features one_hot_encoding = dict(zip(tagList_df.Gene.unique(),to_categorical(np.arange(tagList_df.Gene.unique().shape[0]),num_classes=tagList_df.Gene.unique().shape[0]).tolist())) barcodes_df["feature"] = barcodes_df['Gene'].map(one_hot_encoding).tolist() features.append(np.vstack(barcodes_df.feature.values)) kdT = KDTree(np.array([barcodes_df.Centroid_X.values,barcodes_df.Centroid_Y.values]).T) res = kdT.query_pairs(d_th) res = [(x[0],x[1]) for x in list(res)] # Add nodes to graph G.add_nodes_from((barcodes_df.index.values), test=False, val=False, label=0) # Add node features to graph nx.set_node_attributes(G,dict(zip((barcodes_df.index.values), barcodes_df.feature)), 'feature') # Add edges to graph G.add_edges_from(res) return G, barcodes_df # Build spatial gene expression graph G, barcodes_df = buildGraph(barcodes_df, d_th, tagList_df) # Remove components with less than N nodes N=3 for component in tqdm(list(nx.connected_components(G))): if len(component)<N: for node in 
component: G.remove_node(node) ``` #### 1. Create the Stellargraph with node features. ``` G = sg.StellarGraph(G, node_features="feature") print(G.info()) ``` #### 2. Specify the other optional parameter values: root nodes, the number of walks to take per node, the length of each walk, and random seed. ``` nodes = list(G.nodes()) number_of_walks = 1 length = 2 ``` #### 3. Create the UnsupervisedSampler instance with the relevant parameters passed to it. ``` unsupervised_samples = UnsupervisedSampler(G, nodes=nodes, length=length, number_of_walks=number_of_walks, seed=42) ``` #### 4. Create a node pair generator: ``` batch_size = 50 epochs = 10 num_samples = [20,10] train_gen = GraphSAGELinkGenerator(G, batch_size, num_samples, seed=42).flow(unsupervised_samples) ``` #### 5. Create neural network model ``` layer_sizes = [50,50] assert len(layer_sizes) == len(num_samples) graphsage = GraphSAGE( layer_sizes=layer_sizes, generator=train_gen, aggregator=AttentionalAggregator, bias=True, dropout=0.0, normalize="l2", kernel_regularizer='l1' ) # Build the model and expose input and output sockets of graphsage, for node pair inputs: x_inp, x_out = graphsage.build() prediction = link_classification( output_dim=1, output_act="sigmoid", edge_embedding_method='ip' )(x_out) import os, datetime logdir = os.path.join("logs", datetime.datetime.now().strftime("osmFISH-%Y%m%d-%H%M%S")) tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir) earlystop_callback = tf.keras.callbacks.EarlyStopping(monitor='loss', mode='min', verbose=1, patience=0) model = keras.Model(inputs=x_inp, outputs=prediction) model.compile( optimizer=keras.optimizers.Adam(lr=0.5e-4), loss=keras.losses.binary_crossentropy, metrics=[keras.metrics.binary_accuracy] ) model.summary() ``` #### 6. 
Train neural network model ``` import tensorflow as tf os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' history = model.fit_generator( train_gen, epochs=epochs, verbose=1, use_multiprocessing=True, workers=6, shuffle=True, callbacks=[tensorboard_callback,earlystop_callback] ) ``` ### Extracting node embeddings ``` from sklearn.decomposition import PCA from sklearn.manifold import TSNE from stellargraph.mapper import GraphSAGENodeGenerator import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline x_inp_src = x_inp[0::2] x_out_src = x_out[0] embedding_model = keras.Model(inputs=x_inp_src, outputs=x_out_src) # Save the model embedding_model.save('../models/osmFISH_Codeluppi_et_al/nn_model.h5') # Recreate the exact same model purely from the file embedding_model = keras.models.load_model('../models/osmFISH_Codeluppi_et_al/nn_model.h5', custom_objects={'AttentionalAggregator':AttentionalAggregator}) embedding_model.compile( optimizer=keras.optimizers.Adam(lr=0.5e-4), loss=keras.losses.binary_crossentropy, metrics=[keras.metrics.binary_accuracy] ) node_gen = GraphSAGENodeGenerator(G, batch_size, num_samples, seed=42).flow(nodes) node_embeddings = embedding_model.predict_generator(node_gen, workers=12, verbose=1) node_embeddings.shape np.save('../results/osmFISH_et_al/embedding_osmFISH.npy',node_embeddings) quit() ```
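The `plotNeighbor` helper above sets the graph distance threshold `d_th` to the 97th percentile of each spot's nearest-neighbor distance (via a KDTree query with `k=2`, where the first hit is the point itself). The same quantity can be computed brute-force with plain numpy on a toy point set; this is a scipy-free sketch of the idea, not the notebook's KDTree implementation:

```python
import numpy as np

def nn_percentile(points: np.ndarray, pct: float = 97.0) -> float:
    """Percentile of each point's distance to its nearest *other* point."""
    # Pairwise distance matrix; mask the zero self-distances on the diagonal.
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)
    return float(np.percentile(dists.min(axis=1), pct))

# Toy grid: unit-spaced points, so every nearest-neighbor distance is exactly 1.
xs, ys = np.meshgrid(np.arange(5), np.arange(5))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
print(nn_percentile(pts))  # 1.0
```

The O(n^2) distance matrix is fine for a sketch; for the hundreds of thousands of mRNA spots in the osmFISH data, the KDTree approach used in the notebook is the right tool.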
# Antennal Lobe Demo In this notebook, we go through the following examples. 1. Simulating Larva Antennal Lobe with affinity values inferred using `OlfTrans` 2. Change Number of PNs per channel 3. Change PN Parameter 4. Change Connectivity between OSN Axon Terminal and PN ``` %load_ext autoreload %autoreload 2 import typing as tp import numpy as np from itertools import product from eoscircuits import al import olftrans as olf import olftrans.data import olftrans.fbl from olftrans.plot import plot_mat from neurokernel.LPU.InputProcessors.StepInputProcessor import StepInputProcessor import matplotlib.pyplot as plt import typing as tp import pandas as pd import seaborn as sns from eoscircuits.plot import plot_data, plot_spikes ``` ## Setup ``` receptor_names = olf.fbl.LARVA.affinities.loc['ethyl acetate'].index.values.astype(str) affinities = olf.fbl.LARVA.affinities.loc['ethyl acetate'].values dt = 1e-5 dur = 4 steps = int((dur+dt/2)//dt) t = np.arange(steps)*dt ``` # 1. Simulate Larva Antennal Lobe ``` cfg = al.ALConfig( affs=affinities, NP=1, NO=1, NPreLN=1, NPosteLN=1, NPostiLN=2, receptors=receptor_names, node_params=dict( osn_bsgs=dict(sigma=0.), osn_axts=dict(gamma=1.), postelns=dict(a2=0.012) ) ) al_circ = al.ALCircuit.create_from_config(cfg) fi = StepInputProcessor('conc', al_circ.inputs['conc'], 100., start=1, stop=3) fi, fo, lpu = al_circ.simulate( t, fi, record_var_list=[ ( 'I', sum(al_circ.config.osn_otps,[]) + sum(al_circ.config.osn_axts,[]) + sum(al_circ.config.postelns,[]) + sum(al_circ.config.postilns,[]) ), ('r', sum(al_circ.config.pns,[])), ('g', sum(al_circ.config.osn_alphas,[])), ('spike_state', sum(al_circ.config.osn_bsgs, [])) ]) %matplotlib inline otp_I = fo.get_output(var='I', uids=sum(al_circ.config.osn_otps,[])) axt_I = fo.get_output(var='I', uids=sum(al_circ.config.osn_axts,[])) eln_I = fo.get_output(var='I', uids=sum(al_circ.config.postelns,[])) iln_I = fo.get_output(var='I', uids=sum(al_circ.config.postilns,[])) bsg_spikes = 
fo.get_output(var='spike_state', uids=sum(al_circ.config.osn_bsgs,[])) pn_r = fo.get_output(var='r') alp_g = fo.get_output(var='g') fig,axes = plt.subplots(1,8, figsize=(25,5), gridspec_kw={'width_ratios': [1]+[10]*7, 'wspace':.5}) axes[0].imshow(np.log10(al_circ.config.affs)[:,None], cmap=plt.cm.gnuplot, origin='lower', aspect='auto') _ = plot_data(otp_I, t=t, ax=axes[1], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_spikes(bsg_spikes, ax=axes[2], markersize=2, color='k') _ = plot_data(alp_g, t=t, ax=axes[3], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'Cond A.U.'}) _ = plot_data(axt_I, t=t, ax=axes[4], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_data(eln_I, t=t, ax=axes[5], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_data(iln_I, t=t, ax=axes[6], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_data(pn_r, t=t, ax=axes[7], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'Spike Rate A.U.'}) for ax in axes[1:]: ax.set_xlabel('Time [sec]') axes[0].set_xticks([]) axes[0].set_title('$\log_{10}\mathbf{b}_o$') axes[0].set_yticks(np.arange(al_circ.config.NR)) axes[0].set_yticklabels(al_circ.config.receptors) axes[2].set_xlim([t.min(), t.max()]) axes[1].set_title('OSN-OTP') axes[2].set_title('OSN-BSG') axes[3].set_title('OSN-Alpha') axes[4].set_title('OSN-Axon Terminal') axes[5].set_title('Post-eLN Terminal') axes[6].set_title('Post-iLN Terminal') axes[7].set_title('PN') axes[2].grid() fig.suptitle('Larva Antennal Lobe, Ethyl Acetate at 100ppm', fontsize=15) plt.show() ``` # 2. 
Change Number of PNs - 2 PNs per glomerulus for given receptor type ``` # use 2 PNs for each receptor channel al_circ = al_circ.set_neuron_number(node_type='pns', number=2) fi = StepInputProcessor('conc', al_circ.inputs['conc'], 100., start=1, stop=3) fi, fo, lpu = al_circ.simulate( t, fi, record_var_list=[ ( 'I', sum(al_circ.config.osn_otps,[]) + sum(al_circ.config.osn_axts,[]) + sum(al_circ.config.postelns,[]) + sum(al_circ.config.postilns,[]) ), ('r', sum(al_circ.config.pns,[])), ('g', sum(al_circ.config.osn_alphas,[])), ('spike_state', sum(al_circ.config.osn_bsgs, [])) ]) %matplotlib inline otp_I = fo.get_output(var='I', uids=sum(al_circ.config.osn_otps,[])) axt_I = fo.get_output(var='I', uids=sum(al_circ.config.osn_axts,[])) eln_I = fo.get_output(var='I', uids=sum(al_circ.config.postelns,[])) iln_I = fo.get_output(var='I', uids=sum(al_circ.config.postilns,[])) bsg_spikes = fo.get_output(var='spike_state', uids=sum(al_circ.config.osn_bsgs,[])) pn_r = fo.get_output(var='r') alp_g = fo.get_output(var='g') fig,axes = plt.subplots(1,8, figsize=(25,5), gridspec_kw={'width_ratios': [1]+[10]*7, 'wspace':.5}) axes[0].imshow(np.log10(al_circ.config.affs)[:,None], cmap=plt.cm.gnuplot, origin='lower', aspect='auto') _ = plot_data(otp_I, t=t, ax=axes[1], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_spikes(bsg_spikes, ax=axes[2], markersize=2, color='k') _ = plot_data(alp_g, t=t, ax=axes[3], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'Cond A.U.'}) _ = plot_data(axt_I, t=t, ax=axes[4], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_data(eln_I, t=t, ax=axes[5], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_data(iln_I, t=t, ax=axes[6], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_data(pn_r, t=t, ax=axes[7], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'Spike Rate A.U.'}) for ax in axes[1:]: ax.set_xlabel('Time [sec]') axes[0].set_xticks([])
axes[0].set_title('$\mathbf{b}_o$') axes[0].set_yticks(np.arange(al_circ.config.NR)) axes[0].set_yticklabels(al_circ.config.receptors) axes[2].set_xlim([t.min(), t.max()]) axes[1].set_title('OSN-OTP') axes[2].set_title('OSN-BSG') axes[3].set_title('OSN-Alpha') axes[4].set_title('OSN-Axon Terminal') axes[5].set_title('Post-eLN Terminal') axes[6].set_title('Post-iLN Terminal') axes[7].set_title('PN') axes[2].grid() fig.suptitle('Larva Antennal Lobe, Ethyl Acetate at 100ppm - 2 Uniglomerular PNs per Channel', fontsize=15) plt.show() ``` # 3. Change PN Parameter - set a higher spike rate threshold ``` al_circ.set_node_params(node_type='pns', key='threshold', value=.1) fi = StepInputProcessor('conc', al_circ.inputs['conc'], 100., start=1, stop=3) fi, fo, lpu = al_circ.simulate( t, fi, record_var_list=[ ( 'I', sum(al_circ.config.osn_otps,[]) + sum(al_circ.config.osn_axts,[]) + sum(al_circ.config.postelns,[]) + sum(al_circ.config.postilns,[]) ), ('r', sum(al_circ.config.pns,[])), ('g', sum(al_circ.config.osn_alphas,[])), ('spike_state', sum(al_circ.config.osn_bsgs, [])) ]) %matplotlib inline otp_I = fo.get_output(var='I', uids=sum(al_circ.config.osn_otps,[])) axt_I = fo.get_output(var='I', uids=sum(al_circ.config.osn_axts,[])) eln_I = fo.get_output(var='I', uids=sum(al_circ.config.postelns,[])) iln_I = fo.get_output(var='I', uids=sum(al_circ.config.postilns,[])) bsg_spikes = fo.get_output(var='spike_state', uids=sum(al_circ.config.osn_bsgs,[])) pn_r = fo.get_output(var='r') alp_g = fo.get_output(var='g') fig,axes = plt.subplots(1,8, figsize=(25,5), gridspec_kw={'width_ratios': [1]+[10]*7, 'wspace':.5}) axes[0].imshow(np.log10(al_circ.config.affs)[:,None], cmap=plt.cm.gnuplot, origin='lower', aspect='auto') axes[0].set_xticks([]) axes[0].set_title('$\mathbf{b}_o$') _ = plot_data(otp_I, t=t, ax=axes[1], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_spikes(bsg_spikes, ax=axes[2], markersize=2, color='k') _ = plot_data(alp_g, t=t, ax=axes[3], 
cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'Cond A.U.'}) _ = plot_data(axt_I, t=t, ax=axes[4], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_data(eln_I, t=t, ax=axes[5], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_data(iln_I, t=t, ax=axes[6], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_data(pn_r, t=t, ax=axes[7], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'Spike Rate A.U.'}) axes[0].set_yticks(np.arange(al_circ.config.NR)) axes[0].set_yticklabels(al_circ.config.receptors) axes[2].set_xlim([t.min(), t.max()]) axes[1].set_title('OSN-OTP') axes[2].set_title('OSN-BSG') axes[3].set_title('OSN-Alpha') axes[4].set_title('OSN-Axon Terminal') axes[5].set_title('Post-eLN Terminal') axes[6].set_title('Post-iLN Terminal') axes[7].set_title('PN') axes[2].grid() for ax in axes[1:]: ax.set_xlabel('Time [sec]') fig.suptitle('Larva Antennal Lobe, Ethyl Acetate at 100ppm - Higher Spike Rate Threshold', fontsize=15) plt.show() ``` # 4.
Change Axon-Terminal to PN Routing - Ablate connections from 3 receptor channels to the corresponding PNs ``` # disconnect axon terminal from pn in the first 3 receptor channels al_circ = al_circ.set_routing([None, None, None], 'axt_to_pn', receptor=al_circ.config.receptors[:3]) fi = StepInputProcessor('conc', al_circ.inputs['conc'], 100., start=1, stop=3) fi, fo, lpu = al_circ.simulate( t, fi, record_var_list=[ ( 'I', sum(al_circ.config.osn_otps,[]) + sum(al_circ.config.osn_axts,[]) + sum(al_circ.config.postelns,[]) + sum(al_circ.config.postilns,[]) ), ('r', sum(al_circ.config.pns,[])), ('g', sum(al_circ.config.osn_alphas,[])), ('spike_state', sum(al_circ.config.osn_bsgs, [])) ]) %matplotlib inline otp_I = fo.get_output(var='I', uids=sum(al_circ.config.osn_otps,[])) axt_I = fo.get_output(var='I', uids=sum(al_circ.config.osn_axts,[])) eln_I = fo.get_output(var='I', uids=sum(al_circ.config.postelns,[])) iln_I = fo.get_output(var='I', uids=sum(al_circ.config.postilns,[])) bsg_spikes = fo.get_output(var='spike_state', uids=sum(al_circ.config.osn_bsgs,[])) pn_r = fo.get_output(var='r') alp_g = fo.get_output(var='g') fig,axes = plt.subplots(1,8, figsize=(25,5), gridspec_kw={'width_ratios': [1]+[10]*7, 'wspace':.5}) axes[0].imshow(np.log10(al_circ.config.affs)[:,None], cmap=plt.cm.gnuplot, origin='lower', aspect='auto') axes[0].set_xticks([]) axes[0].set_title('$\mathbf{b}_o$') _ = plot_data(otp_I, t=t, ax=axes[1], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_spikes(bsg_spikes, ax=axes[2], markersize=2, color='k') _ = plot_data(alp_g, t=t, ax=axes[3], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'Cond A.U.'}) _ = plot_data(axt_I, t=t, ax=axes[4], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_data(eln_I, t=t, ax=axes[5], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_data(iln_I, t=t, ax=axes[6], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'I [$\mu A$]'}) _ = plot_data(pn_r, t=t,
ax=axes[7], cmap=plt.cm.gnuplot, cax=True, cbar_kw={'label': 'Spike Rate A.U.'}) for ax in axes[1:]: ax.set_xlabel('Time [sec]') axes[0].set_yticks(np.arange(al_circ.config.NR)) axes[0].set_yticklabels(al_circ.config.receptors) axes[2].set_xlim([t.min(), t.max()]) axes[1].set_title('OSN-OTP') axes[2].set_title('OSN-BSG') axes[3].set_title('OSN-Alpha') axes[4].set_title('OSN-Axon Terminal') axes[5].set_title('Post-eLN') axes[6].set_title('Post-iLN') axes[7].set_title('PN') axes[2].grid() fig.suptitle('Larva Antennal Lobe, Ethyl Acetate at 100ppm - First 3 OSN to PN Connections Ablated', fontsize=15) plt.show() ```
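Throughout these cells, nested uid lists such as `al_circ.config.osn_otps` are flattened with `sum(nested, [])`. A quick self-contained illustration of that idiom (the uid strings below are made up for the example), alongside the `itertools.chain` equivalent, which avoids the quadratic cost of repeated list concatenation:

```python
from itertools import chain

# hypothetical nested uid lists, one sub-list per receptor channel
nested = [['osn/or1/0', 'osn/or1/1'], ['osn/or2/0'], ['osn/or3/0', 'osn/or3/1']]

flat_sum = sum(nested, [])                       # idiom used in the notebook
flat_chain = list(chain.from_iterable(nested))   # linear-time equivalent

assert flat_sum == flat_chain
```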
# Full-time Scores in the Premier League ``` import pandas as pd import numpy as np df = pd.read_csv("../data/fivethirtyeight/spi_matches.csv") # df = df[(df['league_id'] == 2412) | (df['league_id'] == 2411)] df = df[df['league_id'] == 2411] df = df[["season", "league_id", "team1", "team2", "score1", "score2", "date"]].dropna() ``` ## Exploratory Data Analysis ``` df[["score1", "score2"]].mean() df[df['season'] == 2020][["score1", "score2"]].mean() ``` Considerably more goals are scored at home than away, but the 2020-21 season seems to be exempt from this advantage. The Covid-19 pandemic, which kept fans out of the stadiums, very likely contributed to this. ``` import matplotlib.pyplot as plt import matplotlib as mpl mpl.rcParams['figure.dpi'] = 300 from highlight_text import fig_text body_font = "Open Sans" watermark_font = "DejaVu Sans" text_color = "w" background = "#282B2F" title_font = "DejaVu Sans" mpl.rcParams['xtick.color'] = text_color mpl.rcParams['ytick.color'] = text_color mpl.rcParams['text.color'] = text_color mpl.rcParams['axes.edgecolor'] = text_color mpl.rcParams['xtick.labelsize'] = 5 mpl.rcParams['ytick.labelsize'] = 6 from scipy.stats import poisson fig, ax = plt.subplots(tight_layout=True) fig.set_facecolor(background) ax.patch.set_alpha(0) max_goals = 8 _, _, _ = ax.hist( df[df['season'] != 2020][["score1", "score2"]].values, label=["Home", "Away"], bins=np.arange(0, max_goals)-.5, density=True, color=['#016DBA', '#B82A2A'], edgecolor='w', linewidth=0.25, alpha=1) home_poisson = poisson.pmf(range(max_goals), df["score1"].mean()) away_poisson = poisson.pmf(range(max_goals), df["score2"].mean()) ax.plot( [i for i in range(0, max_goals)], home_poisson, linestyle="-", color="#01497c", label="Home Poisson", ) ax.plot( [i for i in range(0, max_goals)], away_poisson, linestyle="-", color="#902121", label="Away Poisson", ) ax.set_xticks(np.arange(0, max_goals), minor=False) ax.set_xlabel( "Goals", fontfamily=title_font,
fontweight="bold", fontsize=8, color=text_color) ax.set_ylabel( "Proportion of matches", fontfamily=title_font, fontweight="bold", fontsize=8, color=text_color) fig_text( x=0.1, y=1.025, s="Number of Goals Scored Per Match at <Home> and <Away>.", highlight_textprops=[ {"color": '#016DBA'}, {"color": '#B82A2A'}, ], fontweight="regular", fontsize=12, fontfamily=title_font, color=text_color, alpha=1) fig_text( x=0.8, y=-0.02, s="Created by <Paul Fournier>", highlight_textprops=[{"fontstyle": "italic"}], fontsize=6, fontfamily=watermark_font, color=text_color) plt.show() fig, ax = plt.subplots(tight_layout=True) fig.set_facecolor(background) ax.patch.set_alpha(0) max_goals = 8 _, _, _ = ax.hist( df[df['season'] == 2020][["score1", "score2"]].values, label=["Home", "Away"], bins=np.arange(0, max_goals)-.5, density=True, color=['#016DBA', '#B82A2A'], edgecolor='w', linewidth=0.25, alpha=1) # fit the Poisson overlays to the 2020-21 season only, to match the histogram home_poisson = poisson.pmf(range(max_goals), df[df['season'] == 2020]["score1"].mean()) away_poisson = poisson.pmf(range(max_goals), df[df['season'] == 2020]["score2"].mean()) ax.plot( [i for i in range(0, max_goals)], home_poisson, linestyle="-", color="#01497c", label="Home Poisson", ) ax.plot( [i for i in range(0, max_goals)], away_poisson, linestyle="-", color="#902121", label="Away Poisson", ) ax.set_xticks(np.arange(0, max_goals), minor=False) ax.set_xlabel( "Goals", fontfamily=title_font, fontweight="bold", fontsize=8, color=text_color) ax.set_ylabel( "Proportion of matches", fontfamily=title_font, fontweight="bold", fontsize=8, color=text_color) fig_text(x=0.1, y=1.025, s="Goals Scored at <Home> and <Away> during the 2020-21 Season.", highlight_textprops=[ {"color": '#016DBA'}, {"color": '#B82A2A'}, ], fontweight="regular", fontsize=12, fontfamily=title_font, color=text_color, alpha=1) fig_text( x=0.8, y=-0.02, s="Created by <Paul Fournier>", highlight_textprops=[{"fontstyle": "italic"}], fontsize=6, fontfamily=watermark_font, color=text_color) plt.show() mpl.rcParams['xtick.labelsize'] = 6
mpl.rcParams['ytick.labelsize'] = 6 fig, ax = plt.subplots(tight_layout=True) fig.set_facecolor(background) ax.patch.set_alpha(0) heat = np.zeros((7, 7)) for i in range(7): for j in range(7): heat[6 - i, j] = df[(df["score1"] == i) & (df["score2"] == j)].shape[0] for i in range(7): for j in range(7): text = ax.text(j, i, np.round(heat[i, j]/np.sum(heat), 2), ha="center", va="center") plt.imshow(heat, cmap='magma_r', interpolation='nearest') ax.set_xticks(np.arange(0, 7)) ax.set_yticks(np.arange(0, 7)) ax.set_xticklabels(np.arange(0, 7)) ax.set_yticklabels(np.flip(np.arange(0, 7))) ax.set_xlabel( f"Away Goals", fontfamily=title_font, fontweight="bold", fontsize=7, color=text_color) ax.set_ylabel( f"Home Goals", fontfamily=title_font, fontweight="bold", fontsize=7, color=text_color) fig_text(x=0.22, y=1.04, s=f"Distribution of historical scorelines", fontweight="regular", fontsize=12, fontfamily=title_font, color=text_color, alpha=1) fig_text( x=0.6, y=-0.02, s="Created by <Paul Fournier>", highlight_textprops=[{"fontstyle": "italic"}], fontsize=6, fontfamily=watermark_font, color=text_color) plt.show() ```
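The heatmap above is the empirical counterpart of a simple generative model: if home and away goals are independent Poisson variables, the probability of any scoreline is just a product of two pmf values. A minimal stdlib-only sketch (the means 1.5 and 1.2 are illustrative, not fitted to this data):

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def scoreline_probs(home_mean, away_mean, max_goals=7):
    """P(home = i, away = j) under independent Poisson goal counts."""
    return [[poisson_pmf(i, home_mean) * poisson_pmf(j, away_mean)
             for j in range(max_goals)]
            for i in range(max_goals)]

probs = scoreline_probs(1.5, 1.2)
# home wins when i > j (mass below the diagonal), draws on the diagonal
p_home_win = sum(probs[i][j] for i in range(7) for j in range(7) if i > j)
p_draw = sum(probs[i][i] for i in range(7))
```

Summing the matrix above, below, and on the diagonal gives model-implied win/draw/loss probabilities to compare against the historical heatmap.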
## **TODO:** Set the value of `URL` to the URL from your learning materials ``` URL = None import os assert URL and (type(URL) is str), "Be sure to initialize URL using the value from your learning materials" os.environ['URL'] = URL %%bash wget -q $URL -O ./data.zip mkdir -p data find *.zip | xargs unzip -o -d data/ ``` ## Use PyTorch `Dataset` and `Dataloader` with a structured dataset ``` import os import pandas as pd import torch as pt from torch import nn from torch.utils.data import DataLoader from torch.utils.data import TensorDataset pt.set_default_dtype(pt.float64) ``` Read the files that match `part-*.csv` from the `data` subdirectory into a Pandas data frame named `df`. ``` from pathlib import Path df = pd.concat( pd.read_csv(file) for file in Path('data/').glob('part-*.csv') ) ``` ## Explore the `df` data frame, including the column names, the first few rows of the dataset, and the data frame's memory usage. ``` ``` ## Drop the `origindatetime_tr` column from the data frame. For now you are going to predict the taxi fare just based on the lat/lon coordinates of the pickup and the drop off locations. Remove the `origindatetime_tr` column from the data frame in your working dataset. ``` ``` ## Sample 10% of your working dataset into a test dataset data frame * **hint:** use the Pandas `sample` function with the dataframe. Specify a value for the `random_state` to achieve reproducibility. ``` ``` ## Drop the rows that exist in your test dataset from the working dataset to produce a training dataset. * **hint** DataFrame's `drop` function can use index values from a data frame to drop specific rows. 
``` ``` ## Define 2 Python lists: 1st for the feature column names; 2nd for the target column name ``` ``` ## Create `X` and `y` tensors with the values of your feature and target columns in the training dataset ``` ``` ## Create a `TensorDataset` instance with the `y` and `X` tensors (in that order) ``` ``` ## Create a `DataLoader` instance specifying a custom batch size A batch size of `2 ** 18 = 262,144` should work well. ``` ``` ## Create a model using `nn.Linear` ``` ``` ## Create an instance of the `AdamW` optimizer for the model ``` ``` ## Declare your `forward`, `loss` and `metric` functions * **hint:** if you are tired of computing MSE by hand, you can use `nn.functional.mse_loss` instead. ``` ``` ## Iterate over the batches returned by your `DataLoader` instance For every step of gradient descent, print out the MSE, RMSE, and the batch index * **hint:** you can use Python's `enumerate` with an iterable * **hint:** each batch returned by `enumerate` has the same contents as your `TensorDataset` instance ``` ``` ## Implement 10 epochs of gradient descent training For every step of gradient descent, print out the MSE, RMSE, epoch index, and batch index. * **hint:** you can call `enumerate(DataLoader)` repeatedly in a `for` loop ``` ``` Copyright 2020 CounterFactual.AI LLC. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
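For reference, the batches-then-epochs structure the exercises build toward can be sketched without PyTorch. Everything below is illustrative: the synthetic data, the learning rate, and the batch size are arbitrary choices standing in for the taxi-fare dataset and the suggested `2 ** 18` batch size.

```python
import numpy as np

# synthetic stand-in for the taxi-fare data: 4 coordinate features, linear target
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
true_w = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(4)      # model parameters (cf. nn.Linear without a bias term)
lr = 0.1             # learning rate
batch_size = 128     # toy-scale batch size

def forward(Xb, w):
    return Xb @ w

def mse(y_pred, y_true):
    return float(np.mean((y_pred - y_true) ** 2))

for epoch in range(10):
    for batch_idx, start in enumerate(range(0, len(X), batch_size)):
        Xb = X[start:start + batch_size]
        yb = y[start:start + batch_size]
        y_pred = forward(Xb, w)
        loss = mse(y_pred, yb)
        # gradient of MSE w.r.t. w, then one descent step
        w -= lr * 2 * Xb.T @ (y_pred - yb) / len(Xb)
```

The PyTorch version replaces the manual slicing with iteration over a `DataLoader`, the manual gradient with `loss.backward()`, and the update line with `optimizer.step()`.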
##### Copyright 2021 The IREE Authors ``` #@title Licensed under the Apache License v2.0 with LLVM Exceptions. # See https://llvm.org/LICENSE.txt for license information. # SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception ``` # Variables and State This notebook 1. Creates a TensorFlow program with basic tf.Variable use 2. Imports that program into IREE's compiler 3. Compiles the imported program to an IREE VM bytecode module 4. Tests running the compiled VM module using IREE's runtime 5. Downloads compilation artifacts for use with the native (C API) sample application ``` #@title General setup import os import tempfile ARTIFACTS_DIR = os.path.join(tempfile.gettempdir(), "iree", "colab_artifacts") os.makedirs(ARTIFACTS_DIR, exist_ok=True) print(f"Using artifacts directory '{ARTIFACTS_DIR}'") ``` ## Create a program using TensorFlow and import it into IREE This program uses `tf.Variable` to track state internal to the program then exports functions which can be used to interact with that variable. Note that each function we want to be callable from our compiled program needs to use `@tf.function` with an `input_signature` specified. 
References: * ["Introduction to Variables" Guide](https://www.tensorflow.org/guide/variable) * [`tf.Variable` reference](https://www.tensorflow.org/api_docs/python/tf/Variable) * [`tf.function` reference](https://www.tensorflow.org/api_docs/python/tf/function) ``` #@title Define a simple "counter" TensorFlow module import tensorflow as tf class CounterModule(tf.Module): def __init__(self): super().__init__() self.counter = tf.Variable(0) @tf.function(input_signature=[]) def get_value(self): return self.counter @tf.function(input_signature=[tf.TensorSpec([], tf.int32)]) def set_value(self, new_value): self.counter.assign(new_value) @tf.function(input_signature=[tf.TensorSpec([], tf.int32)]) def add_to_value(self, x): self.counter.assign(self.counter + x) @tf.function(input_signature=[]) def reset_value(self): self.set_value(0) %%capture !python -m pip install iree-compiler-snapshot iree-tools-tf-snapshot -f https://github.com/google/iree/releases #@title Import the TensorFlow program into IREE as MLIR from IPython.display import clear_output from iree.compiler import tf as tfc compiler_module = tfc.compile_module(CounterModule(), import_only=True) clear_output() # Skip over TensorFlow's output. # Print the imported MLIR to see how the compiler views this TensorFlow program. # Note IREE's `flow.variable` ops and the public (exported) functions. print("Counter MLIR:\n```\n", compiler_module.decode("utf-8"), "```\n") # Save the imported MLIR to disk. 
imported_mlir_path = os.path.join(ARTIFACTS_DIR, "counter.mlir") with open(imported_mlir_path, "wt") as output_file: output_file.write(compiler_module.decode("utf-8")) print(f"Wrote MLIR to path '{imported_mlir_path}'") ``` ## Test the imported program _Note: you can stop after each step and use intermediate outputs with other tools outside of Colab._ _See the [README](https://github.com/google/iree/tree/main/iree/samples/variables_and_state#changing-compilation-options) for more details and example command line instructions._ * _The "imported MLIR" can be used by IREE's generic compiler tools_ * _The "flatbuffer blob" can be saved and used by runtime applications_ _The specific point at which you switch from Python to native tools will depend on your project._ ``` %%capture !python -m pip install iree-compiler-snapshot -f https://github.com/google/iree/releases #@title Compile the imported MLIR further into an IREE VM bytecode module from iree.compiler import compile_str flatbuffer_blob = compile_str(compiler_module, target_backends=["vmvx"]) # Save the compiled bytecode module to disk.
flatbuffer_path = os.path.join(ARTIFACTS_DIR, "counter_vmvx.vmfb") with open(flatbuffer_path, "wb") as output_file: output_file.write(flatbuffer_blob) print(f"Wrote .vmfb to path '{flatbuffer_path}'") %%capture !python -m pip install iree-runtime-snapshot -f https://github.com/google/iree/releases #@title Test running the compiled VM module using IREE's runtime from iree import runtime as ireert vm_module = ireert.VmModule.from_flatbuffer(flatbuffer_blob) config = ireert.Config("vmvx") ctx = ireert.SystemContext(config=config) ctx.add_vm_module(vm_module) # Our @tf.functions are accessible by name on the module named 'module' counter = ctx.modules.module print(counter.get_value()) counter.set_value(101) print(counter.get_value()) counter.add_to_value(20) print(counter.get_value()) counter.add_to_value(-50) print(counter.get_value()) counter.reset_value() print(counter.get_value()) ``` ## Download compilation artifacts ``` ARTIFACTS_ZIP = "/tmp/variables_and_state_colab_artifacts.zip" print(f"Zipping '{ARTIFACTS_DIR}' to '{ARTIFACTS_ZIP}' for download...") !cd {ARTIFACTS_DIR} && zip -r {ARTIFACTS_ZIP} . # Note: you can also download files using Colab's file explorer from google.colab import files print("Downloading the artifacts zip file...") files.download(ARTIFACTS_ZIP) ```
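The runtime calls above mutate the module's internal `tf.Variable` between invocations. A plain-Python model of the same stateful API makes the expected sequence of printed values explicit (this class is an illustration of the semantics, not part of IREE or TensorFlow):

```python
class Counter:
    """Pure-Python stand-in for the compiled module's exported functions."""
    def __init__(self):
        self._value = 0          # mirrors tf.Variable(0)
    def get_value(self):
        return self._value
    def set_value(self, new_value):
        self._value = new_value
    def add_to_value(self, x):
        self._value += x
    def reset_value(self):
        self.set_value(0)

counter = Counter()
history = [counter.get_value()]                               # 0
counter.set_value(101); history.append(counter.get_value())   # 101
counter.add_to_value(20); history.append(counter.get_value()) # 121
counter.add_to_value(-50); history.append(counter.get_value())# 71
counter.reset_value(); history.append(counter.get_value())    # 0
```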
``` !pip install exetera # Copyright 2020 KCL-BMEIS - King's College London # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from datetime import datetime, timezone import time from collections import defaultdict import numpy as np import numba import h5py from exeteracovid.algorithms.age_from_year_of_birth import calculate_age_from_year_of_birth_fast from exeteracovid.algorithms.weight_height_bmi import weight_height_bmi_fast_1 from exeteracovid.algorithms.inconsistent_symptoms import check_inconsistent_symptoms_1 from exeteracovid.algorithms.temperature import validate_temperature_1 from exeteracovid.algorithms.combined_healthcare_worker import combined_hcw_with_contact from exetera.core import persistence from exetera.core.persistence import DataStore from exetera.core.session import Session from exetera.core import readerwriter as rw from exetera.core import utils def log(*a, **kwa): print(*a, **kwa) def postprocess(dataset, destination, timestamp=None, flags=None): if flags is None: flags = set() do_daily_asmts = 'daily' in flags has_patients = 'patients' in dataset.keys() has_assessments = 'assessments' in dataset.keys() has_tests = 'tests' in dataset.keys() has_diet = 'diet' in dataset.keys() sort_enabled = lambda x: True process_enabled = lambda x: True sort_patients = sort_enabled(flags) and True sort_assessments = sort_enabled(flags) and True sort_tests = sort_enabled(flags) and True sort_diet = sort_enabled(flags) and True make_assessment_patient_id_fkey = 
process_enabled(flags) and True year_from_age = process_enabled(flags) and True clean_weight_height_bmi = process_enabled(flags) and True health_worker_with_contact = process_enabled(flags) and True clean_temperatures = process_enabled(flags) and True check_symptoms = process_enabled(flags) and True create_daily = process_enabled(flags) and do_daily_asmts make_patient_level_assessment_metrics = process_enabled(flags) and True make_patient_level_daily_assessment_metrics = process_enabled(flags) and do_daily_asmts make_new_test_level_metrics = process_enabled(flags) and True make_diet_level_metrics = True make_healthy_diet_index = True ds = DataStore(timestamp=timestamp) s = Session() # patients ================================================================ sorted_patients_src = None if has_patients: patients_src = dataset['patients'] write_mode = 'write' if 'patients' not in destination.keys(): patients_dest = ds.get_or_create_group(destination, 'patients') sorted_patients_src = patients_dest # Patient sort # ============ if sort_patients: duplicate_filter = \ persistence.filter_duplicate_fields(ds.get_reader(patients_src['id'])[:]) for k in patients_src.keys(): t0 = time.time() r = ds.get_reader(patients_src[k]) w = r.get_writer(patients_dest, k) ds.apply_filter(duplicate_filter, r, w) print(f"'{k}' filtered in {time.time() - t0}s") print(np.count_nonzero(duplicate_filter == True), np.count_nonzero(duplicate_filter == False)) sort_keys = ('id',) ds.sort_on( patients_dest, patients_dest, sort_keys, write_mode='overwrite') # Patient processing # ================== if year_from_age: log("year of birth -> age; 18 to 90 filter") t0 = time.time() age = ds.get_numeric_writer(patients_dest, 'age', 'uint32', write_mode) age_filter = ds.get_numeric_writer(patients_dest, 'age_filter', 'bool', write_mode) age_16_to_90 = ds.get_numeric_writer(patients_dest, '16_to_90_years', 'bool', write_mode) print('year_of_birth:', patients_dest['year_of_birth']) for k in 
patients_dest['year_of_birth'].attrs.keys(): print(k, patients_dest['year_of_birth'].attrs[k]) calculate_age_from_year_of_birth_fast( ds, 16, 90, patients_dest['year_of_birth'], patients_dest['year_of_birth_valid'], age, age_filter, age_16_to_90, 2020) log(f"completed in {time.time() - t0}") print('age_filter count:', np.sum(patients_dest['age_filter']['values'][:])) print('16_to_90_years count:', np.sum(patients_dest['16_to_90_years']['values'][:])) if clean_weight_height_bmi: log("height / weight / bmi; standard range filters") t0 = time.time() weights_clean = ds.get_numeric_writer(patients_dest, 'weight_kg_clean', 'float32', write_mode) weights_filter = ds.get_numeric_writer(patients_dest, '40_to_200_kg', 'bool', write_mode) heights_clean = ds.get_numeric_writer(patients_dest, 'height_cm_clean', 'float32', write_mode) heights_filter = ds.get_numeric_writer(patients_dest, '110_to_220_cm', 'bool', write_mode) bmis_clean = ds.get_numeric_writer(patients_dest, 'bmi_clean', 'float32', write_mode) bmis_filter = ds.get_numeric_writer(patients_dest, '15_to_55_bmi', 'bool', write_mode) weight_height_bmi_fast_1(ds, 40, 200, 110, 220, 15, 55, None, None, None, None, patients_dest['weight_kg'], patients_dest['weight_kg_valid'], patients_dest['height_cm'], patients_dest['height_cm_valid'], patients_dest['bmi'], patients_dest['bmi_valid'], weights_clean, weights_filter, None, heights_clean, heights_filter, None, bmis_clean, bmis_filter, None) log(f"completed in {time.time() - t0}") if health_worker_with_contact: with utils.Timer("health_worker_with_contact field"): #writer = ds.get_categorical_writer(patients_dest, 'health_worker_with_contact', 'int8') combined_hcw_with_contact(ds, ds.get_reader(patients_dest['healthcare_professional']), ds.get_reader(patients_dest['contact_health_worker']), ds.get_reader(patients_dest['is_carer_for_community']), patients_dest, 'health_worker_with_contact') # assessments ============================================================= 
sorted_assessments_src = None if has_assessments: assessments_src = dataset['assessments'] if 'assessments' not in destination.keys(): assessments_dest = ds.get_or_create_group(destination, 'assessments') sorted_assessments_src = assessments_dest if sort_assessments: sort_keys = ('patient_id', 'created_at') with utils.Timer("sorting assessments"): ds.sort_on( assessments_src, assessments_dest, sort_keys) if has_patients: if make_assessment_patient_id_fkey: print("creating 'assessment_patient_id_fkey' foreign key index for 'patient_id'") t0 = time.time() patient_ids = ds.get_reader(sorted_patients_src['id']) assessment_patient_ids =\ ds.get_reader(sorted_assessments_src['patient_id']) assessment_patient_id_fkey =\ ds.get_numeric_writer(assessments_dest, 'assessment_patient_id_fkey', 'int64') ds.get_index(patient_ids, assessment_patient_ids, assessment_patient_id_fkey) print(f"completed in {time.time() - t0}s") if clean_temperatures: print("clean temperatures") t0 = time.time() temps = ds.get_reader(sorted_assessments_src['temperature']) temp_units = ds.get_reader(sorted_assessments_src['temperature_unit']) temps_valid = ds.get_reader(sorted_assessments_src['temperature_valid']) dest_temps = temps.get_writer(assessments_dest, 'temperature_c_clean', write_mode) dest_temps_valid =\ temps_valid.get_writer(assessments_dest, 'temperature_35_to_42_inclusive', write_mode) dest_temps_modified =\ temps_valid.get_writer(assessments_dest, 'temperature_modified', write_mode) validate_temperature_1(35.0, 42.0, temps, temp_units, temps_valid, dest_temps, dest_temps_valid, dest_temps_modified) print(f"temperature cleaning done in {time.time() - t0}") if check_symptoms: print('check inconsistent health_status') t0 = time.time() check_inconsistent_symptoms_1(ds, sorted_assessments_src, assessments_dest) print(time.time() - t0) # tests =================================================================== if has_tests: if sort_tests: tests_src = dataset['tests'] tests_dest = 
ds.get_or_create_group(destination, 'tests') sort_keys = ('patient_id', 'created_at') ds.sort_on(tests_src, tests_dest, sort_keys) # diet ==================================================================== if has_diet: diet_src = dataset['diet'] if 'diet' not in destination.keys(): diet_dest = ds.get_or_create_group(destination, 'diet') sorted_diet_src = diet_dest if sort_diet: sort_keys = ('patient_id', 'display_name', 'id') ds.sort_on(diet_src, diet_dest, sort_keys) if has_assessments: if do_daily_asmts: daily_assessments_dest = ds.get_or_create_group(destination, 'daily_assessments') # post process patients # TODO: need an transaction table print(patients_src.keys()) print(dataset['assessments'].keys()) print(dataset['tests'].keys()) # write_mode = 'overwrite' write_mode = 'write' # Daily assessments # ================= if has_assessments: if create_daily: print("generate daily assessments") patient_ids = ds.get_reader(sorted_assessments_src['patient_id']) created_at_days = ds.get_reader(sorted_assessments_src['created_at_day']) raw_created_at_days = created_at_days[:] if 'assessment_patient_id_fkey' in assessments_src.keys(): patient_id_index = assessments_src['assessment_patient_id_fkey'] else: patient_id_index = assessments_dest['assessment_patient_id_fkey'] patient_id_indices = ds.get_reader(patient_id_index) raw_patient_id_indices = patient_id_indices[:] print("Calculating patient id index spans") t0 = time.time() patient_id_index_spans = ds.get_spans(fields=(raw_patient_id_indices, raw_created_at_days)) print(f"Calculated {len(patient_id_index_spans)-1} spans in {time.time() - t0}s") print("Applying spans to 'health_status'") t0 = time.time() default_behavour_overrides = { 'id': ds.apply_spans_last, 'patient_id': ds.apply_spans_last, 'patient_index': ds.apply_spans_last, 'created_at': ds.apply_spans_last, 'created_at_day': ds.apply_spans_last, 'updated_at': ds.apply_spans_last, 'updated_at_day': ds.apply_spans_last, 'version': ds.apply_spans_max, 
'country_code': ds.apply_spans_first, 'date_test_occurred': None, 'date_test_occurred_guess': None, 'date_test_occurred_day': None, 'date_test_occurred_set': None, } for k in sorted_assessments_src.keys(): t1 = time.time() reader = ds.get_reader(sorted_assessments_src[k]) if k in default_behavour_overrides: apply_span_fn = default_behavour_overrides[k] if apply_span_fn is not None: apply_span_fn(patient_id_index_spans, reader, reader.get_writer(daily_assessments_dest, k)) print(f" Field {k} aggregated in {time.time() - t1}s") else: print(f" Skipping field {k}") else: if isinstance(reader, rw.CategoricalReader): ds.apply_spans_max( patient_id_index_spans, reader, reader.get_writer(daily_assessments_dest, k)) print(f" Field {k} aggregated in {time.time() - t1}s") elif isinstance(reader, rw.IndexedStringReader): ds.apply_spans_concat( patient_id_index_spans, reader, reader.get_writer(daily_assessments_dest, k)) print(f" Field {k} aggregated in {time.time() - t1}s") elif isinstance(reader, rw.NumericReader): ds.apply_spans_max( patient_id_index_spans, reader, reader.get_writer(daily_assessments_dest, k)) print(f" Field {k} aggregated in {time.time() - t1}s") else: print(f" No function for {k}") print(f"apply_spans completed in {time.time() - t0}s") if has_patients and has_assessments: if make_patient_level_assessment_metrics: if 'assessment_patient_id_fkey' in assessments_dest: src = assessments_dest['assessment_patient_id_fkey'] else: src = assessments_src['assessment_patient_id_fkey'] assessment_patient_id_fkey = ds.get_reader(src) # generate spans from the assessment-space patient_id foreign key spans = ds.get_spans(field=assessment_patient_id_fkey) ids = ds.get_reader(patients_dest['id']) print('calculate assessment counts per patient') t0 = time.time() writer = ds.get_numeric_writer(patients_dest, 'assessment_count', 'uint32') aggregated_counts = ds.aggregate_count(fkey_index_spans=spans) ds.join(ids, assessment_patient_id_fkey, aggregated_counts, writer, spans) 
print(f"calculated assessment counts per patient in {time.time() - t0}")

print('calculate first assessment days per patient')
t0 = time.time()
reader = ds.get_reader(sorted_assessments_src['created_at_day'])
writer = ds.get_fixed_string_writer(patients_dest, 'first_assessment_day', 10)
aggregated_counts = ds.aggregate_first(fkey_index_spans=spans, reader=reader)
ds.join(ids, assessment_patient_id_fkey, aggregated_counts, writer, spans)
print(f"calculated first assessment days per patient in {time.time() - t0}")

print('calculate last assessment days per patient')
t0 = time.time()
reader = ds.get_reader(sorted_assessments_src['created_at_day'])
writer = ds.get_fixed_string_writer(patients_dest, 'last_assessment_day', 10)
aggregated_counts = ds.aggregate_last(fkey_index_spans=spans, reader=reader)
ds.join(ids, assessment_patient_id_fkey, aggregated_counts, writer, spans)
print(f"calculated last assessment days per patient in {time.time() - t0}")

print('calculate maximum assessment test result per patient')
t0 = time.time()
reader = ds.get_reader(sorted_assessments_src['tested_covid_positive'])
writer = reader.get_writer(patients_dest, 'max_assessment_test_result')
max_result_value = ds.aggregate_max(fkey_index_spans=spans, reader=reader)
ds.join(ids, assessment_patient_id_fkey, max_result_value, writer, spans)
print(f"calculated maximum assessment test result in {time.time() - t0}")

if has_assessments and do_daily_asmts and make_patient_level_daily_assessment_metrics:
    print("creating 'daily_assessment_patient_id_fkey' foreign key index for 'patient_id'")
    t0 = time.time()
    patient_ids = ds.get_reader(sorted_patients_src['id'])
    daily_assessment_patient_ids = ds.get_reader(daily_assessments_dest['patient_id'])
    daily_assessment_patient_id_fkey = ds.get_numeric_writer(daily_assessments_dest, 'daily_assessment_patient_id_fkey', 'int64')
    ds.get_index(patient_ids, daily_assessment_patient_ids, daily_assessment_patient_id_fkey)
    print(f"completed in {time.time() - t0}s")
    spans = ds.get_spans(field=ds.get_reader(daily_assessments_dest['daily_assessment_patient_id_fkey']))

    print('calculate daily assessment counts per patient')
    t0 = time.time()
    writer = ds.get_numeric_writer(patients_dest, 'daily_assessment_count', 'uint32')
    aggregated_counts = ds.aggregate_count(fkey_index_spans=spans)
    daily_assessment_patient_id_fkey = ds.get_reader(daily_assessments_dest['daily_assessment_patient_id_fkey'])
    ds.join(ids, daily_assessment_patient_id_fkey, aggregated_counts, writer, spans)
    print(f"calculated daily assessment counts per patient in {time.time() - t0}")

# TODO - new test count per patient:
if has_tests and make_new_test_level_metrics:
    print("creating 'test_patient_id_fkey' foreign key index for 'patient_id'")
    t0 = time.time()
    patient_ids = ds.get_reader(sorted_patients_src['id'])
    test_patient_ids = ds.get_reader(tests_dest['patient_id'])
    test_patient_id_fkey = ds.get_numeric_writer(tests_dest, 'test_patient_id_fkey', 'int64')
    ds.get_index(patient_ids, test_patient_ids, test_patient_id_fkey)
    test_patient_id_fkey = ds.get_reader(tests_dest['test_patient_id_fkey'])
    spans = ds.get_spans(field=test_patient_id_fkey)
    print(f"completed in {time.time() - t0}s")

    print('calculate test_counts per patient')
    t0 = time.time()
    writer = ds.get_numeric_writer(patients_dest, 'test_count', 'uint32')
    aggregated_counts = ds.aggregate_count(fkey_index_spans=spans)
    ds.join(ids, test_patient_id_fkey, aggregated_counts, writer, spans)
    print(f"calculated test counts per patient in {time.time() - t0}")

    print('calculate test_result per patient')
    t0 = time.time()
    test_results = ds.get_reader(tests_dest['result'])
    writer = test_results.get_writer(patients_dest, 'max_test_result')
    aggregated_results = ds.aggregate_max(fkey_index_spans=spans, reader=test_results)
    ds.join(ids, test_patient_id_fkey, aggregated_results, writer, spans)
    print(f"calculated max_test_result per patient in {time.time() - t0}")

if has_diet and make_diet_level_metrics:
    with utils.Timer("Making patient-level diet questions count", new_line=True):
        d_pids_ = s.get(diet_dest['patient_id']).data[:]
        d_pid_spans = s.get_spans(d_pids_)
        d_distinct_pids = s.apply_spans_first(d_pid_spans, d_pids_)
        d_pid_counts = s.apply_spans_count(d_pid_spans)
        p_diet_counts = s.create_numeric(patients_dest, 'diet_counts', 'int32')
        s.merge_left(left_on=s.get(patients_dest['id']).data[:], right_on=d_distinct_pids,
                     right_fields=(d_pid_counts,), right_writers=(p_diet_counts,))

generate_daily = False  # set to True to generate aggregated daily assessments
input_filename = None  # set to the file name of the imported dataset
output_filename = None  # set to the file name to write the processed dataset to
timestamp = str(datetime.now(timezone.utc))  # override with a specific timestamp if required
flags = set()
if generate_daily is True:
    flags.add('daily')

with h5py.File(input_filename, 'r') as ds:
    with h5py.File(output_filename, 'w') as ts:
        postprocess(ds, ts, timestamp, flags)
```
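For intuition, the span-based `aggregate_*`/`join` pattern used throughout this pipeline (count per patient, first/last per patient, max per patient, joined back onto the patient table) can be sketched with plain pandas. The table and column values below are illustrative toy data, not the notebook's actual dataset:

```python
import pandas as pd

# Toy stand-ins for the patient and assessment tables (illustrative data only).
patients = pd.DataFrame({"id": ["p1", "p2", "p3"]})
assessments = pd.DataFrame({
    "patient_id": ["p1", "p1", "p2"],
    "created_at_day": ["2020-04-01", "2020-04-03", "2020-04-02"],
    "tested_covid_positive": [0, 2, 1],
})

# Rough equivalents of aggregate_count / aggregate_first / aggregate_last /
# aggregate_max, followed by the left join back onto the patient table.
agg = assessments.groupby("patient_id").agg(
    assessment_count=("created_at_day", "size"),
    first_assessment_day=("created_at_day", "first"),
    last_assessment_day=("created_at_day", "last"),
    max_assessment_test_result=("tested_covid_positive", "max"),
)
patients = patients.merge(agg, how="left", left_on="id", right_index=True)
print(patients)
```

Patients with no assessments (like `p3` above) end up with missing values after the left join, mirroring how the pipeline only writes aggregates for patients that appear in the foreign-key spans.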
github_jupyter
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

# Model Development with Custom Weights

This example shows how to retrain a model with custom weights and fine-tune the model with quantization, then deploy the model running on an FPGA. Only Windows is supported.

We use TensorFlow and Keras to build our model. We are going to use transfer learning, with ResNet50 as a featurizer. We don't use the last layer of ResNet50 in this case and instead add our own classification layer using Keras. The custom weights are trained on ImageNet with ResNet50.

We will use the Kaggle Cats and Dogs dataset to retrain and fine-tune the model. The dataset can be downloaded [here](https://www.microsoft.com/en-us/download/details.aspx?id=54765). Download the zip and extract to a directory named 'catsanddogs' under your user directory ("~/catsanddogs").

Please set up your environment as described in the [quick start](project-brainwave-quickstart.ipynb).

```
import os
import sys
import tensorflow as tf
import numpy as np
from keras import backend as K
```

## Setup Environment

After you train your model in float32, you'll write the weights to a place on disk. We also need a location to store the models that get downloaded.

```
custom_weights_dir = os.path.expanduser("~/custom-weights")
saved_model_dir = os.path.expanduser("~/models")
```

## Prepare Data

Load the files we are going to use for training and testing. By default this notebook uses only a very small subset of the Cats and Dogs dataset, which makes it run relatively quickly.

```
import glob
import imghdr

datadir = os.path.expanduser("~/catsanddogs")
cat_files = glob.glob(os.path.join(datadir, 'PetImages', 'Cat', '*.jpg'))
dog_files = glob.glob(os.path.join(datadir, 'PetImages', 'Dog', '*.jpg'))

# Limit the data set to make the notebook execute quickly.
cat_files = cat_files[:64]
dog_files = dog_files[:64]

# The data set has a few images that are not jpeg. Remove them.
cat_files = [f for f in cat_files if imghdr.what(f) == 'jpeg']
dog_files = [f for f in dog_files if imghdr.what(f) == 'jpeg']

if not len(cat_files) or not len(dog_files):
    print("Please download the Kaggle Cats and Dogs dataset from https://www.microsoft.com/en-us/download/details.aspx?id=54765 and extract the zip to " + datadir)
    raise ValueError("Data not found")
else:
    print(cat_files[0])
    print(dog_files[0])

# Construct a numpy array as labels
image_paths = cat_files + dog_files
total_files = len(cat_files) + len(dog_files)
labels = np.zeros(total_files)
labels[len(cat_files):] = 1

# Split the image data into training and test sets
from sklearn.model_selection import train_test_split
onehot_labels = np.array([[0,1] if i else [1,0] for i in labels])
img_train, img_test, label_train, label_test = train_test_split(image_paths, onehot_labels, random_state=42, shuffle=True)
print(len(img_train), len(img_test), label_train.shape, label_test.shape)
```

## Construct Model

We use ResNet50 as the featurizer and build our own classifier using Keras layers. We train the featurizer and the classifier as one model. The weights trained on ImageNet are used as the starting point for the retraining of our featurizer, and are loaded from TensorFlow checkpoint files.

Before passing the image dataset to the ResNet50 featurizer, we need to preprocess the input files to get them into the form expected by ResNet50. ResNet50 expects float tensors representing the images in BGR, channel-last order. We've provided a default implementation of the preprocessing that you can use.

```
import azureml.contrib.brainwave.models.utils as utils

def preprocess_images():
    # Convert images to 3D tensors [width,height,channel] - channels are in BGR order.
    in_images = tf.placeholder(tf.string)
    image_tensors = utils.preprocess_array(in_images)
    return in_images, image_tensors
```

We use Keras layer APIs to construct the classifier.
Because we're using the TensorFlow backend, we can train this classifier in one session with our ResNet50 model.

```
def construct_classifier(in_tensor):
    from keras.layers import Dropout, Dense, Flatten
    K.set_session(tf.get_default_session())
    FC_SIZE = 1024
    NUM_CLASSES = 2
    x = Dropout(0.2, input_shape=(1, 1, 2048,))(in_tensor)
    x = Dense(FC_SIZE, activation='relu', input_dim=(1, 1, 2048,))(x)
    x = Flatten()(x)
    preds = Dense(NUM_CLASSES, activation='softmax', input_dim=FC_SIZE, name='classifier_output')(x)
    return preds
```

Now that every component of the model is defined, we can construct it. Constructing the model with the Project Brainwave models takes two steps: first we import the graph definition, then we restore the weights of the model into a TensorFlow session. Because the quantized graph definition and the float32 graph definition share the same node names, we can initially train the weights in float32, and then reload them with the quantized operations (which take longer) to fine-tune the model.

```
def construct_model(quantized, starting_weights_directory = None):
    from azureml.contrib.brainwave.models import Resnet50, QuantizedResnet50

    # Convert images to 3D tensors [width,height,channel]
    in_images, image_tensors = preprocess_images()

    # Construct featurizer using quantized or unquantized ResNet50 model
    if not quantized:
        featurizer = Resnet50(saved_model_dir)
    else:
        featurizer = QuantizedResnet50(saved_model_dir, custom_weights_directory = starting_weights_directory)
    features = featurizer.import_graph_def(input_tensor=image_tensors)

    # Construct classifier
    preds = construct_classifier(features)

    # Initialize weights
    sess = tf.get_default_session()
    tf.global_variables_initializer().run()
    featurizer.restore_weights(sess)
    return in_images, image_tensors, features, preds, featurizer
```

## Train Model

First we train the model with custom weights but without quantization. Training is done with native float precision (32-bit floats).
We load the training data set and batch the training with 10 epochs. When the performance reaches the desired level or starts to degrade, we stop the training iterations and save the weights as TensorFlow checkpoint files.

```
def read_files(files):
    """Read files to array"""
    contents = []
    for path in files:
        with open(path, 'rb') as f:
            contents.append(f.read())
    return contents

def train_model(preds, in_images, img_train, label_train, is_retrain = False, train_epoch = 10):
    """Training model"""
    from keras.objectives import binary_crossentropy
    from tqdm import tqdm

    learning_rate = 0.001 if is_retrain else 0.01

    # Specify the loss function
    in_labels = tf.placeholder(tf.float32, shape=(None, 2))
    cross_entropy = tf.reduce_mean(binary_crossentropy(in_labels, preds))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)

    def chunks(a, b, n):
        """Yield successive n-sized chunks from a and b."""
        if (len(a) != len(b)):
            print("a and b are not equal in chunks(a,b,n)")
            raise ValueError("Parameter error")
        for i in range(0, len(a), n):
            yield a[i:i + n], b[i:i + n]

    chunk_size = 16
    chunk_num = len(label_train) / chunk_size
    sess = tf.get_default_session()
    for epoch in range(train_epoch):
        avg_loss = 0
        for img_chunk, label_chunk in tqdm(chunks(img_train, label_train, chunk_size)):
            contents = read_files(img_chunk)
            _, loss = sess.run([optimizer, cross_entropy],
                               feed_dict={in_images: contents,
                                          in_labels: label_chunk,
                                          K.learning_phase(): 1})
            avg_loss += loss / chunk_num
        print("Epoch:", (epoch + 1), "loss = ", "{:.3f}".format(avg_loss))
        # Reached the desired performance
        if (avg_loss < 0.001):
            break

def test_model(preds, in_images, img_test, label_test):
    """Test the model"""
    from keras.metrics import categorical_accuracy
    in_labels = tf.placeholder(tf.float32, shape=(None, 2))
    accuracy = tf.reduce_mean(categorical_accuracy(in_labels, preds))
    contents = read_files(img_test)
    accuracy = accuracy.eval(feed_dict={in_images: contents,
                                        in_labels: label_test,
                                        K.learning_phase(): 0})
    return accuracy

# Launch the training
tf.reset_default_graph()
sess = tf.Session(graph=tf.get_default_graph())
with sess.as_default():
    in_images, image_tensors, features, preds, featurizer = construct_model(quantized=False)
    train_model(preds, in_images, img_train, label_train, is_retrain=False, train_epoch=10)
    accuracy = test_model(preds, in_images, img_test, label_test)
    print("Accuracy:", accuracy)
    featurizer.save_weights(custom_weights_dir + "/rn50", tf.get_default_session())
```

## Test Model

After training, we evaluate the trained model's accuracy on the test dataset with quantization, so that we know the model's performance if it is deployed on the FPGA.

```
tf.reset_default_graph()
sess = tf.Session(graph=tf.get_default_graph())
with sess.as_default():
    print("Testing trained model with quantization")
    in_images, image_tensors, features, preds, quantized_featurizer = construct_model(quantized=True, starting_weights_directory=custom_weights_dir)
    accuracy = test_model(preds, in_images, img_test, label_test)
    print("Accuracy:", accuracy)
```

## Fine-Tune Model

Sometimes the model's accuracy can drop significantly after quantization. In those cases, we need to retrain the model with quantization enabled to get better accuracy.

```
if (accuracy < 0.93):
    with sess.as_default():
        print("Fine-tuning model with quantization")
        train_model(preds, in_images, img_train, label_train, is_retrain=True, train_epoch=10)
        accuracy = test_model(preds, in_images, img_test, label_test)
        print("Accuracy:", accuracy)
```

## Service Definition

Like in the QuickStart notebook, our service definition pipeline consists of three stages.
```
from azureml.contrib.brainwave.pipeline import ModelDefinition, TensorflowStage, BrainWaveStage

model_def_path = os.path.join(saved_model_dir, 'model_def.zip')
model_def = ModelDefinition()
model_def.pipeline.append(TensorflowStage(sess, in_images, image_tensors))
model_def.pipeline.append(BrainWaveStage(sess, quantized_featurizer))
model_def.pipeline.append(TensorflowStage(sess, features, preds))
model_def.save(model_def_path)
print(model_def_path)
```

## Deploy

Go to our [GitHub repo](https://aka.ms/aml-real-time-ai) "docs" folder to learn how to create a Model Management Account and find the required information below.

```
from azureml.core import Workspace

ws = Workspace.from_config()
```

The first time the code below runs it will create a new service running your model. If you want to change the model, you can make changes above in this notebook and save a new service definition. Then this code will update the running service in place to run the new model.

```
from azureml.core.model import Model
from azureml.core.image import Image
from azureml.core.webservice import Webservice
from azureml.contrib.brainwave import BrainwaveWebservice, BrainwaveImage
from azureml.exceptions import WebserviceException

model_name = "catsanddogs-resnet50-model"
image_name = "catsanddogs-resnet50-image"
service_name = "modelbuild-service"

registered_model = Model.register(ws, model_def_path, model_name)
image_config = BrainwaveImage.image_configuration()
deployment_config = BrainwaveWebservice.deploy_configuration()

try:
    service = Webservice(ws, service_name)
    service.delete()
    service = Webservice.deploy_from_model(ws, service_name, [registered_model], image_config, deployment_config)
    service.wait_for_deployment(True)
except WebserviceException:
    service = Webservice.deploy_from_model(ws, service_name, [registered_model], image_config, deployment_config)
    service.wait_for_deployment(True)
```

The service is now running in Azure and ready to serve requests.
We can check the address and port.

```
print(service.ipAddress + ':' + str(service.port))
```

## Client

There is a simple test client at amlrealtimeai.PredictionClient which can be used for testing. We'll use this client to score an image with our new service.

```
from azureml.contrib.brainwave.client import PredictionClient

client = PredictionClient(service.ipAddress, service.port)
```

You can adapt the client [code](../../pythonlib/amlrealtimeai/client.py) to meet your needs. There is also an example C# [client](../../sample-clients/csharp).

The service provides an API that is compatible with TensorFlow Serving. There are instructions to download a sample client [here](https://www.tensorflow.org/serving/setup).

## Request

Let's see how our service does on a few images. It may get a few wrong.

```
# Specify an image to classify
print('CATS')
for image_file in cat_files[:8]:
    results = client.score_image(image_file)
    result = 'CORRECT ' if results[0] > results[1] else 'WRONG '
    print(result + str(results))

print('DOGS')
for image_file in dog_files[:8]:
    results = client.score_image(image_file)
    result = 'CORRECT ' if results[1] > results[0] else 'WRONG '
    print(result + str(results))
```

## Cleanup

Run the cell below to delete your service.

```
service.delete()
```

## Appendix

License for plot_confusion_matrix:

New BSD License

Copyright (c) 2007-2018 The scikit-learn developers. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

a. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
b. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
c.
Neither the name of the Scikit-learn Developers nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
``` from scipy import stats import pandas as pd import numpy as np import os, sys import matplotlib.pyplot as plt import seaborn as sns RESULT_DIR = '../../results/correlation-tests/' PREFIX_DIR = os.path.join(RESULT_DIR, 'prefix-alignment', 'processed') hmm_dirname = os.listdir(os.path.join(RESULT_DIR, 'hmmconf'))[0] HMMCONF_DIR = os.path.join(RESULT_DIR, 'hmmconf') PATT_DIR = os.path.join(RESULT_DIR, 'pattern') def join_dfs(store, k=5, type_='test'): dfs = list() to_keep = [ 'case_prefix', 'caseid', 'completeness', 'finalconf', 'injected_distance', ] df_name = '{}_hmmconf_feature_fold_{}_df' for i in range(k): df_name_i = df_name.format(type_, i) # print('Adding {}'.format(df_name_i)) df_i = store[df_name_i] df_i = df_i[to_keep] df_i['fold_no'] = i dfs.append(df_i) df = pd.concat(dfs) df = df.reset_index(drop=True) del dfs return df def process_hmmconf_df(df): # add log and caseid split = df['caseid'].str.split(':', n=1, expand=True) df['log'] = split[0].str.replace('.csv', '') df['caseid'] = split[1] df['case_length'] = df['case_prefix'].str.split(';').apply(lambda r: len(r)) df['activity'] = df['case_prefix'].str.split(';').apply(lambda r: r[-1]) to_keep = [ 'caseid', 'completeness', 'finalconf', 'net', 'log', 'case_length', 'injected_distance', 'activity', 'fold_no' ] return df[to_keep] def process_hmmconf_result_dir(result_dir): test_df_list = list() for net in os.listdir(result_dir): print('Processing {}'.format(net)) net_result_dir = os.path.join(result_dir, net) store_fp = os.path.join(net_result_dir, 'results_store.h5') store = pd.HDFStore(store_fp) test_df = join_dfs(store, k=5, type_='test') # train_df = join_dfs(store, k=5, type_='train') test_df['net'] = net # train_df['net'] = net test_df = process_hmmconf_df(test_df) test_df_list.append(test_df) # train_df_list.append(train_df) store.close() # train_df = pd.concat(train_df_list).reset_index(drop=True) test_df = pd.concat(test_df_list).reset_index(drop=True) # test_df = process_hmmconf_df(test_df) # 
train_df = process_hmmconf_df(train_df) print('Finished processing {}'.format(result_dir)) return test_df hmm_test_df = process_hmmconf_result_dir(HMMCONF_DIR) prefix_df_list = [] for fname in os.listdir(PREFIX_DIR): if not fname.endswith('.csv'): continue fp = os.path.join(PREFIX_DIR, fname) df = pd.read_csv(fp) prefix_df_list.append(df) prefix_df = pd.concat(prefix_df_list) patt_df_list = [] for fname in os.listdir(PATT_DIR): if not fname.endswith('.csv'): continue fp = os.path.join(PATT_DIR, fname) df = pd.read_csv(fp, sep='\t') df['log'] = fname.replace('.csv', '') patt_df_list.append(df) patt_df = pd.concat(patt_df_list) patt_df.rename(columns={ 'T:concept:name': 'caseid', 'E:concept:name': 'activity' }, inplace=True) # create case_length patt_df['tmp'] = 1 patt_df['case_length'] = patt_df[['caseid', 'log', 'tmp']].groupby(['log', 'caseid']).cumsum() patt_df.drop(columns=['tmp'], inplace=True) patt_df['net'] = patt_df['log'].apply(lambda l: l.split('.pnml')[0].replace('log_', '')) prefix_df['caseid'] = prefix_df['caseid'].astype(str) patt_df['caseid'] = patt_df['caseid'].astype(str) ``` ### Merge all results together ``` patt_df.head() prefix_df.head() hmm_test_df.head() prefix_df.shape hmm_test_df.shape patt_df.shape merged_df = pd.merge(prefix_df, patt_df, on=['log', 'caseid', 'case_length', 'net']) merged_df['caseid'] = merged_df['caseid'].astype(str) merged_df = pd.merge(merged_df, hmm_test_df, on=['log', 'caseid', 'case_length', 'net'], suffixes=('_prefix', '_hmmconf')) assert (merged_df['activity_prefix'] == merged_df['activity_hmmconf']).all() merged_df.rename(columns={'activity_prefix': 'activity'}, inplace=True) merged_df.drop(columns=['activity_hmmconf'], inplace=True) # add model attributes # NET_DIR = '../../data/BPM2018/correlation-tests/models' # desc_fp = os.path.join(NET_DIR, 'description.csv') # desc_df = pd.read_csv(desc_fp) # get_netid = lambda s: s.replace('model_triangular_10_20_30_id_', '').replace('.pnml', '') # desc_df['net_id'] = 
desc_df['net'].apply(get_netid) # add to merged_df get_netid = lambda s: s.replace('log_model_triangular_10_20_30_id_', '').split('.pnml')[0] merged_df['net_id'] = merged_df['log'].apply(get_netid) # merged_df = pd.merge(desc_df, merged_df, on='net_id') merged_df.rename(columns={ 'Cost of the alignment': 'cost' }, inplace=True) mean_finalconf = merged_df.groupby(['log', 'caseid']).agg({'finalconf': np.mean}).reset_index() mean_finalconf.rename(columns={ 'finalconf': 'mean_finalconf' }, inplace=True) merged_df = merged_df.merge(mean_finalconf, on=['log', 'caseid']) del patt_df del hmm_test_df del prefix_df ``` ### Spearman correlation with non-conforming results ``` noisy = merged_df['cost'] > 0 case_length = merged_df['case_length'] > 0 filtered_df = merged_df.loc[noisy & case_length, :] rho_conf = stats.spearmanr(filtered_df['cost'], filtered_df['finalconf']) rho_mean_conf = stats.spearmanr(filtered_df['cost'], filtered_df['mean_finalconf']) rho_injected_distance = stats.spearmanr(filtered_df['cost'], filtered_df['injected_distance']) rho_completeness = stats.spearmanr(filtered_df['cost'], filtered_df['completeness_hmmconf']) print( 'Final conformance: spearman rho: {:.3f}, p-value: {:.10f}'.format(rho_conf[0], rho_conf[1]), '\nMean conformance: spearman rho: {:.3f}, p-value: {:.10f}'.format(rho_mean_conf[0], rho_mean_conf[1]), '\nInjected distance: spearman rho: {:.3f}, p-value: {:.10f}'.format(rho_injected_distance[0], rho_injected_distance[1]), '\nCompleteness: spearman rho: {:.3f}, p-value: {:.10f}'.format(rho_completeness[0], rho_completeness[1]) ) noisy = merged_df['cost'] > 0 case_length = merged_df['case_length'] >= 1 is_net = merged_df['net_id'] != '32' scatter_df = merged_df.loc[noisy & case_length, :] cost_var = 'cost' var = 'injected_distance' _min = scatter_df[var].min() _max = scatter_df[var].max() bins = np.linspace(_min, _max, 30) scatter_df['binned'] = pd.cut(scatter_df[var], bins=bins) grouped = scatter_df[[cost_var, 'binned', 
'caseid']].groupby([cost_var, 'binned']) bubble_df = grouped.count().reset_index(drop=False) bubble_df[var] = bubble_df['binned'].apply(lambda interval: (interval.left + interval.right) / 2) bubble_df.rename(columns={'caseid': 'Count'}, inplace=True) fig, ax = plt.subplots(figsize=(7, 6)) cmap = sns.cubehelix_palette(dark=.3, light=.7, as_cmap=True) g = sns.scatterplot(x=cost_var, y=var, hue='Count', size='Count', sizes=(20, 200), palette=cmap, data=bubble_df, ax=ax) _ = ax.set_xlabel('Cost', size=12) _ = ax.set_ylabel('Total injected distance', size=12) # _ = ax.set_title('Bubble plot of noisy non-first event instances for {}'.format(var)) outdir = './images/svg/' if not os.path.isdir(outdir): os.makedirs(outdir) out_fp = os.path.join(outdir, 'cost-injection-unconform-bubble-epsilon.svg') fig.savefig(out_fp, bbox_inches='tight', rasterized=True) ``` ### Spearman correlation with conforming results ``` noisy = merged_df['cost'] > -1 case_length = merged_df['case_length'] > 0 filtered_df = merged_df.loc[noisy & case_length, :] rho_conf = stats.spearmanr(filtered_df['cost'], filtered_df['finalconf']) rho_mean_conf = stats.spearmanr(filtered_df['cost'], filtered_df['mean_finalconf']) rho_injected_distance = stats.spearmanr(filtered_df['cost'], filtered_df['injected_distance']) rho_completeness = stats.spearmanr(filtered_df['cost'], filtered_df['completeness_hmmconf']) print( 'Final conformance: spearman rho: {:.3f}, p-value: {:.10f}'.format(rho_conf[0], rho_conf[1]), '\nMean conformance: spearman rho: {:.3f}, p-value: {:.10f}'.format(rho_mean_conf[0], rho_mean_conf[1]), '\nInjected distance: spearman rho: {:.3f}, p-value: {:.10f}'.format(rho_injected_distance[0], rho_injected_distance[1]), '\nCompleteness: spearman rho: {:.3f}, p-value: {:.10f}'.format(rho_completeness[0], rho_completeness[1]) ) noisy = merged_df['cost'] > -1 case_length = merged_df['case_length'] >= 1 is_net = merged_df['net_id'] != '32' scatter_df = merged_df.loc[noisy & case_length, :] cost_var = 
'cost' var = 'injected_distance' _min = scatter_df[var].min() _max = scatter_df[var].max() bins = np.linspace(_min, _max, 30) scatter_df['binned'] = pd.cut(scatter_df[var], bins=bins) grouped = scatter_df[[cost_var, 'binned', 'caseid']].groupby([cost_var, 'binned']) bubble_df = grouped.count().reset_index(drop=False) bubble_df[var] = bubble_df['binned'].apply(lambda interval: (interval.left + interval.right) / 2) bubble_df.rename(columns={'caseid': 'Count'}, inplace=True) fig, ax = plt.subplots(figsize=(7, 6)) cmap = sns.cubehelix_palette(dark=.3, light=.7, as_cmap=True) sns.scatterplot(x=cost_var, y=var, hue='Count', size='Count', sizes=(20, 200), palette=cmap, data=bubble_df, ax=ax) _ = ax.set_xlabel('Cost', size=12) _ = ax.set_ylabel('Total injected distance', size=12) # _ = ax.set_title('Bubble plot of noisy non-first event instances for {}'.format(var)) outdir = './images/svg/' if not os.path.isdir(outdir): os.makedirs(outdir) out_fp = os.path.join(outdir, 'cost-injection-all-bubble-epsilon.svg') fig.savefig(out_fp, bbox_inches='tight', rasterized=True) del filtered_df del fig del ax ``` ### Confusion matrix for fitting categorization ``` merged_df.head() y_true = merged_df['cost'] == 0 y_pred = (merged_df['finalconf'] > 0.99) & (merged_df['injected_distance'] == 0) from sklearn.metrics import confusion_matrix cnf_mat = confusion_matrix(y_true, y_pred) precision = cnf_mat[1, 1] / cnf_mat[:, 1].sum() recall = cnf_mat[1, 1] / cnf_mat[1, :].sum() f1 = 2 * ((precision * recall) / (precision + recall)) print('Confusion matrix: \n{}\nPrecision: {:.5f}, Recall: {:.5f}, F1: {:.5f}'.format(cnf_mat, precision, recall, f1)) print('Total negatives: {}, Total positives: {}, Predicted negatives: {}, Predicted positives: {}'.format(cnf_mat[0, :].sum(), cnf_mat[1, :].sum(), cnf_mat[:, 0].sum(), cnf_mat[:, 1].sum())) ``` ### Confusion matrix for pattern based ``` second_event_df = merged_df.loc[ (merged_df['case_length'] > 1), : ] second_last_event_df = 
second_event_df.groupby(['caseid', 'log', 'net']).tail(1).reset_index(drop=True) fitting_case_df = second_event_df.loc[ (second_event_df)['cost'] == 0, : ] fitting_cases = (fitting_case_df['caseid'] + fitting_case_df['log']).unique() fitting_cases.shape[0] 57234/225611 false_negs = second_last_event_df.loc[ (second_last_event_df['cost'] == 0) & # perfectly fitting ((second_last_event_df['finalconf'] <= 0.99) | (second_last_event_df['injected_distance'] > 0)) # not conforming according to HMMConf ] false_neg_cases = (false_negs['caseid'] + '-' + false_negs['log']).unique() false_neg_cases.shape[0] merged_df.loc[ (merged_df['cost'] == 0) & # perfectly fitting (merged_df['case_length'] == 1), 'conformance' ] = 1. merged_df.loc[ (merged_df['cost'] == 0) & # perfectly fitting (merged_df['case_length'] == 1), 'completeness_prefix' ] = 1. last_event_df = merged_df.groupby(['caseid', 'log', 'net']).tail(1).reset_index(drop=True) false_neg_df = merged_df.loc[ (merged_df['cost'] == 0) & # perfectly fitting alignments ~((merged_df['finalconf'] > 0.99) & (merged_df['injected_distance'] == 0)) & # not conforming according to HMMConf # (merged_df['case_length'] < merged_df['Length of the alignment found']) & (merged_df['net'].str.endswith('_32')) , : ] false_neg_df.shape # get the first event that HMMConf got wrong # first_false_neg_df = false_neg_df.groupby(['net', 'log', 'caseid']).head(1) # first_false_neg_df.head() 44028 + 69231 + 910 + 51231 + 203 + 45973 + 121780 9165 + 910 + 18348 + 122 + 5866 + 18351 y_true = merged_df['cost'] > 0 y_pred = ~((merged_df['finalconf'] > 0.99) & (merged_df['injected_distance'] == 0)) from sklearn.metrics import confusion_matrix cnf_mat = confusion_matrix(y_true, y_pred) precision = cnf_mat[1, 1] / cnf_mat[:, 1].sum() recall = cnf_mat[1, 1] / cnf_mat[1, :].sum() f1 = 2 * ((precision * recall) / (precision + recall)) tp = cnf_mat[1, 1].sum() tn = cnf_mat[0, 0].sum() n_predictions = cnf_mat.sum() accuracy = (tp + tn) / n_predictions 
print('Confusion matrix: \n{}\nPrecision: {:.5f}, Recall: {:.5f}, F1: {:.5f}, Accuracy: {:.5f}'.format(cnf_mat, precision, recall, f1, accuracy)) print('Total negatives: {}, Total positives: {}, Predicted negatives: {}, Predicted positives: {}'.format(cnf_mat[0, :].sum(), cnf_mat[1, :].sum(), cnf_mat[:, 0].sum(), cnf_mat[:, 1].sum())) y_true = last_event_df['cost'] > 0 y_pred = list(map(lambda t: any(t), zip(last_event_df['conformance'] < 1, last_event_df['completeness_prefix'] < 1))) from sklearn.metrics import confusion_matrix cnf_mat = confusion_matrix(y_true, y_pred) precision = cnf_mat[1, 1] / cnf_mat[:, 1].sum() recall = cnf_mat[1, 1] / cnf_mat[1, :].sum() f1 = 2 * ((precision * recall) / (precision + recall)) tp = cnf_mat[1, 1].sum() tn = cnf_mat[0, 0].sum() n_predictions = cnf_mat.sum() accuracy = (tp + tn) / n_predictions print('Confusion matrix: \n{}\nPrecision: {:.5f}, Recall: {:.5f}, F1: {:.5f}, Accuracy: {:.5f}'.format(cnf_mat, precision, recall, f1, accuracy)) print('Total negatives: {}, Total positives: {}, Predicted negatives: {}, Predicted positives: {}'.format(cnf_mat[0, :].sum(), cnf_mat[1, :].sum(), cnf_mat[:, 0].sum(), cnf_mat[:, 1].sum())) ``` ### Dummy classifier as baseline ``` from sklearn.dummy import DummyClassifier from sklearn.model_selection import cross_val_score X = last_event_df['log'].map(str) + '-' + last_event_df['caseid'] y = last_event_df['cost'] > 0 clf = DummyClassifier(strategy='stratified', random_state=123) f1_scores = cross_val_score(clf, X, y, cv=5, scoring='f1') precision_scores = cross_val_score(clf, X, y, cv=5, scoring='precision') recall_scores = cross_val_score(clf, X, y, cv=5, scoring='recall') print('F1 score: {:.3f} +- {:.3f}'.format(np.mean(f1_scores), np.std(f1_scores))) print('Precision: {:.3f} +- {:.3f}'.format(np.mean(precision_scores), np.std(precision_scores))) print('Recall: {:.3f} +- {:.3f}'.format(np.mean(recall_scores), np.std(recall_scores))) ```
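The precision, recall, and F1 values above are computed by indexing into the confusion matrix by hand. As a sanity check, the same numbers can be obtained from scikit-learn's built-in `precision_recall_fscore_support`. The labels below are toy data, not the notebook's results:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

# Toy true/predicted labels (illustrative only).
y_true = np.array([0, 0, 1, 1, 1, 0, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1])

cnf_mat = confusion_matrix(y_true, y_pred)

# Manual computation, as in the cells above.
precision = cnf_mat[1, 1] / cnf_mat[:, 1].sum()
recall = cnf_mat[1, 1] / cnf_mat[1, :].sum()
f1 = 2 * precision * recall / (precision + recall)

# Built-in equivalent for the positive class.
p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average='binary')
print(precision, recall, f1)
print(p, r, f)
```

Agreement between the two computations confirms that the manual confusion-matrix indexing (row = true class, column = predicted class) is correct.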
```
import draftfast
import pandas as pd

df = pd.read_csv('full_old_NFL.csv')
del df['Unnamed: 0']

from draftfast import rules
from draftfast.optimize import run, run_multi
from draftfast.orm import Player
from draftfast.csv_parse import salary_download

df = df.dropna()

# for year, week in zip(df['Year'], df['Week']):
#     player_pool = []
#     segment = df[(df['Year'] == year) & (df['Week'] == week)].to_numpy()
#     for player in segment:
#         player_pool.append(Player(name=player[3], cost=player[9], proj=player[8], pos=player[4],
#                                   average_score=player[11], team=player[5], matchup=player[7].upper()))
#     roster = run(
#         rule_set=rules.FD_NFL_RULE_SET,
#         player_pool=player_pool,
#         verbose=False,
#     )

# draftfast expects 'D' and 'K' as position labels
df['Pos'].replace('Def', 'D', inplace=True)
df['Pos'].replace('PK', 'K', inplace=True)

def optimal_count_from_segment(segment):
    player_pool = []
    # .to_numpy() replaces the long-deprecated DataFrame.as_matrix()
    for player in segment.to_numpy():
        player_pool.append(Player(name=player[3], cost=player[9], proj=player[8], pos=player[4],
                                  average_score=player[11], team=player[5], matchup=player[7]))
    counted_list = count_list(get_optimal_roster_list(player_pool))
    counted_df = pd.DataFrame(counted_list, index=['count']).T.reset_index()
    counted_df.rename(columns={'index': 'Name'}, inplace=True)
    segment_ = segment.merge(counted_df, how='left', on='Name')
    segment_['count'].fillna(0, inplace=True)
    return segment_

def get_optimal_roster_list(player_pool):
    rosters = run_multi(
        iterations=10,
        rule_set=rules.FD_NFL_RULE_SET,
        player_pool=player_pool,
        verbose=False,
    )
    players = []
    for roster in rosters[0]:
        players += roster.players
    p_names = [p.name for p in players]
    return p_names

def count_list(list_):
    counted_list = {}
    for item in list_:
        if item in counted_list.keys():
            counted_list[item] += 1
        else:
            counted_list[item] = 1
    return counted_list

incidence_df = pd.DataFrame()
# Iterate over each distinct (Year, Week) pair instead of the original
# groupby().sum()[[]].reset_index().as_matrix() round-trip
for year, week in df[['Year', 'Week']].drop_duplicates().itertuples(index=False):
    segment = df[(df['Year'] == year) & (df['Week'] == week) & (df['Pos'] != 'K')]
    new_segment = optimal_count_from_segment(segment)
    incidence_df = pd.concat([incidence_df, new_segment])  # DataFrame.append is deprecated

incidence_df.to_csv('old_NFL_with_count.csv')
#.groupby(['Name','Pos']).agg({'FD salary':'mean','count':'sum'}).sort_values('count').plot(kind='scatter',x='FD salary',y='count')
```
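The hand-rolled `count_list` above duplicates `collections.Counter` from the standard library; a minimal equivalent sketch:

```python
from collections import Counter

def count_list(list_):
    # Counter builds the same {item: occurrences} mapping in one call
    return dict(Counter(list_))

print(count_list(['a', 'b', 'a', 'a']))  # {'a': 3, 'b': 1}
```

`Counter` also provides `.most_common(n)`, which would replace the later `sort_values('count')` step when only the top players are of interest.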
# `str` - Character strings

* Objects of type `str`
* An **IMMUTABLE** sequence of characters.
* Properties: indexable, iterable, immutable

## Indexability

Count how many times a letter appears inside a word.

```
def kontatzen(hitza,letra):
    k = 0
    i = 0
    while i < len(hitza) :
        if hitza[i] == letra :
            k += 1
        i += 1
    return k

print(kontatzen('amara','a'))

def kontatzen(hitza,letra):
    k = 0
    for i in range(len(hitza)) :
        if hitza[i] == letra :
            k += 1
    return k

print(kontatzen('amara','a'))
```

## Iterability

Count how many times a letter appears inside a word.

```
def kontatzen(hitza,letra):
    k = 0
    for x in hitza :
        if x == letra :
            k += 1
    return k

print(kontatzen('amara','a'))

def kontatzen(hitza,letra):
    return hitza.count(letra)

print(kontatzen('amara','a'))
```

## Operators: `+` , `*` , `in` , `not in`

__Exercise:__ Express an integer in binary as a character string.

```
def bitarrera(n):
    s = ''
    while n > 0 :
        s = str(n%2) + s
        n //= 2
    return s

print(bitarrera(25))
print(bitarrera(1423462))
```

__Exercise:__ Express an integer in any base as a character string.

```
def oinarrira(n,oin=10):
    s = ''
    while n > 0 :
        s = str(n%oin) + s
        n //= oin
    return s

for o in range(2,11) :
    print(15,o,'oinarrian:',oinarrira(15,o))
print(oinarrira(15,16),"\n\n")
```

__Exercise:__ If we want to go beyond base 10, we need new digits above `[0-9]`. For example, we can use letters: `A, B, C...`

```
def num2str(n):
    n2s = '0123456789ABCDEF'
    if n < 16 :
        return n2s[n]
    raise ValueError('Unsupported value: ' + str(n))

def oinarrira(n,oin):
    s = ''
    while n > 0 :
        s = num2str(n%oin) + s
        n //= oin
    return s

for o in range(2,17) :
    print(15,o,'oinarrian:',oinarrira(15,o))
#print(oinarrira(16,17))
```

__Exercise:__ Print a string, padding it with as many leading spaces as needed, or deleting leading characters, so that the printed text has length 40. That is, the text always appears right-aligned.
```
def idatzi_eskuinean(s, zabalera=40):
    n = (zabalera - len(s))
    if n > 0 :
        s = n * ' ' + s
    else :
        s = s[-zabalera:]
    print(s)

idatzi_eskuinean('kaixo')
idatzi_eskuinean('ea nola ateratzen den')
idatzi_eskuinean('Orain beste',30)
idatzi_eskuinean('zabalera',25)
idatzi_eskuinean('batekin',25)
idatzi_eskuinean('hau luzeegia denez ezingo da osorik agertu',25)
```

__Exercise:__ Write a function that returns `True` when the given word contains no letter `a` and `False` otherwise.

```
def a_rik_ez(hitza):
    for k in hitza :
        if k == 'a' :
            return False
    return True

print(a_rik_ez('ondoloin'))
print(a_rik_ez('kaixo'))
```

Using the `in` operator, we can simplify the function:

```
def a_rik_ez(hitza):
    if 'a' in hitza :
        return False
    else :
        return True

print(a_rik_ez('ondoloin'))
print(a_rik_ez('kaixo'))
```

The following structure:

```python
if xxx :
    return True
else :
    return False
```

can be expressed this way (it is not identical, but it is equivalent):

```python
return xxx
```

Fully equivalent would be:

```python
return bool(xxx)
```

Likewise, instead of writing

```python
if xxx :
    return False
else :
    return True
```

we can write (this one is exactly the same):

```python
return not xxx
```

* All numeric values other than `0` (or `0.0`) are true.
* All non-empty sequences are true.

```
for x in [0,1,0.0,1.0,1e-30,"kaixo","",[1,2,3],[],range(10),range(0)] :
    if x :
        print("Egia:",x,bool(x))
    else :
        print("Gezurra:",x,bool(x))
```

Back to the exercise:

```
def a_rik_ez(hitza):
    return not ('a' in hitza)

print(a_rik_ez('ondoloin'))
print(a_rik_ez('kaixo'))

def a_rik_ez(hitza):
    return 'a' not in hitza

print(a_rik_ez('ondoloin'))
print(a_rik_ez('kaixo'))
```

## Formatted character strings

Suppose we have two variables, `a` and `b`, with the ages of Ane and Xabi, and we want to build the following string: `Anek ... urte ditu eta Xabik ...`

When we want to embed data inside a character string, we have several options.
### Formatted strings built _by hand_

```
a = 23
b = 25
print("Anek " + str(a) + " urte ditu eta Xabik " + str(b))
```

This method has many disadvantages.

* The structure of the resulting string is not very clear.
* We must use `str(...)` to embed any value.
* We must be very careful with the spaces we want to produce.

### The `%` operator: _C-style string formatting_

* When there is a character string on the left side of the `%` operator, we can put a tuple of the values to embed on the right side.
* The values are inserted wherever `%` appears inside the string.
* When inserting a value we specify its format:
    * `%d` &rarr; integer
    * `%e` `%f` `%g` &rarr; real number (scientific, decimal and mixed notation)
    * `%s` &rarr; character string
    * ...

```
a = 23
b = 25
print("Anek %d urte ditu eta Xabik %d" % (a,b))
```

This method has advantages compared to the previous one.

* The structure of the resulting string is fairly clear.
* The syntax is simpler (especially for those coming from the C language!).

### Formatting with `str.format()`

* An improvement over the `%` operator
* The arguments of the `s.format(...)` method are embedded into the string `s`
* The values are inserted wherever `{}` appears inside the string.

```
a = 23
b = 25
print("Anek {} urte ditu eta Xabik {}".format(a,b))
```

Although it may look like the same thing, the `str.format(...)` method is far more versatile than the `%` operator.

* `{n}` denotes an argument index. It can change position, or be used as many times as you like.

```
print("Anek {0} urte ditu eta Xabik {1}".format(a,b))
print("Xabik {1} urte ditu eta Anek {0}".format(a,b))
print("Xabik {1} urte ditu, Anek {0} eta Xabik {1}".format(a,b))
```

* The `*s` expression denotes the expansion of the sequence `s` and is very handy for turning the elements of a sequence into the arguments of a method.
```
z = [234,938,262,232,432,2]
print('lehenengo hiru elementuak {0}, {1} eta {2} dira'.format(*z))
```

__NOTE:__ The `*s` expression has nothing to do with the `format` method; it can be used anywhere else.

* `{keyword}` denotes a _keyword_ argument, i.e., an argument given by name.

```
print("Anek {ane} urte ditu eta Xabik {xabi}".format(ane=23, xabi=25))
```

* The `**d` expression denotes the expansion of the dictionary `d` and is very handy for turning the entries of a dictionary into the _keyword_ arguments of a method.

```
h = {'ane':23, 'xabi':25}
print("Anek {ane} urte ditu eta Xabik {xabi}".format(**h))
```

__NOTE:__ The `**d` expression has nothing to do with the `format` method; it can be used anywhere else.

### _f-Strings_ or _formatted string literals_

* More flexible than the `format` method
* A string literal with the letter `f` in front: `f'....'`
* Just as the `format(...)` method takes the values to insert from its arguments, an _f-String_ takes them directly from the execution environment

```
a = 23
b = 25
print(f"Anek {a} urte ditu eta Xabik {b}")
```

* The expression inside `{..}` is evaluated

```
print(f'3 * 4 = {3*4}')
print(f'3 ** 4 = {3**4}')
print(f'sum(range(1,501)) = {sum(range(1,501))}')

filma='hodeiak pintatzeko makina'
print(f'Pelikularen izenburua "{filma.title()}" zen')
```

* Multi-line _f-Strings_ can also be created, using `f''' ... '''`

```
txt = f'''
1x1 = {1*1}
2x2 = {2*2}
3x3 = {3*3}
4x4 = {4*4}
...
'''
print(txt)
```

## *Slice* notation

A way to refer to more than one element of an indexable object

* `a[i:j]` : the subsequence of `a`, from index `i` (inclusive) to `j` (exclusive).
* negative `i` or `j` &rarr; `len(a)+i` , `len(a)+j`
* `i` or `j` > `len(a)` &rarr; `len(a)`
* `i` omitted or `None` &rarr; `0`
* `j` omitted or `None` &rarr; `len(a)`
* `a[i:j:k]` : the subsequence of `a`, from `i` (inclusive) to `j` (exclusive), with step `k`.
* `a[i:j]` and `a[i:j:k]` create new objects... **almost always**.

Examples:

```
[start:stop:step]   (negative step too)
[start:stop]
[start:]
[:stop]
[:]
[start::step]       (negative step too)
[:stop:step]        (negative step too)
[::step]            (negative step too)
```

```
a = "abcdefghijklmnñopqrstuvwxyz"
print(a[0:5])
print(a[5:len(a)])
print(a[:5])
print(a[5:])
print(a[:])
print(a==a[:],a is a[:])

a = "abcdefghijklmnñopqrstuvwxyz"
print(a[0:len(a):2])
print(a[::2])
print('froga1:',a[0:len(a):-1])
print('froga2:',a[len(a):0:-1])
print('froga3:',a[len(a):-len(a)-1:-1])
print('froga4:',a[len(a)::-1])
print('froga5:',a[::-1])
```

## String methods (functions)

* Each object type can have its own set of methods.
* These methods are invoked as: `object.method_name`
* Character strings have [some 45 methods](https://docs.python.org/3.5/library/stdtypes.html#string-methods)...

Some useful methods:

* `s.count(a[,i,j])` , `s.find(a[,i,j])`
* `s.replace(a,b[,maxreps])`
* `s.strip([charset])` , `s.lstrip([charset])` , `s.rstrip([charset])`
* `s.split([sep[,maxsplits]])` , `s.splitlines([keepends])`
* `s.join(t)`
* `s.lower()` , `s.upper()` , `s.capitalize()` , `s.title()`

## The `string` module

Defines character strings containing useful character sets

`import string`

* `string.ascii_letters` &rarr; lowercase + uppercase
* `string.ascii_lowercase` &rarr; lowercase letters (a-z)
* `string.ascii_uppercase` &rarr; uppercase letters (A-Z)
* `string.digits` &rarr; decimal digits
* `string.hexdigits` &rarr; hexadecimal digits (0-9a-fA-F)
* `string.octdigits` &rarr; octal digits
* `string.punctuation` &rarr; punctuation marks
* `string.printable` &rarr; letters + digits + punctuation + whitespace
* `string.whitespace` &rarr; characters treated as whitespace

__NOTE:__ These sets are based on English....
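Returning to the string methods listed above, a quick sketch of a few of them in action:

```python
s = '  kaixo mundua  '
print(s.strip())            # surrounding whitespace removed
print(s.count('a'))         # occurrences of the letter 'a'
words = s.split()           # ['kaixo', 'mundua']
print('-'.join(words))      # joined back with a dash
print(s.strip().title())    # first letter of each word capitalized
print(s.replace('kaixo', 'agur').strip())
```

Note that none of these modify `s`: strings are immutable, so each method returns a new string.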
```
from string import ascii_lowercase as lowercase
print('n' in lowercase , 'ñ' in lowercase , 'u' in lowercase , 'ú' in lowercase)
```

<table border="0" width="100%" style="margin: 0px;">
<tr>
<td style="text-align:left"><a href="Python Programazio Lengoaia.ipynb">&lt; &lt; Python Programazio Lengoaia &lt; &lt;</a></td>
<td style="text-align:right"><a href="Zerrendak.ipynb">&gt; &gt; Zerrendak &gt; &gt;</a></td>
</tr>
</table>
```
import cupy as cp
import cusignal
from scipy import signal
import numpy as np
```

### Resample

```
start = 0
stop = 10
num = int(1e8)
resample_num = int(1e5)

cx = np.linspace(start, stop, num, endpoint=False)
cy = np.cos(-cx**2/6.0)

%%time
cf = signal.resample(cy, resample_num, window=('kaiser', 0.5))

gx = cp.linspace(start, stop, num, endpoint=False)
gy = cp.cos(-gx**2/6.0)

%%time
gf = cusignal.resample(gy, resample_num, window=('kaiser',0.5))
```

### Resample Poly

```
start = 0
stop = 10
num = int(1e8)
resample_up = 2
resample_down = 3

cx = np.linspace(start, stop, num, endpoint=False)
cy = np.cos(-cx**2/6.0)

%%time
cf = signal.resample_poly(cy, resample_up, resample_down, window=('kaiser', 0.5))

gx = cp.linspace(start, stop, num, endpoint=False)
gy = cp.cos(-gx**2/6.0)

%%time
gf = cusignal.resample_poly(gy, resample_up, resample_down, window=('kaiser', 0.5), use_numba=True)

%%time
gf = cusignal.resample_poly(gy, resample_up, resample_down, window=('kaiser', 0.5), use_numba=False)

# Shared-memory buffer for moving the CPU signal to the GPU without an extra copy
gpu_signal = cusignal.get_shared_mem(num, dtype=np.complex128)
gpu_signal[:] = cy
```

### FIR Filter Design with Window

```
numtaps = int(1e8)
f1, f2 = 0.1, 0.2

%%time
cfirwin = signal.firwin(numtaps, [f1, f2], pass_zero=False)

%%time
gfirwin = cusignal.firwin(numtaps, [f1, f2], pass_zero=False)
```

### Correlate

```
sig = np.random.rand(int(1e8))
sig_noise = sig + np.random.randn(len(sig))

%%time
ccorr = signal.correlate(sig_noise, np.ones(128), mode='same') / 1e6

sig = cp.random.rand(int(1e8))
sig_noise = sig + cp.random.randn(len(sig))

%%time
gcorr = cusignal.correlate(sig_noise, cp.ones(128), mode='same') / 1e6
```

### Convolve

```
sig = np.random.rand(int(1e8))
win = signal.windows.hann(int(1e3))

%%time
cconv = signal.convolve(sig, win, mode='same') / np.sum(win)

sig = cp.random.rand(int(1e8))
win = cusignal.hann(int(1e3))

%%time
gconv = cusignal.convolve(sig, win, mode='same') / cp.sum(win)
```

### Convolution using the FFT Method

```
csig = np.random.randn(int(1e8))

%%time
cautocorr = signal.fftconvolve(csig, csig[::-1], mode='full')

gsig = cp.random.randn(int(1e8))

%%time
gautocorr = cusignal.fftconvolve(gsig, gsig[::-1], mode='full')
```

### Wiener Filter on N-Dimensional Array

```
csig = np.random.rand(int(1e8))

%%time
cfilt = signal.wiener(csig)

gsig = cp.random.rand(int(1e8))

%%time
gfilt = cusignal.wiener(gsig)
```

### Perform 1-D Hilbert Transform

```
csig = np.random.rand(int(1e8))

%%time
chtrans = signal.hilbert(csig)

gsig = cp.random.rand(int(1e8))

%%time
ghtrans = cusignal.hilbert(gsig)
```

### Perform 2-D Hilbert Transform

```
csig = np.random.rand(int(1e4), int(1e4))

%%time
chtrans2d = signal.hilbert2(csig)

gsig = cp.random.rand(int(1e4), int(1e4))

%%time
ghtrans2d = cusignal.hilbert2(gsig)
```

### Perform 2-D Convolution and Correlation

```
csig = np.random.rand(int(1e4), int(1e4))
filt = np.random.rand(5,5)

%%time
grad = signal.convolve2d(csig, filt, boundary='symm', mode='same')

%%time
grad = signal.correlate2d(csig, filt, boundary='symm', mode='same')

gsig = cp.random.rand(int(1e4), int(1e4))
gfilt = cp.random.rand(5,5)

%%time
ggrad = cusignal.convolve2d(gsig, gfilt, boundary='symm', mode='same')

%%time
ggrad = cusignal.correlate2d(gsig, gfilt, boundary='symm', mode='same')
```
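A caveat on the `%%time` comparisons above: GPU kernel launches are asynchronous, so a cell can return before the work finishes and under-report the GPU time. A generic, hedged timing sketch with an explicit synchronization hook (for CuPy the hook would plausibly be `cp.cuda.Stream.null.synchronize`; adapt to your setup):

```python
import time

def timed(fn, *args, sync=None, repeat=3):
    """Best-of-`repeat` wall-clock time for fn(*args).

    `sync` is an optional zero-argument callable invoked before the clock
    is stopped, to flush any pending asynchronous work.
    """
    best = float('inf')
    out = None
    for _ in range(repeat):
        start = time.perf_counter()
        out = fn(*args)
        if sync is not None:
            sync()
        best = min(best, time.perf_counter() - start)
    return out, best

# CPU example; on the GPU you would pass e.g. sync=cp.cuda.Stream.null.synchronize
result, seconds = timed(sum, range(1000))
print(result, seconds)
```

Taking the best of several repeats also amortizes one-time costs such as kernel compilation on the first call.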
# The Rise of GitHub

GitHub has become the dominant channel that development teams use to collaborate on code. Wikipedia's [Timeline of GitHub](https://en.wikipedia.org/wiki/Timeline_of_GitHub) documents GitHub's rise to dominance as a *business*. We will use a `mirror` crawl to analyze GitHub's rise as a *platform*.

## Dataset

Any analysis must start with data. The dataset we use here is the result of a crawl of public GitHub repositories conducted using [`mirror`](https://github.com/simiotics/mirror). You can build the same dataset by using `mirror github crawl` to build up the raw dataset of basic repository information and then `mirror github sync` to create a SQLite database of the type used in this notebook. If you do create your own dataset, change the variable below to point at the SQLite database you generate.

```
from datetime import datetime
import json
import math
import os

%matplotlib notebook
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from tqdm import tqdm
import requests

GITHUB_SQLITE = os.path.expanduser('~/data/mirror/github.sqlite')
```

Let us explore the structure of the dataset before we dive into our analysis. Basic repository metadata (extracted by crawling the GitHub [`/repositories`](https://developer.github.com/v3/repos/#list-all-public-repositories) endpoint) is stored in the `repositories` table of this database. This is its schema:

```
import sqlite3

conn = sqlite3.connect(GITHUB_SQLITE)
c = conn.cursor()
r = c.execute('select sql from sqlite_master where name="repositories";')
repositories_schema = r.fetchone()[0]
print(repositories_schema)
```

These columns do not provide comprehensive repository information, but they already allow us to understand some interesting things.
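Since `mirror` does not create indices automatically, creating a couple by hand can speed up the full-table scans that follow. A sketch on an in-memory stand-in (the index columns and names here are assumptions, chosen to match the counting queries below; in the real notebook you would run the `CREATE INDEX` statements against the `GITHUB_SQLITE` connection):

```python
import sqlite3

# In-memory stand-in so the sketch is self-contained
conn_demo = sqlite3.connect(':memory:')
cur = conn_demo.cursor()
cur.execute('CREATE TABLE repositories (github_id INTEGER, owner TEXT, is_fork INTEGER, api_url TEXT)')

# Hypothetical helper indices on the columns the queries below scan
cur.execute('CREATE INDEX IF NOT EXISTS idx_repos_owner ON repositories (owner)')
cur.execute('CREATE INDEX IF NOT EXISTS idx_repos_is_fork ON repositories (is_fork)')

names = [row[0] for row in cur.execute("SELECT name FROM sqlite_master WHERE type='index'")]
print(names)
```

`CREATE INDEX IF NOT EXISTS` makes the cell safe to re-run; on a 120-million-row table the index build itself takes a while, but it pays off across repeated scans.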
To speed up these preliminary analyses, since `mirror` does not automatically create indices in the database, let us create some of our own:

### Number of public repositories on GitHub

```
r = c.execute('select count(*) from repositories;')
result = r.fetchone()
print(result[0])
```

### Proportion of repositories that are forks

```
results = c.execute('select is_fork from repositories;')
total = 0
forks = 0
for row in tqdm(results):
    total += 1
    forks += (row[0] == 1)
print('Proportion of repositories that are forks:', forks/total)
```

### Number of users who have created public repositories

```
results = c.execute('select owner from repositories;')
owners = set([])
for row in tqdm(results):
    owners.add(row[0])
print('Number of users who have created public repositories:', len(owners))

# The owners set takes over a gigabyte of memory on only the first full crawl.
# Better to free this memory up.
del owners
```

## Rise

The basic repository metadata that GitHub's `/repositories` endpoint provides does not give any information about when a repository was created. We will rely on the [`/repos`](https://developer.github.com/v3/repos/#get) endpoint. The `api_url` column of the Mirror crawl `repositories` table conveniently populates the endpoint URL with repository information, so we will use this column.

### Sampling

We have to make GitHub API calls to retrieve creation time information for repositories in our dataset. With over 120 million repositories in the dataset and with a rate limit of 5000 requests per hour (authenticated) to the GitHub API, it would take a long time to collect the creation time for every repository in our database. We will have to sample.
For sampling, we use the following parameters:

+ `NUM_SAMPLES` - Number of total repositories that should be sampled for creation time analysis
+ `GITHUB_TOKEN_FILE` - File containing the token for authentication against the GitHub API ([instructions](https://github.com/settings/tokens) to create an API token)

```
NUM_SAMPLES = 500

headers = {}
# Comment the following lines out if you do not want to make authenticated calls against the GitHub API
# If you do authenticate, make sure that the path below points at a file with your token.
GITHUB_TOKEN_FILE = os.path.expanduser('~/.secrets/github-mirror.txt')
with open(GITHUB_TOKEN_FILE, 'r') as ifp:
    GITHUB_TOKEN = ifp.read().strip()
headers['Authorization'] = f'token {GITHUB_TOKEN}'

gap = int(total/NUM_SAMPLES)
results = c.execute('select github_id, api_url from repositories;')
sample = []
for i, row in tqdm(enumerate(results)):
    if i % gap == 0:
        sample.append((i, row[0], row[1]))

len(sample)

# Change file name as per your use case
SAMPLE_METADATA_FILE = os.path.expanduser('~/data/mirror/rise-of-github-sample.jsonl')

sample_metadata = []
if os.path.isfile(SAMPLE_METADATA_FILE):
    with open(SAMPLE_METADATA_FILE, 'r') as ifp:
        for line in ifp:
            sample_metadata.append(json.loads(line.strip()))
else:
    for i, github_id, api_url in tqdm(sample):
        response = requests.get(api_url, headers=headers)
        entry = {
            'position': i,
            'github_id': github_id,
            'repository': response.json(),
        }
        sample_metadata.append(entry)
    with open(SAMPLE_METADATA_FILE, 'w') as ofp:
        for entry in sample_metadata:
            print(json.dumps(entry), file=ofp)
```

This file is available as a Gist. [Download here](https://gist.github.com/nkashy1/c4ca78c5d6c2c2da2b03b4a730f6e194).

### Analysis

The sample repository metadata is the subject of this analysis, with the goal of building a simple histogram to visualize the growth of GitHub as a platform.
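The even-gap sampling loop above can be factored into a small helper (`every_nth` is a made-up name for illustration, not part of `mirror`):

```python
def every_nth(items, num_samples):
    """Evenly spaced sample: keep one item every len(items)//num_samples positions,
    returning (position, item) pairs like the notebook's `sample` list."""
    gap = max(1, len(items) // num_samples)
    return [(i, item) for i, item in enumerate(items) if i % gap == 0]

sampled = every_nth(list(range(100)), 10)
print(len(sampled), sampled[0], sampled[-1])  # 10 (0, 0) (90, 90)
```

Keeping the position alongside each item is what later lets the analysis estimate how many repositories fall between consecutive samples.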
Some of the repositories have bad metadata - either because the repository has been deleted or because access to repository metadata has been blocked. We define "bad" metadata as metadata that doesn't have a `created_at` key. Let us count how many such repositories there are:

```
num_bad = 0
for entry in tqdm(sample_metadata):
    if 'created_at' not in entry.get('repository', {}):
        num_bad += 1
print(num_bad)
```

That still gives us enough samples to build a meaningful histogram out of, if we first restrict ourselves to the `valid_sample_metadata`:

```
valid_sample_metadata = [entry for entry in sample_metadata if entry.get('repository', {}).get('created_at') is not None]
len(valid_sample_metadata)
```

Conveniently, the results of the Mirror crawl are sorted. This allows us to count the number of repositories created between samples by simply taking a difference of the `github_id` values of consecutive entries in `sample_metadata`.

```
x = []
bins = []
weights = []
labels = []
for i, entry in tqdm(enumerate(valid_sample_metadata[:-1])):
    created_at = datetime.strptime(entry['repository']['created_at'], '%Y-%m-%dT%H:%M:%SZ')
    labels.append(created_at.strftime('%Y-%m-%d'))
    x.append(created_at.timestamp())
    bins.append(created_at.timestamp())
    weights.append(valid_sample_metadata[i+1]['github_id'] - entry['github_id'])

ticks = [bins[0]]
ticklabels = [labels[0]]
log = int(math.log(len(bins), 2))
for i in range(log-1, 0, -1):
    idx = int(len(bins)/(2**i))
    ticks.append(bins[idx])
    ticklabels.append(labels[idx])
ticks.append(bins[-1])
ticklabels.append(labels[-1])

timeline_ticks = [
    datetime(2010, 7, 24).timestamp(),
    datetime(2011, 4, 20).timestamp(),
    datetime(2012, 1, 17).timestamp(),
    datetime(2013, 1, 14).timestamp(),
    datetime(2013, 12, 23).timestamp(),
    datetime(2014, 3, 17).timestamp(),
    datetime(2014, 10, 7).timestamp(),
    datetime(2015, 3, 26).timestamp(),
    datetime(2015, 9, 1).timestamp(),
    datetime(2015, 12, 3).timestamp(),
] + ticks[-3:]
timeline_ticklabels = [
    '1M repos',
    '2M repos',
    'Google joins',
    '3M users',
    '10M repos',
    'Harassment',
    'Student Pack',
    'DDoS',
    '~10M users',
    'Apple joins',
] + ticklabels[-3:]

fig, ax = plt.subplots(1, 2, figsize=(12,9))

_ = ax[0].hist(x=x, bins=bins, weights=weights, cumulative=False)
ax[0].set_xlabel('Date')
ax[0].set_ylabel('Number of repositories')
ax[0].set_xticks(timeline_ticks)
ax[0].set_xticklabels(timeline_ticklabels, rotation='vertical')

_ = ax[1].hist(x=x, bins=bins, weights=weights, cumulative=True)
ax[1].set_xlabel('Date')
ax[1].set_ylabel('Number of repositories (cumulative)')
ax[1].set_xticks(ticks)
ax[1].set_xticklabels(ticklabels, rotation='vertical')

plt.show()

fig.savefig('rise-of-github.png')
```

(Timeline events taken from https://en.wikipedia.org/wiki/Timeline_of_GitHub)
# List 02 - Probability + Statistics

```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from numpy.testing import *
from scipy import stats as ss

plt.style.use('seaborn-colorblind')
plt.ion()
```

# Exercise 01:

Suppose that the heights of adult women in some regions follow a normal distribution with $\mu = 162$ centimeters and $\sigma = 8$. In that case, answer the questions below:

(a) Given a woman who is 180 centimeters tall, what is the probability that someone chosen at random is taller than her?

To answer the question, create a function a(), with no parameters, that returns the answer with a precision of 4 decimal places.

__Hints__:
1. the function round(var, n) returns the value of the variable var with a precision of n decimal places.
1. the class `from scipy.stats.distributions import norm` implements a normal and already has a cdf method and a ppf method (inverse of the cdf).

```
# Create the function a() here - with that name and no parameters -
# returning the answer with a precision of 4 decimal places!
from scipy.stats.distributions import norm

def a():
    # P(X > 180) = 1 - F(180) for X ~ N(162, 8)
    return round(1 - norm.cdf(180, loc=162, scale=8), 4)

a()
```

(b) A coach from that region wants to put together a basketball team. To do so, she wants to set a minimum height $h$ that the players must have. She wants $h$ to be greater than at least $90\%$ of the heights of women from that region. What is the value of $h$?

To answer the question, create a function _b()_, with no parameters, that returns the answer with a precision of 4 decimal places.

__Hint:__ the function _round(var, n)_ or _np.round(var, n)_ returns the value of the variable var with a precision of n decimal places.

```
# Create the function b() here - with that name and no parameters -
# returning the answer with a precision of 4 decimal places!
# YOUR CODE HERE
raise NotImplementedError()
```

# Exercise 02:

The following samples were generated from a normal distribution N($\mu$, $\sigma$), where $\mu$, $\sigma$ are not necessarily the same for both. The generated histograms let you visualize these distributions.

```
dados1 = [3.8739066,4.4360658,3.0235970,6.1573843,3.7793704,3.6493491,7.2910457,3.7489513,5.9306145,5.3897872,
          5.9091607,5.2491517,7.1163771,4.1930465,-0.1994626,3.2583011,5.9229948,1.8548338,4.8335581,5.2329008,
          1.5683191,5.8756518,3.4215138,4.7900996,5.9530234,4.4550699,3.3868535,5.3060581,4.2124300,7.0123823,
          4.9790184,2.2368825,3.9182012,5.4449732,5.7594690,5.4159924,3.5914275,3.4382886,4.0706780,6.9489863,
          6.3269462,2.8740986,7.4210664,4.6413206,4.2209699,4.2009752,6.2509627,4.9137823,4.9171593,6.3367493]

dados2 = [2.291049832,5.092164483,3.287501109,4.152289011,4.534256822,5.513028947,2.696660244,3.270482741,
          5.435338467,6.244110011,1.363583509,5.385855994,6.069527998,2.148361858,6.471584096,4.953202949,
          6.827787432,4.695468536,2.047598339,8.858080081,5.436394723,7.849470791,4.053545595,3.204185038,
          2.400954454,-0.002092845,3.571868529,6.202897955,5.224842718,4.958476608,6.708545254,-0.115002497,
          5.106492712,3.343396551,5.984204841,3.552744920,4.041155327,5.709103288,3.137316917,2.100906915,
          4.379147487,0.536031040,4.777440348,5.610527663,3.802506385,3.484180306,7.316861806,2.965851553,
          3.640560731,4.765175164,7.047545215,5.683723446,5.048988000,6.891720033,3.619091771,8.396155189,
          5.317492252,2.376071049,4.383045321,7.386186468,6.554626718,5.020433071,3.577328839,5.534419417,
          3.600534876,2.172314745,4.632719037,4.361328042,4.292156420,1.102889101,4.621840612,4.946746104,
          6.182937650,5.415993589,4.346608293,2.896446739,3.516568382,6.972384719,3.233811405,4.048606672,
          1.663547342,4.607297335,-0.753490459,3.205353052,1.269307121,0.962428478,4.718627886,4.686076530,
          2.919118501,6.204058666,4.803050149,4.670632749,2.811395731,7.214950058,3.275492976,2.336357937,
          8.494097155,6.473022507,8.525715511,4.364707111]

plt.hist(dados1)
plt.show()
plt.hist(dados2)
plt.show()
```

__a)__ From the histograms, try to fit a normal to each one, drawing it over the histogram. To do so, you must estimate values for $\mu$ and $\sigma$. Do not forget to normalize the data, i.e., the y axis must be on a scale from 0 to (at most) 1!

```
mu1, std1 = norm.fit(dados1)
mu2, std2 = norm.fit(dados2)

# density=True normalizes the histogram so the fitted pdf can be drawn on the same scale
xs1 = np.linspace(min(dados1), max(dados1), 200)
plt.hist(dados1, density=True)
plt.plot(xs1, norm.pdf(xs1, mu1, std1))
plt.show()

xs2 = np.linspace(min(dados2), max(dados2), 200)
plt.hist(dados2, density=True)
plt.plot(xs2, norm.pdf(xs2, mu2, std2))
plt.show()
```

# Exercise 03:

Given a table with information about a sample of 20 students, containing their grades in some courses and the difficulty levels of those courses, create a function that returns the conditional probability estimated from the data for two given events, also reporting whether the events are independent or not.

That is, given the table shown in the example (a list of lists) and two events A and B, return the conditional probability of A given B (P(A|B)) with a precision of 4 decimal places. The return value of the function, however, must be a sentence (string) written as follows: _str: val_, where _str_ is the string "Independentes" if the events A and B are independent and "Dependentes" otherwise, and _val_ is the value of the conditional probability P(A|B) with a precision of 4 decimal places.

__Hint:__ the function format(var, '.nf') returns a string with the value of the variable var with a precision of exactly n decimal places.
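As a refresher for this exercise: P(A|B) is estimated from counts as #(A and B) / #(B), and A and B are independent exactly when P(A|B) = P(A). A generic sketch on toy data (`p_cond` is a made-up helper, not the graded `prob_cond`):

```python
# Toy (dificuldade, nota) rows, not the graded data set
rows = [('Facil', 'A'), ('Facil', 'B'), ('Dificil', 'A'), ('Dificil', 'A')]

def p_cond(rows, nota, dificuldade):
    """Estimate P(nota | dificuldade) = #(nota and dificuldade) / #(dificuldade)."""
    b = [r for r in rows if r[0] == dificuldade]
    ab = [r for r in b if r[1] == nota]
    return len(ab) / len(b)

print(format(p_cond(rows, 'A', 'Facil'), '.4f'))  # 0.5000
```

Comparing `p_cond(rows, nota, dificuldade)` against the unconditional relative frequency of `nota` is what decides the "Independentes"/"Dependentes" label.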
```
# These data refer to the grades (A-E) of 20 students according to the difficulty of the course (easy or hard)
# Column 1: student id
# Column 2: course difficulty ('Facil' or 'Dificil')
# Column 3: student grade (A-E)
data = [[1, 'Facil', 'C'], [2, 'Facil', 'A'], [3, 'Dificil', 'E'], [4, 'Dificil', 'B'], [5, 'Dificil', 'B'],
        [6, 'Dificil', 'A'], [7, 'Facil', 'D'], [8, 'Dificil', 'C'], [9, 'Facil', 'D'], [10, 'Facil', 'C'],
        [11, 'Facil', 'A'], [12, 'Facil', 'A'], [13, 'Dificil', 'B'], [14, 'Dificil', 'C'], [15, 'Dificil', 'E'],
        [16, 'Dificil', 'C'], [17, 'Facil', 'A'], [18, 'Dificil', 'D'], [19, 'Facil', 'B'], [20, 'Facil', 'A']]
data = pd.DataFrame(data, columns=['id', 'dificuldade', 'nota'])
data = data.set_index('id')
print(data)

def prob_cond(df, valor_nota: 'treat as A in Bayes', valor_dificuldade: 'treat as B in Bayes'):
    # YOUR CODE HERE
    raise NotImplementedError()

"""Check that prob_cond returns the correct output for several inputs"""
assert_equal(prob_cond(data, 'A', 'Facil'), 'Dependentes: 0.5000')
assert_equal(prob_cond(data, 'E', 'Facil'), 'Dependentes: 0.0000')
assert_equal(prob_cond(data, 'A', 'Dificil'), 'Dependentes: 0.1000')
assert_equal(prob_cond(data, 'E', 'Dificil'), 'Dependentes: 0.2000')
```

# Exercise 04:

Using data on fatal accidents at United States airlines from 1985 to 1999, compute some basic statistics. You must return a __list__ with the values of the computed statistics, in this order: smallest value, largest value, mean, median, variance and standard deviation.

To answer the question, create a function _estat(acidentes)_ that returns the list with the values corresponding to the answers, as integers when they are integers or with a precision of 4 decimal places otherwise.

__Test:__ `assert_equal(estat(acidentes), ans)`, where `ans` is a list containing the correct values for the statistics this exercise asks for.
__Hints:__

1) The function round(var, n) returns the value of the variable var with a precision of n decimal places.

2) Run the test `assert_equal(estat(lista_boba), ans_bobo)` for some `lista_boba` whose statistics you can compute on paper.

__Source:__ https://aviation-safety.net/

```
# Create the function estat(acidentes) here - with that name and parameter -
# the function must return the list of answers with a precision of 4 decimal places!
# YOUR CODE HERE
raise NotImplementedError()
```

# Exercise 05:

Look for interesting spurious correlations and present an example you found. That is, present two data sets that are highly correlated (very positively or very negatively) without one actually being the cause of the other. In addition, record the plots with the distributions of the data and a scatter plot as a way of visualizing the correlation between the data. Compute the covariance and the correlation between the data and, finally, if possible, try to explain what the true cause of the observations might be. Use the last cell of this notebook for this.

__Note:__ For ideas of spurious correlations, see the following sites:

http://tylervigen.com/spurious-correlations

https://en.wikipedia.org/wiki/Spurious_relationship#Other_relationships

```
from IPython.display import SVG, display
display(SVG(url='chart.svg'))
```
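The covariance and Pearson correlation asked for in Exercise 05 can be written out explicitly; in the notebook itself, `np.cov` and `np.corrcoef` compute the same quantities. A minimal sketch with made-up data:

```python
import math

# Made-up data: y rises almost exactly with x, though neither causes the other
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.9]

def covariance(a, b):
    """Sample covariance: mean of products of deviations, with n-1 denominator."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (len(a) - 1)

def pearson(a, b):
    """Pearson correlation: covariance rescaled to [-1, 1]."""
    return covariance(a, b) / math.sqrt(covariance(a, a) * covariance(b, b))

print(round(covariance(x, y), 4), round(pearson(x, y), 4))
```

A correlation near +1 or -1 only says the two series move together; as Exercise 05 stresses, it says nothing about one causing the other.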
```
import pandas as pd

medicare = pd.read_csv("/netapp2/home/se197/RPDR/Josh Lin/3_EHR_V2/CMS/final_medicare.csv")
medicare.shape

train_set = medicare[medicare.Hospital != 'BWH']       # MGH and other non-BWH records; n = 204014
validation_set = medicare[medicare.Hospital == 'BWH']  # BWH; n = 115726

import numpy as np

# Split each cohort at the median (50th percentile) of the continuous EHR score
fifty_perc_EHR_cont = np.percentile(medicare['Cal_MPEC_R0'], 50)

train_set_high = train_set[train_set.Cal_MPEC_R0 >= fifty_perc_EHR_cont]
train_set_low = train_set[train_set.Cal_MPEC_R0 < fifty_perc_EHR_cont]

validation_set_high = validation_set[validation_set.Cal_MPEC_R0 >= fifty_perc_EHR_cont]
validation_set_low = validation_set[validation_set.Cal_MPEC_R0 < fifty_perc_EHR_cont]

predictor_variable = [
    'Co_CAD_R0', 'Co_Embolism_R0', 'Co_DVT_R0', 'Co_PE_R0', 'Co_AFib_R0',
    'Co_Hypertension_R0', 'Co_Hyperlipidemia_R0', 'Co_Atherosclerosis_R0', 'Co_HF_R0', 'Co_HemoStroke_R0',
    'Co_IscheStroke_R0', 'Co_OthStroke_R0', 'Co_TIA_R0', 'Co_COPD_R0', 'Co_Asthma_R0',
    'Co_Pneumonia_R0', 'Co_Alcoholabuse_R0', 'Co_Drugabuse_R0', 'Co_Epilepsy_R0', 'Co_Cancer_R0',
    'Co_MorbidObesity_R0', 'Co_Dementia_R0', 'Co_Depression_R0', 'Co_Bipolar_R0', 'Co_Psychosis_R0',
    'Co_Personalitydisorder_R0', 'Co_Adjustmentdisorder_R0', 'Co_Anxiety_R0', 'Co_Generalizedanxiety_R0', 'Co_OldMI_R0',
    'Co_AcuteMI_R0', 'Co_PUD_R0', 'Co_UpperGIbleed_R0', 'Co_LowerGIbleed_R0', 'Co_Urogenitalbleed_R0',
    'Co_Othbleed_R0', 'Co_PVD_R0', 'Co_LiverDisease_R0', 'Co_MRI_R0', 'Co_ESRD_R0',
    'Co_Obesity_R0', 'Co_Sepsis_R0', 'Co_Osteoarthritis_R0', 'Co_RA_R0', 'Co_NeuroPain_R0',
    'Co_NeckPain_R0', 'Co_OthArthritis_R0', 'Co_Osteoporosis_R0', 'Co_Fibromyalgia_R0', 'Co_Migraine_R0',
    'Co_Headache_R0', 'Co_OthPain_R0', 'Co_GeneralizedPain_R0', 'Co_PainDisorder_R0', 'Co_Falls_R0',
    'Co_CoagulationDisorder_R0', 'Co_WhiteBloodCell_R0', 'Co_Parkinson_R0', 'Co_Anemia_R0', 'Co_UrinaryIncontinence_R0',
    'Co_DecubitusUlcer_R0', 'Co_Oxygen_R0', 'Co_Mammography_R0', 'Co_PapTest_R0', 'Co_PSATest_R0',
    'Co_Colonoscopy_R0', 'Co_FecalOccultTest_R0',
'Co_FluShot_R0', 'Co_PneumococcalVaccine_R0', 'Co_RenalDysfunction_R0', 'Co_Valvular_R0', 'Co_Hosp_Prior30Days_R0', 'Co_RX_Antibiotic_R0', 'Co_RX_Corticosteroid_R0', 'Co_RX_Aspirin_R0', 'Co_RX_Dipyridamole_R0', 'Co_RX_Clopidogrel_R0', 'Co_RX_Prasugrel_R0', 'Co_RX_Cilostazol_R0', 'Co_RX_Ticlopidine_R0', 'Co_RX_Ticagrelor_R0', 'Co_RX_OthAntiplatelet_R0', 'Co_RX_NSAIDs_R0', 'Co_RX_Opioid_R0', 'Co_RX_Antidepressant_R0', 'Co_RX_AAntipsychotic_R0', 'Co_RX_TAntipsychotic_R0', 'Co_RX_Anticonvulsant_R0', 'Co_RX_PPI_R0', 'Co_RX_H2Receptor_R0', 'Co_RX_OthGastro_R0', 'Co_RX_ACE_R0', 'Co_RX_ARB_R0', 'Co_RX_BBlocker_R0', 'Co_RX_CCB_R0', 'Co_RX_Thiazide_R0', 'Co_RX_Loop_R0', 'Co_RX_Potassium_R0', 'Co_RX_Nitrates_R0', 'Co_RX_Aliskiren_R0', 'Co_RX_OthAntihypertensive_R0', 'Co_RX_Antiarrhythmic_R0', 'Co_RX_OthAnticoagulant_R0', 'Co_RX_Insulin_R0', 'Co_RX_Noninsulin_R0', 'Co_RX_Digoxin_R0', 'Co_RX_Statin_R0', 'Co_RX_Lipid_R0', 'Co_RX_Lithium_R0', 'Co_RX_Benzo_R0', 'Co_RX_ZDrugs_R0', 'Co_RX_OthAnxiolytic_R0', 'Co_RX_Barbiturate_R0', 'Co_RX_Dementia_R0', 'Co_RX_Hormone_R0', 'Co_RX_Osteoporosis_R0', 'Co_N_Drugs_R0', 'Co_N_Hosp_R0', 'Co_Total_HospLOS_R0', 'Co_N_MDVisit_R0', 'Co_RX_AnyAspirin_R0', 'Co_RX_AspirinMono_R0', 'Co_RX_ClopidogrelMono_R0', 'Co_RX_AspirinClopidogrel_R0', 'Co_RX_DM_R0', 'Co_RX_Antipsychotic_R0' ] predictor_variable_claims = [ 'Co_CAD_RC0', 'Co_Embolism_RC0', 'Co_DVT_RC0', 'Co_PE_RC0', 'Co_AFib_RC0', 'Co_Hypertension_RC0', 'Co_Hyperlipidemia_RC0', 'Co_Atherosclerosis_RC0', 'Co_HF_RC0', 'Co_HemoStroke_RC0', 'Co_IscheStroke_RC0', 'Co_OthStroke_RC0', 'Co_TIA_RC0', 'Co_COPD_RC0', 'Co_Asthma_RC0', 'Co_Pneumonia_RC0', 'Co_Alcoholabuse_RC0', 'Co_Drugabuse_RC0', 'Co_Epilepsy_RC0', 'Co_Cancer_RC0', 'Co_MorbidObesity_RC0', 'Co_Dementia_RC0', 'Co_Depression_RC0', 'Co_Bipolar_RC0', 'Co_Psychosis_RC0', 'Co_Personalitydisorder_RC0', 'Co_Adjustmentdisorder_RC0', 'Co_Anxiety_RC0', 'Co_Generalizedanxiety_RC0', 'Co_OldMI_RC0', 'Co_AcuteMI_RC0', 'Co_PUD_RC0', 'Co_UpperGIbleed_RC0', 
'Co_LowerGIbleed_RC0', 'Co_Urogenitalbleed_RC0', 'Co_Othbleed_RC0', 'Co_PVD_RC0', 'Co_LiverDisease_RC0', 'Co_MRI_RC0', 'Co_ESRD_RC0', 'Co_Obesity_RC0', 'Co_Sepsis_RC0', 'Co_Osteoarthritis_RC0', 'Co_RA_RC0', 'Co_NeuroPain_RC0', 'Co_NeckPain_RC0', 'Co_OthArthritis_RC0', 'Co_Osteoporosis_RC0', 'Co_Fibromyalgia_RC0', 'Co_Migraine_RC0', 'Co_Headache_RC0', 'Co_OthPain_RC0', 'Co_GeneralizedPain_RC0', 'Co_PainDisorder_RC0', 'Co_Falls_RC0', 'Co_CoagulationDisorder_RC0', 'Co_WhiteBloodCell_RC0', 'Co_Parkinson_RC0', 'Co_Anemia_RC0', 'Co_UrinaryIncontinence_RC0', 'Co_DecubitusUlcer_RC0', 'Co_Oxygen_RC0', 'Co_Mammography_RC0', 'Co_PapTest_RC0', 'Co_PSATest_RC0', 'Co_Colonoscopy_RC0', 'Co_FecalOccultTest_RC0', 'Co_FluShot_RC0', 'Co_PneumococcalVaccine_RC0' , 'Co_RenalDysfunction_RC0', 'Co_Valvular_RC0', 'Co_Hosp_Prior30Days_RC0', 'Co_RX_Antibiotic_RC0', 'Co_RX_Corticosteroid_RC0', 'Co_RX_Aspirin_RC0', 'Co_RX_Dipyridamole_RC0', 'Co_RX_Clopidogrel_RC0', 'Co_RX_Prasugrel_RC0', 'Co_RX_Cilostazol_RC0', 'Co_RX_Ticlopidine_RC0', 'Co_RX_Ticagrelor_RC0', 'Co_RX_OthAntiplatelet_RC0', 'Co_RX_NSAIDs_RC0', 'Co_RX_Opioid_RC0', 'Co_RX_Antidepressant_RC0', 'Co_RX_AAntipsychotic_RC0', 'Co_RX_TAntipsychotic_RC0', 'Co_RX_Anticonvulsant_RC0', 'Co_RX_PPI_RC0', 'Co_RX_H2Receptor_RC0', 'Co_RX_OthGastro_RC0', 'Co_RX_ACE_RC0', 'Co_RX_ARB_RC0', 'Co_RX_BBlocker_RC0', 'Co_RX_CCB_RC0', 'Co_RX_Thiazide_RC0', 'Co_RX_Loop_RC0', 'Co_RX_Potassium_RC0', 'Co_RX_Nitrates_RC0', 'Co_RX_Aliskiren_RC0', 'Co_RX_OthAntihypertensive_RC0', 'Co_RX_Antiarrhythmic_RC0', 'Co_RX_OthAnticoagulant_RC0', 'Co_RX_Insulin_RC0', 'Co_RX_Noninsulin_RC0', 'Co_RX_Digoxin_RC0', 'Co_RX_Statin_RC0', 'Co_RX_Lipid_RC0', 'Co_RX_Lithium_RC0', 'Co_RX_Benzo_RC0', 'Co_RX_ZDrugs_RC0', 'Co_RX_OthAnxiolytic_RC0', 'Co_RX_Barbiturate_RC0', 'Co_RX_Dementia_RC0', 'Co_RX_Hormone_RC0', 'Co_RX_Osteoporosis_RC0', 'Co_N_Drugs_RC0', 'Co_N_Hosp_RC0', 'Co_Total_HospLOS_RC0', 'Co_N_MDVisit_RC0', 'Co_RX_AnyAspirin_RC0', 'Co_RX_AspirinMono_RC0', 
'Co_RX_ClopidogrelMono_RC0', 'Co_RX_AspirinClopidogrel_RC0', 'Co_RX_DM_RC0',
 'Co_RX_Antipsychotic_RC0'
]

# Training features are the claims-coded variables; validation features are
# the corresponding EHR-coded variables.
co_train_gpop = train_set[predictor_variable_claims]
co_train_high = train_set_high[predictor_variable_claims]
co_train_low = train_set_low[predictor_variable_claims]

co_validation_gpop = validation_set[predictor_variable]
co_validation_high = validation_set_high[predictor_variable]
co_validation_low = validation_set_low[predictor_variable]

# Outcomes
out_train_death_gpop = train_set['ehr_claims_death']
out_train_death_high = train_set_high['ehr_claims_death']
out_train_death_low = train_set_low['ehr_claims_death']

out_validation_death_gpop = validation_set['ehr_death']
out_validation_death_high = validation_set_high['ehr_death']
out_validation_death_low = validation_set_low['ehr_death']
```

# Template LR

```
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import (accuracy_score, f1_score, fbeta_score,
                             roc_auc_score, log_loss)

def lr(X_train, y_train):
    """Fit a logistic regression, tuning the regularization strength C
    by 5-fold cross-validated grid search."""
    model = LogisticRegression()
    param_grid = [{'C': np.logspace(-4, 4, 20)}]
    clf = GridSearchCV(model, param_grid, cv=5, verbose=True, n_jobs=-1)
    return clf.fit(X_train, y_train)

def print_scores(clf, X, y):
    """Print accuracy, F1, macro F2, ROC AUC and log loss for clf on (X, y).
    The fitted classifier is passed in explicitly rather than read from a
    global, so the same function serves for both training and test scores."""
    pred = clf.predict(X)
    print(accuracy_score(y, pred))
    print(f1_score(y, pred))
    print(fbeta_score(y, pred, average='macro', beta=2))
    print(roc_auc_score(y, clf.decision_function(X)))
    print(log_loss(y, pred))
```

# General Population

```
best_clf = lr(co_train_gpop, out_train_death_gpop)
print_scores(best_clf, co_train_gpop, out_train_death_gpop)
print()
print_scores(best_clf, co_validation_gpop, out_validation_death_gpop)

# Pair each predictor name with its fitted coefficient
comb = []
for i in range(len(predictor_variable_claims)):
    comb.append(predictor_variable_claims[i] + str(best_clf.best_estimator_.coef_[:, i:i+1]))
comb
```

# High Continuity

```
best_clf = lr(co_train_high, out_train_death_high)
prediction_high = best_clf.predict_proba(co_validation_high)
#prediction_high_low = best_clf.predict_proba(co_validation_low)
#prediction_high_high = best_clf.predict_proba(co_validation_high)

#print_scores(best_clf, co_train_high, out_train_death_high)
#print()
#print_scores(best_clf, co_validation_high, out_validation_death_high)

#comb = []
#for i in range(len(predictor_variable_claims)):
#    comb.append(predictor_variable_claims[i] + str(best_clf.best_estimator_.coef_[:, i:i+1]))
#comb
```

# Low Continuity

```
best_clf = lr(co_train_low, out_train_death_low)
print_scores(best_clf, co_train_low, out_train_death_low)
print()
print_scores(best_clf, co_validation_low, out_validation_death_low)

comb = []
for i in range(len(predictor_variable_claims)):
    comb.append(predictor_variable_claims[i] + str(best_clf.best_estimator_.coef_[:, i:i+1]))
comb

# Evaluate the high-continuity model's predictions computed above
for values in prediction_high:
    print(values[1])

import sklearn
print(sklearn.metrics.roc_auc_score(out_validation_death_high, prediction_high[:, 1]))

for values in out_validation_death_high:
    print(values)
```
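Since the medicare extract is private, the grid-searched logistic-regression template above can be exercised end-to-end on synthetic data. This is only a sketch under stated assumptions (the data, shapes and sample counts below are made up for illustration), not part of the original analysis:

```python
# Minimal, self-contained version of the template: grid-search C for a
# logistic regression, then score with ROC AUC, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

clf = GridSearchCV(
    LogisticRegression(max_iter=1000),
    [{'C': np.logspace(-4, 4, 20)}],  # same C grid as the template above
    cv=5, n_jobs=-1,
).fit(X, y)

# As in the template, AUC is computed from the continuous decision function
auc = roc_auc_score(y, clf.decision_function(X))
```

Passing `clf.decision_function(X)` (a continuous score) rather than hard label predictions is what makes the ROC AUC meaningful.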
This notebook is part of https://github.com/AudioSceneDescriptionFormat/splines, see also https://splines.readthedocs.io/. [back to rotation splines](index.ipynb) # Barry--Goldman Algorithm We can try to use the [Barry--Goldman algorithm for non-uniform Euclidean Catmull--Rom splines](../euclidean/catmull-rom-barry-goldman.ipynb) using [Slerp](slerp.ipynb) instead of linear interpolations, just as we have done with [De Casteljau's algorithm](de-casteljau.ipynb). ``` def slerp(one, two, t): return (two * one.inverse())**t * one def barry_goldman(rotations, times, t): q0, q1, q2, q3 = rotations t0, t1, t2, t3 = times return slerp( slerp( slerp(q0, q1, (t - t0) / (t1 - t0)), slerp(q1, q2, (t - t1) / (t2 - t1)), (t - t0) / (t2 - t0)), slerp( slerp(q1, q2, (t - t1) / (t2 - t1)), slerp(q2, q3, (t - t2) / (t3 - t2)), (t - t1) / (t3 - t1)), (t - t1) / (t2 - t1)) ``` Example: ``` import numpy as np ``` [helper.py](helper.py) ``` from helper import angles2quat, animate_rotations, display_animation q0 = angles2quat(0, 0, 0) q1 = angles2quat(90, 0, 0) q2 = angles2quat(90, 90, 0) q3 = angles2quat(90, 90, 90) t0 = 0 t1 = 1 t2 = 3 t3 = 3.5 frames = 50 ani = animate_rotations({ 'Barry–Goldman (q0, q1, q2, q3)': [ barry_goldman([q0, q1, q2, q3], [t0, t1, t2, t3], t) for t in np.linspace(t1, t2, frames) ], 'Slerp (q1, q2)': slerp(q1, q2, np.linspace(0, 1, frames)), }, figsize=(5, 2)) display_animation(ani, default_mode='once') ``` [splines.quaternion.BarryGoldman](../python-module/splines.quaternion.rst#splines.quaternion.BarryGoldman) class ``` from splines.quaternion import BarryGoldman import numpy as np ``` [helper.py](helper.py) ``` from helper import angles2quat, animate_rotations, display_animation rotations = [ angles2quat(0, 0, 180), angles2quat(0, 45, 90), angles2quat(90, 45, 0), angles2quat(90, 90, -90), angles2quat(180, 0, -180), angles2quat(-90, -45, 180), ] grid = np.array([0, 0.5, 2, 5, 6, 7, 9]) bg = BarryGoldman(rotations, grid) ``` For comparison ... 
[Catmull--Rom-like quaternion spline](catmull-rom-non-uniform.ipynb)

[splines.quaternion.CatmullRom](../python-module/splines.quaternion.rst#splines.quaternion.CatmullRom) class

```
from splines.quaternion import CatmullRom

cr = CatmullRom(rotations, grid, endconditions='closed')

def evaluate(spline, samples=200):
    times = np.linspace(
        spline.grid[0], spline.grid[-1], samples, endpoint=False)
    return spline.evaluate(times)

ani = animate_rotations({
    'Barry–Goldman': evaluate(bg),
    'Catmull–Rom-like': evaluate(cr),
}, figsize=(5, 2))
display_animation(ani, default_mode='loop')

rotations = [
    angles2quat(90, 0, -45),
    angles2quat(179, 0, 0),
    angles2quat(181, 0, 0),
    angles2quat(270, 0, -45),
    angles2quat(0, 90, 90),
]
s_uniform = BarryGoldman(rotations)
s_chordal = BarryGoldman(rotations, alpha=1)
s_centripetal = BarryGoldman(rotations, alpha=0.5)
ani = animate_rotations({
    'uniform': evaluate(s_uniform, samples=300),
    'chordal': evaluate(s_chordal, samples=300),
    'centripetal': evaluate(s_centripetal, samples=300),
}, figsize=(7, 2))
display_animation(ani, default_mode='loop')
```

## Constant Angular Speed

This is not very efficient; De Casteljau's algorithm is faster because it directly provides the tangent.

```
from splines import ConstantSpeedAdapter

class BarryGoldmanWithDerivative(BarryGoldman):

    delta_t = 0.000001

    def evaluate(self, t, n=0):
        """Evaluate quaternion or angular velocity."""
        if not np.isscalar(t):
            return np.array([self.evaluate(t, n) for t in t])
        if n == 0:
            return super().evaluate(t)
        elif n == 1:
            # NB: We move the interval around because
            #     we cannot access times before and after
            #     the first and last time, respectively.
            fraction = (t - self.grid[0]) / (self.grid[-1] - self.grid[0])
            before = super().evaluate(t - fraction * self.delta_t)
            after = super().evaluate(t + (1 - fraction) * self.delta_t)
            # NB: Double angle
            return (after * before.inverse()).log_map() * 2 / self.delta_t
        else:
            raise ValueError('Unsupported n: {!r}'.format(n))

s = ConstantSpeedAdapter(BarryGoldmanWithDerivative(rotations, alpha=0.5))
```

Takes a long time!

```
ani = animate_rotations({
    'non-constant speed': evaluate(s_centripetal),
    'constant speed': evaluate(s),
}, figsize=(5, 2))
display_animation(ani, default_mode='loop')
```
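Since the quaternion class and `slerp()` live in external helpers, the nested structure of `barry_goldman()` can be sanity-checked in the Euclidean case: replacing Slerp with a scalar lerp yields the ordinary Barry--Goldman algorithm, which must reproduce the control values at `t1` and `t2`. The 1-D function below is an illustrative sketch, not part of the `splines` package:

```python
# Same three-level nesting as barry_goldman() above, with lerp() in
# place of slerp(); interpolation at the inner times is preserved.
def lerp(one, two, t):
    return (1 - t) * one + t * two

def barry_goldman_1d(values, times, t):
    x0, x1, x2, x3 = values
    t0, t1, t2, t3 = times
    return lerp(
        lerp(lerp(x0, x1, (t - t0) / (t1 - t0)),
             lerp(x1, x2, (t - t1) / (t2 - t1)),
             (t - t0) / (t2 - t0)),
        lerp(lerp(x1, x2, (t - t1) / (t2 - t1)),
             lerp(x2, x3, (t - t2) / (t3 - t2)),
             (t - t1) / (t3 - t1)),
        (t - t1) / (t2 - t1))

values = [0.0, 1.0, 3.0, 3.5]
times = [0.0, 1.0, 3.0, 3.5]
print(barry_goldman_1d(values, times, 1.0))  # → 1.0
print(barry_goldman_1d(values, times, 3.0))  # → 3.0
```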
```
from urllib.request import urlopen
from bs4 import BeautifulSoup

def getNgrams(content, n):
    content = content.split(' ')
    output = []
    for i in range(len(content)-n+1):
        output.append(content[i:i+n])
    return output

html = urlopen('http://en.wikipedia.org/wiki/Python_(programming_language)')
bs = BeautifulSoup(html, 'html.parser')
content = bs.find('div', {'id':'mw-content-text'}).get_text()
ngrams = getNgrams(content, 2)
print(ngrams)
print('2-grams count is: '+str(len(ngrams)))

import re

def getNgrams(content, n):
    # Strip newlines and citation markers such as [1], then drop non-ASCII
    content = re.sub(r'\n|\[\d+\]', ' ', content)
    content = bytes(content, 'UTF-8')
    content = content.decode('ascii', 'ignore')
    content = content.split(' ')
    content = [word for word in content if word != '']
    output = []
    for i in range(len(content)-n+1):
        output.append(content[i:i+n])
    return output

html = urlopen('http://en.wikipedia.org/wiki/Python_(programming_language)')
bs = BeautifulSoup(html, 'html.parser')
content = bs.find('div', {'id':'mw-content-text'}).get_text()
ngrams = getNgrams(content, 2)
print(ngrams)
print('2-grams count is: '+str(len(ngrams)))

from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
import string

def cleanSentence(sentence):
    sentence = sentence.split(' ')
    sentence = [word.strip(string.punctuation+string.whitespace) for word in sentence]
    sentence = [word for word in sentence
                if len(word) > 1 or (word.lower() == 'a' or word.lower() == 'i')]
    return sentence

def cleanInput(content):
    content = content.upper()
    content = re.sub(r'\n|\[\d+\]', ' ', content)
    content = bytes(content, 'UTF-8')
    content = content.decode('ascii', 'ignore')
    sentences = content.split('. ')
    return [cleanSentence(sentence) for sentence in sentences]

def getNgramsFromSentence(content, n):
    output = []
    for i in range(len(content)-n+1):
        output.append(content[i:i+n])
    return output

def getNgrams(content, n):
    content = cleanInput(content)
    ngrams = []
    for sentence in content:
        ngrams.extend(getNgramsFromSentence(sentence, n))
    return ngrams

html = urlopen('http://en.wikipedia.org/wiki/Python_(programming_language)')
bs = BeautifulSoup(html, 'html.parser')
content = bs.find('div', {'id':'mw-content-text'}).get_text()
print(len(getNgrams(content, 2)))

from collections import Counter

def getNgrams(content, n):
    content = cleanInput(content)
    ngrams = Counter()
    ngrams_list = []
    for sentence in content:
        newNgrams = [' '.join(ngram) for ngram in getNgramsFromSentence(sentence, n)]
        ngrams_list.extend(newNgrams)
        ngrams.update(newNgrams)
    return ngrams

print(getNgrams(content, 2))
```
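The final `Counter`-based `getNgrams` depends on a live Wikipedia fetch; the counting idea itself can be sketched offline on a fixed string (`get_2grams` below is an illustrative helper, not from the source above):

```python
# Count 2-grams of a fixed string: pair each word with its successor,
# join the pairs back into strings, and tally them with a Counter.
from collections import Counter

def get_2grams(text):
    words = text.split()
    return Counter(' '.join(pair) for pair in zip(words, words[1:]))

counts = get_2grams('the quick fox and the quick dog')
print(counts.most_common(1))  # → [('the quick', 2)]
```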
# Stroop effect investigation

[Cédric Campguilhem](https://github.com/ccampguilhem/Udacity-DataAnalyst), September 2017

<a id='Top'/>

## Table of contents

- [Introduction](#Introduction)
- [Stroop effect experiment](#Stroop effect experiment)
- [Descriptive statistics](#Descriptive statistics)
- [Inferential statistics](#Inferential statistics)
- [Hypothesis](#Hypothesis)
- [Checking assumptions](#Checking assumptions)
- [Critical t-value](#Critical t-value)
- [t-statistic and decision](#t-statistic)
- [Additional information](#Additional information)
- [Conclusion](#Conclusion)
- [Appendix](#Appendix)

<a id='Introduction'/>

## Introduction

This project is related to the Inferential Statistics course of the Udacity Data Analyst Nanodegree program. Its purpose is to investigate a phenomenon from experimental psychology called the [Stroop effect](https://en.wikipedia.org/wiki/Stroop_effect). The project investigates experimental results on a sample, using both descriptive and inferential statistics (hypothesis formulation and decision). It makes use of the Python language, with [pandas](http://pandas.pydata.org/) for the calculations and the [seaborn](https://seaborn.pydata.org/) library for plotting.

<a id='Stroop effect experiment'/>

## Stroop effect experiment

*[top](#Top)*

The experiment consists in saying out loud the ink color in which the words in a list are printed. The participants are given two different lists of words naming colors.
A **congruent** one, in which the words match the ink color, and an **incongruent** list, where the words differ from the ink color:

<div style="width: 100%; display: flex; flex-wrap: wrap">
<div style="width: 50%; padding: 5px">
<img alt="Congruent list" src="./stroopa.gif"/>
</div>
<div style="width: 50%; padding: 5px">
<img alt="Incongruent list" src="./stroopb.gif"/>
</div>
</div>
<div style="width: 100%; display: flex; flex-wrap: wrap">
<div style="width: 50%; padding: 5px; text-align: center">
Congruent list
</div>
<div style="width: 50%; padding: 5px; text-align: center">
Incongruent list
</div>
</div>
<div style="width: 100%; display: flex; flex-wrap: wrap">
<div style="width: 100%; padding: 5px; text-align: center">
<a href="https://faculty.washington.edu/chudler/java/ready.html">Source: faculty.washington.edu</a>
</div>
</div>

The time (in seconds) it takes each participant to enumerate the ink colors is recorded for each list. The type of list (congruent or incongruent) is the **independent** variable in the experiment. The time it takes to go through the list is the **dependent** variable. This is a **dependent**-sample experiment: the same participants are given both lists.

```
#Import required libraries for the project
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

#Read dataset
df = pd.read_csv('./dataset/stroopdata.csv')
```

As an example, here are the first ten participants' recorded times for each list:

```
df[:10].transpose().head()
```

<a id='Descriptive statistics'/>

## Descriptive statistics

*[top](#Top)*

We first make a descriptive analysis of our groups.
The following box plot shows the distribution of recorded times for each list:

```
#We re-organize the table into long format: one row per (participant, list)
df2 = df.stack().reset_index(level=1, name='time').rename(columns={"level_1": "list"})
df2 = df2.reset_index().rename(columns={"index": "participant"})

figure = plt.figure(figsize=(12, 6))
ax = figure.add_subplot(111)
ax = sns.boxplot(x="list", y="time", data=df2, ax=ax);
texts = ax.set(xlabel='Type of list', ylabel='Time (seconds)',
               title='Distribution of recorded times for each list')
texts[0].set_fontsize(14)
texts[1].set_fontsize(14)
texts[2].set_fontsize(20)
```

Each box shows the three quartiles (25%, median and 75%), with whiskers extending 1.5 interquartile ranges past the low and high quartiles. Participant times outside this range are reported as outliers (diamond markers). On average, the time it takes to go through the list appears higher for the incongruent list (below 15 seconds on average for the congruent list, above 20 seconds for the incongruent list). At this point, there is no evidence that this observation is statistically significant; to establish that, we need to formulate a hypothesis. This is the object of the [next](#Inferential statistics) section. For the incongruent list, two recorded times lie beyond the 1.5 interquartile range and are reported as outliers. We can have a closer look at the distributions:

```
figure = plt.figure(figsize=(12, 6))
ax = figure.add_subplot(111)
ax = sns.distplot(df2[df2["list"] == "Congruent"]["time"], label="Congruent list", ax=ax);
ax = sns.distplot(df2[df2["list"] == "Incongruent"]["time"], label="Incongruent list", ax=ax);
ax.legend()
texts = ax.set(xlabel='Time (seconds)', ylabel='Proportion',
               title='Distribution of recorded times for each list')
texts[0].set_fontsize(14)
texts[1].set_fontsize(14)
texts[2].set_fontsize(20)
```

If we refer to the kernel density estimates, the distribution of recorded times for the congruent list looks like a normal distribution.
In contrast, the results for the incongruent list differ from a normal distribution, due to the outliers whose response times lie between 32 and 35 seconds. The actual values for the mean, standard deviation and quartiles are reported in the table below:

```
df2.groupby("list").describe()["time"]
```

In the above table, the standard deviation reported is the sample standard deviation with [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction):

\begin{align} \sigma = \sqrt{\frac{\sum_{i=1}^n (x_i - \bar{x})^2}{n - 1}} \end{align}

<a id='Inferential statistics'/>

## Inferential statistics

*[top](#Top)*

<a id='Hypothesis'/>

### Hypothesis

*[inferential](#Inferential statistics)*

In the [previous](#Descriptive statistics) section, we observed that the average response time for the incongruent list seems higher than the one recorded for the congruent list. In this section, we want to know whether this observation is statistically significant. Before doing a t-test, we need to check several assumptions ([source](https://statistics.laerd.com/stata-tutorials/paired-t-test-using-stata.php)):

- The dependent variable should be continuous, which holds here since the dependent variable is a time.
- The independent variable should consist of two categorical, related groups or matched pairs. In our case we have related groups: each participant is given the two different lists.
- There should be no significant outliers in the differences between the two related groups.
- The distribution of the differences in the dependent variable between the two related groups should be approximately normally distributed.

The last two assumptions need to be checked. Let's formulate the problem this way. Our **null hypothesis** is that the population mean response times for the congruent ($\mu_{congruent}$) and incongruent ($\mu_{incongruent}$) lists are the same. Our **alternative** is that the population mean response time for the incongruent list is higher.
For this one-tailed, dependent-sample [t-test](https://en.wikipedia.org/wiki/Student%27s_t-test), we choose an alpha level of 0.05:

\begin{equation} \mathtt{H}_{0}: \mathtt{\mu}_{congruent} = \mathtt{\mu}_{incongruent} \\ \mathtt{H}_{A}: \mathtt{\mu}_{congruent} < \mathtt{\mu}_{incongruent} \\ \alpha = 0.05 \end{equation}

Let's set $\mathtt{\mu}_D = \mathtt{\mu}_{incongruent} - \mathtt{\mu}_{congruent}$. The hypotheses may then be rewritten as follows:

\begin{equation} \mathtt{H}_{0}: \mathtt{\mu}_{D} = 0 \\ \mathtt{H}_{A}: \mathtt{\mu}_{D} > 0 \\ \alpha = 0.05 \end{equation}

The problem is then a one-tailed t-test in the positive direction. Here, the number of degrees of freedom is $n - 1$:

\begin{equation} dof = n - 1 = 23 \end{equation}

where $n$ is the sample size (24).

<a id='Checking assumptions'/>

### Checking assumptions

*[inferential](#Inferential statistics)*

Before performing the t-test, we need to check the two remaining assumptions:

- There should be no significant outliers in the differences between the two related groups.
- The distribution of the differences in the dependent variable between the two groups should be approximately normally distributed.
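These two checks are performed graphically below; they can also be done numerically, for instance with a Shapiro-Wilk normality test. A sketch on synthetic differences (scipy is not used in the original notebook, and the numbers below are made up):

```python
# Shapiro-Wilk test of normality: the null hypothesis is that the sample
# comes from a normal distribution, so a small p-value (e.g. < 0.05)
# would flag a violation of the normality assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
differences = rng.normal(loc=8.0, scale=5.0, size=24)  # synthetic, n = 24

stat, p = stats.shapiro(differences)
print(stat, p)
```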
```
#We first calculate the difference:
df['difference'] = df["Incongruent"] - df["Congruent"]
df.head()

#Now we can make a boxplot to find outliers and a distplot
figure = plt.figure(figsize=(12, 6))
figure.suptitle("Assumption check with differences of response time (incongruent - congruent)", fontsize=20)
ax = figure.add_subplot(121)
ax = sns.boxplot(x="difference", data=df, ax=ax);
texts = ax.set(xlabel='Difference of response time (seconds)', title='Outliers in difference')
ax = figure.add_subplot(122)
ax = sns.distplot(df["difference"], label="Response time (seconds)", ax=ax);
texts = ax.set(xlabel='Difference of response time (seconds)', ylabel='Proportion',
               title='Distribution of response time differences')
```

One difference of response time is identified as an outlier (beyond 1.5 interquartile ranges past the high quartile). In the right-hand plot, we can also see that the distribution differs from a normal distribution because of a bump around 20 seconds. We therefore cannot formally validate all assumptions with this dataset.

<a id='Critical t-value'/>

### Critical t-value

*[inferential](#Inferential statistics)*

From the [t-table](https://s3.amazonaws.com/udacity-hosted-downloads/t-table.jpg) we can get the critical value for $dof = 23$ and $\alpha = 0.05$ for a one-tailed t-test:

\begin{equation} t_{critical} = 1.714 \end{equation}

If the t-statistic of our sample is greater than this value, then we may reject the null hypothesis.

<a id='t-statistic'/>

## t-statistic and decision

*[inferential](#Inferential statistics)*

The t-statistic may be calculated this way:

\begin{equation} t = \frac{\bar{x} - \bar{\mu}_{E}}{\frac{s}{\sqrt{n}}} \end{equation}

Where $\bar{x}$ is the sample mean of the differences, $s$ is the sample standard deviation of the differences, $n$ is the sample size and $\bar{\mu}_E$ is the expected mean difference under the null hypothesis. In our case we have $\bar{\mu}_E = 0$.
The sample standard deviation of the differences is calculated with:

\begin{equation} s = \sqrt{\frac{\sum_{i=1}^n (x_i - \bar{x})^2}{n-1}} \end{equation}

```
df["difference"].describe()

import math
t = df["difference"].mean() / (df["difference"].std() / math.sqrt(24))
print(t)
```

Our sample has the following statistics: $M = 7.96$, $SD = 4.86$ (rounded to two decimal places). The t-statistic is $t(23) = 8.02, p < .0001$, one-tailed in the positive direction. As the p-value is below the selected alpha level of 0.05, we **reject the null hypothesis**. This means that the mean of response times with the incongruent list is **extremely significantly** higher than the mean of response times with the congruent list. As this is an experiment, we can also state that the incongruent list **causes** higher response times.

<a id='Additional information'/>

### Additional information

*[inferential](#Inferential statistics)*

In this section, we provide a 95% confidence interval for the mean of the differences of response time, and effect-size parameters: Cohen's $d$ and $r^2$. Cohen's $d$ is the ratio of the mean of the differences of response time to the standard deviation of the differences:

\begin{equation} d = \frac{\bar{x}}{s} \end{equation}

The $r^2$ parameter is:

\begin{equation} r^2 = \frac{t^2}{t^2 + dof} \end{equation}

Where $dof$ is the number of degrees of freedom. In our case we have $d = 1.64, r^2 = .74$. This means that the type of list explains 74% of the variability in the differences of response time. The margin of error is calculated from the standard error of the mean difference:

\begin{equation} margin = t_{critical} \cdot \frac{s}{\sqrt{n}} \end{equation}

Where $s$ is the standard deviation of the differences of response time and $t_{critical}$ is the t value for $\alpha = 0.05$ two-tailed (95% confidence interval): 2.069. The margin of error is about 2.05 seconds. The 95% confidence interval:

\begin{equation} 95\% CI = \bar{x} \pm margin \end{equation}

where $\bar{x}$ is the mean of the differences of response time, is then approximately: 95% CI = (5.91, 10.02). This means that if the population takes the Stroop test, we expect the mean difference of response time between the incongruent and congruent lists to lie between 5.91 and 10.02 seconds.

```
#Cohen's d
d = df["difference"].mean() / df["difference"].std()
print("d =", d)

#r2
r2 = t**2 / (t**2 + 23)
print("r2 =", r2)

#Margin of error: t_critical times the standard error of the mean difference
margin = 2.069 * df["difference"].std() / math.sqrt(24)
print("margin =", margin)

#95% confidence interval
CI = (df["difference"].mean() - margin, df["difference"].mean() + margin)
print("95% CI = ({:.2f}, {:.2f})".format(CI[0], CI[1]))
```

<a id='Conclusion'/>

## Conclusion

*[top](#Top)*

In this project, we have conducted a t-test on a paired (dependent) sample. As this was a Stroop effect experiment, we have concluded that using an incongruent list causes slower response times compared to a congruent list. However, two assumptions about our sample (the distribution of differences should be approximately normal, and there should be no significant outliers) have been violated. The presence of outliers affects the sample mean and sample standard deviation, and the t-statistic as a consequence. In such a case a nonparametric test may be [preferred](http://www.basic.northwestern.edu/statguidefiles/ttest_unpaired_ass_viol.html). The Stroop test highlights how the human brain processes words and colors differently. In his experiment, John Ridley Stroop also had a third, neutral list whose words named things other than colors (animals, for example). This third list showed faster response times than the incongruent list, demonstrating a reduced level of interference. The Stroop test has multiple applications, one of which is to gauge children's brain development (it has been shown that interference decreases from childhood to adulthood).
Higher levels of interference are often associated with brain disorders such as Attention-Deficit Hyperactivity Disorder. Sources: (https://en.wikipedia.org/wiki/Stroop_effect), (https://powersthatbeat.wordpress.com/2012/09/16/what-are-the-different-tests-for-the-stroop-effect-autismaid/), (https://imotions.com/blog/the-stroop-effect/).

<a id='Appendix'/>

## Appendix

*[top](#Top)*

### References

Re-organization of a pandas dataframe on [StackOverflow](https://stackoverflow.com/questions/38241933/how-to-convert-column-names-into-column-values-in-pandas-python).<hr>
Paired t-test [assumptions](https://statistics.laerd.com/stata-tutorials/paired-t-test-using-stata.php).<hr>
Hypothesis testing on [Stat Trek](http://stattrek.com/hypothesis-test/hypothesis-testing.aspx).<hr>
One-tailed and two-tailed tests.<hr>
Violation of [t-test assumptions](http://www.basic.northwestern.edu/statguidefiles/ttest_unpaired_ass_viol.html).<hr>
Statistical calculations with [GraphPad](http://www.graphpad.com/quickcalcs/).<hr>

```
#Export to html
!jupyter nbconvert --to html --template html_minimal.tpl inferential_statistics.ipynb
```
# Markup Languages

XML and its relatives are based on the idea of *marking up* content with labels on its purpose:

    <name>James</name> is a <job>Programmer</job>

One of the easiest ways to make a markup-language-based file format is to use a *templating language*.

```
import mako
from parsereactions import parser
from IPython.display import display, Math

system = parser.parse(open('system.tex').read())
display(Math(str(system)))

%%writefile chemistry_template.mko
<?xml version="1.0" encoding="UTF-8"?>
<system>
%for reaction in reactions:
<reaction>
<reactants>
%for molecule in reaction.reactants.molecules:
<molecule stoichiometry="${reaction.reactants.molecules[molecule]}">
% for element in molecule.elements:
<element symbol="${element.symbol}" number="${molecule.elements[element]}"/>
% endfor
</molecule>
%endfor
</reactants>
<products>
%for molecule in reaction.products.molecules:
<molecule stoichiometry="${reaction.products.molecules[molecule]}">
% for element in molecule.elements:
<element symbol="${element.symbol}" number="${molecule.elements[element]}"/>
% endfor
</molecule>
%endfor
</products>
</reaction>
%endfor
</system>

from mako.template import Template

mytemplate = Template(filename='chemistry_template.mko')
with open('system.xml', 'w') as xmlfile:
    xmlfile.write(mytemplate.render(**vars(system)))

!cat system.xml
```

Markup languages are verbose (jokingly called the "angle bracket tax") but very clear.

## Data as text

The above serialisation specifies all data as XML "Attributes".
An alternative is to put the data in the text:

```
%%writefile chemistry_template2.mko
<?xml version="1.0" encoding="UTF-8"?>
<system>
%for reaction in reactions:
<reaction>
<reactants>
%for molecule in reaction.reactants.molecules:
<molecule stoichiometry="${reaction.reactants.molecules[molecule]}">
% for element in molecule.elements:
<element symbol="${element.symbol}">${molecule.elements[element]}</element>
% endfor
</molecule>
%endfor
</reactants>
<products>
%for molecule in reaction.products.molecules:
<molecule stoichiometry="${reaction.products.molecules[molecule]}">
% for element in molecule.elements:
<element symbol="${element.symbol}">${molecule.elements[element]}</element>
% endfor
</molecule>
%endfor
</products>
</reaction>
%endfor
</system>

from mako.template import Template

mytemplate = Template(filename='chemistry_template2.mko')
with open('system2.xml', 'w') as xmlfile:
    xmlfile.write(mytemplate.render(**vars(system)))

!cat system2.xml
```

## Parsing XML

XML is normally parsed by building a tree structure of all the `tags` in the file, called a `DOM` or Document Object Model.

```
from lxml import etree

tree = etree.parse(open('system.xml'))
print(etree.tostring(tree, pretty_print=True, encoding=str))
```

We can navigate the tree, with each **element** being an iterable yielding its children:

```
tree.getroot()[0][0][1].attrib['stoichiometry']
```

## Searching XML

`xpath` is a sophisticated tool for searching XML DOMs:

```
tree.xpath('//molecule/element[@number="1"]/@symbol')
```

It is useful to understand grammars like these using the "FOR-LET-WHERE-ORDER-RETURN" (FLWOR) model. The above says: "For element in molecules where number is one, return symbol", roughly equivalent to `[element.symbol for molecule in document for element in molecule if element.number == 1]` in Python.

```
etree.parse(open('system2.xml')).xpath('//molecule[element=1]//@symbol')
```

Note how we select on text content rather than attributes by using the element tag directly.
The above says "for every molecule where at least one element is present with just a single atom, return all the symbols of all the elements in that molecule." ## Transforming XML : XSLT Two technologies (XSLT and XQUERY) provide the capability to produce text output from an XML tree. We'll look at XSLT as support is more widespread, including in the python library we're using. XQuery is probably easier to use and understand, but with less support. However, XSLT is a beautiful functional declarative language, once you read past the angle-brackets. Here's an XSLT to transform our reaction system into a LaTeX representation: ``` %%writefile xmltotex.xsl <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="xml" indent="yes" omit-xml-declaration="yes" /> <xsl:template match="//reaction"> <xsl:apply-templates select="reactants"/> <xsl:text> \rightarrow </xsl:text> <xsl:apply-templates select="products"/> <xsl:text>\\&#xa;</xsl:text> </xsl:template> <xsl:template match="//molecule[position()!=1]"> <xsl:text> + </xsl:text> <xsl:apply-templates select="@stoichiometry"/> <xsl:apply-templates/> </xsl:template> <xsl:template match="@stoichiometry[.='1']"/> <!-- do not copy 1-stoichiometries --> <!-- Otherwise, use the default template for attributes, which is just to copy value --> <xsl:template match="//molecule[position()=1]"> <xsl:apply-templates select="@* | *"/> </xsl:template> <xsl:template match="//element"> <xsl:value-of select="@symbol"/> <xsl:apply-templates select="@number"/> </xsl:template> <xsl:template match="@number[.=1]"/> <!-- do not copy 1-numbers --> <xsl:template match="@number[.!=1][10>.]"> <xsl:text>_</xsl:text> <xsl:value-of select="."/> </xsl:template> <xsl:template match="@number[.!=1][.>9]"> <xsl:text>_{</xsl:text> <xsl:value-of select="."/> <xsl:text>}</xsl:text> </xsl:template> <xsl:template match="text()" /> <!-- Do not copy input whitespace to output --> </xsl:stylesheet> 
transform=etree.XSLT(etree.XML(open("xmltotex.xsl").read())) print(str(transform(tree))) display(Math(str(transform(tree)))) ``` ## Validating XML : Schema XML Schema is a way to define how an XML file is allowed to be: which attributes and tags should exist where. You should always define one of these when using an XML file format. ``` %%writefile reactions.xsd <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="element"> <xs:complexType> <xs:attribute name="symbol" type="xs:string"/> <xs:attribute name="number" type="xs:integer"/> </xs:complexType> </xs:element> <xs:element name="molecule"> <xs:complexType> <xs:sequence> <xs:element ref="element" maxOccurs="unbounded"/> </xs:sequence> <xs:attribute name="stoichiometry" type="xs:integer"/> </xs:complexType> </xs:element> <xs:element name="reaction"> <xs:complexType> <xs:sequence> <xs:element name="reactants"> <xs:complexType> <xs:sequence> <xs:element ref="molecule" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="products"> <xs:complexType> <xs:sequence> <xs:element ref="molecule" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="system"> <xs:complexType> <xs:sequence> <xs:element ref="reaction" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> schema = etree.XMLSchema(etree.XML(open("reactions.xsd").read())) parser = etree.XMLParser(schema = schema) tree = etree.parse(open('system.xml'),parser) ``` Compare parsing something that is not valid under the schema: ``` %%writefile invalid_system.xml <system> <reaction> <reactants> <molecule stoichiometry="two"> <element symbol="H" number="2"/> </molecule> <molecule stoichiometry="1"> <element symbol="O" number="2"/> </molecule> </reactants> <products> <molecule stoichiometry="2"> <element symbol="H" number="2"/> <element symbol="O" number="1"/> </molecule> </products> </reaction> </system> 
tree = etree.parse(open('invalid_system.xml'),parser) ```
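Parsing `invalid_system.xml` with the schema-aware parser above fails, because `stoichiometry="two"` is not a valid `xs:integer`. A minimal sketch of catching that failure with `lxml` (the cut-down schema here is invented for illustration and only declares a bare `<molecule>` element):

```python
from lxml import etree

# Cut-down schema (illustrative only): a <molecule> element whose
# stoichiometry attribute must be an xs:integer
schema = etree.XMLSchema(etree.XML(
    b'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">'
    b'<xs:element name="molecule"><xs:complexType>'
    b'<xs:attribute name="stoichiometry" type="xs:integer"/>'
    b'</xs:complexType></xs:element></xs:schema>'))
parser = etree.XMLParser(schema=schema)

# Valid document: parses and validates in one step
etree.fromstring(b'<molecule stoichiometry="2"/>', parser)

# Invalid document: "two" is not an integer, so the parse itself raises
try:
    etree.fromstring(b'<molecule stoichiometry="two"/>', parser)
except etree.XMLSyntaxError as err:
    print('Validation failed:', err)
```

The same pattern underlies the cell above: binding the schema to the parser makes every `parse` or `fromstring` call validate as it reads.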
# LogisticRegression ``` import os import warnings import numpy as np import pandas as pd import xgboost as xgb import seaborn as sns import matplotlib.pyplot as plt import optuna import shap import time import json import config as cfg from category_encoders import WOEEncoder from mlxtend.feature_selection import SequentialFeatureSelector from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs from sklearn import metrics from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier, plot_tree from sklearn.ensemble import RandomForestClassifier from imblearn.under_sampling import RandomUnderSampler from sklearn.impute import SimpleImputer from sklearn.pipeline import Pipeline from optuna.visualization import plot_optimization_history from optuna.visualization import plot_param_importances from sklearn.model_selection import ( RepeatedStratifiedKFold, StratifiedKFold, cross_validate, cross_val_score ) cv_dev = StratifiedKFold(n_splits=cfg.N_SPLITS, shuffle=True, random_state=cfg.SEED) cv_test = RepeatedStratifiedKFold(n_splits=cfg.N_SPLITS, n_repeats=cfg.N_REPEATS, random_state=cfg.SEED) np.set_printoptions(formatter={"float": lambda x: "{0:0.4f}".format(x)}) pd.set_option("display.max_columns", None) warnings.filterwarnings("ignore") sns.set_context("paper", font_scale=1.4) sns.set_style("darkgrid") MODEL_NAME = 'LogisticRegression' # Load data X_train = pd.read_csv(os.path.join("Data", "data_preprocessed_binned", "X_train.csv")) X_test = pd.read_csv(os.path.join("Data", "data_preprocessed_binned", "X_test.csv")) y_train = pd.read_csv(os.path.join("Data", "data_preprocessed_binned", "y_train.csv")) y_test = pd.read_csv(os.path.join("Data", "data_preprocessed_binned", "y_test.csv")) X_train ``` ### Test performance ``` fig, axs = plt.subplots(1, 2, figsize=(16, 6)) model = Pipeline([("encoder", WOEEncoder()), ("lr", LogisticRegression(random_state=cfg.SEED))]) model.fit(X = X_train, y = np.ravel(y_train)) # 
Calculate metrics preds = model.predict_proba(X_test)[::,1] test_gini = metrics.roc_auc_score(y_test, preds)*2-1 test_ap = metrics.average_precision_score(y_test, preds) print(f"test_gini:\t {test_gini:.4}") print(f"test_ap:\t {test_ap:.4}") # ROC test_auc = metrics.roc_auc_score(y_test, preds) fpr, tpr, _ = metrics.roc_curve(y_test, preds) lw = 2 axs[0].plot(fpr, tpr, lw=lw, label="ROC curve (GINI = %0.3f)" % test_gini) axs[0].plot([0, 1], [0, 1], color="red", lw=lw, linestyle="--") axs[0].set_xlim([-0.05, 1.0]) axs[0].set_ylim([0.0, 1.05]) axs[0].set_xlabel("False Positive Rate") axs[0].set_ylabel("True Positive Rate") axs[0].legend(loc="lower right") # PR precision, recall, _ = metrics.precision_recall_curve(y_test, preds) lw = 2 axs[1].plot(recall, precision, lw=lw, label="PR curve (AP = %0.3f)" % test_ap) axs[1].set_xlabel("Recall") axs[1].set_ylabel("Precision") axs[1].legend(loc="lower right") plt.savefig(os.path.join("Graphs", f"ROC_PRC_{MODEL_NAME}.png"), facecolor="w", dpi=100, bbox_inches = "tight") # Cross-validation GINI scores_gini = cross_validate( model, X_train, np.ravel(y_train), scoring="roc_auc", cv=cv_test, return_train_score=True, n_jobs=-1 ) mean_train_gini = (scores_gini["train_score"]*2-1).mean() mean_test_gini = (scores_gini["test_score"]*2-1).mean() std_test_gini = (scores_gini["test_score"]*2-1).std() print(f"mean_train_gini:\t {mean_train_gini:.4}") print(f"mean_dev_gini:\t\t {mean_test_gini:.4} (+-{std_test_gini:.1})") ``` ### Model analysis ``` rus = RandomUnderSampler(sampling_strategy=cfg.SAMPLING_STRATEGY) X_sub, y_sub = rus.fit_resample(X_test, y_test) print(y_sub.mean()) preds = model.predict_proba(X_sub)[::,1] preds_calibrated = pd.DataFrame(np.round(28.85*np.log(preds/(1-preds))+765.75), columns=["preds_calibrated"]) fig, axs = plt.subplots(1, 1, figsize=(10,7)) palette ={0: "C0", 1: "C1"} sns.histplot(data=preds_calibrated, x="preds_calibrated", hue=y_sub['BAD'], palette=palette, ax=axs, bins='auto') 
plt.savefig(os.path.join("Graphs", f"Score_distr_{MODEL_NAME}.png"), facecolor="w", dpi=100, bbox_inches = "tight") # Logistic regression coefficients coefs = pd.DataFrame( zip(X_train.columns, model["lr"].coef_[0]), columns=["Variable", "Coef"] ) coefs_sorted = coefs.reindex(coefs["Coef"].abs().sort_values(ascending=False).index) coefs_sorted # Save results for final summary results = { "test_gini": test_gini, "test_ap": test_ap, "optimization_time": 0, "fpr": fpr.tolist(), "tpr": tpr.tolist(), "precision": precision.tolist(), "recall": recall.tolist(), "mean_train_gini": scores_gini["train_score"].tolist(), "mean_test_gini": scores_gini["test_score"].tolist(), } with open(os.path.join("Results", f"Results_{MODEL_NAME}.json"), 'w') as fp: json.dump(results, fp) ```
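The Gini coefficient reported above is a rescaled ROC AUC, Gini = 2·AUC − 1, and the slope 28.85 in `preds_calibrated` is the classic scorecard factor PDO/ln 2 with "points to double the odds" PDO = 20. A pure-Python sketch (the toy labels and scores below are made up for illustration):

```python
import math

def auc(y_true, scores):
    # Rank definition of ROC AUC: the probability that a randomly chosen
    # positive outscores a randomly chosen negative (ties count one half).
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

a = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(a, 2 * a - 1)               # 0.75 0.5 -> AUC 0.75 corresponds to Gini 0.5

# Scorecard slope: "points to double the odds" (PDO) of 20
print(round(20 / math.log(2), 2)) # 28.85, the factor used in preds_calibrated
```

So `test_gini = roc_auc_score(...)*2-1` in the cell above is exactly this rescaling applied to sklearn's AUC.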
# Kotlite dataset maker **Kotlite** (Angkot Elite) is an application that allows drivers to get passengers who share the same route. This application is expected to ease existing congestion using the concept of ridesharing, in which passengers will get the experience of riding in a private car or taxi, but at a fairly cheap price similar to the price of public transportation. By using a machine learning algorithm, it is possible to match drivers and passengers who have the same routes. In this case, the dataset used is the NYC Taxi Trip Duration dataset obtained from [Kaggle](https://www.kaggle.com/debanjanpaul/new-york-city-taxi-trip-distance-matrix). In this dataset, there are pickup locations and dropoff locations that will be used to try to match drivers and passengers. Existing data will be manipulated and separated into driver data and passenger data. ``` import requests import json import pandas as pd import numpy as np import random from datetime import datetime import warnings warnings.filterwarnings('ignore') df = pd.read_csv('/content/drive/Shareddrives/Brillante Workspace/ML Corner/Dataset/NYC_dataset/train_distance_matrix.csv') df.describe(include='all') def filtering_dataset(dataframe): selected_ft = ['id', 'pickup_datetime', 'pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude'] dataframe = dataframe[selected_ft] return dataframe dfs = filtering_dataset(df) dfs def change_second_dt(dataframe): for i in range(len(dataframe)): # use %d (zero-padded day); %D is not a valid strptime directive dtime = datetime.strptime(dataframe.loc[i,'pickup_datetime'], "%m/%d/%Y %H:%M") _delta = dtime - datetime(1970, 1, 1) dataframe.loc[i,'datetime'] = _delta.total_seconds() return dataframe def driver_and_passanger(dataframe, driver_sum=None, passanger_sum=None): df = dataframe.copy() if driver_sum is None: total = random.randint(10,500) driver_data = df.sample(total) else: driver_data = df.sample(driver_sum) if passanger_sum is None: total = random.randint(10,500) passanger_data = df.sample(total) else: 
passanger_data = df.sample(passanger_sum) driver_data = driver_data.reset_index(drop=True) passanger_data = passanger_data.reset_index(drop=True) return driver_data, passanger_data driver_dump, passanger_dump = driver_and_passanger(dfs, driver_sum=100, passanger_sum=1000) driver_dump.describe(include='all') passanger_dump.describe(include='all') def route_parsing(response): data = response.json() routes = data['routes'][0]['legs'][0]['steps'] routes_pair = [] # deduplicate whole [lat, lng] points; deduplicating the latitude and longitude lists separately can leave them misaligned for route in routes: start = [route['start_location']['lat'], route['start_location']['lng']] end = [route['end_location']['lat'], route['end_location']['lng']] if start not in routes_pair: routes_pair.append(start) if end not in routes_pair: routes_pair.append(end) return routes_pair def get_routes(dataframe, API_KEY): df = dataframe.copy() for i in range(len(df)): start_lat = df.loc[i,'pickup_latitude'] start_long = df.loc[i,'pickup_longitude'] end_lat = df.loc[i,'dropoff_latitude'] end_long = df.loc[i,'dropoff_longitude'] # request data from Direction API response = requests.get(f'https://maps.googleapis.com/maps/api/directions/json?origin={start_lat},{start_long}&destination={end_lat},{end_long}&key={API_KEY}') # parse response to get routes data routes = route_parsing(response) # change list to string routes = json.dumps(routes) # write the routes string into the dataframe df.loc[i, 'routes'] = routes return df API_KEY = 'AIzaSyC9rKUqSrytIsC7QrPExD8v7oLNB3eOr5k' driver_dump = get_routes(driver_dump, API_KEY) driver_dump driver_dump.to_csv('/content/drive/Shareddrives/Brillante Workspace/ML Corner/Dataset/kotlite_dataset/kotlite_driver_dataset.csv', index=False, header=True) passanger_dump.to_csv('/content/drive/Shareddrives/Brillante Workspace/ML 
Corner/Dataset/kotlite_dataset/kotlite_passanger_dataset.csv', index=False, header=True) ```
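`change_second_dt` above converts each pickup timestamp to seconds since the Unix epoch by subtracting `datetime(1970, 1, 1)`. A standalone sketch of that conversion (note that `strptime` expects the lowercase `%d` day directive; `%D` is not a supported format code and raises `ValueError`):

```python
from datetime import datetime

def to_epoch_seconds(stamp, fmt="%m/%d/%Y %H:%M"):
    # Seconds since 1970-01-01 00:00, computed timezone-naively,
    # matching the subtraction used in change_second_dt
    return (datetime.strptime(stamp, fmt) - datetime(1970, 1, 1)).total_seconds()

print(to_epoch_seconds("01/02/1970 00:00"))  # 86400.0 (exactly one day)
```

Being naive (no timezone) is fine here because the values are only compared with each other.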
# Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. ## Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) #target_text ``` ## Explore the Data Play around with view_sentence_range to view different parts of the data. ``` #view_sentence_range = (0, 10) view_sentence_range = (31, 40) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ``` ## Implement Preprocessing Function ### Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. 
This will help the neural network predict when the sentence should end. You can get the `<EOS>` word id by doing: ```python target_vocab_to_int['<EOS>'] ``` You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`. ``` def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_sentences = source_text.split('\n') target_sentences = target_text.split('\n') #print(source_vocab_to_int) source_id_text = [] for sentence in source_sentences: words = sentence.split() mysentence = [] for word in words: mysentence.append(source_vocab_to_int.get(word,0)) # default to 0 if the word is not in the dict #mysentence.append(source_vocab_to_int[word]) #print(source_vocab_to_int[word]) #print(source_vocab_to_int.get(word,0)) source_id_text.append(mysentence) target_id_text = [] for sentence in target_sentences: words = sentence.split() mysentence = [] for word in words: mysentence.append(target_vocab_to_int.get(word,0)) # default to 0 if the word doesn't exist in the dict mysentence.append(target_vocab_to_int['<EOS>']) target_id_text.append(mysentence) # print(source_id_text[0]) # print(target_id_text[0]) # # a list comprehension is more efficient # #target_ids = [[target_vocab_to_int.get(word) for word in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) ``` ### Preprocess all the data and save it Running the code cell below will preprocess all the data and save it 
to file. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) ``` # Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() ``` ### Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ``` ## Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - `model_inputs` - `process_decoding_input` - `encoding_layer` - `decoding_layer_train` - `decoding_layer_infer` - `decoding_layer` - `seq2seq_model` ### Input Implement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. - Targets placeholder with rank 2. - Learning rate placeholder with rank 0. - Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. 
Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability) ``` def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function inputs = tf.placeholder(dtype = tf.int32, shape=(None, None), name='input') targets = tf.placeholder(dtype = tf.int32, shape=(None, None), name='targets') learning_rate = tf.placeholder(dtype = tf.float32, name='learning_rate') keep_prob = tf.placeholder(dtype = tf.float32, name='keep_prob') return (inputs, targets, learning_rate, keep_prob) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) ``` ### Process Decoding Input Implement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch. ``` def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function newbatch = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) newtarget = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), newbatch], 1) return newtarget """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) ``` ### Encoding Implement `encoding_layer()` to create an Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn). 
``` def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) # lstm cell cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob = keep_prob) cell = tf.contrib.rnn.MultiRNNCell([cell] * num_layers) output, state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32) return state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ``` ### Decoding - Training Create training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs. 
``` def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # TODO: Implement Function decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) outputs, state, context = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder_fn, inputs = dec_embed_input, sequence_length=sequence_length, scope=decoding_scope) training_logits = output_fn(outputs) # add additional dropout # tf.nn.dropout(training_logits, keep_prob) return training_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ``` ### Decoding - Inference Create inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). 
``` def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # TODO: Implement Function infer_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, num_decoder_symbols = vocab_size, dtype = tf.int32) dp_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob = keep_prob) outputs, state, context = tf.contrib.seq2seq.dynamic_rnn_decoder(dp_cell, infer_fn, sequence_length=maximum_length, scope=decoding_scope) return outputs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ``` ### Build the Decoding Layer Implement `decoding_layer()` to create a Decoder RNN layer. - Create RNN cell for decoding using `rnn_size` and `num_layers`. - Create the output function using [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform its input, logits, to class logits. - Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits. 
- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference logits. Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ``` def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function start_symb, end_symb = target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'] lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) stack_lstm = tf.contrib.rnn.MultiRNNCell([dropout] * num_layers) output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, activation_fn=None,scope = decoding_scope) with tf.variable_scope('decoding') as decoding_scope: training_logits = decoding_layer_train(encoder_state, stack_lstm, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) with tf.variable_scope('decoding', reuse=True) as decoding_scope: infer_logits = decoding_layer_infer(encoder_state, stack_lstm, dec_embeddings, start_symb, end_symb, sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) # option 2: more concise # decoding_scope.reuse_variables() # infer_logits = decoding_layer_infer(encoder_state, # stack_lstm, # dec_embeddings, # start_symb, # 
end_symb, # sequence_length, # vocab_size, # decoding_scope, # output_fn, # keep_prob) return (training_logits, infer_logits) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ``` ### Build the Neural Network Apply the functions you implemented above to: - Apply embedding to the input data for the encoder. - Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`. - Process target data using your `process_decoding_input(target_data, target_vocab_to_int, batch_size)` function. - Apply embedding to the target data for the decoder. - Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`. ``` def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function enc_embed = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) encode = encoding_layer(enc_embed, rnn_size, num_layers, keep_prob) dec_process = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embed = 
tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_input = tf.nn.embedding_lookup(dec_embed, dec_process) train_logits, infer_logits = decoding_layer(dec_input, dec_embed, encode, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return (train_logits, infer_logits) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ``` ## Neural Network Training ### Hyperparameters Tune the following parameters: - Set `epochs` to the number of epochs. - Set `batch_size` to the batch size. - Set `rnn_size` to the size of the RNNs. - Set `num_layers` to the number of layers. - Set `encoding_embedding_size` to the size of the embedding for the encoder. - Set `decoding_embedding_size` to the size of the embedding for the decoder. - Set `learning_rate` to the learning rate. - Set `keep_probability` to the Dropout keep probability ``` # Number of Epochs epochs = 4 # Batch Size batch_size = 128 # RNN Size rnn_size = 256 # Number of Layers num_layers = 3 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.8 ``` ### Build the Graph Build the graph using the neural network you implemented. 
``` """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ``` ### Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
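The gradient-clipping step in the optimization block above caps every gradient element into [-1, 1] with `tf.clip_by_value` before `apply_gradients`. Conceptually it is just element-wise saturation; a NumPy sketch with made-up gradient values (not part of the TF graph):

```python
import numpy as np

# Element-wise clipping into [-1, 1], the same operation
# tf.clip_by_value applies to each gradient tensor above
grads = [np.array([0.3, -2.5, 7.0]), np.array([-0.1, 1.2])]
capped = [np.clip(g, -1.0, 1.0) for g in grads]
print(capped)  # values outside [-1, 1] are saturated to the bounds
```

Clipping keeps a single noisy batch from taking an oversized optimizer step, which helps stabilize seq2seq training.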
``` """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ``` ### Save Parameters Save the `batch_size` and `save_path` parameters for inference. 
``` """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ``` # Checkpoint ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ``` ## Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences. - Convert the sentence to lowercase - Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id. ``` def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function wid_list = [] for word in sentence.lower().split(): wid_list.append(vocab_to_int.get(word, vocab_to_int['<UNK>'])) return wid_list """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ``` ## Translate This will translate `translate_sentence` from English to French. ``` translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) ``` ## Imperfect Translation You might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words out of the thousands you might use, you're only going to see good results with these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data. You can train on the [WMT10 French-English corpus](http://www.statmt.org/wmt10/training-giga-fren.tar). This dataset has a larger vocabulary and is richer in the topics discussed. However, it will take days to train on, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project. ## Submitting This Project When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and save it as a HTML file under "File" -> "Download as".
Include the "helper.py" and "problem_unittests.py" files in your submission.
# Exploring Data with Python A significant part of a data scientist's role is to explore, analyze, and visualize data. There's a wide range of tools and programming languages that they can use to do this, and one of the most popular approaches is to use Jupyter notebooks (like this one) and Python. Python is a flexible programming language that is used in a wide range of scenarios, from web applications to device programming. It's extremely popular in the data science and machine learning community because of the many packages it supports for data analysis and visualization. In this notebook, we'll explore some of these packages, and apply basic techniques to analyze data. This is not intended to be a comprehensive Python programming exercise, or even a deep dive into data analysis. Rather, it's intended as a crash course in some of the common ways in which data scientists can use Python to work with data. > **Note**: If you've never used the Jupyter Notebooks environment before, there are a few things you should be aware of: > > - Notebooks are made up of *cells*. Some cells (like this one) contain *markdown* text, while others (like the one beneath this one) contain code. > - The notebook is connected to a Python *kernel* (you can see which one at the top right of the page - if you're running this notebook in an Azure Machine Learning compute instance it should be connected to the **Python 3.6 - AzureML** kernel). If you stop the kernel or disconnect from the server (for example, by closing and reopening the notebook, or ending and resuming your session), the output from cells that have been run will still be displayed; but any variables or functions defined in those cells will have been lost - you must rerun the cells before running any subsequent cells that depend on them. > - You can run each code cell by using the **&#9658; Run** button.
The **&#9711;** symbol next to the kernel name at the top right will briefly turn to **&#9899;** while the cell runs before turning back to **&#9711;**. > - The output from each code cell will be displayed immediately below the cell. > - Even though the code cells can be run individually, some variables used in the code are global to the notebook. That means that you should run all of the code cells <u>**in order**</u>. There may be dependencies between code cells, so if you skip a cell, subsequent cells might not run correctly. ## Exploring data arrays with NumPy Let's start by looking at some simple data. Suppose a college takes a sample of student grades for a data science class. Run the code in the cell below by clicking the **&#9658; Run** button to see the data. ``` data = [50,50,47,97,49,3,53,42,26,74,82,62,37,15,70,27,36,35,48,52,63,64] print(data) ``` The data has been loaded into a Python **list** structure, which is a good data type for general data manipulation, but not optimized for numeric analysis. For that, we're going to use the **NumPy** package, which includes specific data types and functions for working with *Num*bers in *Py*thon. Run the cell below to load the data into a NumPy **array**. ``` import numpy as np grades = np.array(data) print(grades) ``` Just in case you're wondering about the differences between a **list** and a NumPy **array**, let's compare how these data types behave when we use them in an expression that multiplies them by 2. ``` print (type(data),'x 2:', data * 2) print('---') print (type(grades),'x 2:', grades * 2) ``` Note that multiplying a list by 2 creates a new list of twice the length with the original sequence of list elements repeated. Multiplying a NumPy array on the other hand performs an element-wise calculation in which the array behaves like a *vector*, so we end up with an array of the same size in which each element has been multiplied by 2. 
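The element-wise behavior isn't limited to multiplication; arithmetic and comparison operators all vectorize the same way. A minimal sketch, using an illustrative subset of the grades values:

```
import numpy as np

# Element-wise behavior applies to all arithmetic and comparison operators,
# not just multiplication. (Values here are an illustrative subset.)
grades = np.array([50, 50, 47, 97])

curved = grades + 5    # adds 5 to every element: [55, 55, 52, 102]
passed = grades >= 50  # element-wise comparison: [True, True, False, True]

print(curved)
print(passed)
```

Comparisons like `grades >= 50` produce boolean arrays, which (as we'll see later with Pandas) are the basis for filtering data.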
The key takeaway from this is that NumPy arrays are specifically designed to support mathematical operations on numeric data - which makes them more useful for data analysis than a generic list. You might have spotted that the class type for the numpy array above is a **numpy.ndarray**. The **nd** indicates that this is a structure that can consist of multiple *dimensions* (it can have *n* dimensions). Our specific instance has a single dimension of student grades. Run the cell below to view the **shape** of the array. ``` grades.shape ``` The shape confirms that this array has only one dimension, which contains 22 elements (there are 22 grades in the original list). You can access the individual elements in the array by their zero-based ordinal position. Let's get the first element (the one in position 0). ``` grades[0] ``` Alright, now that you know your way around a NumPy array, it's time to perform some analysis of the grades data. You can apply aggregations across the elements in the array, so let's find the simple average grade (in other words, the *mean* grade value). ``` grades.mean() ``` So the mean grade is just around 50 - more or less in the middle of the possible range from 0 to 100. Let's add a second set of data for the same students, this time recording the typical number of hours per week they devoted to studying. ``` # Define an array of study hours study_hours = [10.0,11.5,9.0,16.0,9.25,1.0,11.5,9.0,8.5,14.5,15.5, 13.75,9.0,8.0,15.5,8.0,9.0,6.0,10.0,12.0,12.5,12.0] # Create a 2D array (an array of arrays) student_data = np.array([study_hours, grades]) # display the array student_data ``` Now the data consists of a 2-dimensional array - an array of arrays. Let's look at its shape. ``` # Show shape of 2D array student_data.shape ``` The **student_data** array contains two elements, each of which is an array containing 22 elements. To navigate this structure, you need to specify the position of each element in the hierarchy.
So to find the first value in the first array (which contains the study hours data), you can use the following code. ``` # Show the first element of the first element student_data[0][0] ``` Now you have a multidimensional array containing both the student's study time and grade information, which you can use to compare data. For example, how does the mean study time compare to the mean grade? ``` # Get the mean value of each sub-array avg_study = student_data[0].mean() avg_grade = student_data[1].mean() # Print the means using an f-string print(f"Average study hours: {avg_study:.2f}\nAverage grade: {avg_grade:.2f}") ``` ## Exploring tabular data with Pandas While NumPy provides a lot of the functionality you need to work with numbers - specifically arrays of numeric values - when you start to deal with two-dimensional tables of data, the **Pandas** package offers a more convenient structure to work with: the **DataFrame**. Run the following cell to import the Pandas library and create a DataFrame with three columns. The first column is a list of student names, and the second and third columns are the NumPy arrays containing the study time and grade data. ``` import pandas as pd df_students = pd.DataFrame({ 'Name': [ 'Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny', 'Jakeem','Helena','Ismat','Anila','Skye','Daniel','Aisha' ] ,'StudyHours':student_data[0] ,'Grade':student_data[1] }) df_students ``` Note that in addition to the columns you specified, the DataFrame includes an *index* to uniquely identify each row. We could have specified the index explicitly, and assigned any kind of appropriate value (for example, an email address); but because we didn't specify an index, one has been created with a unique integer value for each row.
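As a side note, here's a minimal sketch of what specifying an explicit index looks like - the email addresses and values below are hypothetical, purely for illustration:

```
import pandas as pd

# A tiny DataFrame keyed by an explicit string index (hypothetical emails)
# instead of the auto-generated integers.
df = pd.DataFrame(
    {'StudyHours': [10.0, 11.5, 9.0], 'Grade': [50.0, 50.0, 47.0]},
    index=['dan@example.edu', 'joann@example.edu', 'pedro@example.edu']
)

# Rows are now located by that label rather than by an integer.
print(df.loc['joann@example.edu'])
```

With an explicit index, label-based lookups use whatever values you supplied, while position-based lookups (covered next) are unaffected.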
### Finding and filtering data in a DataFrame You can use the DataFrame's **loc** method to retrieve data for a specific index value, like this. ``` # Get the data for index value 5 df_students.loc[5] ``` You can also get the data at a range of index values, like this: ``` # Get the rows with index values from 0 to 5 df_students.loc[0:5] ``` In addition to being able to use the **loc** method to find rows based on the index, you can use the **iloc** method to find rows based on their ordinal position in the DataFrame (regardless of the index): ``` # Get data in the first five rows df_students.iloc[0:5] ``` Look carefully at the `iloc[0:5]` results, and compare them to the `loc[0:5]` results you obtained previously. Can you spot the difference? The **loc** method returned rows with index *label* in the list of values from *0* to *5* - which includes *0*, *1*, *2*, *3*, *4*, and *5* (six rows). However, the **iloc** method returns the rows in the *positions* included in the range 0 to 5, and since integer ranges don't include the upper-bound value, this includes positions *0*, *1*, *2*, *3*, and *4* (five rows). **iloc** identifies data values in a DataFrame by *position*, which extends beyond rows to columns. So for example, you can use it to find the values for the columns in positions 1 and 2 in row 0, like this: ``` df_students.iloc[0,[1,2]] ``` Let's return to the **loc** method, and see how it works with columns. Remember that **loc** is used to locate data items based on index values rather than positions. In the absence of an explicit index column, the rows in our dataframe are indexed as integer values, but the columns are identified by name: ``` df_students.loc[0,'Grade'] ``` Here's another useful trick. 
You can use the **loc** method to find indexed rows based on a filtering expression that references named columns other than the index, like this: ``` df_students.loc[df_students['Name']=='Aisha'] ``` Actually, you don't need to explicitly use the **loc** method to do this - you can simply apply a DataFrame filtering expression, like this: ``` df_students[df_students['Name']=='Aisha'] ``` And for good measure, you can achieve the same results by using the DataFrame's **query** method, like this: ``` df_students.query('Name=="Aisha"') ``` The three previous examples underline an occasionally confusing truth about working with Pandas. Often, there are multiple ways to achieve the same results. Another example of this is the way you refer to a DataFrame column name. You can specify the column name as a named index value (as in the `df_students['Name']` examples we've seen so far), or you can use the column as a property of the DataFrame, like this: ``` df_students[df_students.Name == 'Aisha'] ``` ### Loading a DataFrame from a file We constructed the DataFrame from some existing arrays. However, in many real-world scenarios, data is loaded from sources such as files. Let's replace the student grades DataFrame with the contents of a text file. ``` df_students = pd.read_csv('data/grades.csv',delimiter=',',header='infer') df_students.head() ``` The DataFrame's **read_csv** method is used to load data from text files. As you can see in the example code, you can specify options such as the column delimiter and which row (if any) contains column headers (in this case, the delimiter is a comma and the first row contains the column names - these are the default settings, so the parameters could have been omitted). ### Handling missing values One of the most common issues data scientists need to deal with is incomplete or missing data. So how would we know that the DataFrame contains missing values?
You can use the **isnull** method to identify which individual values are null, like this: ``` df_students.isnull() ``` Of course, with a larger DataFrame, it would be inefficient to review all of the rows and columns individually; so we can get the sum of missing values for each column, like this: ``` df_students.isnull().sum() ``` So now we know that there's one missing **StudyHours** value, and two missing **Grade** values. To see them in context, we can filter the dataframe to include only rows where any of the columns (axis 1 of the DataFrame) are null. ``` df_students[df_students.isnull().any(axis=1)] ``` When the DataFrame is retrieved, the missing numeric values show up as **NaN** (*not a number*). So now that we've found the null values, what can we do about them? One common approach is to *impute* replacement values. For example, if the number of study hours is missing, we could just assume that the student studied for an average amount of time and replace the missing value with the mean study hours. To do this, we can use the **fillna** method, like this: ``` df_students.StudyHours = df_students.StudyHours.fillna(df_students.StudyHours.mean()) df_students ``` Alternatively, it might be important to ensure that you only use data you know to be absolutely correct; so you can drop rows or columns that contain null values by using the **dropna** method. In this case, we'll remove rows (axis 0 of the DataFrame) where any of the columns contain null values. ``` df_students = df_students.dropna(axis=0, how='any') df_students ``` ### Explore data in the DataFrame Now that we've cleaned up the missing values, we're ready to explore the data in the DataFrame. Let's start by comparing the mean study hours and grades. ``` # Get the mean study hours using the column name as an index mean_study = df_students['StudyHours'].mean() # Get the mean grade using the column name as a property (just to make the point!)
mean_grade = df_students.Grade.mean() # Print the mean study hours and mean grade print('Average weekly study hours: {:.2f}\nAverage grade: {:.2f}'.format(mean_study, mean_grade)) ``` OK, let's filter the DataFrame to find only the students who studied for more than the average amount of time. ``` # Get students who studied for the mean or more hours df_students[df_students.StudyHours > mean_study] ``` Note that the filtered result is itself a DataFrame, so you can work with its columns just like any other DataFrame. For example, let's find the average grade for students who undertook more than the average amount of study time. ``` # What was their mean grade? df_students[df_students.StudyHours > mean_study].Grade.mean() ``` Let's assume that the passing grade for the course is 60. We can use that information to add a new column to the DataFrame, indicating whether or not each student passed. First, we'll create a Pandas **Series** containing the pass/fail indicator (True or False), and then we'll concatenate that series as a new column (axis 1) in the DataFrame. ``` passes = pd.Series(df_students['Grade'] >= 60) df_students = pd.concat([df_students, passes.rename("Pass")], axis=1) df_students ``` DataFrames are designed for tabular data, and you can use them to perform many of the kinds of data analytics operations you can do in a relational database, such as grouping and aggregating tables of data. For example, you can use the **groupby** method to group the student data into groups based on the **Pass** column you added previously, and count the number of names in each group - in other words, you can determine how many students passed and failed. ``` print(df_students.groupby(df_students.Pass).Name.count()) ``` You can aggregate multiple fields in a group using any available aggregation function. For example, you can find the mean study time and grade for the groups of students who passed and failed the course.
``` print(df_students.groupby(df_students.Pass)[['StudyHours', 'Grade']].mean()) ``` DataFrames are amazingly versatile, and make it easy to manipulate data. Many DataFrame operations return a new copy of the DataFrame; so if you want to modify a DataFrame but keep the existing variable, you need to assign the result of the operation to the existing variable. For example, the following code sorts the student data into descending order of Grade, and assigns the resulting sorted DataFrame to the original **df_students** variable. ``` # Create a DataFrame with the data sorted by Grade (descending) df_students = df_students.sort_values('Grade', ascending=False) # Show the DataFrame df_students ``` ## Visualizing data with Matplotlib DataFrames provide a great way to explore and analyze tabular data, but sometimes a picture is worth a thousand rows and columns. The **Matplotlib** library provides the foundation for plotting data visualizations that can greatly enhance your ability to analyze the data. Let's start with a simple bar chart that shows the grade of each student. ``` # Ensure plots are displayed inline in the notebook %matplotlib inline from matplotlib import pyplot as plt # Create a bar plot of name vs grade plt.bar(x=df_students.Name, height=df_students.Grade) # Display the plot plt.show() ``` Well, that worked; but the chart could use some improvements to make it clearer what we're looking at. Note that you used the **pyplot** class from Matplotlib to plot the chart. This class provides a whole bunch of ways to improve the visual elements of the plot. For example, the following code: - Specifies the color of the bar chart.
- Adds a title to the chart (so we know what it represents) - Adds labels to the X and Y axes (so we know which axis shows which data) - Adds a grid (to make it easier to determine the values for the bars) - Rotates the X markers (so we can read them) ``` # Create a bar plot of name vs grade plt.bar(x=df_students.Name, height=df_students.Grade, color='orange') # Customize the chart plt.title('Student Grades') plt.xlabel('Student') plt.ylabel('Grade') plt.grid(color='#95a5a6', linestyle='--', linewidth=2, axis='y', alpha=0.7) plt.xticks(rotation=90) # Display the plot plt.show() ``` A plot is technically contained within a **Figure**. In the previous examples, the figure was created implicitly for you; but you can create it explicitly. For example, the following code creates a figure with a specific size. ``` # Create a Figure fig = plt.figure(figsize=(8,3)) # Create a bar plot of name vs grade plt.bar(x=df_students.Name, height=df_students.Grade, color='orange') # Customize the chart plt.title('Student Grades') plt.xlabel('Student') plt.ylabel('Grade') plt.grid(color='#95a5a6', linestyle='--', linewidth=2, axis='y', alpha=0.7) plt.xticks(rotation=90) # Show the figure plt.show() ``` A figure can contain multiple subplots, each on its own *axis*. For example, the following code creates a figure with two subplots - one is a bar chart showing student grades, and the other is a pie chart comparing the number of passing grades to non-passing grades.
``` # Create a figure for 2 subplots (1 row, 2 columns) fig, ax = plt.subplots(1, 2, figsize = (10,4)) # Create a bar plot of name vs grade on the first axis ax[0].bar(x=df_students.Name, height=df_students.Grade, color='orange') ax[0].set_title('Grades') ax[0].set_xticklabels(df_students.Name, rotation=90) # Create a pie chart of pass counts on the second axis pass_counts = df_students['Pass'].value_counts() ax[1].pie(pass_counts, labels=pass_counts) ax[1].set_title('Passing Grades') ax[1].legend(pass_counts.keys().tolist()) # Add a title to the Figure fig.suptitle('Student Data') # Show the figure fig.show() ``` Until now, you've used methods of the Matplotlib.pyplot object to plot charts. However, Matplotlib is so foundational to graphics in Python that many packages, including Pandas, provide methods that abstract the underlying Matplotlib functions and simplify plotting. For example, the DataFrame provides its own methods for plotting data, as shown in the following example to plot a bar chart of study hours. ``` df_students.plot.bar(x='Name', y='StudyHours', color='teal', figsize=(6,4)) ``` ## Getting started with statistical analysis Now that you know how to use Python to manipulate and visualize data, you can start analyzing it. A lot of data science is rooted in *statistics*, so we'll explore some basic statistical techniques. > **Note**: This is not intended to teach you statistics - that's much too big a topic for this notebook. It will however introduce you to some statistical concepts and techniques that data scientists use as they explore data in preparation for machine learning modeling. ### Descriptive statistics and data distribution When examining a *variable* (for example a sample of student grades), data scientists are particularly interested in its *distribution* (in other words, how are all the different grade values spread across the sample). 
The starting point for this exploration is often to visualize the data as a histogram, and see how frequently each value for the variable occurs. ``` # Get the variable to examine var_data = df_students['Grade'] # Create a Figure fig = plt.figure(figsize=(10,4)) # Plot a histogram plt.hist(var_data) # Add titles and labels plt.title('Data Distribution') plt.xlabel('Value') plt.ylabel('Frequency') # Show the figure fig.show() ``` The histogram for grades is a symmetric shape, where the most frequently occurring grades tend to be in the middle of the range (around 50), with fewer grades at the extreme ends of the scale. #### Measures of central tendency To understand the distribution better, we can examine so-called *measures of central tendency*, which is a fancy way of describing statistics that represent the "middle" of the data. The goal of this is to try to find a "typical" value. Common ways to define the middle of the data include: - The *mean*: A simple average based on adding together all of the values in the sample set, and then dividing the total by the number of samples. - The *median*: The value in the middle of the range of all of the sample values. - The *mode*: The most commonly occurring value in the sample set<sup>\*</sup>. Let's calculate these values, along with the minimum and maximum values for comparison, and show them on the histogram. > <sup>\*</sup>Of course, in some sample sets, there may be a tie for the most common value - in which case the dataset is described as *bimodal* or even *multimodal*.
``` # Get the variable to examine var = df_students['Grade'] # Get statistics min_val = var.min() max_val = var.max() mean_val = var.mean() med_val = var.median() mod_val = var.mode()[0] print('Minimum:{:.2f}\nMean:{:.2f}\nMedian:{:.2f}\nMode:{:.2f}\nMaximum:{:.2f}\n'.format(min_val, mean_val, med_val, mod_val, max_val)) # Create a Figure fig = plt.figure(figsize=(10,4)) # Plot a histogram plt.hist(var) # Add lines for the statistics plt.axvline(x=min_val, color = 'gray', linestyle='dashed', linewidth = 2) plt.axvline(x=mean_val, color = 'cyan', linestyle='dashed', linewidth = 2) plt.axvline(x=med_val, color = 'red', linestyle='dashed', linewidth = 2) plt.axvline(x=mod_val, color = 'yellow', linestyle='dashed', linewidth = 2) plt.axvline(x=max_val, color = 'gray', linestyle='dashed', linewidth = 2) # Add titles and labels plt.title('Data Distribution') plt.xlabel('Value') plt.ylabel('Frequency') # Show the figure fig.show() ``` For the grade data, the mean, median, and mode all seem to be more or less in the middle of the minimum and maximum, at around 50. Another way to visualize the distribution of a variable is to use a *box* plot (sometimes called a *box-and-whiskers* plot). Let's create one for the grade data. ``` # Get the variable to examine var = df_students['Grade'] # Create a Figure fig = plt.figure(figsize=(10,4)) # Plot a boxplot plt.boxplot(var) # Add titles and labels plt.title('Data Distribution') # Show the figure fig.show() ``` The box plot shows the distribution of the grade values in a different format to the histogram. The *box* part of the plot shows where the inner two *quartiles* of the data reside - so in this case, half of the grades are between approximately 36 and 63. The *whiskers* extending from the box show the outer two quartiles; so the other half of the grades in this case are between 0 and 36 or 63 and 100. The line in the box indicates the *median* value.
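The box edges described above can also be computed directly with the **quantile** method - a quick sketch using the same sample of grade values loaded earlier in this notebook:

```
import pandas as pd

# The numbers a box plot draws can be computed directly with quantile().
# These are the grade values from the sample used in this notebook.
grades = pd.Series([50, 50, 47, 97, 49, 3, 53, 42, 26, 74, 82,
                    62, 37, 15, 70, 27, 36, 35, 48, 52, 63, 64])

q1 = grades.quantile(0.25)   # lower edge of the box
q2 = grades.quantile(0.50)   # the median line
q3 = grades.quantile(0.75)   # upper edge of the box

print('Q1={:.2f}  median={:.2f}  Q3={:.2f}'.format(q1, q2, q3))
```

With this sample the box spans roughly 36 to 63, consistent with the plot described above.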
It's often useful to combine histograms and box plots, with the box plot's orientation changed to align it with the histogram (in some ways, it can be helpful to think of the histogram as a "front elevation" view of the distribution, and the box plot as a "plan" view of the distribution from above.) ``` # Create a function that we can re-use def show_distribution(var_data): from matplotlib import pyplot as plt # Get statistics min_val = var_data.min() max_val = var_data.max() mean_val = var_data.mean() med_val = var_data.median() mod_val = var_data.mode()[0] print('Minimum:{:.2f}\nMean:{:.2f}\nMedian:{:.2f}\nMode:{:.2f}\nMaximum:{:.2f}\n'.format(min_val, mean_val, med_val, mod_val, max_val)) # Create a figure for 2 subplots (2 rows, 1 column) fig, ax = plt.subplots(2, 1, figsize = (10,4)) # Plot the histogram ax[0].hist(var_data) ax[0].set_ylabel('Frequency') # Add lines for the mean, median, and mode ax[0].axvline(x=min_val, color = 'gray', linestyle='dashed', linewidth = 2) ax[0].axvline(x=mean_val, color = 'cyan', linestyle='dashed', linewidth = 2) ax[0].axvline(x=med_val, color = 'red', linestyle='dashed', linewidth = 2) ax[0].axvline(x=mod_val, color = 'yellow', linestyle='dashed', linewidth = 2) ax[0].axvline(x=max_val, color = 'gray', linestyle='dashed', linewidth = 2) # Plot the boxplot ax[1].boxplot(var_data, vert=False) ax[1].set_xlabel('Value') # Add a title to the Figure fig.suptitle('Data Distribution') # Show the figure fig.show() # Get the variable to examine col = df_students['Grade'] # Call the function show_distribution(col) ``` All of the measurements of central tendency are right in the middle of the data distribution, which is symmetric with values becoming progressively lower in both directions from the middle. To explore this distribution in more detail, you need to understand that statistics is fundamentally about taking *samples* of data and using probability functions to extrapolate information about the full *population* of data. 
For example, the student data consists of 22 samples, and for each sample there is a grade value. You can think of each sample grade as a variable that's been randomly selected from the set of all grades awarded for this course. With enough of these random variables, you can calculate something called a *probability density function*, which estimates the distribution of grades for the full population. The Pandas DataFrame class provides a helpful plot function to show this density. ``` def show_density(var_data): from matplotlib import pyplot as plt fig = plt.figure(figsize=(10,4)) # Plot density var_data.plot.density() # Add titles and labels plt.title('Data Density') # Show the mean, median, and mode plt.axvline(x=var_data.mean(), color = 'cyan', linestyle='dashed', linewidth = 2) plt.axvline(x=var_data.median(), color = 'red', linestyle='dashed', linewidth = 2) plt.axvline(x=var_data.mode()[0], color = 'yellow', linestyle='dashed', linewidth = 2) # Show the figure plt.show() # Get the density of Grade col = df_students['Grade'] show_density(col) ``` As expected from the histogram of the sample, the density shows the characteristic "bell curve" of what statisticians call a *normal* distribution, with the mean and mode at the center and symmetric tails. Now let's take a look at the distribution of the study hours data. ``` # Get the variable to examine col = df_students['StudyHours'] # Call the function show_distribution(col) ``` The distribution of the study time data is significantly different from that of the grades. Note that the whiskers of the box plot only extend to around 6.0, indicating that the vast majority of the first quarter of the data is above this value. The minimum is marked with an **o**, indicating that it is statistically an *outlier* - a value that lies significantly outside the range of the rest of the distribution. Outliers can occur for many reasons.
Maybe a student meant to record "10" hours of study time, but entered "1" and missed the "0". Or maybe the student was abnormally lazy when it comes to studying! Either way, it's a statistical anomaly that doesn't represent a typical student. Let's see what the distribution looks like without it. ``` # Get the variable to examine col = df_students[df_students.StudyHours>1]['StudyHours'] # Call the function show_distribution(col) ``` In this example, the dataset is small enough to clearly see that the value **1** is an outlier for the **StudyHours** column, so you can exclude it explicitly. In most real-world cases, it's easier to consider outliers as being values that fall below or above percentiles within which most of the data lie. For example, the following code uses the Pandas **quantile** function to exclude observations below the 0.01 quantile (the 1st percentile - the value above which 99% of the data reside). ``` q01 = df_students.StudyHours.quantile(0.01) # Get the variable to examine col = df_students[df_students.StudyHours>q01]['StudyHours'] # Call the function show_distribution(col) ``` > **Tip**: You can also eliminate outliers at the upper end of the distribution by defining a threshold at a high percentile value - for example, you could use the **quantile** function with a value of 0.99 to find the 99th percentile, below which 99% of the data reside. With the outliers removed, the box plot shows all data within the four quartiles. Note that the distribution is not symmetric like it is for the grade data though - there are some students with very high study times of around 16 hours, but the bulk of the data is between 7 and 13 hours; the few extremely high values pull the mean towards the higher end of the scale. Let's look at the density for this distribution. ``` # Get the density of StudyHours show_density(col) ``` This kind of distribution is called *right skewed*.
The mass of the data is on the left side of the distribution, creating a long tail to the right because of the values at the extreme high end, which pull the mean to the right.

#### Measures of variance

So now we have a good idea where the middle of the grade and study hours data distributions are. However, there's another aspect of the distributions we should examine: how much variability is there in the data?

Typical statistics that measure variability in the data include:

- **Range**: The difference between the maximum and minimum. There's no built-in function for this, but it's easy to calculate using the **min** and **max** functions.
- **Variance**: The average of the squared difference from the mean. You can use the built-in **var** function to find this.
- **Standard Deviation**: The square root of the variance. You can use the built-in **std** function to find this.

```
for col_name in ['Grade','StudyHours']:
    col = df_students[col_name]
    rng = col.max() - col.min()
    var = col.var()
    std = col.std()
    print('\n{}:\n - Range: {:.2f}\n - Variance: {:.2f}\n - Std.Dev: {:.2f}'.format(col_name, rng, var, std))
```

Of these statistics, the standard deviation is generally the most useful. It provides a measure of variance in the data on the same scale as the data itself (so grade points for the Grade distribution and hours for the StudyHours distribution). The higher the standard deviation, the more variance there is when comparing values in the distribution to the distribution mean - in other words, the data is more spread out.

When working with a *normal* distribution, the standard deviation works with the particular characteristics of a normal distribution to provide even greater insight. Run the cell below to see the relationship between standard deviations and the data in the normal distribution.
```
import scipy.stats as stats

# Get the Grade column
col = df_students['Grade']

# get the density
density = stats.gaussian_kde(col)

# Plot the density
col.plot.density()

# Get the mean and standard deviation
s = col.std()
m = col.mean()

# Annotate 1 stdev
x1 = [m-s, m+s]
y1 = density(x1)
plt.plot(x1,y1, color='magenta')
plt.annotate('1 std (68.26%)', (x1[1],y1[1]))

# Annotate 2 stdevs
x2 = [m-(s*2), m+(s*2)]
y2 = density(x2)
plt.plot(x2,y2, color='green')
plt.annotate('2 std (95.45%)', (x2[1],y2[1]))

# Annotate 3 stdevs
x3 = [m-(s*3), m+(s*3)]
y3 = density(x3)
plt.plot(x3,y3, color='orange')
plt.annotate('3 std (99.73%)', (x3[1],y3[1]))

# Show the location of the mean
plt.axvline(col.mean(), color='cyan', linestyle='dashed', linewidth=1)

plt.axis('off')

plt.show()
```

The horizontal lines show the percentage of data within 1, 2, and 3 standard deviations of the mean (plus or minus).

In any normal distribution:
- Approximately 68.26% of values fall within one standard deviation from the mean.
- Approximately 95.45% of values fall within two standard deviations from the mean.
- Approximately 99.73% of values fall within three standard deviations from the mean.

So, since we know that the mean grade is 49.18, the standard deviation is 21.74, and the distribution of grades is approximately normal, we can calculate that 68.26% of students should achieve a grade between 27.44 and 70.92.

The descriptive statistics we've used to understand the distribution of the student data variables are the basis of statistical analysis; and because they're such an important part of exploring your data, there's a built-in **describe** method of the DataFrame object that returns the main descriptive statistics for all numeric columns.

```
df_students.describe()
```

## Comparing data

Now that you know something about the statistical distribution of the data in your dataset, you're ready to examine your data to identify any apparent relationships between variables.
First of all, let's get rid of any rows that contain outliers so that we have a sample that is representative of a typical class of students. We identified that the StudyHours column contains some outliers with extremely low values, so we'll remove those rows. ``` df_sample = df_students[df_students['StudyHours']>1] df_sample ``` ### Comparing numeric and categorical variables The data includes two *numeric* variables (**StudyHours** and **Grade**) and two *categorical* variables (**Name** and **Pass**). Let's start by comparing the numeric **StudyHours** column to the categorical **Pass** column to see if there's an apparent relationship between the number of hours studied and a passing grade. To make this comparison, let's create box plots showing the distribution of StudyHours for each possible Pass value (true and false). ``` df_sample.boxplot(column='StudyHours', by='Pass', figsize=(8,5)) ``` Comparing the StudyHours distributions, it's immediately apparent (if not particularly surprising) that students who passed the course tended to study for more hours than students who didn't. So if you wanted to predict whether or not a student is likely to pass the course, the amount of time they spend studying may be a good predictive feature. ### Comparing numeric variables Now let's compare two numeric variables. We'll start by creating a bar chart that shows both grade and study hours. ``` # Create a bar plot of name vs grade and study hours df_sample.plot(x='Name', y=['Grade','StudyHours'], kind='bar', figsize=(8,5)) ``` The chart shows bars for both grade and study hours for each student; but it's not easy to compare because the values are on different scales. Grades are measured in grade points, and range from 3 to 97; while study time is measured in hours and ranges from 1 to 16. 
A common technique when dealing with numeric data in different scales is to *normalize* the data so that the values retain their proportional distribution, but are measured on the same scale. To accomplish this, we'll use a technique called *MinMax* scaling that distributes the values proportionally on a scale of 0 to 1. You could write the code to apply this transformation; but the **Scikit-Learn** library provides a scaler to do it for you. ``` from sklearn.preprocessing import MinMaxScaler # Get a scaler object scaler = MinMaxScaler() # Create a new dataframe for the scaled values df_normalized = df_sample[['Name', 'Grade', 'StudyHours']].copy() # Normalize the numeric columns df_normalized[['Grade','StudyHours']] = scaler.fit_transform(df_normalized[['Grade','StudyHours']]) # Plot the normalized values df_normalized.plot(x='Name', y=['Grade','StudyHours'], kind='bar', figsize=(8,5)) ``` With the data normalized, it's easier to see an apparent relationship between grade and study time. It's not an exact match, but it definitely seems like students with higher grades tend to have studied more. So there seems to be a correlation between study time and grade; and in fact, there's a statistical *correlation* measurement we can use to quantify the relationship between these columns. ``` df_normalized.Grade.corr(df_normalized.StudyHours) ``` The correlation statistic is a value between -1 and 1 that indicates the strength of a relationship. Values above 0 indicate a *positive* correlation (high values of one variable tend to coincide with high values of the other), while values below 0 indicate a *negative* correlation (high values of one variable tend to coincide with low values of the other). In this case, the correlation value is close to 1; showing a strongly positive correlation between study time and grade. > **Note**: Data scientists often quote the maxim "*correlation* is not *causation*". 
In other words, as tempting as it might be, you shouldn't interpret the statistical correlation as explaining *why* one of the values is high. In the case of the student data, the statistics demonstrate that students with high grades tend to also have high amounts of study time; but this is not the same as proving that they achieved high grades *because* they studied a lot. The statistic could equally be used as evidence to support the nonsensical conclusion that the students studied a lot *because* their grades were going to be high.

Another way to visualize the apparent correlation between two numeric columns is to use a *scatter* plot.

```
# Create a scatter plot
df_sample.plot.scatter(title='Study Time vs Grade', x='StudyHours', y='Grade')
```

Again, it looks like there's a discernible pattern in which the students who studied the most hours are also the students who got the highest grades.

We can see this more clearly by adding a *regression* line (or a *line of best fit*) to the plot that shows the general trend in the data. To do this, we'll use a statistical technique called *least squares regression*.

> **Warning - Math Ahead!**
>
> Cast your mind back to when you were learning how to solve linear equations in school, and recall that the *slope-intercept* form of a linear equation looks like this:
>
> \begin{equation}y = mx + b\end{equation}
>
> In this equation, *y* and *x* are the coordinate variables, *m* is the slope of the line, and *b* is the y-intercept (where the line goes through the Y-axis).
>
> In the case of our scatter plot for our student data, we already have our values for *x* (*StudyHours*) and *y* (*Grade*), so we just need to calculate the intercept and slope of the straight line that lies closest to those points.
Then we can form a linear equation that calculates a new *y* value on that line for each of our *x* (*StudyHours*) values - to avoid confusion, we'll call this new *y* value *f(x)* (because it's the output from a linear equation ***f***unction based on *x*). The difference between the original *y* (*Grade*) value and the *f(x)* value is the *error* between our regression line and the actual *Grade* achieved by the student. Our goal is to calculate the slope and intercept for a line with the lowest overall error. > > Specifically, we define the overall error by taking the error for each point, squaring it, and adding all the squared errors together. The line of best fit is the line that gives us the lowest value for the sum of the squared errors - hence the name *least squares regression*. Fortunately, you don't need to code the regression calculation yourself - the **SciPy** package includes a **stats** class that provides a **linregress** method to do the hard work for you. This returns (among other things) the coefficients you need for the slope equation - slope (*m*) and intercept (*b*) based on a given pair of variable samples you want to compare. 
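For reference, the slope and intercept that least squares regression produces have a standard closed form, where x̄ and ȳ denote the sample means of the *x* (*StudyHours*) and *y* (*Grade*) values:

\begin{equation}m = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i}(x_i - \bar{x})^{2}}\end{equation}

\begin{equation}b = \bar{y} - m\bar{x}\end{equation}

These are the first two values that **linregress** returns, alongside the correlation coefficient, the p-value, and the standard error of the estimate.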
```
from scipy import stats

# Copy the relevant columns for the regression
df_regression = df_sample[['Grade', 'StudyHours']].copy()

# Get the regression slope and intercept
m, b, r, p, se = stats.linregress(df_regression['StudyHours'], df_regression['Grade'])
print('slope: {:.4f}\ny-intercept: {:.4f}'.format(m,b))
print('so...\n f(x) = {:.4f}x + {:.4f}'.format(m,b))

# Use the function (mx + b) to calculate f(x) for each x (StudyHours) value
df_regression['fx'] = (m * df_regression['StudyHours']) + b

# Calculate the error between f(x) and the actual y (Grade) value
df_regression['error'] = df_regression['fx'] - df_regression['Grade']

# Create a scatter plot of Grade vs StudyHours
df_regression.plot.scatter(x='StudyHours', y='Grade')

# Plot the regression line
plt.plot(df_regression['StudyHours'],df_regression['fx'], color='cyan')

# Display the plot
plt.show()
```

Note that this time, the code plotted two distinct things - the scatter plot of the sample study hours and grades is plotted as before, and then a line of best fit based on the least squares regression coefficients is plotted. The slope and intercept coefficients calculated for the regression line are shown above the plot.

The line is based on the ***f*(x)** values calculated for each **StudyHours** value. Run the following cell to see a table that includes the following values:

- The **StudyHours** for each student.
- The **Grade** achieved by each student.
- The ***f(x)*** value calculated using the regression line coefficients.
- The *error* between the calculated ***f(x)*** value and the actual **Grade** value.

Some of the errors, particularly at the extreme ends, are quite large (up to over 17.5 grade points); but in general, the line is pretty close to the actual grades.
``` # Show the original x,y values, the f(x) value, and the error df_regression[['StudyHours', 'Grade', 'fx', 'error']] ``` ### Using the regression coefficients for prediction Now that you have the regression coefficients for the study time and grade relationship, you can use them in a function to estimate the expected grade for a given amount of study. ``` # Define a function based on our regression coefficients def f(x): m = 6.3134 b = -17.9164 return m*x + b study_time = 14 # Get f(x) for study time prediction = f(study_time) # Grade can't be less than 0 or more than 100 expected_grade = max(0,min(100,prediction)) #Print the estimated grade print ('Studying for {} hours per week may result in a grade of {:.0f}'.format(study_time, expected_grade)) ``` So by applying statistics to sample data, you've determined a relationship between study time and grade; and encapsulated that relationship in a general function that can be used to predict a grade for a given amount of study time. This technique is in fact the basic premise of machine learning. You can take a set of sample data that includes one or more *features* (in this case, the number of hours studied) and a known *label* value (in this case, the grade achieved) and use the sample data to derive a function that calculates predicted label values for any given set of features. ## Further Reading To learn more about the Python packages you explored in this notebook, see the following documentation: - [NumPy](https://numpy.org/doc/stable/) - [Pandas](https://pandas.pydata.org/pandas-docs/stable/) - [Matplotlib](https://matplotlib.org/contents.html) ## Challenge: Analyze Flight Data If this notebook has inspired you to try exploring data for yourself, why not take on the challenge of a real-world dataset containing flight records from the US Department of Transportation? You'll find the challenge in the [/challenges/01 - Flights Challenge.ipynb](./challenges/01%20-%20Flights%20Challenge.ipynb) notebook! 
> **Note**: The time to complete this optional challenge is not included in the estimated time for this exercise - you can spend as little or as much time on it as you like!
<center> <img src="../../img/ods_stickers.jpg">

## Open Machine Learning Course. Session № 2

Author: Yury Kashnitsky, research engineer at Mail.ru Group and senior lecturer at the Faculty of Computer Science of the Higher School of Economics. This material is distributed under the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. You may use it for any purpose (edit it, correct it, or build on it) except commercial ones, with mandatory attribution of the author.

# <center> Topic 5. Ensembles of algorithms, random forest
## <center>Practice. Decision trees and random forests in the Kaggle Inclass competition on credit scoring

There is no web form for answers here; track your standing on the [competition](https://inclass.kaggle.com/c/beeline-credit-scoring-competition-2) leaderboard, and use this [link](https://www.kaggle.com/t/115237dd8c5e4092a219a0c12bf66fc6) to participate.

The task is credit scoring. Bank client features:

- Age - age (real-valued)
- Income - monthly income (real-valued)
- BalanceToCreditLimit - ratio of the credit card balance to the credit limit (real-valued)
- DIR - Debt-to-income Ratio (real-valued)
- NumLoans - number of loans and credit lines
- NumRealEstateLoans - number of mortgages and real-estate-related loans (natural number)
- NumDependents - number of family members the client supports, excluding the client themselves (natural number)
- Num30-59Delinquencies - number of 30-59 day loan payment delinquencies (natural number)
- Num60-89Delinquencies - number of 60-89 day loan payment delinquencies (natural number)
- Delinquent90 - whether there was a loan payment delinquency of more than 90 days (binary) - available only in the training set

```
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score
%matplotlib inline
```

**Load the data.**
```
train_df = pd.read_csv('../../data/credit_scoring_train.csv', index_col='client_id')
test_df = pd.read_csv('../../data/credit_scoring_test.csv', index_col='client_id')

y = train_df['Delinquent90']
train_df.drop('Delinquent90', axis=1, inplace=True)

train_df.head()
```

**Let's look at the number of missing values in each feature.**

```
train_df.info()
test_df.info()
```

**Replace the missing values with median values.**

```
train_df['NumDependents'].fillna(train_df['NumDependents'].median(), inplace=True)
train_df['Income'].fillna(train_df['Income'].median(), inplace=True)
test_df['NumDependents'].fillna(test_df['NumDependents'].median(), inplace=True)
test_df['Income'].fillna(test_df['Income'].median(), inplace=True)
```

### Decision tree without parameter tuning

**Train a decision tree of maximum depth 3; use random_state=17 for reproducibility of the results.**

```
first_tree = # Your code here
first_tree.fit # Your code here
```

**Make predictions for the test set.**

```
first_tree_pred = first_tree # Your code here
```

**Write the predictions to a file.**

```
def write_to_submission_file(predicted_labels, out_file,
                             target='Delinquent90', index_label="client_id"):
    # turn predictions into data frame and save as csv file
    predicted_df = pd.DataFrame(predicted_labels,
                                index = np.arange(75000, predicted_labels.shape[0] + 75000),
                                columns=[target])
    predicted_df.to_csv(out_file, index_label=index_label)

write_to_submission_file(first_tree_pred, 'credit_scoring_first_tree.csv')
```

**If you predict default probabilities for the test-set clients instead, the result will be much better.**

```
first_tree_pred_probs = first_tree.predict_proba(test_df)[:, 1]
write_to_submission_file # Your code here
```

## Decision tree with parameter tuning via GridSearch

**Tune the tree parameters with `GridSearchCV`; look at the best parameter combination and the mean score on 5-fold cross-validation. Use `random_state=17` (for reproducibility of the results) and don't forget parallelization (`n_jobs=-1`).**

```
tree_params = {'max_depth': list(range(3, 8)),
               'min_samples_leaf': list(range(5, 13))}

locally_best_tree = GridSearchCV # Your code here
locally_best_tree.fit # Your code here
locally_best_tree.best_params_, round(locally_best_tree.best_score_, 3)
```

**Make predictions for the test set and submit the solution to Kaggle.**

```
tuned_tree_pred_probs = locally_best_tree # Your code here
write_to_submission_file # Your code here
```

### Random forest without parameter tuning

**Train a random forest of trees of unlimited depth; use `random_state=17` for reproducibility of the results.**

```
first_forest = # Your code here
first_forest.fit # Your code here
first_forest_pred = first_forest # Your code here
```

**Make predictions for the test set and submit the solution to Kaggle.**

```
write_to_submission_file # Your code here
```

### Random forest with parameter tuning

**Tune the forest's `max_features` parameter with `GridSearchCV`; look at the best parameter combination and the mean score on 5-fold cross-validation. Use random_state=17 (for reproducibility of the results) and don't forget parallelization (n_jobs=-1).**

```
%%time
forest_params = {'max_features': np.linspace(.3, 1, 7)}

locally_best_forest = GridSearchCV # Your code here
locally_best_forest.fit # Your code here
locally_best_forest.best_params_, round(locally_best_forest.best_score_, 3)

tuned_forest_pred = locally_best_forest # Your code here
write_to_submission_file # Your code here
```

**Look at how the tuned random forest ranks the features by their importance for the target. Present the results in a clear form using a `DataFrame`.**

```
pd.DataFrame(locally_best_forest.best_estimator_.feature_importances_ # Your code here
```

**Usually, increasing the number of trees only improves the result. So, to finish, train a random forest of 300 trees with the best parameters found. This may take a few minutes.**

```
%%time
final_forest = RandomForestClassifier # Your code here
final_forest.fit(train_df, y)

final_forest_pred = final_forest.predict_proba(test_df)[:, 1]
write_to_submission_file(final_forest_pred, 'credit_scoring_final_forest.csv')
```

**Submit to Kaggle.**
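For readers working through the exercises above, the following is one possible sketch of the grid-search pattern requested for the decision tree. The data here is synthetic, standing in for the credit-scoring CSV files (which are not bundled with this notebook), and scoring by ROC AUC is an assumption suggested by the notebook's `roc_auc_score` import; only the parameter grid and `random_state=17` come from the instructions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the real credit-scoring features and target
rng = np.random.RandomState(17)
X = rng.rand(300, 4)
y = (X[:, 0] + 0.5 * rng.rand(300) > 0.75).astype(int)

tree_params = {'max_depth': list(range(3, 8)),
               'min_samples_leaf': list(range(5, 13))}

# 5-fold cross-validated grid search over the tree parameters
locally_best_tree = GridSearchCV(DecisionTreeClassifier(random_state=17),
                                 tree_params, scoring='roc_auc', cv=5, n_jobs=-1)
locally_best_tree.fit(X, y)
print(locally_best_tree.best_params_, round(locally_best_tree.best_score_, 3))
```

On the real data, `X` and `y` would be `train_df` and the `Delinquent90` target extracted earlier.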
# High-level Chainer Example ``` # Parameters EPOCHS = 10 N_CLASSES=10 BATCHSIZE = 64 LR = 0.01 MOMENTUM = 0.9 GPU = True LOGGER_URL='msdlvm.southcentralus.cloudapp.azure.com' LOGGER_USRENAME='admin' LOGGER_PASSWORD='password' LOGGER_DB='gpudata' LOGGER_SERIES='gpu' import os from os import path import sys import numpy as np import math import chainer import chainer.functions as F import chainer.links as L from chainer import optimizers from chainer import cuda from utils import cifar_for_library, yield_mb, create_logger, Timer from gpumon.influxdb import log_context from influxdb import InfluxDBClient client = InfluxDBClient(LOGGER_URL, 8086, LOGGER_USRENAME, LOGGER_PASSWORD, LOGGER_DB) node_id = os.getenv('AZ_BATCH_NODE_ID', default='node') task_id = os.getenv('AZ_BATCH_TASK_ID', default='chainer') job_id = os.getenv('AZ_BATCH_JOB_ID', default='chainer') logger = create_logger(client, node_id=node_id, task_id=task_id, job_id=job_id) print("OS: ", sys.platform) print("Python: ", sys.version) print("Chainer: ", chainer.__version__) print("Numpy: ", np.__version__) data_path = path.join(os.getenv('AZ_BATCHAI_INPUT_DATASET'), 'cifar-10-batches-py') class SymbolModule(chainer.Chain): def __init__(self): super(SymbolModule, self).__init__( conv1=L.Convolution2D(3, 50, ksize=(3,3), pad=(1,1)), conv2=L.Convolution2D(50, 50, ksize=(3,3), pad=(1,1)), conv3=L.Convolution2D(50, 100, ksize=(3,3), pad=(1,1)), conv4=L.Convolution2D(100, 100, ksize=(3,3), pad=(1,1)), # feature map size is 8*8 by pooling fc1=L.Linear(100*8*8, 512), fc2=L.Linear(512, N_CLASSES), ) def __call__(self, x): h = F.relu(self.conv2(F.relu(self.conv1(x)))) h = F.max_pooling_2d(h, ksize=(2,2), stride=(2,2)) h = F.dropout(h, 0.25) h = F.relu(self.conv4(F.relu(self.conv3(h)))) h = F.max_pooling_2d(h, ksize=(2,2), stride=(2,2)) h = F.dropout(h, 0.25) h = F.dropout(F.relu(self.fc1(h)), 0.5) return self.fc2(h) def init_model(m): optimizer = optimizers.MomentumSGD(lr=LR, momentum=MOMENTUM) optimizer.setup(m) 
return optimizer def to_chainer(array, **kwargs): return chainer.Variable(cuda.to_gpu(array), **kwargs) %%time # Data into format for library x_train, x_test, y_train, y_test = cifar_for_library(data_path, channel_first=True) print(x_train.shape, x_test.shape, y_train.shape, y_test.shape) print(x_train.dtype, x_test.dtype, y_train.dtype, y_test.dtype) %%time # Create symbol sym = SymbolModule() if GPU: chainer.cuda.get_device(0).use() # Make a specified GPU current sym.to_gpu() # Copy the model to the GPU %%time optimizer = init_model(sym) with Timer() as t: with log_context(LOGGER_URL, LOGGER_USRENAME, LOGGER_PASSWORD, LOGGER_DB, LOGGER_SERIES, node_id=node_id, task_id=task_id, job_id=job_id): for j in range(EPOCHS): for data, target in yield_mb(x_train, y_train, BATCHSIZE, shuffle=True): # Get samples optimizer.update(L.Classifier(sym), to_chainer(data), to_chainer(target)) # Log print(j) print('Training took %.03f sec.' % t.interval) logger('training duration', value=t.interval) %%time n_samples = (y_test.shape[0]//BATCHSIZE)*BATCHSIZE y_guess = np.zeros(n_samples, dtype=np.int) y_truth = y_test[:n_samples] c = 0 with chainer.using_config('train', False): for data, target in yield_mb(x_test, y_test, BATCHSIZE): # Forwards pred = chainer.cuda.to_cpu(sym(to_chainer(data)).data.argmax(-1)) # Collect results y_guess[c*BATCHSIZE:(c+1)*BATCHSIZE] = pred c += 1 acc=sum(y_guess == y_truth)/len(y_guess) print("Accuracy: ", acc) logger('accuracy', value=acc) ```
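Note that `cifar_for_library`, `yield_mb`, `create_logger`, and `Timer` come from a local `utils` module that is not included here. As a rough idea of what the minibatch generator likely does (an assumption, not the actual helper), a minimal sketch:

```python
import numpy as np

def yield_mb_sketch(X, y, batchsize=64, shuffle=False):
    # Optionally shuffle, then yield equal-sized (data, label) minibatches,
    # dropping any trailing partial batch (hypothetical stand-in for utils.yield_mb)
    if shuffle:
        idx = np.random.permutation(len(X))
        X, y = X[idx], y[idx]
    for i in range(len(X) // batchsize):
        sl = slice(i * batchsize, (i + 1) * batchsize)
        yield X[sl], y[sl]

# 10 samples with batchsize 4 -> 2 full batches of 4
batches = list(yield_mb_sketch(np.arange(10).reshape(10, 1), np.arange(10), batchsize=4))
print(len(batches), batches[0][0].shape)  # -> 2 (4, 1)
```

Dropping the partial batch matches how the evaluation loop above sizes `n_samples` as `(y_test.shape[0]//BATCHSIZE)*BATCHSIZE`.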
# Is Alexander Ovechkin the NHL's best goal-scorer?

## Since his arrival in the NHL in the 2005-2006 season, Ovechkin has been one of the NHL's marquee players. Largely known for his ability to score goals and for having a vibrant personality, unlike most players in the league, Ovechkin has also been one of the league's most discussed players. One of those discussions is whether he is truly the best goal-scorer in the league. This notebook seeks to resolve that discussion.

To resolve it, we will be using a dataset of NHL player performance metrics dating from 1940 to 2018 to evaluate Ovechkin's goal-scoring rates compared to those of his peers and of players from earlier decades. As an extension, we will also explore shooting percentage and shots to further examine the source of Ovechkin's goals per game.

This dataset is being imported from [inalitic.com](http://inalitic.com/datasets/nhl%20player%20data.html), which sourced its dataset from [Hockey-reference](http://hockey-reference.com), a repository of NHL data. We will use the information from this dataset to create a comparison of goals per game and shooting percentages of Ovechkin and other NHL players.

## Limitations

A limitation of the data is that not all statistics have been historically tracked. While metrics such as shots have been tracked for the last few decades (and the entirety of Ovechkin's career), they were not tracked in the early years of the NHL. Moreover, a number of rules changed at the start of Ovechkin's career. This makes comparisons between Ovechkin and some of hockey's all-time greats more difficult to make. Instead, this analysis will focus more on Ovechkin and his peers, particularly other star players, which are easier comparisons to make.

An additional limitation is that an assumption within the data set regarding null values was made.
While it is easy to infer that the intention of the null value is a 0, given that the null value was a result of converting a value of ' - ' in columns that were full of numerical values and that many players do not score goals or meet other metrics, it is still an assumption that was made.

## Summary

Some of the key findings are that Ovechkin scores approximately 0.462 more goals per game than his peers and 0.439 more goals per game than the average NHL player since 1940. We also found that Ovechkin's shooting percentage is 2.544% higher than his peer forwards, but in line with other NHL superstar players, so it is reasonable. Instead, where Ovechkin far exceeds other players is in his total shots: Ovechkin is able to shoot the puck far more often than even other NHL superstars. This leads us to the conclusion that yes, Ovechkin is the NHL's best goal-scorer, because he is the best at generating shots while maintaining an above-average shooting percentage.

## Terminology

To familiarize yourself with some hockey statistical terms:

<b>Season</b> = NHL season in which the other statistics took place, serving as a timestamp<br>
<b>Player</b> = the name of the NHL athlete<br>
<b>Age</b> = age of a given player during that season<br>
<b>Tm</b> = NHL team / club a given player played for during that season<br>
<b>Pos</b> = position a given player plays<br>
<b>GP</b> = number of games played by a given player during that season<br>
<b>G</b> = number of goals scored by a given player during that season<br>
<b>GPG</b> = average (mean) goals per game of a given player during that season, equivalent to G/GP<br>
<b>S</b> = number of shots taken by a given player during that season<br>
<b>S%</b> = percentage of shots taken by a given player that were goals during that season, equivalent to G/S * 100<br>

<b>To start, we need to import our tools and csv file, then examine and clean the data.</b>

``` # import tools / libraries import pandas as pd import numpy as np import
matplotlib.pyplot as plt import seaborn as sns import chardet # use Chardet to check encoding of CSV file with open('skater_stats.csv', 'rb') as fraw: file_content = fraw.read(-1) chardet.detect(file_content) ## Import csv and check headers, need to include encoding value since CSV is encoded as Windows-1252 df = pd.read_csv('skater_stats.csv', encoding = 'Windows-1252') df.head() # check data types and null-values df.info() ``` Since we are analyzing shots and goals, we can remove many of the columns, as they are unnecessary in determining Ovechkin's goal-scoring ability. ``` ## Remove unnecessary columns # Since we are analyzing shots and goals, we can remove most of the columns df.drop(['Unnamed: 0', 'BLK','A', 'PTS','+/-', 'PIM', 'EVG', 'PPG', 'SHG', 'GWG', 'EVA', 'PPA', 'SHA', 'TOI', 'ATOI', 'BLK', 'HIT', 'FOwin', 'FOloss', 'FO%'], axis = 1, inplace = True) # Convert most statline columns from object to int or float df['G'] = pd.to_numeric(df['G'], errors = 'coerce') df['S'] = pd.to_numeric(df['S'], errors = 'coerce') df['S%'] = pd.to_numeric(df['S%'], errors = 'coerce') ``` Since we have removed unnecessary columns, we should also add some columns that will help us better categorize some data. Particularly, we should categorize positions between a 'forward' group and a 'defenseman' group. We should also compare Ovechkin to some of the NHL's other superstar athletes. 
```
# Create a new column in the dataframe that distinguishes between forwards and defensemen

# Define an if-else function
def Position_group(row):
    if row['Pos'] == ' D ':
        return 'Defense'
    elif row['Pos'] in (' C ', ' LW ', ' RW '):  # membership test (a chained `== x or y or z` is always truthy)
        return 'Forward'
    else:
        return 'Unknown'

# incorporate this into a new column
df['Pos Group'] = df.apply(lambda row: Position_group(row), axis = 1)

# Create a new column in the dataframe that distinguishes between Ovechkin, other NHL superstars, and other NHL players

# Define an if-else function that will identify and separate Ovechkin, other star players, and their peers
def Identity(row):
    if row['Player'] == 'Alex Ovechkin':
        return 'Ovechkin'
    elif row['Player'] == 'Sidney Crosby':
        return 'Crosby'
    elif row['Player'] == 'Steven Stamkos':
        return 'Stamkos'
    elif row['Player'] == 'Patrick Kane':
        return 'Kane'
    elif row['Player'] == 'Connor McDavid':
        return 'McDavid'
    elif row['Player'] == 'Evgeni Malkin':
        return 'Malkin'
    else:
        return 'Others'

# incorporate the results of that function into a new column
df['Identity'] = df.apply(lambda row: Identity(row), axis = 1)

# check results
df.head(10)

# Check value counts for 'Pos Group'
df['Pos Group'].value_counts()

# Check value counts for 'Identity'
df['Identity'].value_counts()
```

Let's now get a look at some of the metadata.

```
df.describe()
```

We see that some players on the list only played one game in a season, which could distort the data. Knowing that there have been some shortened seasons (most are 82 games, but early seasons were shorter and a few were shortened due to labor disputes) and injuries can shorten a player's season, we will only include players at the 25th percentile of games played (in this case, 20 games).
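The 20-game cutoff comes from the 25th percentile of games played; pandas' `quantile` method is a handy way to find such a cutoff. A toy illustration (the series here is made up, since the real cutoff depends on the full dataset):

```python
import pandas as pd

# Made-up games-played values; in the notebook this would be df['GP']
gp = pd.Series([1, 5, 20, 40, 60, 82])
print(gp.quantile(0.25))  # -> 8.75 (linear interpolation between 5 and 20)
```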
```
# Create updated dataframe for players who played 20+ games
# (use .copy() so the later in-place replacement modifies df2 rather than a view of df)
df2 = df[df['GP'] >= 20].copy()
df2

## Check for null values
def check_nulls():
    print('Number of NaN values for the column Season:', df2['Season'].isnull().sum())
    print('Number of NaN values for the column Player:', df2['Player'].isnull().sum())
    print('Number of NaN values for the column Age:', df2['Age'].isnull().sum())
    print('Number of NaN values for the column Tm:', df2['Tm'].isnull().sum())
    print('Number of NaN values for the column Pos:', df2['Pos'].isnull().sum())
    print('Number of NaN values for the column GP:', df2['GP'].isnull().sum())
    print('Number of NaN values for the column G:', df2['G'].isnull().sum())
    print('Number of NaN values for the column GPG:', df2['GPG'].isnull().sum())
    print('Number of NaN values for the column S:', df2['S'].isnull().sum())
    print('Number of NaN values for the column S%:', df2['S%'].isnull().sum())
    print('Number of NaN values for the column Pos Group:', df2['Pos Group'].isnull().sum())
    print('Number of NaN values for the column Identity:', df2['Identity'].isnull().sum())

check_nulls()
```

We see that there are many null values. While not ideal, we can make the assumption that for these cases, the given athlete earned a 0. We are basing this assumption on the CSV file's original values of " - ", which were converted to null values. Moreover, none of these null values are in places where the datatype is a string or object, which means all the null values refer to actual results in hockey games. In hockey, not all players contribute to scoring. Therefore, this result can be expected.

```
# Replace null values of hockey stats with 0
df2.replace(np.nan, 0, inplace = True)

# Check number of null values
check_nulls()

## Now that there are no null values, we should look at the data again
df2.info()
```

## We can now look at Ovechkin's data and begin to compare it to the rest of the league and throughout NHL history.
To do this, we first need to isolate Ovechkin's career stats from those of other players. ``` ## Use loc to find Ovechkin Ovechkin_career = df2.loc[df2['Player'] == 'Alex Ovechkin'] Ovechkin_career ## We can look at his career stats in more depth Ovechkin_career.describe() ``` Now that we've isolated Ovechkin's career with a variable, we should create variables for all other players, both modern (played while Ovechkin has played, or 2006-2018) and historic (played prior to Ovechkin playing, or 1940-2004). By doing this, we can compare Ovechkin's stats to other players. ``` ## Let's look at all players throughout NHL history that are not Alex Ovechkin Other_players = df2.loc[df2['Player'] != 'Alex Ovechkin'] Other_players ## We can explore this data further Other_players.describe() ``` We have our historic data, so now we need to create a subset of modern players who have played during Ovechkin's career. ``` ## We also want to isolate Ovechkin's peers since he started in the NHL # Create a subset that begins when Ovechkin entered the NHL Modern_players = df2[df2['Season'] >= 2006] # Separate Ovechkin from his peers within that subset Ovechkin_peers = Modern_players[Modern_players['Player'] != 'Alex Ovechkin'] Ovechkin_peers ## We can explore this data further Ovechkin_peers.describe() ``` <b>Now that we have established these variables, we can use them to compare Ovechkin's goal scoring ability to that of other players. </b> We will be using mean as a measurement here, as we want to compare the average goals per game (GPG) of Ovechkin against other players. Mean is preferred here, because outlier values are acceptable when examining the rate of goals over the course of a career. We also wish to compare Ovechkin to all players, including those who perform at levels outside the norm. 
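The subset-and-compare-means approach described above can be sketched on a tiny synthetic frame (player names beyond Ovechkin and all numbers here are invented for illustration):

```python
import pandas as pd

# Toy stand-in for the real per-season data
toy = pd.DataFrame({
    "Player": ["Alex Ovechkin", "Alex Ovechkin", "Someone Else", "Someone Else"],
    "GPG": [0.60, 0.58, 0.30, 0.34],
})

# Isolate one player's rows with a boolean condition, as done with .loc above
target = toy.loc[toy["Player"] == "Alex Ovechkin"]
others = toy.loc[toy["Player"] != "Alex Ovechkin"]

# Compare the group means
gap = target["GPG"].mean() - others["GPG"].mean()
```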
``` ## Let's compare Ovechkin's goals per game historically and to his peers Ovechkin_career_GPG = Ovechkin_career['GPG'].mean() Other_players_GPG = Other_players['GPG'].mean() Ovechkin_peers_GPG = Ovechkin_peers['GPG'].mean() print("Ovechkin's goals per game:", Ovechkin_career_GPG) print("Average goals per game of Ovechkin's peers:", Ovechkin_peers_GPG) print("Average goals per game of all players since 1940:", Other_players_GPG) print('') print("Difference in GPG between Ovechkin and his peers:", Ovechkin_career_GPG - Ovechkin_peers_GPG) print("Difference in GPG between Ovechkin and all players since 1940:", Ovechkin_career_GPG - Other_players_GPG) ## Let's graph this difference # Create variables objects = ('Ovechkin', "Ovechkin's Peers", 'NHL Since 1940') y_pos = np.arange(len(objects)) # order the values to match the labels above performance = [Ovechkin_career_GPG, Ovechkin_peers_GPG, Other_players_GPG] # Create graph plt.bar(y_pos, performance, align='center', color = ['red', 'black', 'black'], alpha=0.5) plt.xticks(y_pos, objects) plt.ylabel('Goals Per Game') plt.title('Comparison of Goals Per Game') plt.show() ``` We see from the graph above that Ovechkin's goal scoring far exceeds that of both his peers and the historic average of NHL players. To determine why this might be, we will look at the shooting percentages of Ovechkin and his peers. We unfortunately cannot calculate the shooting percentages of all NHL players since 1940, as shots (and thus shooting percentage) are more recently tracked statistics. 
``` ## Let's compare Ovechkin's shooting percentage with his peers Ovechkin_career_SP = Ovechkin_career['S%'].mean() Ovechkin_peers_SP = Ovechkin_peers['S%'].mean() SP_Difference = Ovechkin_career_SP - Ovechkin_peers_SP print("Ovechkin's career shooting percentage:", Ovechkin_career_SP,'%') print("Shooting percentage of Ovechkin's peers:", Ovechkin_peers_SP,'%') print("Ovechkin vs Peers shooting percentage spread:", SP_Difference,'%') ## Let's graph this difference # Create variables objects1 = ('Ovechkin', "Ovechkin's Peers") y_pos1 = np.arange(len(objects1)) performance1 = [Ovechkin_career_SP, Ovechkin_peers_SP] # Create graphs plt.bar(y_pos1, performance1, align='center', color = ['red', 'black'], alpha=0.5) plt.xticks(y_pos1, objects1) plt.ylabel('Shooting Percentage') plt.title('Comparison of Shooting Percentage') plt.show() ``` We see that Ovechkin is far ahead of his peers in both goals per game and shooting percentage. But let's isolate this further. As explained earlier, not all players are expected to contribute to scoring. Generally speaking, players at forward positions (e.g. C, LW, RW) are expected to contribute to scoring far more than those playing defense. We should explore how Ovechkin compares to other forwards. To do this, we will use the 'Pos Group' column. 
``` ## Check shooting percentage and goals per game of forwards # Create subset of peers who played at a 'Forward' position Ovechkin_peers_fwd = Ovechkin_peers[Ovechkin_peers['Pos Group'] == 'Forward'] # Find shooting percentage and goals per game Peers_fwd_SP = Ovechkin_peers_fwd['S%'].mean() Peers_fwd_GPG = Ovechkin_peers_fwd['GPG'].mean() SP_Difference_fwd = Ovechkin_career_SP - Peers_fwd_SP GPG_Difference_fwd = Ovechkin_career_GPG - Peers_fwd_GPG # Compare print("Ovechkin's career shooting percentage:", Ovechkin_career_SP,'%') print("Shooting percentage of other forwards in Ovechkin's peer group:", Peers_fwd_SP,'%') print("Ovechkin vs Peer Forwards shooting percentage spread:", SP_Difference_fwd,'%') print("") print("Ovechkin's goals per game:", Ovechkin_career_GPG) print("Average goals per game of forwards since Ovechkin joined the NHL:", Peers_fwd_GPG) print("Ovechkin vs Peer Forwards goals per game spread:", GPG_Difference_fwd, 'goals per game') # Graph shooting percentage difference # Create variables objects2 = ('Ovechkin', 'Peer Forwards') y_pos2 = np.arange(len(objects2)) performance2 = [Ovechkin_career_SP, Peers_fwd_SP] # Create graph plt.bar(y_pos2, performance2, align='center', color = ['red', 'black'], alpha=0.5) plt.xticks(y_pos2, objects2) plt.ylabel('Shooting Percentage') plt.title('Comparison of Shooting Percentage') plt.show() # Graph GPG difference # Create variables objects3 = ('Ovechkin', 'Peer Forwards') y_pos3 = np.arange(len(objects3)) performance3 = [Ovechkin_career_GPG, Peers_fwd_GPG] # Create graph plt.bar(y_pos3, performance3, align='center', color = ['red', 'black'], alpha=0.5) plt.xticks(y_pos3, objects3) plt.ylabel('Goals Per Game') plt.title('Comparison of Goals Per Game') plt.show() ``` <b>We still see that Ovechkin is more accurate and provides more goal scoring than other forwards in the league. This should be expected of one of the NHL's superstars. 
To determine if Ovechkin really is one of the best, we must compare his performance to other NHL superstars.</b> To do this, we will create box plots to examine both goals per game (GPG) and shooting percentage (S%). ``` # create box plot comparing goals per game sns.boxplot(x = Modern_players['Identity'], y = Modern_players['GPG'], data = Modern_players) ``` We see that Ovechkin has higher goals per game than his superstar peers, which suggests that he is the best goal-scorer in the league. We should check to see if his shooting percentage is above that of his peers. ``` ## Create box plot comparing shooting percentage sns.boxplot(x = Modern_players['Identity'], y = Modern_players['S%'], data = Modern_players) ``` It's clear that Ovechkin's shooting percentage is in line with that of his superstar peers. It is not an outlier. If Ovechkin's goals per game is higher than that of others in the league, but his shooting percentage is not, we can deduce that Ovechkin must shoot the puck more often than his peers. Let's compare his shots. ``` ## Create box plot comparing total number of shots per season sns.boxplot(x = Modern_players['Identity'], y = Modern_players['S'], data = Modern_players) ``` And here is what separates Ovechkin from everyone else in the NHL: he has an exceptional ability to shoot the puck at a rate far greater than other players, generating far more offense and goal-scoring. # We see from the data and graphs above that Ovechkin is the best goal-scorer in the NHL. ## His goals per game is greater than that of his peers. He is an accurate shooter. We can deduce that this is not luck, as his shooting percentage is in line with other elite NHL athletes. Instead, we see that Ovechkin shoots the puck at an amazing rate, contributing to his best-in-the-league goals per game.
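The box-plot comparisons above can also be summarized numerically with a groupby. A minimal sketch on made-up per-season values (these numbers are illustrative, not real NHL statistics):

```python
import pandas as pd

# Hypothetical per-season goals-per-game values, not real NHL data
toy = pd.DataFrame({
    "Identity": ["Ovechkin", "Ovechkin", "Crosby", "Crosby", "Others", "Others"],
    "GPG": [0.62, 0.58, 0.45, 0.49, 0.20, 0.24],
})

# Median GPG per identity group -- the statistic at the center of each box plot
medians = toy.groupby("Identity")["GPG"].median()
```

With the real data, `Modern_players.groupby('Identity')[['GPG', 'S%', 'S']].median()` would tabulate the centers of all three box plots at once.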
# KEN 3140 Semantic Web: Lab 5 🧪 ### Writing and executing "complex" SPARQL queries on RDF graphs **Reference specifications: https://www.w3.org/TR/sparql11-query/** We will use the **DBpedia SPARQL endpoint**: >**https://dbpedia.org/sparql** And **SPARQL query editor YASGUI**: > **https://yasgui.triply.cc** # Install the SPARQL kernel This notebook uses the SPARQL Kernel to define and **execute SPARQL queries in the notebook** codeblocks. You can **install the SPARQL Kernel** locally (or with Conda): ```shell pip install sparqlkernel --user jupyter sparqlkernel install --user ``` Or use a Docker image (similar to the one for Java): ```shell docker run -it --rm -p 8888:8888 -v $(pwd):/home/jovyan -e JUPYTER_ENABLE_LAB=yes -e JUPYTER_TOKEN=YOURPASSWORD umids/jupyterlab:sparql ``` To start running SPARQL queries in this notebook, we need to define the **SPARQL kernel parameters**: ``` # Define the SPARQL endpoint to query %endpoint http://dbpedia.org/sparql # This is optional; it increases the log level %log debug # Uncomment the next line to return labels in English and avoid duplicates # %lang en ``` # Perform an arithmetic operation Calculate the GDP per capita of countries from `dbp:gdpNominal` and `dbo:populationTotal` Starting from this query: ```sparql PREFIX dbo: <http://dbpedia.org/ontology/> PREFIX dbp: <http://dbpedia.org/property/> SELECT ?country ?gdpValue ?population WHERE { ?country dbp:gdpNominal ?gdpValue ; dbo:populationTotal ?population . 
} LIMIT 10 ``` **Impossible due to different datatypes** 🚫 The GDP is in `http://dbpedia.org/datatype/usDollar`, and the population is an `xsd:nonNegativeInteger`: ``` PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> PREFIX dbo: <http://dbpedia.org/ontology/> PREFIX dbp: <http://dbpedia.org/property/> PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> SELECT ?country ?gdpValue (datatype(?gdpValue) AS ?gdpType) ?population (datatype(?population) AS ?populationType) (?gdpValue / ?population AS ?gdpPerCapita) WHERE { ?country dbp:gdpNominal ?gdpValue ; dbo:populationTotal ?population . } LIMIT 10 ``` # Cast a variable to a specific datatype Especially useful when **comparing or performing arithmetic operations on two variables**. Use the `xsd:` prefix for standard datatypes. Here we divide a value in `usDollar` by a `nonNegativeInteger`, casting both to `xsd:integer`, to calculate the GDP per capita of each country 💶 ``` PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> PREFIX dbo: <http://dbpedia.org/ontology/> PREFIX dbp: <http://dbpedia.org/property/> SELECT ?country ?gdpValue ?population (xsd:integer(?gdpValue) / xsd:integer(?population) AS ?gdpPerCapita) WHERE { ?country dbp:gdpNominal ?gdpValue ; dbo:populationTotal ?population . } LIMIT 10 ``` # Bind a new variable * Use the `concat()` function to add "http://country.org/" at the start of a country ISO code. * Use `BIND` to bind the produced string to a variable * Make this string a URI using the `uri()` function Start from this query: ``` SELECT * WHERE { ?country a dbo:Country ; dbp:iso31661Alpha ?isoCode . 
} LIMIT 10 PREFIX dbo: <http://dbpedia.org/ontology/> PREFIX dbp: <http://dbpedia.org/property/> SELECT * WHERE { ?country a dbo:Country ; dbp:iso31661Alpha ?isoCode . BIND(uri(concat("http://country.org/", ?isoCode)) AS ?isoUri) } LIMIT 10 ``` # Count aggregated results Count the number of books for each author 📚 Start from this query: ```sparql PREFIX dbo:<http://dbpedia.org/ontology/> SELECT ?author WHERE { ?book a dbo:Book ; dbo:author ?author . } LIMIT 10 ``` ``` PREFIX dbo:<http://dbpedia.org/ontology/> SELECT ?author (count(?book) as ?book_count) WHERE { ?book a dbo:Book ; dbo:author ?author . } GROUP BY ?author LIMIT 10 ``` # Count depends on the variables selected in a row Here we also select the book, so each (book, author) pair forms its own group and gets a count of 1 📘 ``` PREFIX dbo:<http://dbpedia.org/ontology/> SELECT ?book ?author (count(?book) as ?book_count) WHERE { ?book a dbo:Book ; dbo:author ?author . } GROUP BY ?book ?author LIMIT 10 ``` # Group by Group solutions by variable value. Get the average GDP for all countries grouped by the currency they use. Start from: ```sparql PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> PREFIX dbp: <http://dbpedia.org/property/> PREFIX dbo: <http://dbpedia.org/ontology/> SELECT ?currency WHERE { ?country dbo:currency ?currency ; dbp:gdpPppPerCapita ?gdp . } ``` # Group by solution Use the `AVG()` function to calculate the average of the GDPs grouped by currency: ``` PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> PREFIX dbp: <http://dbpedia.org/property/> PREFIX dbo: <http://dbpedia.org/ontology/> SELECT ?currency (AVG(xsd:integer(?gdp)) AS ?avgGdp) WHERE { ?country dbo:currency ?currency ; dbp:gdpPppPerCapita ?gdp . } GROUP BY ?currency ORDER BY DESC(?avgGdp) LIMIT 15 ``` # Make a pattern optional We can define optional patterns that will be retrieved when available. Put a statement in an `OPTIONAL { }` block to make it optional (it will not be used to filter the statements returned) With this query we get all the books, and their authors. 
**Change it to define the author property as optional**, so we retrieve books even if no author is defined. ```sparql PREFIX dbo: <http://dbpedia.org/ontology/> SELECT * WHERE { ?book a dbo:Book ; dbo:author ?author . } ``` ``` PREFIX dbo: <http://dbpedia.org/ontology/> SELECT * WHERE { ?book a dbo:Book . OPTIONAL { ?book dbo:author ?author . } } ``` # Get the graph Most triplestores today support named graphs, which add a 4th element to each triple (usually to classify it within a larger graph of triples). ```turtle <http://subject> <http://predicate> <http://object> <http://graph> . ``` Also known as: context ``` PREFIX dbo:<http://dbpedia.org/ontology/> SELECT ?author ?graph WHERE { GRAPH ?graph { ?book a dbo:Book ; dbo:author ?author . } } LIMIT 10 ``` We can also query the triples `FROM` a specific graph: ``` PREFIX dbo:<http://dbpedia.org/ontology/> SELECT ?author ?graph FROM <http://dbpedia.org> WHERE { GRAPH ?graph { ?book a dbo:Book ; dbo:author ?author . } } LIMIT 10 ``` # Or get all graphs This query takes time in big datasets, but is usually cached in Virtuoso triplestores. ``` SELECT DISTINCT ?g WHERE { GRAPH ?g { ?s ?p ?o . } } ``` # Subqueries A query inside a query 🤯 Exercise: take the first 10 countries to have been dissolved and order them by date of creation. * Select all countries that have been dissolved * Order them by dissolution date (oldest to newest) * Limit to 10 * Finally, order the results (countries) from the most recently created to the oldest created Start from: ``` SELECT * WHERE { ?country a dbo:Country ; dbo:dissolutionDate ?dissolutionDate ; dbo:foundingYear ?foundingYear . } LIMIT 5 ``` * Order countries by dissolution date and keep the 10 first * Order them from the most recently created to the oldest created ``` SELECT * WHERE { { SELECT ?country ?dissolutionDate WHERE { ?country a dbo:Country ; dbo:dissolutionDate ?dissolutionDate . } order by ?dissolutionDate limit 10 } ?country dbo:foundingYear ?foundingYear . 
} order by desc(?foundingYear) ``` # Federated query Similar to a subquery, a federated query lets you query another SPARQL endpoint directly We will need to execute the query on **https://graphdb.dumontierlab.com/repositories/KEN3140_SemanticWeb** (dbpedia blocks federated queries) * [P688](https://www.wikidata.org/wiki/Property:P688): encodes (the product of a gene) * [P352](https://www.wikidata.org/wiki/Property:P352): identifier for a protein per the UniProt database. ``` %endpoint https://graphdb.dumontierlab.com/repositories/KEN3140_SemanticWeb PREFIX wdt: <http://www.wikidata.org/prop/direct/> SELECT * WHERE { SERVICE <https://query.wikidata.org/sparql> { ?gene wdt:P688 ?encodedProtein . ?encodedProtein wdt:P352 ?uniprotId . } } LIMIT 5 ``` # Construct 🧱 Return a graph specified by a template (build triples) Generate 2 triples: * Author is of type `schema:Person` * The `schema:countryOfOrigin` of the author Starting from this query: ``` %endpoint http://dbpedia.org/sparql PREFIX dbo: <http://dbpedia.org/ontology/> PREFIX schema: <http://schema.org/> SELECT * WHERE { ?book a dbo:Book ; dbo:author ?author . ?author dbo:birthPlace ?birthPlace . ?birthPlace dbo:country ?country . } LIMIT 5 ``` You can define the pattern in the `CONSTRUCT { }` block ``` %endpoint http://dbpedia.org/sparql PREFIX dbo: <http://dbpedia.org/ontology/> PREFIX schema: <http://schema.org/> CONSTRUCT { ?author a schema:Person ; schema:countryOfOrigin ?country . } WHERE { ?book a dbo:Book ; dbo:author ?author . ?author dbo:birthPlace ?birthPlace . ?birthPlace dbo:country ?country . } LIMIT 5 ``` # Insert 📝 Same as a `construct`, but it directly inserts the triples into your triplestore. You can define in which graph the triples will be inserted ```sparql PREFIX dbo: <http://dbpedia.org/ontology/> PREFIX schema: <http://schema.org/> INSERT { GRAPH <http://my-graph> { ?author a schema:Person ; schema:countryOfOrigin ?country . } } WHERE { ?book a dbo:Book ; dbo:author ?author . 
?author dbo:birthPlace ?birthPlace . ?birthPlace dbo:country ?country . } ``` # Insert data Use SPARQL to insert data into your triplestore (**not possible on public endpoints**) ```sparql PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> INSERT DATA { GRAPH <http://my-graph> { <my-subject> rdfs:label "inserted object" . } } ``` # Delete ❌ To delete particular statements retrieved from a pattern using `WHERE` Here we delete the `bl:name` statements for the genes we just created: ```sparql DELETE { GRAPH <http://graph> { ?geneUri bl:name ?geneLabel. } } WHERE { ?geneUri a bl:Gene . ?geneUri bl:name ?geneLabel . } ``` # Delete data Directly provide the statements to delete ```sparql PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> DELETE DATA { GRAPH <http://my-graph> { <http://my-subject> rdfs:label "inserted object" . } } ``` # Search on DBpedia 🔎 Use **[https://yasgui.triply.cc](https://yasgui.triply.cc)** to write and run SPARQL queries on DBpedia 1. Calculate country densities. Density = `dbo:populationTotal` / `dbo:PopulatedPlace/areaTotal` 2. Construct a new triple to define the previously created country density 3. Order by birthDate of authors who wrote at least 3 books with more than 500 pages. ## 1. Calculate country densities ``` SELECT ?country ?area ?population (xsd:float(?population)/xsd:float(?area) AS ?density) WHERE { ?country a dbo:Country ; dbo:populationTotal ?population ; <http://dbpedia.org/ontology/PopulatedPlace/areaTotal> ?area . FILTER(?area != 0) } ``` ## 2. Construct density triple ``` CONSTRUCT { ?country dbo:density ?density . } WHERE { ?country a dbo:Country ; dbo:populationTotal ?population ; <http://dbpedia.org/ontology/PopulatedPlace/areaTotal> ?area . BIND(xsd:float(?population)/xsd:float(?area) AS ?density) FILTER(?area != 0) } LIMIT 10 ``` # Useful links 🔧 * Use **[prefix.cc](http://prefix.cc/)** to resolve mysterious prefixes. 
* Search for functions in the specifications: **https://www.w3.org/TR/sparql11-query** * How do I find vocabulary to use in my SPARQL query from DBpedia? Search in **[DBpedia ontology classes](http://mappings.dbpedia.org/server/ontology/classes/)**, or on google, e.g., search for: "**[dbpedia capital](https://www.google.com/search?&q=dbpedia+capital)**" # Public SPARQL endpoints 🔗 * Wikidata, facts powering Wikipedia infobox: https://query.wikidata.org/sparql * Bio2RDF, linked data for the life sciences: https://bio2rdf.org/sparql * Disgenet, gene-disease association: http://rdf.disgenet.org/sparql * PathwayCommons, resource for biological pathways analysis: http://rdf.pathwaycommons.org/sparql
``` def foobar(a: int, b: str, c: float = 3.2) -> tuple: pass import collections import functools import inspect from typing import List Vector = List[float] def formatannotation(annotation, base_module=None): if getattr(annotation, '__module__', None) == 'typing': return repr(annotation).replace('typing.', '') if isinstance(annotation, type): if annotation.__module__ in ('builtins', base_module): return annotation.__qualname__ return annotation.__module__+'.'+annotation.__qualname__ return repr(annotation) def check(func): # get the parameters from the function definition sig = inspect.signature(func) parameters = sig.parameters # ordered dict of parameters arg_keys = tuple(parameters.keys()) # parameter names for k, v in sig.parameters.items(): print('{k}: {a!r}'.format(k=k, a=v.annotation)) print("\t", formatannotation(v.annotation)) print("➷", sig.return_annotation) check(foobar) def foobar2(a: int, b: str, c: Vector) -> tuple: pass check(foobar2) # exec returns None, but it defines foobar_g in the current namespace exec('def foobar_g(a: int, b: str, c: float = 3.2) -> tuple: pass') print(foobar_g.__annotations__) from typing import Mapping, Sequence class Employee(object): pass def notify_by_email(employees: Sequence[Employee], overrides: Mapping[str, str]) -> None: pass check(notify_by_email) from typing import List Vector = List[float] def scale(scalar: float, vector: Vector) -> Vector: return [scalar * num for num in vector] # typechecks; a list of floats qualifies as a Vector. 
new_vector = scale(2.0, [1.0, -4.2, 5.4]) def foo(a, b, *, c, d=10): pass sig = inspect.signature(foo) for param in sig.parameters.values(): if (param.kind == param.KEYWORD_ONLY and param.default is param.empty): print('Parameter:', param) help(param.annotation) from io import StringIO from mako.template import Template from mako.lookup import TemplateLookup from mako.runtime import Context # The contents within the ${} tag are evaluated by Python directly, so full expressions are OK: def render_template(file, ctx): mylookup = TemplateLookup(directories=['./'], output_encoding='utf-8', encoding_errors='replace') mytemplate = Template(filename='./templates/'+file, module_directory='/tmp/mako_modules', lookup=mylookup) mytemplate.render_context(ctx) return (buf.getvalue()) buf = StringIO() ctx = Context(buf, form_name="some_form", slots=["some_slot", "some_other_slot"]) print(render_template('custom_form_action.mako', ctx)) from typing import Dict, Text, Any, List, Union from rasa_core_sdk import ActionExecutionRejection from rasa_core_sdk import Tracker from rasa_core_sdk.events import SlotSet from rasa_core_sdk.executor import CollectingDispatcher from rasa_core_sdk.forms import FormAction, REQUESTED_SLOT def build_slots(func): # get the parameters from the function definition sig = inspect.signature(func) parameters = sig.parameters # ordered dict of parameters arg_keys = tuple(parameters.keys()) # parameter names for k, v in sig.parameters.items(): print('{k}: {a!r}, {t}'.format(k=k, a=v.annotation, t=formatannotation(v.annotation))) print("➷", sig.return_annotation) return func.__name__, list(parameters.keys()) def simple(a: int, b: str, c: Vector) -> tuple: pass form_name, slots = build_slots(simple) print(form_name, str(slots)) buf = StringIO() ctx = Context(buf, form_name=form_name, slots=slots) clsdef = render_template('custom_form_action.mako', ctx) print(clsdef) exec(clsdef) exec("form=CustomFormAction()") print(form.required_slots(None)) ```
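The signature-inspection pattern used by `check` and `build_slots` above reduces to a few lines of the standard library's `inspect` module. A self-contained sketch (the function `simple` here is just an illustrative example):

```python
import inspect
from typing import List

Vector = List[float]

def simple(a: int, b: str, c: Vector) -> tuple:
    pass

sig = inspect.signature(simple)

# Parameter names in declaration order
names = list(sig.parameters)

# Map each parameter name to its annotation
annotations = {k: v.annotation for k, v in sig.parameters.items()}
```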
# Working with data Overview of today's learning goals: 1. Introduce pandas 2. Load data files 3. Clean and process data 4. Select, filter, and slice data from a dataset 5. Descriptive stats: central tendency and dispersion 6. Merging and concatenating datasets 7. Grouping and summarizing data ``` # something new: import these packages to work with data import numpy as np import pandas as pd ``` ## 1. Introducing pandas https://pandas.pydata.org/ ``` # review: a python list is a built-in data type my_list = [8, 6, 4, 2] my_list # a numpy array is like a list # but faster, more compact, and lots more features my_array = np.array(my_list) my_array ``` pandas has two primary data structures we will work with: Series and DataFrames ### 1a. pandas Series ``` # a pandas series is based on a numpy array: it's fast, compact, and has more functionality # perhaps most notably, it has an index which allows you to work naturally with tabular data my_series = pd.Series(my_list) my_series # look at a list-representation of the index my_series.index.tolist() # look at the series' values themselves my_series.values # what's the data type of the series' values? type(my_series.values) # what's the data type of the individual values themselves? my_series.dtype ``` ### 1b. 
pandas DataFrames ``` # a dict can contain multiple lists and label them my_dict = {"hh_income": [75125, 22075, 31950, 115400], "home_value": [525000, 275000, 395000, 985000]} my_dict # a pandas dataframe can contain one or more columns # each column is a pandas series # each row is a pandas series # you can create a dataframe by passing in a list, array, series, or dict df = pd.DataFrame(my_dict) df # the row labels in the index are accessed by the .index attribute of the DataFrame object df.index.tolist() # the column labels are accessed by the .columns attribute of the DataFrame object df.columns # the data values are accessed by the .values attribute of the DataFrame object # this is a numpy (two-dimensional) array df.values ``` ## 2. Loading data In practice, you'll work with data by loading a dataset file into pandas. CSV is the most common format. But pandas can also ingest tab-separated data, JSON, and proprietary file formats like Excel .xlsx files, Stata, SAS, and SPSS. Below, notice what pandas's `read_csv` function does: 1. recognize the header row and get its variable names 1. read all the rows and construct a pandas DataFrame (an assembly of pandas Series rows and columns) 1. construct a unique index, beginning with zero 1. infer the data type of each variable (ie, column) ``` # load a data file # note the relative filepath! where is this file located? # note the dtype argument! always specify that fips codes are strings, otherwise pandas guesses int df = pd.read_csv("../../data/census_tracts_data_la.csv", dtype={"GEOID10": str}) # dataframe shape as rows, columns df.shape # or use len to just see the number of rows len(df) # view the dataframe's "head" df.head() # view the dataframe's "tail" df.tail() ``` #### What are these data? I gathered them from the census bureau (2017 5-year tract-level ACS) for you, then gave them meaningful variable names. 
It's a set of socioeconomic variables across all LA County census tracts:

|column|description|
|------|-----------|
|total_pop|Estimate!!SEX AND AGE!!Total population|
|median_age|Estimate!!SEX AND AGE!!Total population!!Median age (years)|
|pct_hispanic|Percent Estimate!!HISPANIC OR LATINO AND RACE!!Total population!!Hispanic or Latino (of any race)|
|pct_white|Percent Estimate!!HISPANIC OR LATINO AND RACE!!Total population!!Not Hispanic or Latino!!White alone|
|pct_black|Percent Estimate!!HISPANIC OR LATINO AND RACE!!Total population!!Not Hispanic or Latino!!Black or African American alone|
|pct_asian|Estimate!!HISPANIC OR LATINO AND RACE!!Total population!!Not Hispanic or Latino!!Asian alone|
|pct_male|Percent Estimate!!SEX AND AGE!!Total population!!Male|
|pct_single_family_home|Percent Estimate!!UNITS IN STRUCTURE!!Total housing units!!1-unit detached|
|med_home_value|Estimate!!VALUE!!Owner-occupied units!!Median (dollars)|
|med_rooms_per_home|Estimate!!ROOMS!!Total housing units!!Median rooms|
|pct_built_before_1940|Percent Estimate!!YEAR STRUCTURE BUILT!!Total housing units!!Built 1939 or earlier|
|pct_renting|Percent Estimate!!HOUSING TENURE!!Occupied housing units!!Renter-occupied|
|rental_vacancy_rate|Estimate!!HOUSING OCCUPANCY!!Total housing units!!Rental vacancy rate|
|avg_renter_household_size|Estimate!!HOUSING TENURE!!Occupied housing units!!Average household size of renter-occupied unit|
|med_gross_rent|Estimate!!GROSS RENT!!Occupied units paying rent!!Median (dollars)|
|med_household_income|Estimate!!INCOME AND BENEFITS (IN 2017 INFLATION-ADJUSTED DOLLARS)!!Total households!!Median household income (dollars)|
|mean_commute_time|Estimate!!COMMUTING TO WORK!!Workers 16 years and over!!Mean travel time to work (minutes)|
|pct_commute_drive_alone|Percent Estimate!!COMMUTING TO WORK!!Workers 16 years and over!!Car truck or van drove alone|
|pct_below_poverty|Percent Estimate!!PERCENTAGE OF FAMILIES AND PEOPLE WHOSE INCOME IN THE PAST 12 MONTHS IS BELOW THE POVERTY LEVEL!!All people|
|pct_college_grad_student|Percent Estimate!!SCHOOL ENROLLMENT!!Population 3 years and over enrolled in school!!College or graduate school|
|pct_same_residence_year_ago|Percent Estimate!!RESIDENCE 1 YEAR AGO!!Population 1 year and over!!Same house|
|pct_bachelors_degree|Percent Estimate!!EDUCATIONAL ATTAINMENT!!Population 25 years and over!!Percent bachelor's degree or higher|
|pct_english_only|Percent Estimate!!LANGUAGE SPOKEN AT HOME!!Population 5 years and over!!English only|
|pct_foreign_born|Percent Estimate!!PLACE OF BIRTH!!Total population!!Foreign born|

## 3. Clean and process data ``` df.head(10) # data types of the columns df.dtypes # access a single column like df['col_name'] df["med_gross_rent"].head(10) # pandas uses numpy's nan to represent null (missing) values print(np.nan) print(type(np.nan)) # convert rent from string -> float df["med_gross_rent"].astype(float) ``` Didn't work! We need to clean up the stray alphabetical characters to get a numerical value. You can do string operations on pandas Series to clean up their values ``` # do a string replace and assign back to that column, then change type to float df["med_gross_rent"] = df["med_gross_rent"].str.replace(" (USD)", "", regex=False) df["med_gross_rent"] = df["med_gross_rent"].astype(float) # now clean up the income column then convert it from string -> float # do a string replace and assign back to that column df["med_household_income"] = df["med_household_income"].str.replace("$", "", regex=False) df["med_household_income"] = df["med_household_income"].astype(float) # convert rent from float -> int df["med_gross_rent"].astype(int) ``` You cannot store null values as type `int`, only as type `float`. You have three basic options: 1. Keep the column as float to retain the nulls - they are often important! 2. Drop all the rows that contain nulls if we need non-null data for our analysis 3. 
Fill in all the nulls with another value if we know a reliable default value ``` df.shape # drop rows that contain nulls # this doesn't save the result, because we didn't reassign! (in reality, want to keep the nulls here) df.dropna(subset=["med_gross_rent"]).shape # fill in rows that contain nulls # this doesn't save the result, because we didn't reassign! (in reality, want to keep the nulls here) df["med_gross_rent"].fillna(value=0).head(10) # more string operations: slice state fips and county fips out of the tract fips string # assign them to new dataframe columns df["state"] = df["GEOID10"].str.slice(0, 2) df["county"] = df["GEOID10"].str.slice(2, 5) df.head() # dict that maps state fips code -> state name fips = {"04": "Arizona", "06": "California", "41": "Oregon"} # replace fips code with state name with the replace() method df["state"] = df["state"].replace(fips) # you can rename columns with the rename() method # remember to reassign to save the result df = df.rename(columns={"state": "state_name"}) # you can drop columns you don't need with the drop() method # remember to reassign to save the result df = df.drop(columns=["county"]) # inspect the cleaned-up dataframe df.head() # save it to disk as a "clean" copy # note the relative filepath df.to_csv("../../data/census_tracts_data_la-clean.csv", index=False, encoding="utf-8") ``` ## 4. Selecting and slicing data from a DataFrame ``` # CHEAT SHEET OF COMMON TASKS # Operation Syntax Result # ------------------------------------------------------------ # Select column by name df[col] Series # Select columns by name df[col_list] DataFrame # Select row by label df.loc[label] Series # Select row by integer location df.iloc[loc] Series # Slice rows by label df.loc[a:c] DataFrame # Select rows by boolean vector df[mask] DataFrame ``` ### 4a. Select DataFrame's column(s) by name We saw some of this a minute ago. Let's look in a bit more detail and break down what's happening. 
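Before breaking each operation down, the whole cheat sheet can be exercised on a tiny synthetic frame (the column names and numbers here are invented for illustration):

```python
import pandas as pd

toy = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

col = toy["a"]               # select column by name -> Series
cols = toy[["a", "b"]]       # select columns by name -> DataFrame
row = toy.loc[1]             # select row by label -> Series
sliced = toy.loc[0:1]        # slice rows by label (inclusive on both ends) -> DataFrame
masked = toy[toy["b"] > 15]  # select rows by boolean vector -> DataFrame
```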
```
# select a single column by column name
# this is a pandas series
df["total_pop"]

# select multiple columns by a list of column names
# this is a pandas dataframe that is a subset of the original
df[["total_pop", "median_age"]]

# create a new column by assigning df['new_col'] to some set of values
# you can do math operations on any numeric columns
df["monthly_income"] = df["med_household_income"] / 12
df["rent_burden"] = df["med_gross_rent"] / df["monthly_income"]

# inspect the results
df[["med_household_income", "monthly_income", "med_gross_rent", "rent_burden"]].head()
```

### 4b. Select row(s) by label

```
# use .loc to select by row label
# returns the row as a series whose index is the dataframe column names
df.loc[0]

# use .loc to select single value by row label, column name
df.loc[0, "pct_below_poverty"]

# slice of rows from label 5 to label 7, inclusive
# this returns a pandas dataframe
df.loc[5:7]

# slice of rows from label 1 to label 3, inclusive
# slice of columns from pct_hispanic to pct_asian, inclusive
df.loc[1:3, "pct_hispanic":"pct_asian"]

# subset of rows with labels in list
# subset of columns with names in list
df.loc[[1, 3], ["pct_hispanic", "pct_asian"]]

# you can use a column of unique identifiers as the index
# fips codes uniquely identify each row (but verify!)
df = df.set_index("GEOID10")
df.index.is_unique

df.head()

# .loc works by label, not by position in the dataframe
# this now raises a KeyError, because 0 is no longer a row label
df.loc[0]

# the index now contains fips codes, so you have to use .loc accordingly to select by row label
df.loc["06037137201"]
```

### 4c. Select by (integer) position

```
# get the row in the zero-th position in the dataframe
df.iloc[0]

# you can slice as well
# note, while .loc[] is inclusive, .iloc[] is not
# get the rows from position 0 up to but not including position 3 (ie, rows 0, 1, and 2)
df.iloc[0:3]

# get the value from the row in position 3 and the column in position 2 (zero-indexed)
df.iloc[3, 2]
```

### 4d. Select/filter by value

You can subset or filter a dataframe based on the values in its rows/columns.

```
# filter the dataframe by rows with 30%+ rent burden
df[df["rent_burden"] > 0.3]

# what exactly did that do? let's break it out.
df["rent_burden"] > 0.3

# essentially a true/false mask that filters by value
mask = df["rent_burden"] > 0.3
df[mask]

# you can chain multiple conditions together
# pandas logical operators are: | for or, & for and, ~ for not
# these must be grouped by using parentheses due to order of operations
# question: which tracts are both rent-burdened and majority-Black?
mask = (df["rent_burden"] > 0.3) & (df["pct_black"] > 50)
df[mask].shape

# which tracts are both rent-burdened and either majority-Black or majority-Hispanic?
mask1 = df["rent_burden"] > 0.3
mask2 = df["pct_black"] > 50
mask3 = df["pct_hispanic"] > 50
mask = mask1 & (mask2 | mask3)
df[mask].shape

# see the mask
mask

# ~ means not... it essentially flips trues to falses and vice-versa
~mask

# which rows are in a state that begins with "Cal"?
# all of them... because we're looking only at LA county
mask = df["state_name"].str.startswith("Cal")
df[mask].shape

# now it's your turn
# create a new subset dataframe containing all the rows with median home values above $800,000 and percent-White above 60%
# how many rows did you get?
```

## 5. Descriptive stats

```
# what share of majority-White tracts are rent burdened?
mask1 = df["pct_white"] > 50
mask2 = mask1 & (df["rent_burden"] > 0.3)
len(df[mask2]) / len(df[mask1])

# what share of majority-Hispanic tracts are rent burdened?
mask1 = df["pct_hispanic"] > 50
mask2 = mask1 & (df["rent_burden"] > 0.3)
len(df[mask2]) / len(df[mask1])

# you can sort the dataframe by values in some column
df.sort_values("pct_below_poverty", ascending=False).dropna().head()

# use the describe() method to pull basic descriptive stats for some column
df["med_household_income"].describe()
```

#### Or if you need the value of a single stat, call it directly

Key measures of central tendency: mean and median

```
# the mean, or "average" value
df["med_household_income"].mean()

# the median, or "typical" (ie, 50th percentile) value
df["med_household_income"].median()

# now it's your turn
# create a new subset dataframe containing rows with median household income above the (tract) average in LA county
# what is the median median home value across this subset of tracts?
```

Key measures of dispersion or variability: range, IQR, variance, standard deviation

```
df["med_household_income"].min()

# which tract has the lowest median household income?
df["med_household_income"].idxmin()

df["med_household_income"].max()

# what is the 90th-percentile value?
df["med_household_income"].quantile(0.90)

# calculate the distribution's range
df["med_household_income"].max() - df["med_household_income"].min()

# calculate its IQR
df["med_household_income"].quantile(0.75) - df["med_household_income"].quantile(0.25)

# calculate its variance... rarely used in practice
df["med_household_income"].var()

# calculate its standard deviation
# this is the sqrt of the variance... putting it into same units as the variable itself
df["med_household_income"].std()

# now it's your turn
# what's the average (mean) median home value across majority-White tracts? And across majority-Black tracts?
```

## 6. Merge and concatenate

### 6a. Merging DataFrames

```
# create a subset dataframe with only race/ethnicity variables
race_cols = ["pct_asian", "pct_black", "pct_hispanic", "pct_white"]
df_race = df[race_cols]
df_race.head()

# create a subset dataframe with only economic variables
econ_cols = ["med_home_value", "med_household_income"]
df_econ = df[econ_cols].sort_values("med_household_income")
df_econ.head()

# merge them together, aligning rows based on their labels in the index
df_merged = pd.merge(left=df_econ, right=df_race, how="inner", left_index=True, right_index=True)
df_merged.head()

# reset df_econ's index
df_econ = df_econ.reset_index()
df_econ.head()

# merge them together, aligning rows based on their labels in the index
# doesn't work! their indexes do not share any labels to match/align the rows
df_merged = pd.merge(left=df_econ, right=df_race, how="inner", left_index=True, right_index=True)
df_merged

# now it's your turn
# change the "how" argument: what happens if you try an "outer" join? or a "left" join? or a "right" join?

# instead merge where df_race index matches df_econ GEOID10 column
df_merged = pd.merge(left=df_econ, right=df_race, how="inner", left_on="GEOID10", right_index=True)
df_merged.head()
```

### 6b. Concatenating DataFrames

```
# load the orange county tracts data
oc = pd.read_csv("../../data/census_tracts_data_oc.csv", dtype={"GEOID10": str})
oc = oc.set_index("GEOID10")
oc.shape

oc.head()

# merging joins data together aligned by the index, but concatenating just smushes it together along some axis
df_all = pd.concat([df, oc], sort=False)
df_all
```

## 7. Grouping and summarizing

```
# extract county fips from index then replace with friendly name
df_all["county"] = df_all.index.str.slice(2, 5)
df_all["county"] = df_all["county"].replace({"037": "LA", "059": "OC"})
df_all["county"]

# group the rows by county
counties = df_all.groupby("county")

# what is the median pct_white across the tracts in each county?
counties["pct_white"].median()

# look at several columns' medians by county
counties[["pct_bachelors_degree", "pct_foreign_born", "pct_commute_drive_alone"]].median()

# now it's your turn
# group the tracts by county and find the highest/lowest tract percentages that speak English-only
```
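One way to sketch the group-then-aggregate pattern behind this exercise, using a tiny made-up frame rather than the actual census data (the county labels and percentage values below are invented for illustration):

```python
import pandas as pd

# toy stand-in for df_all: two counties with a few tracts each
toy = pd.DataFrame({
    "county": ["LA", "LA", "OC", "OC", "OC"],
    "pct_english_only": [40.0, 55.0, 60.0, 72.0, 48.0],
})

# group by county, then get each group's lowest and highest value in one call
extremes = toy.groupby("county")["pct_english_only"].agg(["min", "max"])
print(extremes)
```

Passing a list of aggregation names to `agg` returns one column per aggregation, so the min and max come back together in a single small dataframe indexed by county.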
##### Copyright 2019 Google LLC

```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# Online Prediction with scikit-learn on AI Platform

This notebook uses the [Census Income Data Set](https://archive.ics.uci.edu/ml/datasets/Census+Income) to create a simple model, train the model, upload the model to AI Platform, and lastly use the model to make predictions.

# How to bring your model to AI Platform

Getting your model ready for predictions can be done in 5 steps:

1. Save your model to a file
1. Upload the saved model to [Google Cloud Storage](https://cloud.google.com/storage)
1. Create a model resource on AI Platform
1. Create a model version (linking your scikit-learn model)
1. Make an online prediction

# Prerequisites

Before you jump in, let’s cover some of the different tools you’ll be using to get online prediction up and running on AI Platform.

[Google Cloud Platform](https://cloud.google.com/) lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.

[AI Platform](https://cloud.google.com/ml-engine/) is a managed service that enables you to easily build machine learning models that work on any type of data, of any size.

[Google Cloud Storage](https://cloud.google.com/storage/) (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.
[Cloud SDK](https://cloud.google.com/sdk/) is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is [installed](https://cloud.google.com/sdk/downloads) in the same environment as your Jupyter kernel.

# Part 0: Setup

* [Create a project on GCP](https://cloud.google.com/resource-manager/docs/creating-managing-projects)
* [Create a Google Cloud Storage Bucket](https://cloud.google.com/storage/docs/quickstart-console)
* [Enable AI Platform Training and Prediction and Compute Engine APIs](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component&_ga=2.217405014.1312742076.1516128282-1417583630.1516128282)
* [Install Cloud SDK](https://cloud.google.com/sdk/downloads)
* [Install scikit-learn](http://scikit-learn.org/stable/install.html)
* [Install NumPy](https://docs.scipy.org/doc/numpy/user/install.html)
* [Install pandas](https://pandas.pydata.org/pandas-docs/stable/install.html)
* [Install Google API Python Client](https://github.com/google/google-api-python-client)

These variables will be needed for the following steps.

**Replace:**

* `PROJECT_ID <YOUR_PROJECT_ID>` - with your project's id. Use the PROJECT_ID that matches your Google Cloud Platform project.
* `BUCKET_NAME <YOUR_BUCKET_NAME>` - with the bucket id you created above.
* `MODEL_NAME <YOUR_MODEL_NAME>` - with your model name, such as '`census`'
* `VERSION_NAME <YOUR_VERSION_NAME>` - with your version name, such as '`v1`'
* `REGION <REGION>` - [select a region](https://cloud.google.com/ml-engine/docs/tensorflow/regions#available_regions) or use the default '`us-central1`'. The region is where the model will be deployed.
```
%env PROJECT_ID PROJECT_ID
%env BUCKET_NAME BUCKET_NAME
%env MODEL_NAME census
%env VERSION_NAME v1
%env REGION us-central1
```

## Download the data

The [Census Income Data Set](https://archive.ics.uci.edu/ml/datasets/Census+Income) that this sample uses for training is hosted by the [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/).

* Training file is `adult.data`
* Evaluation file is `adult.test`

### Disclaimer

This dataset is provided by a third party. Google provides no representation, warranty, or other guarantees about the validity or any other aspects of this dataset.

```
# Create a directory to hold the data
! mkdir census_data

# Download the data
! curl https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data --output census_data/adult.data
! curl https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test --output census_data/adult.test
```

# Part 1: Train/Save the model

First, the data is loaded into a pandas DataFrame that can be used by scikit-learn. Then a simple model is created and fit against the training data. Lastly, sklearn's built-in version of joblib is used to save the model to a file that can be uploaded to AI Platform.
```
import googleapiclient.discovery
import json
import numpy as np
import os
import pandas as pd
import pickle
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer

# Define the format of your input data including unused columns
# (These are the columns from the census data files)
COLUMNS = (
    'age',
    'workclass',
    'fnlwgt',
    'education',
    'education-num',
    'marital-status',
    'occupation',
    'relationship',
    'race',
    'sex',
    'capital-gain',
    'capital-loss',
    'hours-per-week',
    'native-country',
    'income-level'
)

# Categorical columns are columns that need to be turned into
# a numerical value to be used by scikit-learn
CATEGORICAL_COLUMNS = (
    'workclass',
    'education',
    'marital-status',
    'occupation',
    'relationship',
    'race',
    'sex',
    'native-country'
)

# Load the training census dataset
with open('./census_data/adult.data', 'r') as train_data:
    raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)

# Remove the column we are trying to predict ('income-level') from our features list
# Convert the Dataframe to a lists of lists
train_features = raw_training_data.drop('income-level', axis=1).values.tolist()
# Create our training labels list, convert the Dataframe to a lists of lists
train_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()

# Load the test census dataset
with open('./census_data/adult.test', 'r') as test_data:
    raw_testing_data = pd.read_csv(test_data, names=COLUMNS, skiprows=1)

# Remove the column we are trying to predict ('income-level') from our features list
# Convert the Dataframe to a lists of lists
test_features = raw_testing_data.drop('income-level', axis=1).values.tolist()
# Create our test labels list, convert the Dataframe to a lists of lists
# (note the trailing period in the test file's labels)
test_labels = (raw_testing_data['income-level'] == ' >50K.').values.tolist()

# Since the census data set has categorical features, we need to convert
# them to numerical values. We'll use a list of pipelines to convert each
# categorical column and then use FeatureUnion to combine them before calling
# the RandomForestClassifier.
categorical_pipelines = []

# Each categorical column needs to be extracted individually and converted to a numerical value.
# To do this, each categorical column will use a pipeline that extracts one feature column via
# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.
# A scores array (created below) will select and extract the feature column. The scores array is
# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.
for i, col in enumerate(COLUMNS[:-1]):
    if col in CATEGORICAL_COLUMNS:
        # Create a scores array to get the individual categorical column.
        # Example:
        #  data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical',
        #          'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']
        #  scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        #
        #  Returns: [['State-gov']]
        # Build the scores array.
        scores = [0] * len(COLUMNS[:-1])
        # This column is the categorical column we want to extract.
        scores[i] = 1
        skb = SelectKBest(k=1)
        skb.scores_ = scores
        # Convert the categorical column to a numerical value
        lbn = LabelBinarizer()
        r = skb.transform(train_features)
        lbn.fit(r)
        # Create the pipeline to extract the categorical feature
        categorical_pipelines.append(
            ('categorical-{}'.format(i), Pipeline([
                ('SKB-{}'.format(i), skb),
                ('LBN-{}'.format(i), lbn)])))

# Create pipeline to extract the numerical features
skb = SelectKBest(k=6)
# From COLUMNS use the features that are numerical
skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]
categorical_pipelines.append(('numerical', skb))

# Combine all the features using FeatureUnion
preprocess = FeatureUnion(categorical_pipelines)

# Create the classifier
classifier = RandomForestClassifier()

# Transform the features and fit them to the classifier
classifier.fit(preprocess.transform(train_features), train_labels)

# Create the overall model as a single pipeline
pipeline = Pipeline([
    ('union', preprocess),
    ('classifier', classifier)
])

# Export the model to a file
joblib.dump(pipeline, 'model.joblib')

print('Model trained and saved')
```

# Part 2: Upload the model

Next, you'll need to upload the model to your project's storage bucket in GCS. To use your model with AI Platform, it needs to be uploaded to Google Cloud Storage (GCS). This step takes your local ‘model.joblib’ file and uploads it to GCS via the Cloud SDK using gsutil.

Before continuing, make sure you're [properly authenticated](https://cloud.google.com/sdk/gcloud/reference/auth/) and have [access to the bucket](https://cloud.google.com/storage/docs/access-control/). This next command sets your project to the one specified above.

Note: If you get an error below, make sure the Cloud SDK is installed in the kernel's environment.

```
! gcloud config set project $PROJECT_ID
```

Note: The exact file name of the exported model you upload to GCS is important!
Your model must be named “model.joblib”, “model.pkl”, or “model.bst” with respect to the library you used to export it. This restriction ensures that the model will be safely reconstructed later by using the same technique for import as was used during export.

```
! gsutil cp ./model.joblib gs://$BUCKET_NAME/model.joblib
```

# Part 3: Create a model resource

AI Platform organizes your trained models using model and version resources. An AI Platform model is a container for the versions of your machine learning model. For more information on model resources and model versions look [here](https://cloud.google.com/ml-engine/docs/deploying-models#creating_a_model_version).

At this step, you create a container that you can use to hold several different versions of your actual model.

```
! gcloud ml-engine models create $MODEL_NAME --regions $REGION
```

# Part 4: Create a model version

Now it’s time to get your model online and ready for predictions. The model version requires a few components as specified [here](https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions#Version).

* __name__ - The name specified for the version when it was created. This will be the `VERSION_NAME` variable you declared at the beginning.
* __model__ - The name of the model container we created in Part 3. This is the `MODEL_NAME` variable you declared at the beginning.
* __deployment Uri__ - The Google Cloud Storage location of the trained model used to create the version. This is the bucket that you uploaded the model to with your `BUCKET_NAME`
* __runtime version__ - [Select Google Cloud ML runtime version](https://cloud.google.com/ml-engine/docs/tensorflow/runtime-version-list) to use for this deployment. This is set to 1.4
* __framework__ - The framework specifies if you are using: `TENSORFLOW`, `SCIKIT_LEARN`, `XGBOOST`. This is set to `SCIKIT_LEARN`
* __pythonVersion__ - This specifies whether you’re using Python 2.7 or Python 3.5.
The default value is set to `“2.7”`; if you are using Python 3.5, set the value to `“3.5”`.

Note: If you require a feature of scikit-learn that isn’t available in the publicly released version yet, you can specify “runtimeVersion”: “HEAD” instead, and that would get the latest version of scikit-learn available from the github repo. Otherwise the following versions will be used:

* scikit-learn: 0.19.0

First, we need to create a YAML file to configure our model version.

__REPLACE:__ `BUCKET_NAME` below with the bucket name you specified earlier.

```
%%writefile ./config.yaml
deploymentUri: "gs://BUCKET_NAME/"
runtimeVersion: '1.4'
framework: "SCIKIT_LEARN"
pythonVersion: "3.5"
```

Use the created YAML file to create a model version.

Note: It can take several minutes for your model to be available.

```
! gcloud ml-engine versions create $VERSION_NAME \
    --model $MODEL_NAME \
    --config config.yaml
```

# Part 5: Make an online prediction

It’s time to make an online prediction with your newly deployed model. Before you begin, you'll need to take some of the test data and prepare it, so that the test data can be used by the deployed model.

```
# Get one person that makes <=50K and one that makes >50K to test our model.
print('Show a person that makes <=50K:')
print('\tFeatures: {0} --> Label: {1}\n'.format(test_features[0], test_labels[0]))

with open('less_than_50K.json', 'w') as outfile:
    json.dump(test_features[0], outfile)

print('Show a person that makes >50K:')
print('\tFeatures: {0} --> Label: {1}'.format(test_features[3], test_labels[3]))

with open('more_than_50K.json', 'w') as outfile:
    json.dump(test_features[3], outfile)
```

## Use gcloud to make online predictions

Use the two people (as seen in the table) gathered in the previous step for the gcloud predictions.
| **Person** | age | workclass | fnlwgt | education | education-num | marital-status | occupation |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| **1** | 25 | Private | 226802 | 11th | 7 | Never-married | Machine-op-inspct |
| **2** | 44 | Private | 160323 | Some-college | 10 | Married-civ-spouse | Machine-op-inspct |

| **Person** | relationship | race | sex | capital-gain | capital-loss | hours-per-week | native-country | (Label) income-level |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| **1** | Own-child | Black | Male | 0 | 0 | 40 | United-States | False (<=50K) |
| **2** | Husband | Black | Male | 7688 | 0 | 40 | United-States | True (>50K) |

Test the model with an online prediction using the data of a person who makes <=50K.

Note: If you see an error, the model from Part 4 may not be created yet as it takes several minutes for a new model version to be created.

```
! gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances less_than_50K.json
```

Test the model with an online prediction using the data of a person who makes >50K.

```
! gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances more_than_50K.json
```

## Use Python to make online predictions

Test the model with the entire test set and print out some of the results.
Note: If running the notebook server on Compute Engine, make sure to ["allow full access to all Cloud APIs"](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes).

```
import googleapiclient.discovery
import os
import pandas as pd

PROJECT_ID = os.environ['PROJECT_ID']
VERSION_NAME = os.environ['VERSION_NAME']
MODEL_NAME = os.environ['MODEL_NAME']

service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format(PROJECT_ID, MODEL_NAME)
name += '/versions/{}'.format(VERSION_NAME)

# Due to the size of the data, it needs to be split in 2
first_half = test_features[:int(len(test_features)/2)]
second_half = test_features[int(len(test_features)/2):]

complete_results = []
for data in [first_half, second_half]:
    responses = service.projects().predict(
        name=name,
        body={'instances': data}
    ).execute()

    if 'error' in responses:
        print(responses['error'])
    else:
        complete_results.extend(responses['predictions'])

# Print the first 10 responses
for i, response in enumerate(complete_results[:10]):
    print('Prediction: {}\tLabel: {}'.format(response, test_labels[i]))
```

# [Optional] Part 6: Verify Results

Use a confusion matrix to create a visualization of the online predicted results from AI Platform.

```
actual = pd.Series(test_labels, name='actual')
online = pd.Series(complete_results, name='online')

pd.crosstab(actual, online)
```

Use a confusion matrix to create a visualization of the predicted results from the local model. These results should be identical to the results above.
```
local_results = pipeline.predict(test_features)
local = pd.Series(local_results, name='local')

pd.crosstab(actual, local)
```

Directly compare the two results

```
identical = 0
different = 0

for i in range(len(complete_results)):
    if complete_results[i] == local_results[i]:
        identical += 1
    else:
        different += 1

print('identical: {}, different: {}'.format(identical, different))
```

If all results are identical, it means you've successfully uploaded your local model to AI Platform and performed online predictions correctly.
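The counting loop above can also be expressed in vectorized form. Here is a self-contained sketch on made-up prediction lists (not the actual AI Platform output), just to show the element-wise comparison idiom:

```python
import numpy as np

# made-up stand-ins for the online and local prediction lists
complete_results = [True, False, True, True, False]
local_results = [True, False, False, True, False]

# element-wise comparison yields a boolean array we can count directly
matches = np.array(complete_results) == np.array(local_results)
identical = int(matches.sum())
different = int((~matches).sum())

print('identical: {}, different: {}'.format(identical, different))
# prints: identical: 4, different: 1
```

Summing a boolean array counts its True entries, which replaces the explicit loop and index bookkeeping.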
# Programming Exercise 5:
# Regularized Linear Regression and Bias vs Variance

## Introduction

In this exercise, you will implement regularized linear regression and use it to study models with different bias-variance properties. Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.

All the information you need for solving this assignment is in this notebook, and all the code you will be implementing will take place within this notebook. The assignment can be promptly submitted to the Coursera grader directly from this notebook (code and instructions are included below).

Before we begin with the exercises, we need to import all libraries required for this programming exercise. Throughout the course, we will be using [`numpy`](http://www.numpy.org/) for all arrays and matrix operations, [`matplotlib`](https://matplotlib.org/) for plotting, and [`scipy`](https://docs.scipy.org/doc/scipy/reference/) for scientific and numerical computation functions and tools. You can find instructions on how to install required libraries in the README file in the [github repository](https://github.com/dibgerge/ml-coursera-python-assignments).

```
# used for manipulating directory paths
import os

# Scientific and vector computation for python
import numpy as np

# Plotting library
from matplotlib import pyplot

# Optimization module in scipy
from scipy import optimize

# will be used to load MATLAB mat datafile format
from scipy.io import loadmat

# library written for this exercise providing additional functions for assignment submission, and others
import utils

# define the submission/grader object for this exercise
grader = utils.Grader()

# tells matplotlib to embed plots within the notebook
%matplotlib inline
```

## Submission and Grading

After completing each part of the assignment, be sure to submit your solutions to the grader.
The following is a breakdown of how each part of this exercise is scored.

| Section | Part | Submitted Function | Points |
| :- | :- | :- | :-: |
| 1 | [Regularized Linear Regression Cost Function](#section1) | [`linearRegCostFunction`](#linearRegCostFunction) | 25 |
| 2 | [Regularized Linear Regression Gradient](#section2) | [`linearRegCostFunction`](#linearRegCostFunction) | 25 |
| 3 | [Learning Curve](#section3) | [`learningCurve`](#func2) | 20 |
| 4 | [Polynomial Feature Mapping](#section4) | [`polyFeatures`](#polyFeatures) | 10 |
| 5 | [Cross Validation Curve](#section5) | [`validationCurve`](#validationCurve) | 20 |
|   | Total Points | | 100 |

You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.

<div class="alert alert-block alert-warning">
At the end of each section in this notebook, we have a cell which contains code for submitting the solutions thus far to the grader. Execute the cell to see your score up to the current section. For all your work to be submitted properly, you must execute those cells at least once.
</div>

<a id="section1"></a>
## 1 Regularized Linear Regression

In the first half of the exercise, you will implement regularized linear regression to predict the amount of water flowing out of a dam using the change of water level in a reservoir. In the next half, you will go through some diagnostics of debugging learning algorithms and examine the effects of bias v.s. variance.

### 1.1 Visualizing the dataset

We will begin by visualizing the dataset containing historical records on the change in the water level, $x$, and the amount of water flowing out of the dam, $y$. This dataset is divided into three parts:

- A **training** set that your model will learn on: `X`, `y`
- A **cross validation** set for determining the regularization parameter: `Xval`, `yval`
- A **test** set for evaluating performance.
These are “unseen” examples which your model did not see during training: `Xtest`, `ytest`

Run the next cell to plot the training data. In the following parts, you will implement linear regression and use that to fit a straight line to the data and plot learning curves. Following that, you will implement polynomial regression to find a better fit to the data.

```
# Load from ex5data1.mat, where all variables will be stored in a dictionary
data = loadmat(os.path.join('Data', 'ex5data1.mat'))

# Extract train, test, validation data from dictionary
# and also convert y's from 2-D matrix (MATLAB format) to a numpy vector
X, y = data['X'], data['y'][:, 0]
Xtest, ytest = data['Xtest'], data['ytest'][:, 0]
Xval, yval = data['Xval'], data['yval'][:, 0]

# m = Number of examples
m = y.size

# Plot training data
pyplot.plot(X, y, 'ro', ms=10, mec='k', mew=1)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)');
```

### 1.2 Regularized linear regression cost function

Recall that regularized linear regression has the following cost function:

$$ J(\theta) = \frac{1}{2m} \left( \sum_{i=1}^m \left( h_\theta\left( x^{(i)} \right) - y^{(i)} \right)^2 \right) + \frac{\lambda}{2m} \left( \sum_{j=1}^n \theta_j^2 \right)$$

where $\lambda$ is a regularization parameter which controls the degree of regularization (thus helping prevent overfitting). The regularization term puts a penalty on the overall cost J. As the magnitudes of the model parameters $\theta_j$ increase, the penalty increases as well. Note that you should not regularize the $\theta_0$ term.

You should now complete the code in the function `linearRegCostFunction` in the next cell. Your task is to calculate the regularized linear regression cost function. If possible, try to vectorize your code and avoid writing loops.
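To make the two terms of the cost concrete, here is a standalone sketch of the vectorized computation on a tiny made-up dataset. It is not a drop-in solution for the graded function (it ignores the gradient and uses invented values); it only illustrates how the squared-error term and the regularization term map to numpy operations without any loops:

```python
import numpy as np

# tiny made-up dataset: 3 examples, bias column already prepended
X = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
y = np.array([2.0, 3.0, 4.0])
theta = np.array([1.0, 1.0])
lambda_ = 1.0
m = y.size

# hypothesis for all examples at once
h = X.dot(theta)

# squared-error term plus the regularization term (theta_0 is excluded)
J = np.sum((h - y) ** 2) / (2 * m) + (lambda_ / (2 * m)) * np.sum(theta[1:] ** 2)
```

Note how `theta[1:]` implements the rule that $\theta_0$ is not regularized.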
<a id="linearRegCostFunction"></a>

```
def linearRegCostFunction(X, y, theta, lambda_=0.0):
    """
    Compute cost and gradient for regularized linear regression
    with multiple variables. Computes the cost of using theta as
    the parameter for linear regression to fit the data points in X and y.

    Parameters
    ----------
    X : array_like
        The dataset. Matrix with shape (m x n + 1) where m is the
        total number of examples, and n is the number of features
        before adding the bias term.

    y : array_like
        The functions values at each datapoint. A vector of shape (m, ).

    theta : array_like
        The parameters for linear regression. A vector of shape (n+1,).

    lambda_ : float, optional
        The regularization parameter.

    Returns
    -------
    J : float
        The computed cost function.

    grad : array_like
        The value of the cost function gradient w.r.t theta.
        A vector of shape (n+1, ).

    Instructions
    ------------
    Compute the cost and gradient of regularized linear regression for
    a particular choice of theta.
    You should set J to the cost and grad to the gradient.
    """
    # Initialize some useful values
    m = y.size  # number of training examples

    # You need to return the following variables correctly
    J = 0
    grad = np.zeros(theta.shape)

    # ====================== YOUR CODE HERE ======================


    # ============================================================
    return J, grad
```

When you are finished, the next cell will run your cost function using `theta` initialized at `[1, 1]`. You should expect to see an output of 303.993.

```
theta = np.array([1, 1])
J, _ = linearRegCostFunction(np.concatenate([np.ones((m, 1)), X], axis=1), y, theta, 1)

print('Cost at theta = [1, 1]:\t %f ' % J)
print('(this value should be about 303.993192)\n')
```

After completing a part of the exercise, you can submit your solutions for grading by first adding the function you modified to the submission object, and then sending your function to Coursera for grading. The submission script will prompt you for your login e-mail and submission token.
You can obtain a submission token from the web page for the assignment. You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration. *Execute the following cell to grade your solution to the first part of this exercise.* ``` grader[1] = linearRegCostFunction grader.grade() ``` <a id="section2"></a> ### 1.3 Regularized linear regression gradient Correspondingly, the partial derivative of the cost function for regularized linear regression is defined as: $$ \begin{align} & \frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left(x^{(i)} \right) - y^{(i)} \right) x_j^{(i)} & \qquad \text{for } j = 0 \\ & \frac{\partial J(\theta)}{\partial \theta_j} = \left( \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left( x^{(i)} \right) - y^{(i)} \right) x_j^{(i)} \right) + \frac{\lambda}{m} \theta_j & \qquad \text{for } j \ge 1 \end{align} $$ In the function [`linearRegCostFunction`](#linearRegCostFunction) above, add code to calculate the gradient, returning it in the variable `grad`. <font color='red'><b>Do not forget to re-execute the cell containing this function to update the function's definition.</b></font> When you are finished, use the next cell to run your gradient function using theta initialized at `[1, 1]`. You should expect to see a gradient of `[-15.30, 598.250]`. ``` theta = np.array([1, 1]) J, grad = linearRegCostFunction(np.concatenate([np.ones((m, 1)), X], axis=1), y, theta, 1) print('Gradient at theta = [1, 1]: [{:.6f}, {:.6f}] '.format(*grad)) print(' (this value should be about [-15.303016, 598.250744])\n') ``` *You should now submit your solutions.* ``` grader[2] = linearRegCostFunction grader.grade() ``` ### Fitting linear regression Once your cost function and gradient are working correctly, the next cell will run the code in `trainLinearReg` (found in the module `utils.py`) to compute the optimal values of $\theta$. 
This training function uses `scipy`'s optimization module to minimize the cost function. In this part, we set the regularization parameter $\lambda$ to zero. Because our current implementation of linear regression is trying to fit a 2-dimensional $\theta$, regularization will not be incredibly helpful for a $\theta$ of such low dimension. In the later parts of the exercise, you will be using polynomial regression with regularization.

Finally, the code in the next cell should also plot the best fit line, which should look like the figure below.

![](Figures/linear_fit.png)

The best fit line tells us that the model is not a good fit to the data because the data has a non-linear pattern. While visualizing the best fit as shown is one possible way to debug your learning algorithm, it is not always easy to visualize the data and model. In the next section, you will implement a function to generate learning curves that can help you debug your learning algorithm even if it is not easy to visualize the data.

```
# add a column of ones for the y-intercept
X_aug = np.concatenate([np.ones((m, 1)), X], axis=1)
theta = utils.trainLinearReg(linearRegCostFunction, X_aug, y, lambda_=0)

# Plot fit over the data
pyplot.plot(X, y, 'ro', ms=10, mec='k', mew=1.5)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.plot(X, np.dot(X_aug, theta), '--', lw=2);
```

<a id="section3"></a>
## 2 Bias-variance

An important concept in machine learning is the bias-variance tradeoff. Models with high bias are not complex enough for the data and tend to underfit, while models with high variance overfit to the training data.

In this part of the exercise, you will plot training and test errors on a learning curve to diagnose bias-variance problems.

### 2.1 Learning Curves

You will now implement code to generate the learning curves that will be useful in debugging learning algorithms.
Recall that a learning curve plots training and cross validation error as a function of training set size. Your job is to fill in the function `learningCurve` in the next cell, so that it returns a vector of errors for the training set and cross validation set.

To plot the learning curve, we need a training and cross validation set error for different training set sizes. To obtain different training set sizes, you should use different subsets of the original training set `X`. Specifically, for a training set size of $i$, you should use the first $i$ examples (i.e., `X[:i, :]` and `y[:i]`).

You can use the `trainLinearReg` function (by calling `utils.trainLinearReg(...)`) to find the $\theta$ parameters. Note that the `lambda_` is passed as a parameter to the `learningCurve` function. After learning the $\theta$ parameters, you should compute the error on the training and cross validation sets. Recall that the training error for a dataset is defined as

$$ J_{\text{train}} = \frac{1}{2m} \left[ \sum_{i=1}^m \left(h_\theta \left( x^{(i)} \right) - y^{(i)} \right)^2 \right] $$

In particular, note that the training error does not include the regularization term. One way to compute the training error is to use your existing cost function and set $\lambda$ to 0 only when using it to compute the training error and cross validation error. When you are computing the training set error, make sure you compute it on the training subset (i.e., `X[:i, :]` and `y[:i]`) instead of the entire training set. However, for the cross validation error, you should compute it over the entire cross validation set. You should store the computed errors in the vectors `error_train` and `error_val`.

<a id="func2"></a>

```
def learningCurve(X, y, Xval, yval, lambda_=0):
    """
    Generates the train and cross validation set errors needed to plot a
    learning curve. Returns the train and cross validation set errors for
    a learning curve.
    In this function, you will compute the train and test errors for
    dataset sizes from 1 up to m. In practice, when working with larger
    datasets, you might want to do this in larger intervals.

    Parameters
    ----------
    X : array_like
        The training dataset. Matrix with shape (m x n + 1) where m is the
        total number of examples, and n is the number of features
        before adding the bias term.

    y : array_like
        The functions values at each training datapoint. A vector of
        shape (m, ).

    Xval : array_like
        The validation dataset. Matrix with shape (m_val x n + 1) where
        m_val is the total number of validation examples, and n is the
        number of features before adding the bias term.

    yval : array_like
        The functions values at each validation datapoint. A vector of
        shape (m_val, ).

    lambda_ : float, optional
        The regularization parameter.

    Returns
    -------
    error_train : array_like
        A vector of shape m. error_train[i] contains the training error for
        i examples.

    error_val : array_like
        A vector of shape m. error_val[i] contains the validation error for
        i training examples.

    Instructions
    ------------
    Fill in this function to return training errors in error_train and the
    cross validation errors in error_val. i.e., error_train[i] and
    error_val[i] should give you the errors obtained after training on i examples.

    Notes
    -----
    - You should evaluate the training error on the first i training
      examples (i.e., X[:i, :] and y[:i]).

      For the cross-validation error, you should instead evaluate on
      the _entire_ cross validation set (Xval and yval).

    - If you are using your cost function (linearRegCostFunction) to compute
      the training and cross validation error, you should call the function with
      the lambda argument set to 0. Do note that you will still need to use
      lambda when running the training to obtain the theta parameters.
    Hint
    ----
    You can loop over the examples with the following:

        for i in range(1, m+1):
            # Compute train/cross validation errors using training examples
            # X[:i, :] and y[:i], storing the result in
            # error_train[i-1] and error_val[i-1]
            ....
    """
    # Number of training examples
    m = y.size

    # You need to return these values correctly
    error_train = np.zeros(m)
    error_val = np.zeros(m)

    # ====================== YOUR CODE HERE ======================


    # ============================================================
    return error_train, error_val
```

When you are finished implementing the function `learningCurve`, executing the next cell prints the learning curves and produces a plot similar to the figure below.

![](Figures/learning_curve.png)

In the learning curve figure, you can observe that both the train error and cross validation error are high when the number of training examples is increased. This reflects a high bias problem in the model - the linear regression model is too simple and is unable to fit our dataset well. In the next section, you will implement polynomial regression to fit a better model for this dataset.
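The loop described in the hint might look like the following self-contained sketch. A closed-form least-squares solve stands in for `utils.trainLinearReg` (with `lambda_ = 0`) so the block can run on its own, and all `_demo` names are invented for this illustration; the graded function must call the course's trainer instead.

```python
import numpy as np

def cost_demo(X, y, theta):
    # Unregularized squared-error cost (lambda is 0 when *measuring* error)
    return np.sum((X.dot(theta) - y) ** 2) / (2 * y.size)

def learning_curve_sketch(X, y, Xval, yval):
    m = y.size
    err_train, err_val = np.zeros(m), np.zeros(m)
    for i in range(1, m + 1):
        # "Train" on the first i examples (lstsq is a stand-in for
        # utils.trainLinearReg with lambda_ = 0)
        theta = np.linalg.lstsq(X[:i], y[:i], rcond=None)[0]
        err_train[i - 1] = cost_demo(X[:i], y[:i], theta)  # error on the subset
        err_val[i - 1] = cost_demo(Xval, yval, theta)      # error on ALL of val
    return err_train, err_val

# Synthetic linear data: the validation error should fall as i grows
rng = np.random.default_rng(0)
X_demo = np.column_stack([np.ones(10), np.arange(10.0)])
y_demo = 2 + 3 * X_demo[:, 1] + rng.normal(0, 0.1, 10)
Xval_demo = np.column_stack([np.ones(5), np.arange(5.0)])
yval_demo = 2 + 3 * Xval_demo[:, 1]
tr_demo, va_demo = learning_curve_sketch(X_demo, y_demo, Xval_demo, yval_demo)
print(va_demo[0] > va_demo[-1])  # -> True
```

Note the asymmetry the exercise asks for: the training error is measured on the same first-`i` subset that was used for fitting, while the validation error is always measured on the entire validation set.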
``` X_aug = np.concatenate([np.ones((m, 1)), X], axis=1) Xval_aug = np.concatenate([np.ones((yval.size, 1)), Xval], axis=1) error_train, error_val = learningCurve(X_aug, y, Xval_aug, yval, lambda_=0) pyplot.plot(np.arange(1, m+1), error_train, np.arange(1, m+1), error_val, lw=2) pyplot.title('Learning curve for linear regression') pyplot.legend(['Train', 'Cross Validation']) pyplot.xlabel('Number of training examples') pyplot.ylabel('Error') pyplot.axis([0, 13, 0, 150]) print('# Training Examples\tTrain Error\tCross Validation Error') for i in range(m): print(' \t%d\t\t%f\t%f' % (i+1, error_train[i], error_val[i])) ``` *You should now submit your solutions.* ``` grader[3] = learningCurve grader.grade() ``` <a id="section4"></a> ## 3 Polynomial regression The problem with our linear model was that it was too simple for the data and resulted in underfitting (high bias). In this part of the exercise, you will address this problem by adding more features. For polynomial regression, our hypothesis has the form: $$ \begin{align} h_\theta(x) &= \theta_0 + \theta_1 \times (\text{waterLevel}) + \theta_2 \times (\text{waterLevel})^2 + \cdots + \theta_p \times (\text{waterLevel})^p \\ & = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_p x_p \end{align} $$ Notice that by defining $x_1 = (\text{waterLevel})$, $x_2 = (\text{waterLevel})^2$ , $\cdots$, $x_p = (\text{waterLevel})^p$, we obtain a linear regression model where the features are the various powers of the original value (waterLevel). Now, you will add more features using the higher powers of the existing feature $x$ in the dataset. Your task in this part is to complete the code in the function `polyFeatures` in the next cell. The function should map the original training set $X$ of size $m \times 1$ into its higher powers. 
Specifically, when a training set $X$ of size $m \times 1$ is passed into the function, the function should return a $m \times p$ matrix `X_poly`, where column 1 holds the original values of X, column 2 holds the values of $X^2$, column 3 holds the values of $X^3$, and so on. Note that you don’t have to account for the zero-eth power in this function. <a id="polyFeatures"></a> ``` def polyFeatures(X, p): """ Maps X (1D vector) into the p-th power. Parameters ---------- X : array_like A data vector of size m, where m is the number of examples. p : int The polynomial power to map the features. Returns ------- X_poly : array_like A matrix of shape (m x p) where p is the polynomial power and m is the number of examples. That is: X_poly[i, :] = [X[i], X[i]**2, X[i]**3 ... X[i]**p] Instructions ------------ Given a vector X, return a matrix X_poly where the p-th column of X contains the values of X to the p-th power. """ # You need to return the following variables correctly. X_poly = np.zeros((X.shape[0], p)) # ====================== YOUR CODE HERE ====================== # ============================================================ return X_poly ``` Now you have a function that will map features to a higher dimension. The next cell will apply it to the training set, the test set, and the cross validation set. 
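The mapping can be written in one line with NumPy broadcasting. This is only a sketch of the idea (`poly_features_sketch` is a made-up name, not the graded function):

```python
import numpy as np

def poly_features_sketch(X, p):
    # An (m, 1) array raised to exponents of shape (p,) broadcasts to an
    # (m, p) matrix whose k-th column is X**(k+1); the zero-eth power
    # (the bias column of ones) is added separately later
    return X.reshape(-1, 1) ** np.arange(1, p + 1)

print(poly_features_sketch(np.array([2.0, 3.0]), 3))
# -> [[ 2.  4.  8.]
#     [ 3.  9. 27.]]
```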
```
p = 8

# Map X onto Polynomial Features and Normalize
X_poly = polyFeatures(X, p)
X_poly, mu, sigma = utils.featureNormalize(X_poly)
X_poly = np.concatenate([np.ones((m, 1)), X_poly], axis=1)

# Map X_poly_test and normalize (using mu and sigma)
X_poly_test = polyFeatures(Xtest, p)
X_poly_test -= mu
X_poly_test /= sigma
X_poly_test = np.concatenate([np.ones((ytest.size, 1)), X_poly_test], axis=1)

# Map X_poly_val and normalize (using mu and sigma)
X_poly_val = polyFeatures(Xval, p)
X_poly_val -= mu
X_poly_val /= sigma
X_poly_val = np.concatenate([np.ones((yval.size, 1)), X_poly_val], axis=1)

print('Normalized Training Example 1:')
X_poly[0, :]
```

*You should now submit your solutions.*

```
grader[4] = polyFeatures
grader.grade()
```

## 3.1 Learning Polynomial Regression

After you have completed the function `polyFeatures`, we will proceed to train polynomial regression using your linear regression cost function.

Keep in mind that even though we have polynomial terms in our feature vector, we are still solving a linear regression optimization problem. The polynomial terms have simply turned into features that we can use for linear regression. We are using the same cost function and gradient that you wrote for the earlier part of this exercise.

For this part of the exercise, you will be using a polynomial of degree 8. It turns out that if we run the training directly on the projected data, it will not work well because the features are badly scaled (e.g., an example with $x = 40$ will now have a feature $x_8 = 40^8 = 6.5 \times 10^{12}$). Therefore, you will need to use feature normalization.

Before learning the parameters $\theta$ for the polynomial regression, we first call `featureNormalize` and normalize the features of the training set, storing the mu, sigma parameters separately. We have already implemented this function for you (in the `utils.py` module) and it is the same function from the first exercise.
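A sketch of what `featureNormalize` is assumed to do (the actual implementation lives in `utils.py` and may differ in details such as the degrees of freedom used for the standard deviation): it returns `mu` and `sigma` so that the *same* training-set statistics can be reused on the validation and test sets, exactly as the cell above does.

```python
import numpy as np

def feature_normalize_sketch(X):
    mu = X.mean(axis=0)
    sigma = X.std(axis=0, ddof=1)  # sample std; an assumption, check utils.py
    return (X - mu) / sigma, mu, sigma

Xn_demo = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
Xn_norm, mu_demo, sigma_demo = feature_normalize_sketch(Xn_demo)
print(Xn_norm.mean(axis=0))  # -> [0. 0.]
```

Normalizing the test and validation sets with the training `mu` and `sigma` (rather than their own statistics) is what keeps the evaluation honest: the model never sees any statistic computed from held-out data.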
After learning the parameters $\theta$, you should see two plots generated for polynomial regression with $\lambda = 0$, which should be similar to the ones here: <table> <tr> <td><img src="Figures/polynomial_regression.png"></td> <td><img src="Figures/polynomial_learning_curve.png"></td> </tr> </table> You should see that the polynomial fit is able to follow the datapoints very well, thus, obtaining a low training error. The figure on the right shows that the training error essentially stays zero for all numbers of training samples. However, the polynomial fit is very complex and even drops off at the extremes. This is an indicator that the polynomial regression model is overfitting the training data and will not generalize well. To better understand the problems with the unregularized ($\lambda = 0$) model, you can see that the learning curve shows the same effect where the training error is low, but the cross validation error is high. There is a gap between the training and cross validation errors, indicating a high variance problem. 
```
lambda_ = 0
theta = utils.trainLinearReg(linearRegCostFunction, X_poly, y, lambda_=lambda_, maxiter=55)

# Plot training data and fit
pyplot.plot(X, y, 'ro', ms=10, mew=1.5, mec='k')

utils.plotFit(polyFeatures, np.min(X), np.max(X), mu, sigma, theta, p)

pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.title('Polynomial Regression Fit (lambda = %f)' % lambda_)
pyplot.ylim([-20, 50])

pyplot.figure()
error_train, error_val = learningCurve(X_poly, y, X_poly_val, yval, lambda_)
pyplot.plot(np.arange(1, 1+m), error_train, np.arange(1, 1+m), error_val)

pyplot.title('Polynomial Regression Learning Curve (lambda = %f)' % lambda_)
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.axis([0, 13, 0, 100])
pyplot.legend(['Train', 'Cross Validation'])

print('Polynomial Regression (lambda = %f)\n' % lambda_)
print('# Training Examples\tTrain Error\tCross Validation Error')
for i in range(m):
    print('  \t%d\t\t%f\t%f' % (i+1, error_train[i], error_val[i]))
```

One way to combat the overfitting (high-variance) problem is to add regularization to the model. In the next section, you will get to try different $\lambda$ parameters to see how regularization can lead to a better model.

### 3.2 Optional (ungraded) exercise: Adjusting the regularization parameter

In this section, you will get to observe how the regularization parameter affects the bias-variance of regularized polynomial regression. You should now modify the lambda parameter and try $\lambda = 1, 100$. For each of these values, the script should generate a polynomial fit to the data and also a learning curve.

For $\lambda = 1$, the generated plots should look like the figure below. You should see a polynomial fit that follows the data trend well (left) and a learning curve (right) showing that both the cross validation and training error converge to a relatively low value.
This shows the $\lambda = 1$ regularized polynomial regression model does not have the high-bias or high-variance problems. In effect, it achieves a good trade-off between bias and variance.

<table>
 <tr>
 <td><img src="Figures/polynomial_regression_reg_1.png"></td>
 <td><img src="Figures/polynomial_learning_curve_reg_1.png"></td>
 </tr>
</table>

For $\lambda = 100$, you should see a polynomial fit (figure below) that does not follow the data well. In this case, there is too much regularization and the model is unable to fit the training data.

![](Figures/polynomial_regression_reg_100.png)

*You do not need to submit any solutions for this optional (ungraded) exercise.*

<a id="section5"></a>
### 3.3 Selecting $\lambda$ using a cross validation set

From the previous parts of the exercise, you observed that the value of $\lambda$ can significantly affect the results of regularized polynomial regression on the training and cross validation set. In particular, a model without regularization ($\lambda = 0$) fits the training set well, but does not generalize. Conversely, a model with too much regularization ($\lambda = 100$) does not fit the training set and testing set well. A good choice of $\lambda$ (e.g., $\lambda = 1$) can provide a good fit to the data.

In this section, you will implement an automated method to select the $\lambda$ parameter. Concretely, you will use a cross validation set to evaluate how good each $\lambda$ value is. After selecting the best $\lambda$ value using the cross validation set, we can then evaluate the model on the test set to estimate how well the model will perform on actual unseen data.

Your task is to complete the code in the function `validationCurve`. Specifically, you should use the `utils.trainLinearReg` function to train the model using different values of $\lambda$ and compute the training error and cross validation error. You should try $\lambda$ in the following range: {0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10}.
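The validation-curve loop might be sketched as follows. To keep the block self-contained, `fit_reg_demo` uses the closed-form normal equations (with the bias excluded from the penalty) as a stand-in for `utils.trainLinearReg`, and the `_demo` data is invented for the illustration:

```python
import numpy as np

def fit_reg_demo(X, y, lambda_):
    # Normal equations for regularized linear regression; L zeroes out the
    # penalty on theta_0 (a stand-in for utils.trainLinearReg)
    L = np.eye(X.shape[1])
    L[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + lambda_ * L, X.T @ y)

def cost_demo(X, y, theta):
    return np.sum((X @ theta - y) ** 2) / (2 * y.size)

def validation_curve_sketch(X, y, Xval, yval):
    lambda_vec = [0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10]
    error_train, error_val = [], []
    for lam in lambda_vec:
        theta = fit_reg_demo(X, y, lam)             # train WITH regularization
        error_train.append(cost_demo(X, y, theta))  # report UNregularized errors
        error_val.append(cost_demo(Xval, yval, theta))
    return lambda_vec, error_train, error_val

# Noisy degree-5 polynomial features of a linear trend (invented demo data)
rng = np.random.default_rng(1)
x_demo = np.linspace(-1, 1, 12)
Xp_demo = np.column_stack([x_demo ** k for k in range(6)])
yp_demo = x_demo + rng.normal(0, 0.3, x_demo.size)
xv_demo = np.linspace(-1, 1, 8)
Xpv_demo = np.column_stack([xv_demo ** k for k in range(6)])
ypv_demo = xv_demo
lams, etr, eva = validation_curve_sketch(Xp_demo, yp_demo, Xpv_demo, ypv_demo)
print(etr[0] <= etr[-1])  # lambda = 0 minimizes training error -> True
```

The final print highlights a useful sanity check for your own implementation: $\lambda = 0$ is the unconstrained least-squares fit, so no other $\lambda$ can produce a lower *training* error.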
<a id="validationCurve"></a>

```
def validationCurve(X, y, Xval, yval):
    """
    Generate the train and validation errors needed to plot a validation
    curve that we can use to select lambda_.

    Parameters
    ----------
    X : array_like
        The training dataset. Matrix with shape (m x n) where m is the
        total number of training examples, and n is the number of
        features including any polynomial features.

    y : array_like
        The functions values at each training datapoint. A vector of
        shape (m, ).

    Xval : array_like
        The validation dataset. Matrix with shape (m_val x n) where
        m_val is the total number of validation examples, and n is the
        number of features including any polynomial features.

    yval : array_like
        The functions values at each validation datapoint. A vector of
        shape (m_val, ).

    Returns
    -------
    lambda_vec : list
        The values of the regularization parameters which were used in
        cross validation.

    error_train : list
        The training error computed at each value for the regularization
        parameter.

    error_val : list
        The validation error computed at each value for the regularization
        parameter.

    Instructions
    ------------
    Fill in this function to return training errors in `error_train` and
    the validation errors in `error_val`. The vector `lambda_vec` contains
    the different lambda parameters to use for each calculation of the
    errors, i.e., `error_train[i]` and `error_val[i]` should give you the
    errors obtained after training with `lambda_ = lambda_vec[i]`.

    Note
    ----
    You can loop over lambda_vec with the following:

        for i in range(len(lambda_vec)):
            lambda_ = lambda_vec[i]
            # Compute train / val errors when training linear
            # regression with regularization parameter lambda_
            # You should store the result in error_train[i]
            # and error_val[i]
            ....
    """
    # Selected values of lambda (you should not change this)
    lambda_vec = [0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10]

    # You need to return these variables correctly.
    error_train = np.zeros(len(lambda_vec))
    error_val = np.zeros(len(lambda_vec))

    # ====================== YOUR CODE HERE ======================


    # ============================================================
    return lambda_vec, error_train, error_val
```

After you have completed the code, the next cell will run your function and plot a cross validation curve of error vs. $\lambda$ that allows you to select which $\lambda$ parameter to use. You should see a plot similar to the figure below.

![](Figures/cross_validation.png)

In this figure, we can see that the best value of $\lambda$ is around 3. Due to randomness in the training and validation splits of the dataset, the cross validation error can sometimes be lower than the training error.

```
lambda_vec, error_train, error_val = validationCurve(X_poly, y, X_poly_val, yval)

pyplot.plot(lambda_vec, error_train, '-o', lambda_vec, error_val, '-o', lw=2)
pyplot.legend(['Train', 'Cross Validation'])
pyplot.xlabel('lambda')
pyplot.ylabel('Error')

print('lambda\t\tTrain Error\tValidation Error')
for i in range(len(lambda_vec)):
    print(' %f\t%f\t%f' % (lambda_vec[i], error_train[i], error_val[i]))
```

*You should now submit your solutions.*

```
grader[5] = validationCurve
grader.grade()
```

### 3.4 Optional (ungraded) exercise: Computing test set error

In the previous part of the exercise, you implemented code to compute the cross validation error for various values of the regularization parameter $\lambda$. However, to get a better indication of the model’s performance in the real world, it is important to evaluate the “final” model on a test set that was not used in any part of training (that is, it was neither used to select the $\lambda$ parameters, nor to learn the model parameters $\theta$). For this optional (ungraded) exercise, you should compute the test error using the best value of $\lambda$ you found. In our cross validation, we obtained a test error of 3.8599 for $\lambda = 3$.
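A minimal sketch of the idea, with everything invented for illustration (the `_demo` data and the closed-form fit; in the notebook you would instead reuse `X_poly`, `X_poly_test`, `utils.trainLinearReg`, and your cost function with `lambda_ = 0`):

```python
import numpy as np

def cost_demo(X, y, theta):
    # Regularization is used only for FITTING; the reported error omits it
    return np.sum((X @ theta - y) ** 2) / (2 * y.size)

rng = np.random.default_rng(2)
Xtr_demo = np.column_stack([np.ones(20), rng.normal(size=20)])
ytr_demo = 1 + 2 * Xtr_demo[:, 1] + rng.normal(0, 0.1, 20)
Xte_demo = np.column_stack([np.ones(10), rng.normal(size=10)])
yte_demo = 1 + 2 * Xte_demo[:, 1]

best_lambda = 3  # the value selected from the validation curve
L = np.diag([0.0, 1.0])  # excludes the bias term from the penalty
theta_demo = np.linalg.solve(Xtr_demo.T @ Xtr_demo + best_lambda * L,
                             Xtr_demo.T @ ytr_demo)
print(cost_demo(Xte_demo, yte_demo, theta_demo))  # the test error
```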
*You do not need to submit any solutions for this optional (ungraded) exercise.* ### 3.5 Optional (ungraded) exercise: Plotting learning curves with randomly selected examples In practice, especially for small training sets, when you plot learning curves to debug your algorithms, it is often helpful to average across multiple sets of randomly selected examples to determine the training error and cross validation error. Concretely, to determine the training error and cross validation error for $i$ examples, you should first randomly select $i$ examples from the training set and $i$ examples from the cross validation set. You will then learn the parameters $\theta$ using the randomly chosen training set and evaluate the parameters $\theta$ on the randomly chosen training set and cross validation set. The above steps should then be repeated multiple times (say 50) and the averaged error should be used to determine the training error and cross validation error for $i$ examples. For this optional (ungraded) exercise, you should implement the above strategy for computing the learning curves. For reference, the figure below shows the learning curve we obtained for polynomial regression with $\lambda = 0.01$. Your figure may differ slightly due to the random selection of examples. ![](Figures/learning_curve_random.png) *You do not need to submit any solutions for this optional (ungraded) exercise.*
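The averaging strategy above can be sketched as follows, again with a closed-form regularized fit standing in for `utils.trainLinearReg` and `_demo` data invented so the block runs on its own:

```python
import numpy as np

def fit_reg_demo(X, y, lambda_):
    L = np.eye(X.shape[1]); L[0, 0] = 0.0  # bias term not penalized
    return np.linalg.solve(X.T @ X + lambda_ * L, X.T @ y)

def cost_demo(X, y, theta):
    return np.sum((X @ theta - y) ** 2) / (2 * y.size)

def random_learning_curve(X, y, Xval, yval, lambda_=0.01, n_trials=50, seed=0):
    """Average train/val errors over n_trials random subsets of each size i."""
    rng = np.random.default_rng(seed)
    m = y.size
    err_train, err_val = np.zeros(m), np.zeros(m)
    for i in range(1, m + 1):
        for _ in range(n_trials):
            tr = rng.choice(m, size=i, replace=False)          # i random train rows
            va = rng.choice(yval.size, size=i, replace=False)  # i random val rows
            theta = fit_reg_demo(X[tr], y[tr], lambda_)
            err_train[i - 1] += cost_demo(X[tr], y[tr], theta)
            err_val[i - 1] += cost_demo(Xval[va], yval[va], theta)
    return err_train / n_trials, err_val / n_trials

rng = np.random.default_rng(3)
xr_demo = rng.normal(size=12)
Xr_demo = np.column_stack([np.ones(12), xr_demo])
yr_demo = 1 + 2 * xr_demo + rng.normal(0, 0.2, 12)
xrv_demo = rng.normal(size=12)
Xrv_demo = np.column_stack([np.ones(12), xrv_demo])
yrv_demo = 1 + 2 * xrv_demo
err_tr_demo, err_va_demo = random_learning_curve(Xr_demo, yr_demo, Xrv_demo, yrv_demo)
print(err_va_demo[-1] < err_va_demo[0])  # averaged val error falls with i
```

Averaging over many random draws smooths out the jagged single-draw curves, which is especially helpful on small datasets like this one where a single unlucky subset can dominate the plot.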